Session: Play – THATCamp Virginia 2013
http://virginia2013.thatcamp.org

Play / Make session: Listen to Wikipedia Guided Meditation
http://virginia2013.thatcamp.org/2013/11/08/play-make-session-listen-to-wikipedia-guided-meditation/
Fri, 08 Nov 2013

Recently a student in the “Digital Past” class I’m teaching posted a link to our Diigo Group that she described as “not very informative” but “interesting”: Listen to Wikipedia being edited, at listen.hatnote.com. There’s a map, too: hatnote.com.

I thought we could do a session where we take 10-15 minutes to just watch the site together, 10-15 minutes to interact with it (choose a different language, click on some of the links, visit the GitHub repo, whatever), and then use the remainder of the time to do some collaborative reflective writing on what we thought, saw, felt, learned. Participad would be a great tool for doing the collaborative writing part if you don’t mind my using my admin privileges to activate it on this site.

Having had the site open in a tab for quite a while on a couple of separate days, I think it actually is very informative — about visualizations, about whatever the audio equivalent of visualizations is, about Wikipedia, about knowledge, about the world. I’m also generally interested in similar sorts of interactive art / games / projects built on functional internet tools: GlobeGenie and Twistori come to mind. We could have a discussion, of course, but for some reason I’m keen on the idea of a completely silent session …

Tools for exploring big sound archives
http://virginia2013.thatcamp.org/2013/11/07/tools-for-exploring-big-sound-archives/
Thu, 07 Nov 2013

Brandon Walsh has already proposed a session about tools for curating sound, so what I’m proposing here might well fit into his session; but in case it’s too different, I wanted to elaborate.

At THATCamp VA 2012, I proposed and then participated in a discussion about how digital tools could help us think not only about tidily marked plain-text files, but also about the messier multimedia data of image files, sound files, movie files, etc. We ended up talking at length about commercial tools that search images with other images (for example, Google’s Search By Image) and that search sound with sound (for example, Shazam). A lot of our discussion revolved around the limitations of such tools: yes, we can use them to search images with other images, but, we asked, would a digital tool ever be able to tell that a certain satiric cartoon is meant to represent a certain artwork? For example, would a computer ever be able to tell that this cartoon represents this artwork?

[Images: the satiric cartoon and the artwork it references, a Matisse nude]

Our conversation was largely speculative (and if anyone wanted to continue it, I’d be happy to have a similar session this time around).
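
To make that limitation concrete: most search-by-image tools of this kind judge similarity at the pixel level. The perceptual-hashing sketch below is a stand-in for that general approach, not the internals of any particular product, and the file names are placeholders:

```python
import imagehash               # pip install imagehash pillow
from PIL import Image

cartoon = imagehash.phash(Image.open("satiric_cartoon.png"))
artwork = imagehash.phash(Image.open("matisse_nude.jpg"))

# Hamming distance between hashes: small means "looks alike" pixel-wise.
# A satire and its target usually score as unrelated, even though a human
# sees the reference instantly. Meaning is not in the pixels.
print(f"hash distance: {cartoon - artwork}")
```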

Since then, however, I’ve become involved with a project that takes such thinking beyond speculation. As a participant in the HiPSTAS institute, I’ve been experimenting with ARLO, a tool originally designed to train supercomputers to recognize birdcalls. With it, we can, for example, try to teach the computer to recognize instances of laughter, and have it query all of PennSound, a large archive of poetry recordings, for similar sounds. We might be able, then, to track intentional and unintentional instances when audiences laugh at poetry readings.
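
For the curious, here is a minimal sketch of the general workflow such tools automate: extract spectral features from a handful of hand-labeled clips, train a classifier, then scan a long recording for windows that resemble the target sound. This is not ARLO itself, just an illustration of the idea; the file names and the laughter/speech labels are hypothetical.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def features(path):
    """Mean MFCCs over a clip: a crude spectral fingerprint."""
    y, sr = librosa.load(path, sr=22050)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# Hypothetical hand-labeled training clips: 1 = laughter, 0 = not laughter.
paths = ["laugh1.wav", "laugh2.wav", "speech1.wav", "speech2.wav"]
labels = [1, 1, 0, 0]
clf = RandomForestClassifier(random_state=0).fit(
    [features(p) for p in paths], labels)

# Slide a 2-second window across a long recording and flag likely laughter.
y, sr = librosa.load("poetry_reading.wav", sr=22050)
win = 2 * sr
for start in range(0, len(y) - win, win):
    fp = librosa.feature.mfcc(y=y[start:start + win], sr=sr,
                              n_mfcc=13).mean(axis=1)
    if clf.predict([fp])[0] == 1:
        print(f"possible laughter at {start / sr:.1f}s")
```

Scale that loop up to thousands of hours on a supercomputer, with richer features and human feedback on the results, and you have roughly the shape of what tools like ARLO do.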

The project involves both archivists and scholars–the archivists are interested in adding value to their collections (for example, by identifying instances of song in the StoryCorps archive), and the scholars are interested in how this new tool might help us better visualize and explore poetic sound and historical sound recordings.

My sound-related proposal, then, is this: to have a conversation about potential use cases for this and similar tools. Now that we know we can identify certain kinds of sounds in large sound collections, how should we use such a tool? Since Brandon’s already interested in developing sound collections using Audacity, I thought we might also add this big-data/machine-learning tool to the mix.

Audacity and Audio – in Play and in Practice
http://virginia2013.thatcamp.org/2013/10/24/audacity-and-audio-in-play-and-in-practice/
Thu, 24 Oct 2013
A session on working and playing with audio files using Audacity, which has a fairly low barrier to entry for editing sound objects. Depending on interest and ability, we can take either a practical or a playful approach. I’m happy to walk people through some of its basic functions useful to DHers working with sound: how to slice out clips properly, deal with proprietary formats, repair audio clips, overlay tracks, etc. Or we can play around with some of Audacity’s fun effects (phase shifting, echoes, pitch alterations, reversing sound waves), useful for more creative endeavors and for creating sound art. I’m especially interested in how tinkering with sound artifacts might offer us new ways to interpret them. When does a sound object become something else, something new? We can work with any sound files that people may bring in, though I’ll bring in some samples to play with. The prize goes to the person who can process an otherwise human voice into the scariest thing we’ve ever heard.
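
For those who want to try some of the same manipulations outside Audacity’s interface, here is a minimal scripted sketch using the pydub library (which relies on ffmpeg for proprietary formats). The file names are placeholders for whatever clips people bring:

```python
from pydub import AudioSegment

voice = AudioSegment.from_file("voice_sample.mp3")   # ffmpeg decodes the mp3
music = AudioSegment.from_file("backing_track.wav")

clip = voice[2000:7000]        # slice out seconds 2-7 (indices are in ms)
spooky = clip.reverse() - 6    # reverse the waveform and drop the gain 6 dB
layered = music.overlay(spooky, position=1000)   # lay it 1 s into the music
layered.fade_in(500).fade_out(500).export("sound_art.wav", format="wav")
```

Audacity remains the better place to hear the effects interactively; a script like this just shows how few moving parts are involved once the clips are in hand.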
data mining bodies in motion
http://virginia2013.thatcamp.org/2013/10/23/data-mining-bodies-in-motion/
Wed, 23 Oct 2013

Although there are projects considering parsing pedestrian movement (e.g., sitting, walking, waving), there is a great deal of more abstract movement going on in the world. The DOD would really like to be able to mine 2D film for patterns to prevent and/or locate actions, but I want to look at possible tools for mining 3D and 2D data. For instance, how can GIS help map stage settings and flow? That might seem to be an off-the-wall idea, but those of us studying movement in and out of the performing arts are desirous of the ability to mine our texts: non-verbal texts. Well, it’s a thought!
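
Purely as a thought experiment, here is a toy sketch of what “mining” 2D movement data might look like: given tracked (x, y) positions of performers over time (from any video-tracking tool; the data below is synthetic), cluster the trajectories to surface recurring movement patterns:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 100                          # frames per trajectory
t = np.linspace(0, 4 * np.pi, n)

# Synthetic stand-ins for tracked performers: some cross the stage,
# others circle in place, all with a little positional noise.
crossers = [np.column_stack([np.linspace(0, 10, n), np.full(n, y0)])
            + rng.normal(0, 0.2, (n, 2)) for y0 in (2, 3, 4)]
circlers = [np.column_stack([5 + np.cos(t), y0 + np.sin(t)])
            + rng.normal(0, 0.2, (n, 2)) for y0 in (6, 7, 8)]

# Flatten each trajectory into one feature vector and cluster.
X = np.array([traj.ravel() for traj in crossers + circlers])
print(KMeans(n_clusters=2, n_init=10).fit_predict(X))
# The crossers and circlers should land in separate clusters.
```

Real choreography would need far richer representations than flattened coordinates, but even this crude version hints at how non-verbal “texts” could become minable data.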
