danwinckler.com/manifesto


As I said earlier today, I’m writing, writing, writing my thesis paper stuff, which, thank god, is not as painful as my writing process was just a few months back. It’s an engaging challenge to state my motivations for Kids Connect as clearly as possible…without using bullet points. 😉 Here’s one of my objectives, which will form the template/questionnaire for the thesis paper itself. Your feedback would be much appreciated. Here are some guiding words on clarity if you need them. If you prefer, add your thoughts on my page on the ZoomLab wiki.

Objective [1]: teach read/write media literacy and cultivate a critical stance toward mass media

Why: One of the primary goals of Kids Connect (KC) is read/write media literacy. What does this mean? To be literate is to be able to read and write. A full understanding of media (new, mass and otherwise) requires practical know-how in audio and video recording, editing, creation and synthesis. [Quote Mark Twain about reading the river]. To be critical of media, you must be able to distance yourself from it, and a practical understanding of the craft of media creation and manipulation cultivates that distance. Moreover, a conversation that flows only one way is a lecture. Few young people are learning to master the written word, to produce a compelling argument in nouns and verbs. It is vital that young people learn to write media as well as read it, to raise their voices over and around the constant shouting match and join the discussion.

How: In the first two weeks of workshops, students learn to shoot video with cameras of varying quality, record audio with a variety of microphones, and go on sound walks and video walks (experiential exercises in listening and seeing); they also study composition and framing, editing and compression. Each technology is approached through exercises with storytelling, improvisational and/or experiential frames. For example: convey a given emotion through a sequence of still images. In the subsequent weeks, these skills are built upon in exercises exploring expression of identity, neighborhood and community experience. Example: take photos, audio and video of your home in your neighborhood, edit them together into a gestalt, and share it through Second Life. Furthermore, we introduce our students to the world of live visual performance. They learn the techniques of live visuals and VJ-ing: how to mix and synthesize live, streaming, and pre-recorded media, how to express emotion and narrative through abstracted light and sound, and how to do this collaboratively over networks. Some have already given up on the written word for formal purposes, e.g., making an argument (thank you, Anton): we teach them the new multimedia communication skills they passionately desire.

Evaluation: How can you tell if someone has developed read/write media literacy? By seeing what they’ve expressed through various media. At the end of the workshops, we will have a large collection of work by our students to examine, as well as many hours of teaching experience to consider. We’ll sift this for patterns and I will write it up in my thesis paper.

Since Daniel Smith called it excellent, I thought I’d post this explanation of SHARE that I sent to a Houston Press journalist last night, just in case any of y’all are confused. (Notice the “y’all”? I never used to say that when I lived in Texas — I made a point of it. I’m a bit less uptight than I was in high school. A bit.)

SHARE hosts open jams for audio and visual artists. Anyone can come and participate. We provide the infrastructure: multichannel audio mixing and amplification, video projectors and screens, and the expertise to help first-timers learn the basics of audio and visual performance. Share is completely content-agnostic: you can play anything you want on any instrument you can carry in. No structure is imposed on the jam by the Share team. Rather, we encourage structure to emerge from the participants. Although our audio infrastructure is designed to let electronic musicians play together, people bring many different kinds of instruments, from traditional/acoustic to electronic to homemade to far-out. No one conducts or actively mixes the sound, so the performers communicate the old-fashioned way: by listening to each other and following the flow of the improvisation. Some people prepare extensively, laying down tracks at home to try them out in the mix at Share. Others do almost no pre-recording or pre-structuring apart from practice — like jazz musicians.

I’ve seen Share participants make sound with violins, cellos, laptops, guitars, double basses, lutes, Gameboys, hand drums, kit drums, contact microphones affixed to plastic waterfalls, homemade noiseboxes, analog synthesizers, microphones, circuit-bent toys, keyboards, beatbox, voice, and something in a bright green custom fiberglass body called the Green Bean*. Likewise, the video participants use laptops, cameras, movie clips, film, slide projectors, flashlights, lightboxes, custom screens, DVD players, paper dioramas and more. Then there’s the really far-out stuff: motion and light sensors worn by dancers, communicating their movements to audio and visual performers, who use the sensor data to affect the sound and light. The distinctions between media blur. The separation between performer and audience breaks down and changes.

That’s not the half of it, of course.

* Made by Randy Jones. Really nice guy. Very helpful on the Max/MSP/Jitter mailing lists.