Aaron Koblin Interview
This email interview, with questions from Casey Reas, took place from 1 – 21 Apr 2009.
Aaron Koblin is the Technology Lead at Google's Creative Lab in San Francisco. He maintains a portfolio of work at http://www.aaronkoblin.com. He received his MFA degree from UCLA Design Media Arts in 2006.
When and why did you start using Processing?
I started using Processing in 2004. I've been around computers since I was a child, but had very little formal education in computer science. In college I was given a book on procedural graphics and generative art. I remember being particularly intrigued by Golan Levin's work, and was amazed that he provided a clear explanation of his process, and even more so that he included much of his source code. I played with his examples and began making lots of small projects with Flash (ActionScript), Director (Lingo), and Max/MSP. I applied to the Design Media Arts MFA program at UCLA and was first introduced to Processing by Daniel Sauter. Having played a fair share of computer games growing up, I was excited by the speed and capabilities of C++ and "real" programming languages, but I always got lost in the cumbersome process of getting things working. I found Max/MSP fulfilled my performance needs, but its graphical interface wasn't intuitive to me, and my interests were more focused on visuals than audio. At the time, Processing didn't yet have all the functionality it does now. In a sense, perhaps this was good, as I was forced to spend time ironing out the basics, which has probably been helpful in the long run. It was exciting to watch it evolve, too, and to see powerful additions like OpenGL rendering. Conveniently (thanks to many people's hard work), it's turned into the programming language I always wanted. Although I generally use Processing as a library in Eclipse now, I still end up using it in just about every project I create.
How do you use Processing now?
Often. In addition to creating artworks, I find myself using Processing for all kinds of tasks, personal and work-related. Recently, my friend and collaborator Daniel Massey and I used Processing to create an audio-based art project with the Mechanical Turk. We used Processing for everything from the recording applet to the image generation, and even the file management. Since Processing is built on Java, I've found it to be a great entry point to all kinds of different libraries and data structures. For large tasks like data visualizations I generally use Processing as a library in Eclipse. This way I get all the benefits of a full IDE (things like code completion, package navigation, and heaps of hot keys). For smaller tasks and experiments I find myself using the Processing editor. Because it's so quick to get going with the Processing environment, it's extremely useful for writing small programs to convert data, batch process, and analyze.
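As an illustration of the kind of quick, throwaway conversion tool described here, a minimal sketch in plain Java might look like the following. This is my own example, not Koblin's code; the "id,lat,lon" record format and the simple linear screen mapping are assumptions made for the sake of illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a small data-conversion utility: parse
// comma-separated "id,lat,lon" records (a hypothetical format) and
// rescale the coordinates into screen space for plotting.
public class QuickConvert {

    // Map a longitude in [-180, 180] and latitude in [-90, 90] onto a
    // width-by-height pixel canvas, with north at the top.
    public static float[] toScreen(float lat, float lon, int width, int height) {
        float x = (lon + 180f) / 360f * width;
        float y = (90f - lat) / 180f * height;  // flip so north is up
        return new float[] { x, y };
    }

    // Convert a batch of records into screen coordinates.
    public static List<float[]> convert(List<String> lines, int width, int height) {
        List<float[]> out = new ArrayList<>();
        for (String line : lines) {
            String[] f = line.split(",");
            out.add(toScreen(Float.parseFloat(f[1]), Float.parseFloat(f[2]), width, height));
        }
        return out;
    }
}
```

A tool like this can be pasted into a sketch, run once against a data file, and thrown away, which is the workflow being described.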
Bicycle Built for Two Thousand, 2009. Aaron Koblin and Daniel Massey
Over the years, you've had many projects featured in the Processing Exhibition. Before I start asking more technical questions, please elaborate on these projects.
Sure. The first project in the exhibition was Flight Patterns, which was actually one of the first projects I created with Processing. It came about as a piece of a larger project that I was working on with two of my colleagues at UCLA, Gabriel Dunne and Scott Hessels. They had this great idea to visualize all man-made systems in the skies above Los Angeles for a planetarium dome show. With some guidance from Mark Hansen in the Statistics department, we were able to get some amazing data from RLM Software that outlined the location of each airplane in the sky over a 24-hour period. We completed Celestial Mechanics in 2005, but I continued to be intrigued by the data. Working further with RLM, and later Wired Magazine, I was able to get new data with three times the sampling rate and higher accuracy, showing a much more detailed look at the animated paths of each airplane over North America. In many ways this project has been a great learning experience. I remember the first day that I looked at the data: within about fifteen minutes I went from cryptic numbers and letters to easily recognizable patterns. It was a thrilling experience. The data is really beautiful, and even now, four years later, I'm still experimenting with it. The visualizations have received amazingly broad exposure as well. In 2007, I received the National Science Foundation's first-place award for scientific visualization, and beyond the scientific community, the visualizations have appeared on Adult Swim, have been screened at the Coachella Music Festival, and have appeared in numerous magazines and television programs. Last year the project was added to the permanent collection of the Museum of Modern Art in New York (MoMA).
Flight Patterns, 2005-2009. Aaron Koblin
Perhaps the best aspect of the Flight Patterns project, however, was the way it prompted others with equally exciting data to approach me with new visualization opportunities. Last year, I had the privilege of working with the SENSEable City Lab at MIT on a couple of visualization projects. One of these, the New York Talk Exchange, was part of the Processing exhibition as well. We asked AT&T to provide data about the ways New York City communicates with the rest of the world, and we created a series of visualizations illustrating international cities' relationships to New York as a whole, as well as how specific areas of New York communicate uniquely with different parts of the world. Using IP flow data (e-mail, peer-to-peer, etc.) as well as long-distance phone calls, we were able to visualize the spatial relationships within the communication infrastructure. This project was part of the Design and the Elastic Mind exhibition at MoMA.
New York Talk Exchange (NYTE), 2008. SENSEable City Lab with Aaron Koblin
Perhaps my favorite project in the Processing exhibition, however, is The Sheep Market, a collection of 10,000 hand-drawn sheep from online workers, gathered through the Mechanical Turk. The Mechanical Turk is a web service created by Amazon to provide "artificial artificial intelligence," an approach now known more commonly as "crowdsourcing." I was immediately intrigued by the concept of using thousands of idle brains, and I have long been impressed by projects like SETI@home, which use idle CPU time on people's computers to tackle problems too big for a single machine or cluster. This, however, was different: these aren't idle boxes, these are people. I wanted to visualize and think about this kind of system, which will inevitably become more common (especially with advancements in international micro-payment mechanisms). I used Processing to create a tool for recording drawings and posted the tool online, paying $0.02 (USD) for each worker's sheep. I also used Processing to organize all the files and interface directly with the Mechanical Turk. In this way I was able to view, approve, and reject each sheep (662 drawings didn't meet the "sheep-like" criteria). Finally, I used Processing to render all 10,000 sheep into a matrix that I used for the interface at http://www.thesheepmarket.com, a marketplace for inspecting and collecting the individual sheep.
The Sheep Market, 2006. Aaron Koblin
Are there some other projects that you'd like to introduce?
Well, last year I had a once-in-a-lifetime experience: the pleasure of working with director James Frost to create a music video for one of my favorite bands, Radiohead. As if that wasn't enough, we shot the video using lasers and sensors and released everything open source. In a sense, the project really started back at UCLA in the Center for Embedded Networked Sensing (CENS), where I was using Processing to write software to visualize laser scans in real time (again, big thanks to Mark Hansen for all his guidance and insights). Maxim Batalin and I had a lot of fun at CENS coding software to turn distance points from lasers into 3D scenes, and when James saw the laser images he immediately recognized their potential for storytelling. I told him that the laser we were using wasn't really going to cut it for making a music video, but that I had a few ideas of what would do the trick. A couple of months later we were mounting the Velodyne Lidar 64 laser array to the top of a school bus and shooting millions of triangles onto Thom Yorke's face with Geometric Informatics' structured-light scanner. We had all kinds of fun messing with the sensors and collected hundreds of gigabytes of point data. With Processing I made some data tools for testing and debugging, which we released on the Google Code project page, along with enough of the data for anyone to make a music video of their own. The real magic of the video, though, came from the guys at The Syndicate in Venice, CA, who did a great job on the post-production and rendering.
House of Cards, 2008. Directed by James Frost, Director of Technology Aaron Koblin
A really fun part of the project was seeing what people did with the Processing code and data we released. There's a YouTube group collecting some of the experiments. I'm particularly fond of these two:
Lego Time-Lapse House of Cards and Augmented Reality on the "In Rainbows" LP
The most recent project I've completed is a collaboration with my good friend and talented sound artist Daniel Massey. The project, Bicycle Built for Two Thousand, is a collection of more than two thousand sound clips from workers around the world, combined to synthesize the song "Daisy Bell." You may recognize it as the song HAL sings as he dies at the end of 2001: A Space Odyssey, a reference to the 1961 version by Max Mathews and John Kelly, which was the first example of computer-synthesized singing. To create the project, we asked each worker to imitate a very simple sound. We created an audio recording tool with Processing that would play a simple sound and record the user's imitation. The tool then sent the binary data to our server to be stored and recombined into the final chorus. We also used Processing to generate images of each waveform and to combine them into a visualization of the entire song that can be investigated at a more granular level. If you have a moment, have a listen at http://www.bicyclebuiltfortwothousand.com.
Please elaborate on how you use Processing in different ways. Some projects use Processing as the primary software and others use Processing as a part of a larger pipeline.
Yes, as I've hinted, some projects use one Processing application to create the entire project. For example, I've been creating software to visualize SMS messages in Amsterdam. This is basically one Processing application that I've been developing to allow interactive investigation, frame rendering, and saving still images. This is how I started with Flight Patterns as well: one application to create the project.
Visualizing Amsterdam SMS Messages, 2007-2009. Aaron Koblin
Often, though, I've found myself using a number of smaller Processing sketches as part of a larger pipeline. In fact, I've often found that Processing is most important as the glue that binds all the pieces together. Having become familiar with Processing, I've been able to use other software in ways I wouldn't have imagined possible. For instance, I'm by no means an expert at Maya; however, with a few glances at the reference manual I was able to write Processing sketches that export MEL scripts. This allowed me to render data using Maya's powerful rendering engine (as well as Mental Ray) without wasting days teaching myself an entirely new language and pipeline. This isn't to say that I wouldn't have had better results the "proper" way, but it's safe to say it took a tiny fraction of the time and provided the results I was looking for. (My father has always said "don't confuse effort with results," and in this sense I couldn't agree more, as I've watched many people get caught up in tool mastery at the cost of production.) I've used Processing in this way to export content to After Effects, Photoshop, and Audacity, as well as data for R, MySQL, and the Mechanical Turk, to name a few. Working directly with the data makes switching between open formats really easy. Since everything is accessible and open source in Processing, one has the ability to stitch together content from other applications that support open formats in all kinds of interesting ways. It can be very liberating (and helpful) to clean up and organize your data with some simple conversion tools (always keeping the original data, of course).
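To make the MEL-export trick concrete, here is a minimal plain-Java sketch of the general approach: write out a text file of MEL commands that Maya can then execute. The one-sphere-per-point scheme, the radius, and the file layout are my own illustrative assumptions, not details of Koblin's actual pipeline.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;
import java.util.Locale;

// Illustrative sketch: turn 3-D data points into a MEL script that Maya
// can run, creating one small sphere per point. The command choices are
// hypothetical, chosen only to show the export-a-script pattern.
public class MelExporter {

    // MEL for a single point: create a sphere (which Maya leaves
    // selected), then move it to the point's coordinates.
    public static String pointToMel(float x, float y, float z, float radius) {
        return String.format(Locale.US,
            "polySphere -r %.3f; move %.3f %.3f %.3f;", radius, x, y, z);
    }

    // Write one MEL line per point; the resulting .mel file can be
    // sourced from Maya's script editor or run in batch mode.
    public static void export(List<float[]> points, String path) throws IOException {
        try (PrintWriter out = new PrintWriter(path)) {
            for (float[] p : points) {
                out.println(pointToMel(p[0], p[1], p[2], 0.1f));
            }
        }
    }
}
```

The appeal of this pattern is that the exporting program never needs to link against Maya at all; it only has to emit text in a format Maya already understands.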
Thank you for the thoughtful answers. To wrap up, please tell us about something you're currently working on or about something you're excited to start.
I've got a few things going on right now. For one, I've taken a position as Technology Lead at Google's Creative Lab in San Francisco. It's a lot of fun, and there are lots of interesting data and stories to explore here. I've also been putting together an installation for the San Jose Mineta International Airport along with Dan Goods from NASA's Jet Propulsion Laboratory and Art Center's Nik Hafermaas. We're creating a three-dimensionally distributed matrix of privacy-glass panels. Privacy glass is a material that turns from opaque to transparent with electricity, so the plan is to take atmospheric data from around the planet and represent those conditions abstractly as patterns moving through the suspended "cloud" sculpture. It's an exciting project for me, as it's a departure from screen-based work and a great learning experience working with architects and engineers.
Finally, please share a short piece of Processing code with us.