(This document is on a wiki, so it should go without saying that it's a work in progress. In this case, the document is in an extremely incomplete state.)
This text discusses the potential of the software environment Processing within the context of architecture. To simplify, we can say that Processing was designed to make it easier and more convenient to make computer graphics with code. The original audience was mostly university students studying in areas such as graphic design and media art. The goal was to contribute to a growing programming culture unique to the visual arts, rather than adopt the tools and attitudes toward programming established in computer science.
As an open-source project, Processing was designed for community contributions and expansion. As the primary developers, Casey and Ben made Processing for their own needs (interactive graphics) while simultaneously opening the project to other directions through software extensions called Libraries. Since libraries were introduced in 2004, individuals and small teams have pushed Processing into new directions through their contributions. The Libraries have moved the software further into geometric manipulation, computer vision, simulation, and much more. In our opinion, there is untapped potential to fully utilize the Library system to make Processing more useful for architects.
This is a collaborative text, written in a manner analogous to how Processing is developed. The founders of Processing, Ben and Casey, created the outline and rough text, then opened it up to a larger group to modify and extend. The printed text reflects the edits and additions through color coding, strikethroughs, highlights, and notes.
Through this text and its writing process, we hope to reveal new information about the current and potential use for Processing within architecture, both in education and in practice.
Processing is an open-ended programming environment with soft boundaries, and as such it is difficult to compare its features with those of other software tools used by architects. Yet there are three significant differences between native Processing functionality and these other tools:
First, Processing has no graphical user interface (GUI); all projects are purely code. Many architecture tools include a way to program graphics, but these scripting languages are typically integrated within the GUI environment: AutoCAD/AutoLISP, Maya/MEL, Rhino/Visual Basic. However, a number of powerful GUI libraries are available for Processing, with ControlP5 by Andreas Schlegel being the most widely used and feature-complete. Together with the cp5magic helper library, which makes use of Java's annotation system, any variable can quickly be turned into a GUI controller with a single line of code.
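The principle behind such annotation-driven GUI binding can be sketched in plain Java (the following is an illustrative reconstruction, not the actual cp5magic API): a custom annotation marks which fields should become controllers, and reflection discovers them at runtime.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

// Illustrative sketch of annotation-driven GUI binding: a @Slider
// annotation marks fields that should be exposed as controllers, and
// reflection discovers them at runtime. The names here are hypothetical.
public class AnnotatedGui {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Slider { float min(); float max(); }

    // A sketch-like object whose parameters we want to expose.
    static class Sketch {
        @Slider(min = 0f, max = 100f) float speed = 10;
        @Slider(min = 0f, max = 1f)   float damping = 0.9f;
        float internal = 42; // not annotated: no controller generated
    }

    // Scan an object for @Slider fields and report controller specs.
    static List<String> buildControllers(Object target) {
        List<String> controllers = new ArrayList<>();
        for (Field f : target.getClass().getDeclaredFields()) {
            Slider s = f.getAnnotation(Slider.class);
            if (s != null)
                controllers.add(f.getName() + " [" + s.min() + ".." + s.max() + "]");
        }
        return controllers;
    }

    public static void main(String[] args) {
        for (String c : buildControllers(new Sketch())) System.out.println(c);
    }
}
```

A real library would additionally create the on-screen widget and write changed values back into the field; the reflective scan above is the part that makes "one line per variable" possible.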
Second, Processing can only operate on geometry at a primitive level. The high-level commands featured in specialized CAD and animation software are not currently present. For instance, there is no command in Processing to merge two shapes into a single piece of geometry, though this feature is provided by at least one of the available geometry libraries (see below).
Third, Processing is a small, community-supported, open-source project, unlike software written (or acquired) by large vendors such as Adobe and Autodesk, or by smaller ones such as the makers of Rhinoceros. This is reflected in the marginal use of Processing within the profession: it is more a curiosity than a standard tool. Yet the prevalence of Processing in education is curious, and raises the questions:
Why is Processing useful? What desire does it fulfill that other software tools don't?
Processing works well as an introduction to programming in architectural education: it provides interactive feedback very quickly and makes abstract concepts much more tangible, while holding off on the more GUI-based parametric and scripting-based modeling packages. It also helps shift expectations of what is difficult and what is easy to achieve in a short sketching session for people coming from modeling, where certain geometries can be quickly modeled manually that would be very awkward to program, and vice versa. One also quickly realizes the reliance on models such as data structures, search techniques and geometric principles for achieving anything more interesting, which leads to certain models being heavily used for their elegance and the relative simplicity with which they produce interesting results. Whether the results relate to a broad spectrum of architectural questions, especially harder ones that go beyond formal novelty, is often unclear, as the models tend to develop their own dynamic within this generative context. This is not specific to Processing but applies to programming in general; Processing, however, provides many avenues that lower the bar on otherwise complicated procedures and make them more accessible. One big advantage of a relatively lean Processing program, which runs with little overhead compared to parametric models in conventional CAD packages, is the possibility of processing much larger data sets interactively; this opens up exciting possibilities of real-time exploration to those who entered the generative field by scripting CAD packages. Processing is also very open in allowing different fields to come together, and it is developed by a community of collaborators who often shape it in the direction they want to use it. Things may not be as robust as in a commercially verified software package geared towards a particular industry, but as an experimentation ground it is very stimulating.
Within architecture, Processing is used primarily to generate form; most often through parameterization and simulation.
As a minimal sandbox in which to deploy computational objects, Processing can support generative work at scales and quantities impossible within other platforms. However, the infinitely smooth quadratic surfaces that characterized so much digital architecture are simply not present in Processing; instead one finds a variety of simpler primitives such as points, lines, triangles and boxes. Put together in populations, they form clouds, meshes, matrices and topologically rich manifolds. Work in Processing is ultimately to define how these primitives will come together in a coordinated way; as the number of elements jumps by an order of magnitude, self-organization has proven to be an indispensable tool for establishing and maintaining this coordination. However, in a computational design process there are often at least two components or phases present: the creation of abstract models and their visual representation. Processing provides its geometry primitives purely for display purposes (the latter phase), which, considering the tool's original usage context, is one of its strengths. It is also a weakness, since architecture and other disciplines concerned with form finding require actual geometric models and a set of operations for manipulating them.
To fill this conceptual gap, several geometry libraries have been developed by regular users. For example, the toxiclibs project by Karsten Schmidt provides a set of geometric primitives, tools and operations allowing users to deal with complex geometrical topics very easily. Contrary to Processing's approach, the design philosophy of this library is based on a polymorphic software architecture and the strict separation of abstract models from their actual representation(s). A model of a sphere is really just the mathematical concept, but it can also be treated as a generic 3D form and can be used and manipulated independently from its representation, e.g. as a mesh of triangle facets. This approach is only possible through the extensive use of object-oriented programming principles and as such constitutes a somewhat alien approach to the established Processing way of doing things, which is much more procedural. However, the modular nature and extensibility of the toxiclibs classes acts as a system of (currently 280+) abstract, interoperable building blocks, which can be easily assembled to solve complex design problems. The core library covers: 2D/3D vector maths, primitive forms, point clouds, voronoi maps, spatial trees, matrix operations, quaternions, splines, terrains, meshes with support for perforation, subdivision and smoothing, implicit surfaces, intersections between primitives, and import/export of geometry data as STL/OBJ. All in all, toxiclibs is a collection of 8 libraries, and this core functionality is further supplemented by, e.g., a particle-based physics simulation with Verlet integration, simulations of natural processes (such as DLA, reaction diffusion, thermal erosion and cellular automata) and voxel-based volumetric modeling tools for creating highly complex geometry, calculating isosurfaces, converting between polygon meshes and voxels, merging models, creating shells and performing boolean operations.
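The separation of model and representation that toxiclibs advocates can be illustrated with a minimal plain-Java sketch (class and method names here are our own, not the toxiclibs API): the sphere is just a center and a radius, and a triangle mesh is only one derived representation among others.

```java
// A minimal plain-Java analogue of the model/representation split:
// the sphere model answers exact mathematical queries, while a mesh
// is only one possible representation derived from it on demand.
public class ModelVsRepresentation {

    static class Vec3 {
        final double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    static class SphereModel {
        final Vec3 center; final double radius;
        SphereModel(Vec3 c, double r) { center = c; radius = r; }

        // Exact mathematical query, independent of any mesh.
        boolean contains(Vec3 p) {
            double dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
            return dx*dx + dy*dy + dz*dz <= radius*radius;
        }

        // One possible representation: a latitude/longitude triangle mesh.
        // Returns the number of triangle facets for a given resolution:
        // the two polar caps are triangles, middle stacks are quads (2 tris).
        int toMeshFacetCount(int stacks, int slices) {
            return 2 * slices + 2 * slices * (stacks - 2);
        }
    }

    public static void main(String[] args) {
        SphereModel s = new SphereModel(new Vec3(0, 0, 0), 1.0);
        System.out.println(s.contains(new Vec3(0.5, 0.5, 0.5)));
        System.out.println(s.toMeshFacetCount(8, 16) + " facets");
    }
}
```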
The volumetric brush tools provide an alternative modeling approach similar to ZBrush and have been used for several award-winning digital fabrication projects, both in the professional design field and in education. Furthermore, the libraries have been designed to be extensible at all points and to form mini domain-specific languages, making it very easy to express sequences of geometric operations in code form.
Processing has two default modes: "setup", which occurs only once at the start of the script, and "draw", which runs continuously. To write code is to define a toy universe, and worlds described in Processing will at least have this "arrow of time". The integration of various, often conflicting goals and systems in buildings (as well as cities) creates problems which cannot simply be "solved", but which form a search space of potential solutions and outcomes. A design process should explore that space and identify both the mechanisms and the mechanics of why certain outcomes do better. Simulation is a powerful tool available to designers to explicitly determine and explore these issues, and the arrow of time built into the software is the motive force behind that potential. For instance, in Processing an architect might model a building program as a system of springs and weights, which may rearrange themselves by pulling towards similar programs and pushing away from noxious uses. Such a simulation would need to unfold within time in an environment where physical forces act in a way very similar to our own reality: an isomorphism between code and reality. These isomorphisms can extend beyond the simple rules of matter and can engage with social, institutional and environmental dynamics through the same object-oriented framework.
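The spring-and-weight program model described above can be sketched in a few lines of plain Java (no Processing API is used, and all numbers and rules are illustrative assumptions): setup() runs once, and repeated calls to draw() are the arrow of time, relaxing three program zones toward desired distances from one another.

```java
// Illustrative spring-and-weight program model: similar programs pull
// together, "noxious" pairs push apart, one relaxation step per draw().
public class ProgramSprings {
    static double[] x;        // 1D positions of three program zones
    static double[][] rest;   // desired distance between each pair

    static void setup() {     // runs once, like Processing's setup()
        x = new double[] { 0.0, 1.0, 10.0 };
        rest = new double[][] {
            { 0, 2, 8 },      // zone 0: close to zone 1, far from zone 2
            { 2, 0, 6 },
            { 8, 6, 0 },
        };
    }

    // One time step: nudge each zone toward its rest distances.
    static void draw() {
        double[] next = x.clone();
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x.length; j++) {
                if (i == j) continue;
                double d = x[j] - x[i];
                double err = Math.abs(d) - rest[i][j]; // >0: too far, <0: too close
                next[i] += 0.05 * err * Math.signum(d); // spring pull/push
            }
        x = next;
    }

    public static void main(String[] args) {
        setup();
        for (int t = 0; t < 500; t++) draw(); // the arrow of time
        System.out.printf("%.2f %.2f %.2f%n", x[0], x[1], x[2]);
    }
}
```

After a few hundred steps the zones settle into a configuration satisfying all pairwise distances, which is exactly the "search space exploration" character of such simulations: the answer is found by letting time run, not by solving directly.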
Crucial to the success of Processing has been its exposure of object-oriented methodologies to architecture. Building scenarios defined through the OOP framework synthesize behavior and state in such a way that structures emerge through local interactions and information sharing. In other words, structures self-innovate. What do we mean by structures that can innovate themselves? In this framework, parts, assemblies and entire organizations are only loosely defined, capable of independent action at the lowest level possible and able to knit together in novel ways. These networks, either through an internal evaluation or through interaction with an environment, are empowered to discover solutions that simply “work”. Innovation is a change that just works; consider the random changes brought about through evolution. These changes are agitations to the genome, of which countless fail. The process that we recognize as evolution is the rare success, and the consequent proliferation of that successful pattern. Such evolutionary processes and performative models are difficult if not impossible in other architectural platforms.
In the context of infrastructure, cybernetics proposes looking at datastreams passing through the systems. With the spread of crowd sourcing, cloud computing, reality mining, internet of things and similar resources, we are increasingly capable of harnessing and mutating those datastreams. Emerging infrastructures for such molecularization of massive data, and methods of encapsulating, encoding and processing will have the power to retexture energy resources, information, and even matter. What are the possible futures of architecturalizing infrastructures in a context where all the agencies are inseparably embedded into one another, where life is infused into matter, where context itself is alive...
Every block of information has its nature and a geometrical/graphical model that could be associated with it. Processing has become a powerful tool to blend information and matter. The working model can become a rich hybrid cartography, both intuitive and informed, in which the designer can strategically highlight working variables and take informed design decisions. The strong graphical features of Processing make it easy not only to map and visualize data but to enter a state of re-wiring the conception of space from several different inputs. Spreadsheets, images, sound or any kind of sensor can be mapped into diverse spatial/geometrical models. It is not only the idea of associating variables, but of tailoring in detail the channeling of information from one system into another.
In this context, within the framework of generative design, Processing opened the door to working with larger populations of generative agents. Large populations of agents are interlinked by micro-transactions taking place over a vast territory, reflecting self-regulatory pressures within the environment, global migratory patterns and complexified programming. This intelligence is being encapsulated as a series of proto-architectural entities capable of rewriting existing protocols, including architecture's long inability to productively and creatively address the acute issue of sustainability. Such large-scale, high-population multi-agent systems provide a resilient fabric that can eventually adapt and learn when connected to external inputs (such as weather data, programmatic inputs, and fabrication and constructability constraints, including material science). It works to absorb and process various scales of ecological and human/non-human probabilistic population patterns. Redundancy and temporal rhythms (including the parallel realities of computational time, with its massive probabilistic iterations and poly-dimensionality) could be synchronized with the targeted agencies of the host conditions. Programming specific durations into the movement and interaction of agents results in highly expressive behavioral patterns. As much as we can see a Processing script running as an animation, the truth of the matter is that it is simply a recalculation of pieces of the code generating an update. This can sometimes generate the illusion of an animation, but it carries an implicit and much more complex idea of time. The Processing model (Java's object-oriented programming) allows for parallel time setups independent to each object, or even each instance of an object. The definition of a ‘timeline’ becomes obsolete. Time is defined by the ‘event’, or interaction of processes, allowing the user to stretch or collapse time at will.
The Processing work-flow allows for mutual dependencies between objects encouraging the user to deal with complexity explicitly. Non-linear time becomes ubiquitous.
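The idea of per-object time can be made concrete with a small illustrative sketch: each agent carries its own clock and its own rate of time, so the same number of global updates produces very different local times, and no shared timeline exists.

```java
// Illustrative sketch of per-object time: each agent has a private
// clock and time step, so there is no single global timeline.
public class LocalClocks {
    static class Agent {
        double clock = 0;       // this agent's private time
        final double dt;        // its own rate of time
        double position = 0;
        Agent(double dt) { this.dt = dt; }
        void update() {         // one "tick" in the agent's own time
            clock += dt;
            position += dt;     // moves one unit per unit of local time
        }
    }

    public static void main(String[] args) {
        Agent slow = new Agent(0.1), fast = new Agent(1.0);
        for (int i = 0; i < 100; i++) { slow.update(); fast.update(); }
        // Same number of global updates, very different local times:
        System.out.println(slow.clock + " vs " + fast.clock);
    }
}
```

Stretching or collapsing time, as the text puts it, then amounts to changing an agent's `dt` in response to an event, without touching any other object in the system.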
A similar situation occurs with space: the idea of the voxel as one of the possible geometrical models allows for a space of possibilities in ‘n dimensions’. A point not only possesses x, y and z coordinates but ‘n’ slots of information. The way this information is triggered enables the fabric to mutate, adapt or dissipate when needed. Even void space is charged with a rich information matrix. In architecture this becomes extremely useful when dealing with conflicting criteria; the designer can quickly visualize the emergent result of fine-tuning hierarchies, often finding hidden singularities.
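The ‘n slots of information’ voxel can be sketched as a simple data structure (the channel meanings suggested in the comments are illustrative assumptions): each cell of the grid stores an arbitrary vector of channels, not just occupancy.

```java
// Illustrative voxel grid where every cell carries n channels of
// information (e.g. density, temperature, program affinity), so even
// "void" space holds a rich information matrix.
public class InfoVoxels {
    final int nx, ny, nz, channels;
    final float[] data; // flat storage: x fastest, then y, z, channel

    InfoVoxels(int nx, int ny, int nz, int channels) {
        this.nx = nx; this.ny = ny; this.nz = nz; this.channels = channels;
        data = new float[nx * ny * nz * channels];
    }

    int index(int x, int y, int z, int c) {
        return ((z * ny + y) * nx + x) * channels + c;
    }

    void set(int x, int y, int z, int c, float v) { data[index(x, y, z, c)] = v; }
    float get(int x, int y, int z, int c) { return data[index(x, y, z, c)]; }

    public static void main(String[] args) {
        InfoVoxels grid = new InfoVoxels(16, 16, 16, 4); // 4 channels per cell
        grid.set(3, 5, 7, 2, 0.8f); // e.g. channel 2 = "program affinity"
        System.out.println(grid.get(3, 5, 7, 2));
    }
}
```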
Underlying data gets instantiated into different facets of emergent pre-geometry states, whose resilience and malleability allow for the retexturing of structural, material and organizational expressions. Agent behaviors derive singularizations, resulting in a finer-grain, heterogeneous and highly resilient proto-architectural fabric. It can go through rapid phase shifts, unlike prior modernist and parametric models based on more linear gradient differentiation.
Designing through multi-agent systems enables a re-thinking of the role of matter in the design process. Instead of form being imposed upon matter, matter plays an active role in its formation. This conceptual programming of matter encodes micro design decisions within a distributed population of agents that interact locally to give rise to a self-organized macro design intent. This opens the potential for the emergence of complex formations and the dissolution of linear hierarchies. The non-linear operation of multi-agent or swarm systems enables discrete concerns or decisions to interact within a continuous matter, negotiating between often conflicting design criteria. Consequently, systems as diverse as urban infrastructure or tectonics, usually considered discrete and operated in a linear sequence, are instead able to interact within an ecology. The complex nature of these systems has the potential to generate a radical form of continuous tectonic and intensive affect. Encoding design intent within systems capable of negotiating fundamental architectural problems radically departs from the more simplistic use of multi-agent systems for the generation of emergent patterns that act as templates for architecture. This mode of dissolving normative architectural hierarchies has been explored through a series of projects by Kokkugia across a range of scales, including urbanism (Swarm Urbanism), the negotiation of structure and ornament (Fibrous Tower), component substrates (Swarm Matter) and ornamental order (Kiev Monument).
High-population agent systems increase the resolution of the fabric of architecture; through local interaction and replication, these systems generate a heterogeneous intricacy. The open nature of Processing enables agency to be encoded within diverse substrates. Consequently, the notion of the agent can be expanded from a position or vector to consider the agency of a line, network, surface or component.
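A minimal version of such a locally interacting agent population can be sketched as follows (the neighbourhood radius, attraction and repulsion weights are illustrative assumptions): agents see only nearby neighbours, pulling together while keeping a minimum spacing.

```java
// Illustrative multi-agent step: cohesion toward local neighbours,
// strong separation below a minimum distance. Pairwise forces are
// equal and opposite, so the population's centroid is preserved.
public class SwarmStep {
    static double[][] step(double[][] pos, double radius, double minDist) {
        double[][] next = new double[pos.length][2];
        for (int i = 0; i < pos.length; i++) {
            double fx = 0, fy = 0;
            for (int j = 0; j < pos.length; j++) {
                if (i == j) continue;
                double dx = pos[j][0] - pos[i][0], dy = pos[j][1] - pos[i][1];
                double d = Math.hypot(dx, dy);
                if (d > radius || d == 0) continue;     // only local neighbours
                double w = (d < minDist) ? -0.5 : 0.05; // repel close, attract far
                fx += w * dx / d; fy += w * dy / d;
            }
            next[i][0] = pos[i][0] + fx;
            next[i][1] = pos[i][1] + fy;
        }
        return next;
    }

    public static void main(String[] args) {
        double[][] agents = { {0, 0}, {3, 0}, {0, 3}, {3, 3} };
        for (int t = 0; t < 50; t++) agents = step(agents, 10, 1);
        for (double[] a : agents) System.out.printf("%.2f %.2f%n", a[0], a[1]);
    }
}
```

Expanding the agent from a point to a line, surface or component, as described above, amounts to replacing the position array with a richer substrate while keeping the same local-interaction loop.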
Techniques such as subdivision allow for local growth and development while maintaining geometric consistency. For this, the designer must be aware of the ‘geometric agency’ implicit in the model used; for instance, the vectorial relation between neighboring vertices must be managed to avoid intersections. This geometric agency allows for a direct connection between the model and manufacturing, rather than treating the digital model as a mere form of output.
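Chaikin corner-cutting is one of the simplest such subdivision schemes and serves as a concrete illustration: each refinement replaces every edge with two points at its quarter marks, adding local detail while the refined polygon stays inside the convex hull of the original, so no self-intersections are introduced for a convex input.

```java
import java.util.*;

// Chaikin corner-cutting on a closed polygon: every edge (a, b) is
// replaced by the two points at 1/4 and 3/4 along it, doubling the
// point count and smoothing the outline at each refinement step.
public class Chaikin {
    static List<double[]> subdivide(List<double[]> pts) {
        List<double[]> out = new ArrayList<>();
        int n = pts.size();
        for (int i = 0; i < n; i++) { // closed polygon: wrap around
            double[] a = pts.get(i), b = pts.get((i + 1) % n);
            out.add(new double[] { 0.75*a[0] + 0.25*b[0], 0.75*a[1] + 0.25*b[1] });
            out.add(new double[] { 0.25*a[0] + 0.75*b[0], 0.25*a[1] + 0.75*b[1] });
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> square = new ArrayList<>(Arrays.asList(
            new double[]{0,0}, new double[]{1,0}, new double[]{1,1}, new double[]{0,1}));
        List<double[]> refined = subdivide(subdivide(square));
        System.out.println(refined.size() + " points"); // 4 -> 8 -> 16
    }
}
```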
Examples of projects include agent-based building skins that respond to local weather conditions and programmatic needs, larger-scale endangered-coastline proposals designed within conditions of dynamic stability, and similar. Additional applications of such resilient high-population intelligence were tested for programming the behavior of production machines directly _ without mediation through more representational/visualization modes. Generative vector-based behaviors were fed directly into the machining paths of CNC machines or into Arduino-based modules embedded in transformable material fabrics. The quality of architectural fabrics emerging from such a milieu is radically different from previous generations. For example, in the || fissures project by Biothing, the infrastructure of a large transportation hub is designed in such a way that it constitutes rewritable signaling fields, more akin to the glow of fireflies adapting to weather conditions than to the rigid mechanical systems commonly found in buildings. Such finely distributed, high-population adaptive infrastructure allows for the fine-tuning and greater precision of energy consumption within the built environment. A power of complexity similar to material formations of natural origin is captured through multi-agent generative mathematics. It produces a fresh kind of aesthetic sensibility, with an increased level of detail and complexity within artificially made structures. The more is not just more _ the more is different///
[MOS 2009_MITList, 2009_JTG_Lilly, 2008_Onthevergeofcollapse]
[kokkugia Swarm Urbanism, Fibrous Tower?, Yeosue Pavillion?]
Processing becomes most interesting in projects that utilize live data streams, or in which the result is situated in an environment other than the computer screen. For instance, an installation that utilizes computer vision to know something about the people occupying the room. Or a project that reads data from sensors around a building or city and then visualizes the result. These are more typically the domain of interaction design studios and media artists, but this is changing as the technology becomes more ubiquitous.
Processing's core and contributed libraries create a versatile framework for users and cover a wide range of applications, including screen, print, audio and physical formats, amongst others. Since Processing is built on top of the Java programming language, virtually any Java library can be used, which makes the Processing environment a highly flexible system allowing many different input and output methods, such as the graphical user interface.
When developing software with Processing, a common practice is to tweak and adjust parameters inside the source code itself in order to achieve an anticipated outcome, which is often driven by form finding. Changes made are then evaluated while the program is running and can only be readjusted when the program stops. Repeating these steps can be very time-consuming and often compromises a smooth workflow; since changing code during run time has no immediate effect, custom graphical user interfaces come in handy. Processing does provide basic examples of some controllers to create simple GUIs, and a range of contributed libraries is available which makes the implementation of custom GUIs even easier. Adding controllers such as buttons, sliders, knobs, menus, or radio buttons enables a dynamic adjustment of parameters, resulting in an immediate response during run time. GUIs can be arranged on top of the main display window or in separate windows to build simple or more complex applications and tools.
'VolcanoSlices', a project by syntfarm, is for example a custom software tool written to design volcano look-alike 3D objects fabricated by a laser cutting machine. A series of sliders is used to adjust the shape of an initially cone-like object. More specifically, these controllers modify individual parameters of the Perlin noise algorithm (already built into Processing) to form the outer shape of the object. Processing's OpenGL renderer draws the object in 3D space; zoom and rotation to view the object from different angles are controlled by the mouse. Single horizontal slices cut through the object are saved as vector graphics files using Processing's PDF library. The design process is completed and the object is ready for fabrication. This is one of many examples where Processing's core framework, combined with additional library packages, can transform a form-driven idea into a specific outcome. Here, a graphical user interface is the obvious, but certainly not the only, method for a parametric design process.
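A heavily simplified, hypothetical reconstruction of this pipeline (not the actual syntfarm code) might look as follows: a small hand-rolled value-noise function stands in for Processing's built-in Perlin noise, and each horizontal slice is sampled as a polygon outline ready to be exported as a vector path.

```java
// Illustrative reconstruction of a noise-modulated cone sliced into
// polygon outlines. All constants (cone profile, noise amplitude,
// frequencies) are illustrative assumptions.
public class VolcanoSliceSketch {
    // Deterministic pseudo-random value per integer lattice point.
    static double hash(int i) {
        i = (i << 13) ^ i;
        return 1.0 - ((i * (i * i * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
    }

    // Smoothly interpolated 1D value noise, roughly in [-1, 1].
    static double noise(double x) {
        int i = (int) Math.floor(x);
        double f = x - i, s = f * f * (3 - 2 * f); // smoothstep blend
        return hash(i) * (1 - s) + hash(i + 1) * s;
    }

    // Outline of the horizontal slice at height t in [0, 1]: a cone
    // radius shrinking with height, perturbed around the circumference.
    static double[][] slice(double t, int segments) {
        double base = 1.0 - 0.8 * t; // plain cone profile
        double[][] pts = new double[segments][2];
        for (int k = 0; k < segments; k++) {
            double a = 2 * Math.PI * k / segments;
            double r = base * (1.0 + 0.15 * noise(4 * a + 10 * t));
            pts[k][0] = r * Math.cos(a);
            pts[k][1] = r * Math.sin(a);
        }
        return pts;
    }

    public static void main(String[] args) {
        double[][] outline = slice(0.5, 64);
        System.out.println(outline.length + " outline points");
    }
}
```

In the real tool, sliders would drive the noise parameters interactively and each outline would be written out through the PDF library; here the outline is simply returned as coordinates.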
While GUIs are a very direct way to interact with software, other methods are less dependent on user input and instead provide data streams, for example from databases, sound analyses, physics simulations, particle systems, web services, network protocols or sensor-based hardware, to feed particular parameters. Various contributed libraries have been written to implement or interface such data streams with Processing.
Processing is often used as a conduit for data and information. We sometimes refer to it as a software utility belt, consisting of many tools that can be used in different combinations. Data in many formats can be imported, manipulated, and exported in a range of formats.
Data can come into Processing from many sources. It can be 3D geometry rendered in another tool, live data sampled from the environment, a stream of numbers sent over the internet, and so on.
One difficulty for architecture in general, which also filters into using Processing, is the lack of ubiquitous and open file formats for working with 3D data. There is no single 3D file format that can be read and written with ease from one software tool to another. The major formats are proprietary de-facto standards.
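One pragmatic response is to fall back on minimal, widely readable formats. The sketch below writes triangles as ASCII STL, a format that virtually every CAD and 3D-printing tool can import; the facet normal is derived from the cross product of two edge vectors.

```java
import java.util.*;

// Minimal ASCII STL writer: each triangle becomes a "facet" block
// with a unit normal computed from its two edge vectors.
public class StlWriter {
    static String facet(double[] a, double[] b, double[] c) {
        double ux = b[0]-a[0], uy = b[1]-a[1], uz = b[2]-a[2];
        double vx = c[0]-a[0], vy = c[1]-a[1], vz = c[2]-a[2];
        double nx = uy*vz - uz*vy, ny = uz*vx - ux*vz, nz = ux*vy - uy*vx;
        double len = Math.sqrt(nx*nx + ny*ny + nz*nz);
        if (len > 0) { nx /= len; ny /= len; nz /= len; }
        StringBuilder sb = new StringBuilder();
        sb.append(String.format(Locale.US, "  facet normal %f %f %f\n", nx, ny, nz));
        sb.append("    outer loop\n");
        for (double[] v : new double[][] { a, b, c })
            sb.append(String.format(Locale.US, "      vertex %f %f %f\n", v[0], v[1], v[2]));
        sb.append("    endloop\n  endfacet\n");
        return sb.toString();
    }

    static String solid(String name, List<double[][]> triangles) {
        StringBuilder sb = new StringBuilder("solid " + name + "\n");
        for (double[][] t : triangles) sb.append(facet(t[0], t[1], t[2]));
        return sb.append("endsolid " + name + "\n").toString();
    }

    public static void main(String[] args) {
        List<double[][]> tris = new ArrayList<>();
        tris.add(new double[][] { {0,0,0}, {1,0,0}, {0,1,0} });
        System.out.print(solid("sketch", tris));
    }
}
```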
The problem of file formats is not trivial - the way that we describe objects is incredibly important in how we see them, not to mention how we make them. Currently, the industry is experiencing a shift from Computer Aided Drafting to Building Information Modeling, which allows architects to capture more fully the productive efficiencies made possible by digital production. BIM adoption promises to catch the profession up to what many innovative small practices had already been employing: highly mutable parametric models, auto-generation of construction information and a tight integration with world dynamics and performance data.
In lieu of explicitly subscribing to a particular format, Processing provides simple access to an object oriented language (Java) to define objects, behaviors and relationships. Any number of schema can be used to describe the manifold set of relationships and material investments at work within a building. In this framework, practitioners carefully curate a growing toolbox of objects and algorithms. The thinking and inspiration is somewhere "in-between" the objects, in their selection, their deployment and the other myriad decisions the designer makes when assembling the object graph and specifying interactions.
[Pachube and EEML]
The first Processing programs, written nearly a decade ago, created 2D images on screen. Since then, the software has gained the ability to generate realistic 3D scenes, to create vector files for high-resolution printing and laser cutting, and to export 3D geometry for CNC (computer numerical control) machine tools. Over the same ten years, digital fabrication has undergone an equally amazing transformation.
Lunar House is a Corian™-clad spec home designed by davidclovers in collaboration with C.E.B. Reas. Working with Processing, Lunar House invigorates the relationship between architectural graphic and architectural mass. The relationship between the texture generated from Processing and the mass of the building operates at three scales: the massing scale, the surface scale and the material scale. At the massing scale, the “setup” of Processing is customized in order to relate to the edges and corners of the building mass. At the surface scale, the “tooling” of the Processing script is designed to maximize variability within and across the lines that form the texture. At the material scale, the “tactility” of the texture is amplified and animated with lighting. Each scale is designed three-dimensionally and rides the limits of both the materials and their processes.
Materializing and dimensionalizing these processes is the intersection point between Processing and architecture. Line quantities are reduced and edited, but they are anything but stabilized. Instead of trying to ‘freeze’ them at a certain moment, each line is activated in multiple ways. By forming tool paths three dimensionally, shadow lines vibrate and translucency varies across the surface. Cut into flat sheets of Corian™ prior to forming, each line is designed to capture artificial light and shadow. The overall “tooling” and forming was calibrated in relation to the CNC milled texture. By selecting a specific CNC bit, the uniform and two-dimensional lines generated from Processing are transformed into etched lines of varying depth and thickness. This process requires an intricate three-dimensional choreography of the CNC texture with each corresponding, formed, Corian™ sheet. Once they are aligned, this detail allows the texture to flatten the overall form at places while augmenting it at others.
As such, the procedure of dimensionalizing Processing is interwoven and dynamic, demanding that we constantly re-evaluate the various inputs based on the resultant spatial, visual and tactile effects. The inputs of thickness, depth, light and shadow render the primitives no longer simple, and require precise and calculated editing.
[Fluid Forms library]
Fluid Forms offers customers the chance to take part in the product design process. By selecting a location on Earth that is of particular meaning, the user creates unique jewelry or household accessories, which are 3D-printed, CNC-milled or photo-etched using polyamide, wood, silver, stainless steel, etc. By punching a boxing bag at the Ars Electronica Center, or virtually online, a custom lamp can be created.
Fluid Forms Libs began as the common denominator of projects completed at Fluid Forms. At the time, Fluid Forms was dealing exclusively with 3D parametric forms to be either 3D-printed as lamps or CNC-milled. Whilst Processing provides the means to create 3D forms with just one line of code, the navigation and handles for modifying parameters required in Fluid Forms projects are missing. The decision was made to package these tools as a Processing library for the benefit of future Fluid Forms projects and the Processing community.
"For the first few 3D projects I cut and pasted code from here and there. With one line of code I now add mouse navigation, a GUI for parametrization and file exporting for milling, 3d-printing or lasercutting and photo-etching. This keeps my sketches as simple as possible and enables me to concentrate on my creative goals." explains Stephen Williams.
At Fluid Forms the goal is to create products that encapsulate human emotion whilst maintaining an affordable price. This requires a certain amount of process automation. Selecting small sections of a landscape out of a dataset encompassing the whole world in real time, or displaying sections of the OpenStreetMap database without delay, are hidden hurdles that Processing helps to overcome.
"Processing does not always remain in the production code but due to its power and simplicity it is almost always part of the initial proof of concept." says Stephen.
As an emerging open-source project, Processing has more in common with public-focused fabrication initiatives like MakerBot, RepRap, and Fab@Home than with the dominant proprietary companies. Like Processing, these tools develop iteratively through the involvement of a few passionate individuals and a core community: the people who use and contribute to the software. This new category of open-source fabrication tools is extremely inexpensive, but crude in comparison to the high-end competition. However, these new tools are advancing extremely rapidly and the gap is diminishing.
Processing Spatial Cognition
Processing is also capable of computing the performative aspects of architecture in relationship to real-life occupancy, allowing architecture to incorporate the informational patterns of real-life (and even real-time) scenarios. When the hardware of architecture is paired with software and sensor platforms, window openings sense the user's temperature and lighting adjusts itself to the activities occurring inside the space. The ubiquitous network of the “Internet of Things” promises to more substantially integrate architectural space into the patterns of living, also extending into the fabric of the city.
Processing courses taught at architecture schools provide an introduction to the concept of object-oriented programming in a visual manner, which is important for students used to scripting languages layered on top of a GUI for linking given objects. The integration between Processing and available hardware has further accelerated the relationship between hardware and software, enabling first-time tinkerers to explore the Internet of Things in conjunction with the domain of architecture, positing “space as a technology” that exists in an ecosystem of other technologies and practices.
The word “programming” has a particular usage in architecture already: programming space is normally about assigning and supporting functions. Our world is in fact much more dynamic, with the utilization of space being a dynamic process tied to changing attitudes towards leisure, work and sociability. This dynamism opens the door to considering new modes of support for architectural form and environmental systems. Processing runs in an infinite feedback loop, and sensor devices such as webcams, computer vision libraries such as OpenCV, and commercially available hardware resources allow us to capture and respond to space in real time, opening a door to new forms of human to non-human relationships. The BCI (Brain-Computer Interface) / BIM (Building Information Modeling) project maps sense data from an EEG in relation to the physical geometry.
This mode of thinking will increasingly be integrated into architectural education as a way to think through the process of designing space: not just as a hardware problem, but as a performance of the architecture in supporting human perception, development and lifestyle.