From the Expo

Last week I was a participant in Expo ’74, Cycling ’74‘s first Max user conference.  It was 120 Max-heads totally stoked-up about each other’s work and the work we’ve been doing at Cycling ’74 over the last while.  Unlike the academic conferences or trade shows to which I’m more accustomed, the attendees at Expo ’74 were exuding profound amounts of happiness.  It is really rare to experience the amount of joy and community that I saw this past week.

There are a number of topics from the Expo that I’d like to discuss in more length than I will do right now.  For the moment, I’ll give an overview and some highlights.

First, though, I will start with an admission.  I left for San Francisco with some doubts about the premise of Expo ’74.  The idea that a conference would be organized around a tool, rather than organized around a problem or research area, seemed problematic.  It ended up being quite the opposite.  I found that by all of the participants sharing a common platform that people were able to discuss the actual essence of their research, development, artistic practice, etc. without the distraction of having to preface everything with information about the technology.  Other participants understood the technology and tools, so they could fade away and allow for a clearer focus on the real artistic issues.

Day One

Lots of things were happening on day one.  The morning included some well done presentations by my Cycling ’74 colleagues Manuel Poletti and Darwin Grosse.  They did a nice job with Max 5 and Max for Live.  As someone already quite well acquainted with their work, I was more interested in the afternoon, where the presentations were about work with which I was less familiar.

Pamela Z gave a history of her work with Max and showed the real practical aspects of how her performance system (not just Max) is structured.  This was capped off with a performance of Broom, which was both effective and inspiring.  Robert Henke (of Ableton fame) showed a number of his Max patchers, many of them steps in his search for the perfect step sequencer.  He also showed some Max patches that he uses for image processing to produce album covers (I will talk about this topic more on another day).  Finally, Barney Haynes showed us some documentation of his work.  There wasn’t any significant discussion of the role that Max played — and this is one of the things that made the conference really nice.  Because Max was assumed, it didn’t have to be discussed and there was no pressure to discuss it (or to hide it, for that matter).

In the afternoon we split into groups named after Max objects.  My group was the Buddy group (as signified on the scan of my badge).  In this first of two meetings we were to “collect data” from an excursion in the city.  Our group’s excursion consisted of a cable car ride and a visit to the San Francisco Cable Car Museum.  The data we collected was pretty impressive.  We had four video cameras with time information embedded, a couple of people with professional portable audio recording gear, and GPS and heart-rate-monitor data logged for the entire trip, and Joshua Goldberg (a fellow ‘buddy’) had his iPhone transmitting accelerometer data to his computer, which logged the time-tagged accelerometer data into a coll object in Max (and yes, he carried his computer around, open and running, for the whole experience).

Day Two

Days two and three split the morning into two tracks, so it is impossible to give an overview of everything that happened.  I happened to see presentations that included Gregory Taylor (discussing how he generates control data and, more importantly, why he does it that way), Andrew Pask (discussing time management in Max), and Andrew Benson giving an incredible introduction to writing OpenGL shaders.  Gregory’s presentation has inspired me to start on some new work that includes some new chaotic generators for Jamoma, and Andrew Benson’s presentation has me writing shaders now, making this the session with the most immediate impact on what I’m doing.

In the afternoon there were presentations including Brad Garton showing the rtcmix object and a panel session on education.  I’ll talk more about the education panel in another post.  Brad’s talk was interesting, as I have long been a proponent of mixing graphical and text-based approaches in Max to leverage the strengths of each.  I have felt pretty strongly (all the way back to my work on Jade in 2001) that advocating graphical or text interfaces for every purpose puts you at a disadvantage, because sometimes you end up with an inferior tool for a given task.

Day Three

I bounced between rooms in the morning to cover some advanced Max external developer topics and to see Luke Dubois present his work with Max.  Luke’s presentation was really well done and one of the highlights of the entire Expo for me.  I was familiar with a number of things Luke has done, and I’ve admired them before, but seeing a larger body of work really brought it together for me in a new way.  It’s really impressive.  I was also nothing short of stunned when, at the end, he gave away all of his Max patches.

We had an open mic session to solicit feature requests.  I had reservations about this, but no one threw any fruit at us, and for about half of the feature requests it seemed like we had an answer along the lines of “good idea; it’s already done and will be released soon”.  There was something really funny about Robert Henke coming up to the mic and humbly submitting his feature requests for Max with everyone else.  I think it was a very democratizing experience.

The second meeting of our ‘buddy’ group, in the afternoon, was provided so that we could take all of the data we had collected and create a 5-minute performance/work for the rest of the Expo attendees (the other groups had the same challenge).  We used a rope to physically connect the group as we moved through the space with our computers, providing a spatially shifting audio and visual performance.  I’m not convinced that our patches actually worked, though.  We only had 74 minutes to put it together, so given that constraint I think we did okay.

The conference ended on what may have been the climax for me.  After the group projects we made our way up to Berkeley for a barbecue and performance at CNMAT.  Bob Ostertag and Pierre Hébert put on a stunning show as a part of their Living Cinema collaboration.  The performance featured hand drawings created and then animated in real time.  The hand drawings had a visceral and organic feel to them.  The morphological qualities of these figures were explored through gestural sequences that changed the context of the figures, the meaning of the figures, and the meanings of how the figures related to each other.  Gestural energy and visual articulation were beautifully at one with the sonic material.

On many levels this is one of the best, if not the best, real-time audiovisual performance collaborations I have ever experienced.  What a great way to end the Expo ’74 event.  And I didn’t even mention the food, the wine, the sushi…  it was all extraordinary.  I hope there are more of these events in the future!

Not So Big…

This past week I received a gift: a DVD called “The Not So Big House” by Sarah Susanka.  She has also written a couple of books, though I haven’t read them (at least not yet).  The DVD doesn’t say it so bluntly, but it essentially provides a foil to the bankruptcy of architectural trends in the U.S. urban-sprawl markets (which is to say, most of the U.S.).  There is an interview with her in the Washington Post (though it is more nuts-and-bolts than philosophical).

We often get caught in the trap of scale.  We want a ‘big’ orchestra.  We want to create a ‘large’ or ‘significant’ work, like a concerto or symphony.  Or a giant installation rather than a small sculpture.  This is often encouraged by our academic and accrediting institutions.  It is much easier to judge based on the quantity of music or art than on the subtle issues of a work’s quality.  The same is true of houses — is bigger better?  Most everyone will tell you ‘yes’ without giving much thought to the various qualities that may affect the persons living in the house.

While I don’t have any earth-shattering conclusions to share, I have been thinking about applications of this architectural philosophy to software design.  There are very superficial ways to apply it (using small focused tools, etc.), but I think there are deeper applications which even impact the structural aspects of code-bases.

Architectural patterns and issues are among the most fascinating subjects.  As an artist I find the same approaches to design showing up in my artistic output as well as my code and hardware development.  How I approach building furniture with hand tools, sketch an idea for remodeling a room in the house, shape the flower beds for landscaping, craft contrapuntal lines in my orchestration, and pattern software are all expressions of the same essence and character.

And now?  Now it is time to go design a meal to enjoy.  Yum!

The Jamoma Platform

In the series Custom Data-Types in Max there is frequent reference to Jamoma.  Jamoma is “A Platform for Interactive Art-based Research and Performance”.  Sounds great, right?  But what does it mean by “A Platform”?  How is it structured?  What are the design considerations behind Jamoma’s architecture?  Many people are aware of some of Jamoma’s history or what it was in the past, but it has come a long way in the last couple of years.

The Jamoma Platform

The Jamoma Platform comprises a group of projects addressing the needs of composers, performers, artists, and researchers.  These projects are orchestrated in a number of layers with each layer dependent on the layers below it, but the layers below not dependent upon the layers above them.


Some layers, such as the modular framework, are built on top of the Max environment, while others are completely independent of Max.  For example, the Jamoma DSP layer is actually used to write objects for Pd and SuperCollider, plug-ins in the VST and AU formats, and C++ applications, in addition to creating objects for Max for use by the Jamoma Modular Framework.

The modular layer also bypasses some intermediary layers, which is indicated in this graphic with the lines that directly connect the layers.

Let’s take a look at each of these layers (bypassing the System Layer).

Jamoma DSP Layer

At the bottom of the stack is the Jamoma DSP Layer, also known as TTBlue for historical reasons.  The DSP layer, logically enough, is where all signal processing code for Jamoma is written in C++.  There is a library of processing blocks and utilities from which to draw.  The library is extensible and can load third-party extensions to the system dynamically or at start-up.  Finally, the DSP Layer is more than just a bunch of DSP processing blocks: it includes an entire reflective OO environment in which to create the processing blocks and send them messages.

All by itself the Jamoma DSP Library doesn’t actually do anything, because it is completely agnostic about the target environment.  The Jamoma DSP repository includes example projects that can wrap or use the DSP library in Max/MSP, Pd, SuperCollider, VST and AU plug-ins, etc.  In some cases there are class wrappers that will do this in one line of code.  In all of these examples, the DSP library is used, but no other part of Jamoma is required, nor will it ever be required, as we keep a clear and firm firewall between the different layers.

Jamoma Multicore Layer

Jamoma Multicore, hereafter we’ll simply say ‘Multicore’, is built on top of the DSP layer.  Multicore creates and manages graphs of Jamoma DSP objects to produce signal processing chains.  One can visualize this as an MSP patcher with lots of boxes connected to each other, patchcords fanning and combining, generator objects feeding processing objects etc.  Multicore does not, however, provide any user interface or visual representation; it creates the signal processing graph in memory and performs the actual operations ‘under-the-hood’.
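To make that picture concrete, here is a toy pull-based signal graph in plain C.  This is purely illustrative; none of these names come from Jamoma Multicore's actual code.  It only shows how nodes connected in memory can stand in for visual patch cords:

```c
#include <assert.h>
#include <stddef.h>

// Toy pull-based signal graph (illustrative only, not Jamoma Multicore).
// Each node pulls samples from its upstream source; the source pointers
// take the place of MSP's patch cords.
typedef struct _toynode t_toynode;
struct _toynode {
    float (*process)(t_toynode *self, float input);  // one processing block
    t_toynode *source;                               // upstream node, or NULL for a generator
    float      param;
};

// pull one sample through the chain, from generator to output
float toynode_pull(t_toynode *n)
{
    float in = n->source ? toynode_pull(n->source) : 0.0f;
    return n->process(n, in);
}

// a constant generator and a gain stage, standing in for DSP objects
static float toy_dc(t_toynode *self, float input)   { (void)input; return self->param; }
static float toy_gain(t_toynode *self, float input) { return input * self->param; }
```

Pulling a sample from the gain node recursively pulls from its upstream generator, just as audio flows through connected boxes in an MSP patcher, with no user interface involved.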

At this time I would describe the status of the Multicore layer as “pre-alpha” – meaning it is not very stable and is in need of further research and development to fulfill its vision.

Jamoma Modular

When most people say Jamoma, they typically are referring to the Jamoma Modular Layer, and more specifically the Jamoma Modular Framework.  The Modular framework provides a structured context for fully leveraging the power of the Max/MSP environment.  The modular layer consists of both the modular framework and a set of components (external objects and abstractions).  The components are useful both with and without the modular framework.

To exemplify the Modular Components, we can consider the jcom.dataspace external.  This is an object that converts between different units of representation for a given number (e.g. decibels, linear gain, midi gain, etc.).  This is a useful component in Max/MSP regardless of whether the modular framework is being used for patcher construction or management.

The Modular Framework, on the other hand, is a system of objects and conventions for structuring the interface of a Max patcher – both the user interface and the messaging interface.


The screenshot above (from Spatial sound rendering in Max/MSP with ViMiC by N. Peters, T. Matthews, J. Braasch & S. McAdams) demonstrates the Jamoma framework in action.  There are a number of modules connected together in a graph to render virtual microphone placements in a virtual space.  The module labeled ‘/cuelist’ communicates remotely with the other modules to automate their behavior.

Digging Deeper

In future articles I’ll be treating the architecture of each of these layers in more detail.  I also will be demoing Jamoma at the Expo ’74 Science Fair next week.  If you are going to be at Expo ’74, be sure to stop by and say hello.

The Hemisphere as Architecture


I first experienced the hemisphere loudspeakers in 1998 at the SEAMUS national conference held at Dartmouth College.  At that conference I saw an amazing performance by Curtis Bahn and Dan Trueman in a genre/style/practice to which I had never before been exposed.  I saw (and heard) them again at a performance at the Peabody Conservatory in the Spring of 1999.  The speakers sound amazing in the right context, not because of the quality of the drivers, but because of how they engage the acoustics of the space.

Fast forward a couple of years, and my good friend Stephan Moore, at the time studying with Curtis at RPI, became involved with producing a fair number of a new generation of Hemispheres for use in installations and performances.  At the 2002 SEAMUS Conference at the University of Iowa, Curtis and Stephan presented their work with the Hemispheres, including experiments with different sound-dispersion paradigms.  One example is laying the speakers throughout a space on the floor and distributing the sound material amongst the loudspeaker array using the Boids algorithm.

Later that year, in the middle of a very hot July in Upstate New York, I helped Stephan build 43 of the third-generation Hemispheres.  The Hemispheres have evolved a few more times since then.  The fifth-generation Hemispheres now sold by Electrotap are produced through a joint effort of Stephan and master furniture maker Ken Malz.


Incidentally, Stephan has been touring this Spring with the sixth generation Hemisphere — a powered version that has the amplifiers built into the cabinet.

A couple months ago an inquiry came in to Electrotap’s support asking about different ways to mount or suspend the Hemispheres in a gallery.  In the process, Stephan sent me a couple of the photos you see in this post (and gave me permission to post them).


These photos are from his installation “Outside Information”  that was shown July-November 2008 at the Mandeville Gallery of Union College in Schenectady, NY, in the Nott Memorial.  From the photos I would say the architecture of the building is pretty fascinating.  Here is Stephan’s artist statement from the exhibition:
Depending on how you listen to it, Outside Information is a decorative soundscape for an already highly-decorated space, or a means of listening to and navigating a complicated acoustic environment.  The eight Hemisphere speakers suspended in the giant column of air carry layers of small, shifting sounds to all parts of the Nott Memorial, activating the space’s acoustics and providing opportunities to explore its sonic eccentricities.  The small sounds in the speakers create a wash of sound in the space, which can resolve into high, unexpected detail when a speaker is approached closely.  Every point on each of the floors provides a different perspective on these sounds.

The title is inspired by the Jason Martin song “Inside Information”, which was written about the Nott Memorial and the mysteries (and potential conspiracies) surrounding its geometry and decoration.  In Martin’s song, he describes trying to discover the secret meanings of the Nott, suggesting that “If you ask, no one will tell you/but you should ask anyway.”  Outside Information, by contrast, supplies the space with an extrinsic layer of activity, geometry, decoration, and meaning, no less mysterious, but gradually yielding to investigation and exploration.

In many ways, this piece expands upon my 2007 Steepings series of sound installations, which were made to flood smaller spaces with intimate, shifting sounds that varied based on a set of simple rules.  Outside Information uses similarly-conceived custom sound software to generate algorithmic sound with both greater momentary variability and the capacity for long-term drift.  As the resulting environment can sound quite different from hour to hour, day to day, and will interact with changes in the air conditioning’s rumble and human activity in the space, my hope is that it will reward repeated visits from the members of the Union College community that encounter it daily or weekly.


Unlike many loudspeakers, the Hemispheres work visually with the space and the concept of the art.  They become a part of the artistic expression itself rather than a force acting in contradiction (or in orthogonality) to the creative concept.

These loudspeakers seem to lend themselves well to this in a number of different contexts.  A couple of months ago I saw a video of a work by Michael Theodore, who used hemispherical speaker arrays for this particular work; they appear as mounds of earth rising from the surface.  The Princeton Laptop Orchestra also situates the speakers on the floor next to a person who is likewise sitting on the floor, invoking a ritualistic image which perfectly reinforces the context of the performance ecosystem created by PLOrk.

I had a conversation with Trond Lossius walking back to BEK from the Landmark in Bergen a few years ago, following a sound installation exhibition.  Trond was interested in using the Hemisphere not just for its radiant acoustic qualities, but for its visual quality: two Hemispheres placed back-to-back to create a sphere, on top of a pole about 5 or 6 feet tall.  The resulting visual invokes a human image.  How then do we interact with the sound in space that emanates from this creature/sculpture/environment?

Custom Data-Types in Max Part 3: Binding to Symbols

When people design systems in Max that are composed of multiple objects that share data, they have a problem: how do you share the data between objects? The coll object, for example, can share its data among multiple coll objects with the same name.  The buffer~ object can also do this, and other objects like play~ and poke~ can also access this data.  These objects share their data by binding themselves to symbols.

This is the third article in a series about working with custom data types in Max.  In the first two articles we laid the groundwork for the various methods by discussing how we wrap the data that we want to pass.  The next several articles will be focusing on actually passing the custom data between objects in various ways.  In this series:

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Multicore)
  5. Hash-based reference system (similar to Jitter)

The Symbol Table

Before we can talk about binding objects to symbols, we should review what a symbol is and how Max’s symbol table works. First, let’s consider the definition of a t_symbol from the Max API:

typedef struct _symbol {
    char      *s_name;
    t_object  *s_thing;
} t_symbol;

So the t_symbol has two members: a pointer to a standard C-string and a pointer to a Max object instance.  We never actually create, free, or otherwise manage t_symbols directly.  This is a function of the Max kernel.  What we do instead is call the gensym() function and Max will give us a pointer to a t_symbol that is resident in memory, like so:

t_symbol *s;

s = gensym("ribbit");

What is it that gensym() does exactly?  I’m glad you asked…  The gensym() function looks in a table maintained by Max to see if this symbol exists in that table.  If it does already exist, then it returns a pointer to that t_symbol.  If it does not already exist, then it creates it, adds it to the table, and then returns the pointer.  In essence, it is a hash table that maps C-strings to t_symbol struct instances.

As a side note, one of the really fantastic improvements introduced in Max 5 is dramatically faster performance from the gensym() function and the symbol table.  It’s not a sexy feature you will see on marketing materials, but it is one of the significant under-the-hood features that make Max 5 such a big upgrade.

As seasoned developers with the Max or Pd APIs will know, this makes it extremely fast to compare textual tidbits.  If you simply try to match strings with strcmp(), then each character of the two strings you are comparing will need to be evaluated to see if they match.  This is not a fast process, and Max is trying to do things in real time with a huge number of textual messages being passed between objects.  Using the symbol table, you can simply compare two t_symbol pointers for equality.  One equality check and you are done.
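To illustrate the contract (though not Max's actual implementation, which uses a real hash table and is far faster), here is a toy gensym() in plain C.  The name toy_gensym and the linear search are inventions for this sketch:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Toy model of Max's symbol table. The real table is a hash table;
// this linear-search version only illustrates the contract of gensym().
typedef struct _toysymbol {
    char *s_name;
    void *s_thing;
} t_toysymbol;

#define TOY_MAX_SYMBOLS 256
static t_toysymbol *s_table[TOY_MAX_SYMBOLS];
static int s_symbol_count = 0;

t_toysymbol *toy_gensym(const char *name)
{
    // if the string is already in the table, return the existing entry
    for (int i = 0; i < s_symbol_count; i++) {
        if (strcmp(s_table[i]->s_name, name) == 0)
            return s_table[i];
    }
    // otherwise create it, add it to the table, and return the new pointer
    t_toysymbol *s = (t_toysymbol *)malloc(sizeof(t_toysymbol));
    s->s_name = (char *)malloc(strlen(name) + 1);
    strcpy(s->s_name, name);
    s->s_thing = NULL;
    s_table[s_symbol_count++] = s;
    return s;
}
```

Because each unique string yields exactly one entry, two calls with the same text return the same pointer, so comparing two symbols is a single pointer comparison no matter how long the strings are.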

The symbol table is persistent throughout Max’s life cycle, so every symbol gensym()’d into existence will be present in the symbol table until Max is quit.  This has the benefit of knowing that you can cache t_symbol pointers for future comparisons without worrying about a pointer’s future validity.

There’s s_thing You Need to Know

So we have seen that Max maintains a table of t_symbols, and that we can get pointers to t_symbols in the table by calling the gensym() function.  Furthermore, we have seen that this is a handy and fast way to deal with strings that we will be re-using and comparing frequently.  That string is the s_name member of the t_symbol.  Great!

Now let’s think about the problem we are trying to solve.  In the first part of this series we established that we want to have a custom data structure, which we called a ‘frog’.  In the second part of this series we implemented that custom data structure as a boxless class, which is to say it is a Max object.  And now we need a way to access our object and share it between other objects.

You are probably looking at the s_thing member of the t_symbol and thinking, “I’ve got it!”  Well, maybe.  Let’s imagine that we simply charge ahead and start manipulating the s_thing member of our t_symbol.  If we did, our code might look like this:

t_symbol *s;
t_object *o;

s = gensym("ribbit");
o = object_new_typed(_sym_nobox, gensym("frog"), 0, NULL);
s->s_thing = o;

Now, in some other code in some other object, anywhere in Max, you could have code that looks like this:

t_symbol *s;
t_object *o;

s = gensym("ribbit");
o = s->s_thing;

// o is now a pointer to an instance of our frog
// which is created in another object

Looks good, right? That’s what we want. Except that we’ve made a lot of assumptions:

  1. We aren’t checking the s->s_thing before we assign it our frog object instance.  What if it already has a value?  Remember that the symbol table is global to all of Max.  If there is a buffer~, or a coll, or a table, or a detonate object, (etc.) bound to the name “ribbit” then we just broke something.
  2. In the second example, where we assign the s_thing to the o variable, we don’t check that the s_thing actually is anything.  It could be NULL.  It could be an object other than the frog object that we think it is.
  3. What happens if we assign the pointer to o in the second example and then the object is freed immediately afterwards in another thread before we actually start dereferencing our frog’s member data?  This thread-safety issue is not academic – events might be triggered by the scheduler in another thread or by the audio thread.

So clearly we need to do more.

Doing More

Some basic sanity checks are in order, so let’s see what the Max API has to offer us:

  1. First, we should check if the s_thing is NULL.  Most symbols in Max will probably have a NULL s_thing, because most symbols won’t have objects bound to them.
  2. If it is something, there is no guarantee that the pointer is pointing to a valid object.  You can use the NOGOOD macro defined in ext_mess.h to find out.  If you pass a pointer to NOGOOD then it will return true if the pointer is, um, no good.  Otherwise it returns false – in which case you are safe.
  3. If you want to be safe in the event that you have multiple objects accessing your object, then you may want to incorporate some sort of reference counting or locking of your object.  This will most likely involve adding a member to your struct which is zero when nothing is accessing your object (in our case the frog), and non-zero when something is accessing it.  You can use ATOMIC_INCREMENT and ATOMIC_DECREMENT (defined in ext_atomic.h) to modify that member in a thread-safe manner.
  4. Finally, there is the “globalsymbol” approach, which is demonstrated in an article about safely accessing buffer~ data that appeared here a few weeks ago.
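The first three checks can be sketched in ordinary C.  This toy version is not the Max API: a magic-number check stands in for what NOGOOD does, and C11 atomics stand in for ATOMIC_INCREMENT / ATOMIC_DECREMENT, but the shape of the checks is the same:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

// Toy sketch of the sanity checks above (not the Max API).
#define TOYFROG_MAGIC 0x46524f47  // 'FROG'

typedef struct _toyfrog {
    int        magic;      // lets us reject pointers to the wrong kind of thing
    atomic_int inuse;      // non-zero while some thread is using this object
    long       num_flies;
} t_toyfrog;

// return the frog if the binding looks sane, NULL otherwise
t_toyfrog *toyfrog_acquire(void *s_thing)
{
    t_toyfrog *f = (t_toyfrog *)s_thing;

    if (!f)                          // check 1: nothing bound to the symbol
        return NULL;
    if (f->magic != TOYFROG_MAGIC)   // check 2: bound, but not to a frog
        return NULL;
    atomic_fetch_add(&f->inuse, 1);  // check 3: mark in-use before dereferencing
    return f;
}

void toyfrog_release(t_toyfrog *f)
{
    atomic_fetch_sub(&f->inuse, 1);  // done; the owner may now free safely
}
```

The owner of the frog would check the inuse count before freeing, which closes the window described in assumption 3 above.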

Alternative Approaches

There are some alternative approaches that don’t involve binding to symbols.  For example, you could have a class that instead implements a hash table that maps symbols to objects.  This would allow you to have named objects in a system that does not need to concern itself with conflicts in the global namespace.  This is common practice in the Jamoma Modular codebase.
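A minimal sketch of that idea follows.  All of the names here are hypothetical, and a linear-search table is used where real code (such as Max's own t_hashtab) would use a hash table; the point is only that these names live in a private namespace:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Toy sketch of a private name -> object registry (hypothetical names).
// Names stored here never touch Max's global symbol bindings, so they
// cannot collide with a buffer~, coll, or table bound to the same name.
#define REGISTRY_SIZE 64

typedef struct _regentry {
    const char *name;
    void       *object;
} t_regentry;

static t_regentry s_registry[REGISTRY_SIZE];
static int s_registry_count = 0;

// store an object under a name local to this registry; returns 0 on success
int registry_store(const char *name, void *object)
{
    if (s_registry_count >= REGISTRY_SIZE)
        return -1;  // table full
    s_registry[s_registry_count].name = name;
    s_registry[s_registry_count].object = object;
    s_registry_count++;
    return 0;
}

// look up a name in the private namespace
void *registry_lookup(const char *name)
{
    for (int i = 0; i < s_registry_count; i++) {
        if (strcmp(s_registry[i].name, name) == 0)
            return s_registry[i].object;
    }
    return NULL;
}
```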

The next article in this series will look at a very different approach where the custom data is passed through a chain of objects using inlets and outlets. Stay tuned…

Custom Data-Types in Max Part 2: Nobox Classes

In this series I am offering a few answers to the question “what’s the best way to have multiple Max objects referring to a custom data structure?”  Another variation on that question is “I want to define a class that will never become a full-blown external for instantiation in a Max patcher, but will be instantiated invisibly as a (possibly singleton) object that can serve some functions for other objects.”  In essence, the answer to both of these questions begins with the creation of ‘nobox’ objects.

This is the second of a multi-part series. Over the next several weeks I will be writing about several different approaches to passing custom data types in Max, and I’ll be using some real-world examples to demonstrate how and why these various strategies are effective.

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Multicore)
  5. Hash-based reference system (similar to Jitter)

Boxless Objects

When people use Max they typically think about objects, created in little ‘boxes’ in a patcher document.  These boxes are then connected with patch cords.

In the first part of this series I introduced a new data type called a ‘frog’.

The frog class could be defined as a C++ object, a C struct, or some other way.  We will define our custom ‘frog’ type as a Max class.  In Max there are two common ways to define a class.  The first is a “box” class, which is to say an object that can be instantiated in a box that is in a patcher.  Most objects are box classes.

The second way is to create a “nobox” class.  A nobox is a class that cannot be created directly in a Max patcher. Instead this is a class that exists solely under-the-hood to be used by other classes, or by Max itself. We will create our ‘frog’ data type as a ‘nobox’ class.

One example of a nobox class that is defined internally to Max is the t_atomarray in the Max API.  Let’s consider its definition from ext_atomarray.h:

typedef struct _atomarray {
	t_object	ob;
	long		ac;
	t_atom		*av;
} t_atomarray;

The atomarray is simply an object that manages an array of atoms.  However, it is a proper object.  You can instantiate it by calling object_new() and you free it by calling object_free().  It has the typical Max methods, which can be invoked by sending messages to the object using the object_method(), object_method_typed(), and other similar functions.

If you poke around the Max SDK you will probably notice a number of these nobox classes.  They include t_linklist, t_hashtab, t_dictionary, t_symobject, etc.  Even the Max object, the one that you send cryptic messages to using message boxes that say things like “;max sortpatcherdictonsave 1”, is a nobox object.

Defining a Frog

If we define our frog as a nobox class, we may have a struct like this:

typedef struct _frog {
	t_object	ob;
	long		num_flies;
 	t_atom		*flies;
} t_frog;

This is basically the same thing as an atomarray, but we will make it ourselves from scratch.  And we can define some more whimsical names for its methods.

Just like any other class, we need to cache our class definition in a static or global toward the top of our file.  So we can simply do that like usual:

t_class *s_frog_class = NULL;

Then we can get to the class definition, which once again will look pretty familiar to anyone who has written a few objects using the Max 5 SDK.

int main(void)
{
	common_symbols_init();

	s_frog_class = class_new("frog",
	                         (method)frog_new, (method)frog_free,
	                         sizeof(t_frog), (method)NULL, A_GIMME, 0);

	class_addmethod(s_frog_class, (method)frog_getflies, 	"getflies", A_CANT, 0);
	class_addmethod(s_frog_class, (method)frog_appendfly, 	"appendfly", A_CANT, 0);
	class_addmethod(s_frog_class, (method)frog_getnumflies,	"getnumflies", 0);
	class_addmethod(s_frog_class, (method)frog_clear,	"clear", 0);

	class_register(_sym_nobox, s_frog_class);
	return 0;
}


  1. we define the class itself, which includes providing the instance create and destroy methods, the size of the object’s data, and what kind of arguments the creation method expects.
  2. we initialize commonsyms — this means we can refer to a whole slew of pre-defined symbols without having to make computationally expensive gensym() calls.  For example, we can use _sym_nobox instead of gensym(“nobox”).
  3. we add some message bindings so that we can call the methods using object_method() and friends.  One aspect of these messages is that we gave a couple of them A_CANT types.  This is uncommon for normal box classes, but quite common for nobox classes.  It essentially indicates that Max “can’t” typecheck the arguments.  This allows us to bind the message to a method with virtually any prototype we want.
  4. we register the class as a nobox object

Take special note of that last step.  Instead of registering the class in the “box” namespace, we register it in the “nobox” namespace.

We could also define attributes for our class, but for the sake of simplicity we are just using messages in this example.

The Froggy Lifecycle

When we go to use our frog class we will expect to be able to do the following:

t_object *myfroggy;
myfroggy = object_new_typed(_sym_nobox, gensym("frog"), 0, NULL);

// do a bunch of stuff
// snap up some flies
// sit around the pond and talk about how the mud was in the good ole days...

// when we're all done, free the froggy
object_free(myfroggy);


Notice that once again we have to specify the correct namespace for the object, _sym_nobox, in our call to object_new_typed().  We used object_new_typed() because we defined the class to take arguments in the A_GIMME form.  If we used object_new() instead of object_new_typed(), the arguments passed to our instance creation routine would point to bogus memory (and we definitely do not want that – unless you are a crash-loving masochist).

Speaking of the object creation routine, it can be pretty simple:

t_object* frog_new(t_symbol *name, long argc, t_atom *argv)
{
	t_frog	*x;

	x = (t_frog*)object_alloc(s_frog_class);
	if (x) {
		// in Max 5 our whole struct is zeroed by object_alloc()
		// ... so we don't need to do that manually

		// handle attribute arguments.
		// we don't have any attributes now, but we might add some later...
		attr_args_process(x, argc, argv);
	}
	return (t_object*)x;
}

In addition to the things noted in the method’s code, I’ll point out the obvious fact that we don’t need to worry about creating inlets or outlets — our object will never be visible in a box in a patcher, and thus never have patch cords connected to it.

Our free method is also quite simple.  We just call the clear method.

void frog_free(t_frog *x)
{
	frog_clear(x);
}

Sending Messages to a Frog

At the beginning of the previous section we created an instance of the frog object with object_new_typed().  We probably didn’t do this just to free the object again.  We want to send some messages to get our frog to do something – like collect flies.

Let’s define the four methods we specified above:

void frog_getflies(t_frog *x, long *numflies, t_atom **flies)
{
	if (numflies && flies) {
		*numflies = x->num_flies;
		*flies = x->flies;
	}
}

void frog_appendfly(t_frog *x, t_atom *newfly)
{
	if (x->num_flies == 0) {
		x->num_flies = 1;
		x->flies = (t_atom*)sysmem_newptr(x->num_flies * sizeof(t_atom));
	}
	else {
		x->num_flies++;
		x->flies = (t_atom*)sysmem_resizeptr(x->flies, x->num_flies * sizeof(t_atom));
	}
	// copy the whole atom (type and value) into the last slot
	sysmem_copyptr(newfly, x->flies + (x->num_flies - 1), sizeof(t_atom));
}

long frog_getnumflies(t_frog *x)
{
	return x->num_flies;
}

void frog_clear(t_frog *x)
{
	if (x->num_flies && x->flies) {
		sysmem_freeptr(x->flies);	// release the memory, don't just forget it
		x->flies = NULL;
		x->num_flies = 0;
	}
}

None of the messages are defined with argument types of A_GIMME, so we will use object_method() to send messages, and not object_method_typed() or its immediate descendants.  Usage of these methods might then look like this:

t_atom	mosquito;
t_atom	bee;
long	n;
long	ac = 0;
t_atom	*av = NULL;

atom_setsym(&mosquito, gensym("eeeeyeyeeyyeyyyyeeyyye"));
atom_setsym(&bee, gensym("bzzzzzzzz"));
// here we have the frog snap up the flies using one of its A_CANT methods
object_method(myfroggy, gensym("appendfly"), &mosquito);
object_method(myfroggy, gensym("appendfly"), &bee);

// this call returns a value -- we have to cast it, but that's okay
n = (long)object_method(myfroggy, gensym("getnumflies"));
// another A_CANT method, passing two pointer args
object_method(myfroggy, gensym("getflies"), &ac, &av);

// we're all done and the froggy has a bowel movement
object_method(myfroggy, gensym("clear"));

Wrap Up

So we have a boxless class now.  It isn’t all that different from a regular class, but there are always people asking me about example code that shows how to do this sort of thing.  And this information lays the foundation for the upcoming articles in this series.

If you have any questions, please leave a comment!

Peacock’s Interface Design

The Peacock Visual Laboratory from Aviary is a node-based raster image processing environment that shares the same ‘patching’ paradigm used by other environments like Cycling ’74’s Max, Apple’s Quartz Composer, and a number of others.  You can kind of think of it as Max for still images.  If you visit the Aviary website they have a video showing it in action.  Or you can just use it from within your web browser.

I love Peacock’s interface experience.  It seems to be very well designed and thought out.


First, I love the one window design, which I find to be a lot more fluid and usable than any other environment like this that I’ve tried.  There aren’t lots of palettes or floating toolbars or additional windows in the way (or too far out of the way).  I’ve commented about this issue before.  I can simply see my work without distraction or stress.

Second, I like the use of colors: the gray is neutral, thus not distorting the user’s ability to work with color.  The colors are dark, and thus the non-active parts of the UI do not command attention.  I use syntax highlighting in TextMate this way: the background is a very dark gray, and comments are less dark gray — they are there when I want them but they don’t command my attention when I’m skimming through the screen.

The patch cords are great.  They automatically color themselves with varying colors, which makes it easy to trace the path of a patch cord without getting confused.  The look of the cords is very fluid and leads to a smooth reading of the data flow.  You are able to put ‘segments’ in a patch cord (more like anchor points in an SVG application), and they allow you to push patch cords out of the way if you want to control them.

The “file browsers” and “object palettes” and “inspectors” (to use Max 5 terminology) are all tabs that are nicely tucked away and can be drawn out when needed.   All drag and drop is initiated from these side wells.

To create a new object you choose an object from the relevant tab in the upper-left and then drag it into the work area.  If you drag the object so that its outlet touches the inlet of another object, you can draw out a connection between the two objects before you release the mouse and place the object.  This is a really slick feature that helps make the whole experience feel natural and efficient.

What does not feel natural or efficient is how slow Peacock is.  This is, I suspect, a limitation of trying to run this application inside of a web browser.  In fact it is so slow that it seems to bring not just my web browser but pretty much the whole computer to its knees.  This is unfortunate.  The design of the user interaction is so good that I hope the folks behind it can now focus on optimizing the back-end.  At some point performance, too, is a usability issue, and I think that’s clearly the case here.

That one issue notwithstanding, this is brilliant work, and hats off to the Aviary team.

Custom Data-Types in Max Part 1: Introduction

The Max API makes it easy to pass a handful of standard data types in Max: ints, floats, symbols, lists of the aforementioned.  But what happens when you want to pass a frog from one object to the next?  A frog is not a standard data type.  Instead it is something made up that we want to send hopping through our Max patch from one green object box to the next.

Where do we start?

Before we can pass the frog from one object to another, we first need to define the frog type.  What is it?  Is it an object (meaning a proper Max class with a t_object as its first member)?  Or is it a naked struct or C++ class?  Or something else entirely?  Are we passing the data by value, or by reference (meaning a pointer)?

That last question may be more difficult than it seems at first glance.  Answering it may help to determine the answers to the other questions.  If we pass by value then we have a certain amount of simplicity, but for anything other than rudimentary types it quickly becomes a computationally expensive situation.  So the obvious answer here is to pass by pointer, right?  Not so fast…  Consider the following patcher topology:

If we pass by value from the first number object, then we get the results that are shown.  If we simply pass a pointer to the value (pass by reference) without some sort of management in place, then we will get very different results.  The result could be the following:

  1. The address of the data (2) is passed to the [+ 5] object.
  2. 5 is added to 2, the data now has a value of 7 and this new value is passed to the lower-right number box.
  3. The address of the data (which now has the value 7!) is now passed to the [+ 7] object.
  4. 7 is added to 7, the data now has a value of 14(!) and this new value is passed to the lower-left number box.

Indeed.  A subtle problem with real-life consequences.  In our example the problem may seem trivial, but when you are operating on more complex structures (e.g. Jitter or FTM) then there needs to be a system in place that allows the graph to bifurcate without downstream operations corrupting the output of other operations happening ‘in parallel’.

Series Overview

This introduction to the problem is the first of a multi-part series. Over the next several weeks I will be writing about several different approaches to passing custom data types in Max, and I’ll be using some real-world examples to demonstrate how and why these various strategies are effective.

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Multicore)
  5. Hash-based reference system (similar to Jitter)

Accessing buffer~ Objects in Max 5

One thing that has always been a bit tricky, and perhaps a bit under-documented, has been writing good code for accessing the contents of a buffer~ object in Max.  What has made the situation a bit more confusing is that the API has changed slowly over a number of versions of Max to make the system more robust and easier to use.  This is certainly true of Max 5, and the most recent version of the Max 5 Software Developer Kit makes these new facilities available.

I’ll be showing the favored way to access buffer~ objects for Max 5 in the context of a real object: tap.buffer.peak~ from Tap.Tools.  I’ll show how it should be done now, and in some places I’ll show how it was done in the past for reference.

Getting a Pointer

The first thing we need to do is get a pointer to the buffer~ bound to a given name.  If you know that there is a buffer~ object with the name “foo” then you could simply do this:

t_symbol* s = gensym("foo");
t_buffer* b = s->s_thing;

However, there are some problems here.  What if “foo” is the name of a table and not a buffer~?  What if there is a buffer~ named foo in the patcher, but it is instantiated after your object when the patcher is loaded?  What if you execute the above code and then the user deletes the buffer~ from their patch?  These are a few of the scenarios that happen regularly.

A new header in Max 5 includes a facility for elegantly handling these scenarios:

#include "ext_globalsymbol.h"

Having included that header, you can now implement a ‘set’ method for your buffer~-accessing object like so:

// Set Buffer Method
void peak_set(t_peak *x, t_symbol *s)
{
	if (s != x->sym) {
		x->buf = (t_buffer*)globalsymbol_reference((t_object*)x, s->s_name, "buffer~");
		if (x->sym)
			globalsymbol_dereference((t_object*)x, x->sym->s_name, "buffer~");
		x->sym = s;
		x->changed = true;
	}
}

By calling globalsymbol_reference(), we will either attach to an existing buffer~ with that name or bind to it when it is eventually created.  And when I say attached, I mean it.  Internally this function calls object_attach() and our object, in this case tap.buffer.peak~, will receive notifications from the buffer~ object.  To respond to these notifications we need to set up a message binding:

class_addmethod(c, (method)peak_notify,		"notify",		A_CANT,	0);

And then we need to implement the notify method:

t_max_err peak_notify(t_peak *x, t_symbol *s, t_symbol *msg, void *sender, void *data)
{
	if (msg == ps_globalsymbol_binding)
		x->buf = (t_buffer*)x->sym->s_thing;
	else if (msg == ps_globalsymbol_unbinding)
		x->buf = NULL;
	else if (msg == ps_buffer_modified)
		x->changed = true;

	return MAX_ERR_NONE;
}

As you may have deduced, the notify method is called any time a buffer~ is bound to the symbol we specified, unbound from the symbol, or any time the contents of the buffer~ are modified.  For example, this is how the waveform~ object in MSP knows to update its display when the buffer~ contents change.

Accessing the Contents

Now that you have a pointer to a buffer~ object (the t_buffer*), you want to access its contents.  Having the pointer to the buffer~ is not enough, because if you simply start reading or writing the buffer’s b_samples member you will not be guaranteed thread-safety, meaning that all manner of subtle (and sometimes not so subtle) problems may ensue at the most inopportune moment.

In Max 4 you might have used code that looked like the following before and after you accessed a buffer~’s contents:

    saveinuse = b->b_inuse;
    b->b_inuse = true;

    // access buffer contents here

    b->b_inuse = saveinuse;
    object_method((t_object*)b, gensym("dirty"));

The problem is that the above code is not entirely up to the task.  There’s a new sheriff in town, and in Max 5 the above code will be rewritten as:

    ATOMIC_INCREMENT(&b->b_inuse);

    // access buffer contents here

    ATOMIC_DECREMENT(&b->b_inuse);
    object_method((t_object*)b, gensym("dirty"));

This is truly threadsafe.  (Note that you only need to call the dirty method, which tells the buffer~ that its contents changed, if you wrote to the buffer.)

Here is the code from tap.buffer.peak~ that accesses the buffer~’s contents to find the hottest sample in the buffer:

	t_buffer	*b = x->buf;		// Our Buffer
	float		*tab;		        // Will point to our buffer's values
	long		i, chan;
	double		current_samp = 0.0;	// current sample value

	if (!b || !b->b_valid)
		return;

	tab = b->b_samples;			// point tab to our sample values
	for(chan=0; chan < b->b_nchans; chan++){
		for(i=0; i < b->b_frames; i++){
			// samples are interleaved: frame i, channel chan
			if(fabs(tab[(i * b->b_nchans) + chan]) > current_samp){
				current_samp = fabs(tab[(i * b->b_nchans) + chan]);
				x->index = (i * b->b_nchans) + chan;