Custom Data-Types in Max Part 4: Passing Object Pointers

How do you pass data between objects in Max?  If the data is a simple number or a symbol then the answer is easy.  But what happens when you are trying to pass around audio vectors, dictionaries, images, or some other kind of object?  The implementation of Jamoma Multicore for Max deals with these issues head-on, and it provides an illustration of how this problem can be tackled.

This is the fourth article in a series about working with custom data types in Max.  In the first two articles we laid the groundwork for the various methods by discussing how we wrap the data that we want to pass.  The third article demonstrated the use of Max’s symbol binding as a means of passing custom data between objects.  This article will show an example of passing pointers directly between objects without using the symbol table.  In this series:

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Audio Graph)
  5. Hash-based reference system (similar to Jitter)

A Peer Object System

Jamoma Audio Graph for Max is implemented as what might be called a Peer Object System.  By this we mean that for every object that a user creates and manipulates in a Max patcher, there is a matching object that exists in a parallel system. As detailed in Designing an Audio Graph, a Jamoma Audio Graph object has inlets and outlets and maintains connections to other objects, creating a graph for processing audio through the objects.  The implementation of Jamoma Audio Graph for Max then has the task of creating and destroying these objects, sending them messages, and making the connections between them.  Once the objects are connected, Jamoma Audio Graph will take care of itself.   The end result is that no audio processing actually happens in the Max objects for Jamoma Audio Graph — instead the Max objects are a thin façade that helps to set up the relationships between the objects as they exist in something akin to a parallel universe.

A Patcher

A Jamoma Multicore patcher in Max

For context, let’s take a look at a Max patcher using Jamoma Audio Graph. In this patcher we have 4 Jamoma Audio Graph objects, identified by the ≈ symbol at the tail of the object name.  Each of these Max objects has a peer Audio Graph object internal to itself.  Each Audio Graph object in turn contains a Jamoma DSP object that performs the actual signal processing. For example, the jcom.overdrive≈ object contains a pointer to a Jamoma Audio Graph object that contains an instance of the Jamoma DSP overdrive class.  The attributes of the overdrive class, such as bypass, mute, and drive, are then exposed as Max attributes so that they can be set in the patcher. Remember that each connection may carry N channels of audio.  The jcom.oscil≈ is, in this case, producing a stereo signal which is then propagated through the processing graph down to the jcom.dac≈ object.

Configuring the Graph

The exciting work doesn’t begin until the start message is sent to the jcom.dac≈ object.  As with all Jamoma Audio Graph externals, the jcom.dac≈ Max external has a peer object.  In this case the peer object that it wraps is the multicore.output object.  This is the same multicore.output object that is shown in the Ruby examples in the Designing an Audio Graph article. When the start message is sent, the jcom.dac≈ object performs the following sequence:

  1. Send a multicore.reset message to all objects in the patcher.  This message sends a reset message to the peer objects underneath, which tells them to forget all of their previous connections.
  2. Send a multicore.setup message to all objects in the patcher.  This message tells the objects to try and connect to any object below it in the patcher.
  3. Tell the audio driver to start running.  When it is running it will periodically request blocks of samples from us, which in turn means that we will ask the other objects in the graph to process.

The processing happens completely within the Jamoma Multicore objects, thus not involving the Max objects at all.  It is the setup of the network of objects in the graph (steps 1 and 2) that involves our passing of custom data types in Max.

Diving into the code

For a full source listing of the jcom.dac≈ object, you can find the code in the Jamoma Audio Graph source code repository.  We’ll extract the important parts from that code below.  Let’s start with the method that is executed when the start message is sent:

TTErr DacStart(DacPtr self)
{
	MaxErr			err;
	ObjectPtr		patcher = NULL;
	long			vectorSize;
	long			result = 0;
	TTAudioGraphInitData	initData;

	self->multicoreObject->mUnitGenerator->getAttributeValue(TT("vectorSize"), vectorSize);

	err = object_obex_lookup(self, gensym("#P"), &patcher);
	object_method(patcher, gensym("iterate"), (method)DacIterateResetCallback, self, PI_DEEP, &result);
	object_method(patcher, gensym("iterate"), (method)DacIterateSetupCallback, self, PI_DEEP, &result);

	initData.vectorSize = vectorSize;
	self->multicoreObject->init(initData);
	return self->multicoreObject->mUnitGenerator->sendMessage(TT("start"));
}

As previously discussed, the last thing we do is send a start message to our peer object, the multicore.output, so that the audio driver will start pulling audio vectors from us. Prior to that we iterate the Max patcher recursively (so the messages go to subpatchers too) to send the multicore.reset and multicore.setup messages.   To do this, we send the iterate message to the patcher and pass it a pointer to a method we define.  Those two methods are defined as follows.

void DacIterateResetCallback(DacPtr self, ObjectPtr obj)
{
	TTUInt32	vectorSize;
	method		multicoreResetMethod = zgetfn(obj, gensym("multicore.reset"));

	if (multicoreResetMethod) {
		self->multicoreObject->mUnitGenerator->getAttributeValue(TT("vectorSize"), vectorSize);
		multicoreResetMethod(obj, vectorSize);
	}
}

void DacIterateSetupCallback(DacPtr self, ObjectPtr obj)
{
	method multicoreSetupMethod = zgetfn(obj, gensym("multicore.setup"));

	if (multicoreSetupMethod)
		multicoreSetupMethod(obj);
}

These functions are called on every object in the patcher.  If we start with the last function, we can see that we first call zgetfn() on the object, obj, which is passed to us.  If that object possesses a multicore.setup method then we will receive a pointer to that method.  Otherwise we receive NULL.  If that method exists then we call it. The multicore.reset method works the same way.  The only difference is that the method takes an additional argument — the vector size at which the jcom.dac≈ is processing.

The Other End

At the other end of this calling sequence are the remaining objects in the patcher.  The full jcom.oscil≈ source code will show how this Max object is implemented.  In brief, we have two message bindings in the main function:

	class_addmethod(c, (method)OscilReset, "multicore.reset",	A_CANT, 0);
	class_addmethod(c, (method)OscilSetup, "multicore.setup",	A_CANT,	0);

These two methods respond to the messages sent by the jcom.dac≈ object.  They both have an A_CANT argument signature, which is how you define messages in Max that use function prototypes different from the standard method prototypes.  These messages can’t be called directly by the user, and they are not listed in the object assistance, but we can send them from other parts of Max such as our jcom.dac≈ object.  The reset message (for forgetting about all previous connections) is simply passed on to the oscillator’s Multicore peer object:

TTErr OscilReset(OscilPtr self)
{
	return self->multicoreObject->reset();
}

The setup method, as we discussed, tells our object that we need to try and make a connection to any object below us in the patcher. To do this we wrap our peer Multicore object’s pointer up into a Max atom.  That, together with the outlet number (zero), are passed as arguments to the multicore.connect message which is sent out our outlet.

TTErr OscilSetup(OscilPtr self)
{
	Atom a[2];

	atom_setobj(a+0, ObjectPtr(self->multicoreObject));
	atom_setlong(a+1, 0);
	outlet_anything(self->multicoreOutlet, gensym("multicore.connect"), 2, a);
	return kTTErrNone;
}

One More Time…

That took care of the jcom.oscil≈ object.  Once it sends the multicore.connect message out its outlet, its work is done.  But what happens with that message when it is received?

In our example it is going to a jcom.overdrive≈ object.  The source code for jcom.overdrive≈ isn’t going to be very helpful though.  It uses a magic class-wrapper that wraps any Jamoma DSP object as a Multicore object using a single line of code.  That’s really convenient for coding, but not for seeing how all of the parts communicate.  So for our discussion, we will look at the jcom.dcblocker≈ source code instead — beginning with the main() function.

	class_addmethod(c, (method)DCBlockerReset,	"multicore.reset",	A_CANT, 0);
	class_addmethod(c, (method)DCBlockerSetup,	"multicore.setup",	A_CANT, 0);
	class_addmethod(c, (method)DCBlockerConnect,	"multicore.connect",	A_OBJ, A_LONG, 0);

You should recognize the multicore.reset and multicore.setup messages.  Those are exactly the same as they were for our oscillator.  We now also have a multicore.connect message.  The oscillator was generating a signal but has no signal inputs, so it had no need for a multicore.connect message.  Any object that requires an input, however, will require this message binding.  How is that method implemented?

TTErr DCBlockerConnect(DCBlockerPtr self, TTMulticoreObjectPtr audioSourceObject, long sourceOutletNumber)
{
	return self->multicoreObject->connect(audioSourceObject, sourceOutletNumber);
}

We simply wrap a call to our peer object’s connect method, sending the audioSourceObject (which is the peer object that the jcom.oscil≈ object sent us), and the outlet number from which that object was sent.  If you compare this to the connect message from the Ruby example in Designing an Audio Graph, it may illuminate the process.

Some Final Details

The example code that we’ve seen from Jamoma Audio Graph demonstrates the passing of custom data (pointers to C++ objects) from one object to the next through the multicore.connect message. Because we are sending a custom data type that not all inlets of all objects will understand, it would be nice if we could protect users from hooking up the objects in a way that will not function.  For this task, Max makes it possible to give outlets type information.  When the type of an outlet is specified, a user will not be able to connect the patch cord to any inlet that doesn’t accept the specified message. To get this functionality, in DCBlockerNew(), we create our outlet like this:

	self->multicoreOutlet = outlet_new(self, "multicore.connect");

So instead of the customary NULL for the argument to outlet_new(), we specify that this outlet will be sending only multicore.connect messages.

Surfacing for Air

Jamoma Audio Graph provides a fairly intense example of passing custom data types in Max.  However, it presents not just the basics of how to pass a pointer, but also a context for why you might want to pass a custom type, and a real-world example to show what you can do.  I think that objective has been accomplished.

Designing an Audio Graph

In previous articles about the Jamoma Platform and the Jamoma DSP Library, there have been references to Jamoma Audio Graph (also previously known as Jamoma Multicore).  Up to this point, Jamoma Audio Graph has not been significantly documented or written about.  The authoritative information has been an Electrotap blog post showing the initial prototype in 2008.

At a workshop in Albi in 2009 we attempted to further expand Jamoma Audio Graph — and we failed.  The architecture was not able to handle N multichannel inputs and M multichannel outputs.  So we had to redesign a major portion of the inner-workings.  Get out your pipe wrench; it’s time to take a look at some plumbing…

What Is Jamoma Audio Graph ?

Let’s back up for a moment to get the big picture.  The Jamoma Platform is essentially a layered architecture implementing various processes for interactive art, research, music, etc.  At the lowest level, the Jamoma Foundation delivers basic components for creating objects, passing values, storing values in lookup-tables, etc.  The Jamoma DSP library then extends the Foundation classes and provides a set of pre-built objects for audio signal processing.

Jamoma Audio Graph then gives us the ability to create Jamoma DSP objects and combine them into a graph.  In other words, we can connect the objects together like you might connect modules together on a Moog synthesizer.

A Moog Modular patch. Photo: Maschinenraum

Unlike the Moog synthesizers of old, however, we can do a few new tricks.  Instead of sending a single channel of audio through a connection, we can send any number of channels through a connection.  While Jamoma Audio Graph does not currently implement any particular features for parallel processing on multiple cores/processors, the design of the system is ideal for such parallelization in the future.

The Audio Graph In Action

At the time of this writing, Jamoma Audio Graph has bridges to make it available in the Max and Ruby environments.  Most of the work has also been done to make it available in Pd (though if you are really interested in this then let us know so we can put you to work!).

In Ruby, you can code scripts that are executed in a sequence.  This provides a static interface to Jamoma Audio Graph even though all of the synthesis and processing is typically happening in real-time.  Alternatively, the irb environment allows you to type and execute commands interactively.  Jamoma Audio Graph, together with irb, then functions much like the ChucK environment for live coding performance.

Example

If you’ve been jonesin’ for an Atari/Amiga/Commodore fix then this might be your perfect example of Jamoma Audio Graph in Ruby:

# This is the standard require for the Jamoma Platform's Ruby bindings
require 'TTRuby'

# Create a couple of objects:
dac = TTAudio.new "multicore.output"
osc = TTAudio.new "wavetable"

# connect the oscillator to the dac
dac.connect_audio osc

# turn on the dac
dac.send "start"

# play a little tune...
osc.set "frequency", 220.0
sleep 1.0
osc.set "frequency", 440.0
sleep 1.0
osc.set "frequency", 330.0
sleep 0.5
osc.set "frequency", 220.0
sleep 2.0

# all done
dac.send "stop"

It’s a pretty cheesy example, but it should give you a quick taste.  If you want a flashback to the kinds of music you could make with MS-DOS, be sure you set the oscillator to use a square waveform.

After creating a couple of objects, you connect two objects by passing the source object to the destination object using a connect message.  If you provide no further arguments, then the connection is made between the first outlet of the source object and the first inlet of the destination object.  The inlets and outlets are numbered from zero, so the connect message in our example could also have been written as

dac.connect osc, 0, 0

The sleep commands are standard Ruby.  They tell Ruby to pause execution for the specified number of seconds.  Everything else is performed with the basic Jamoma Ruby bindings.  These provide the send method for sending messages and the set method for setting attribute values.

If you want to know the messages or attributes that an object possesses, you can use the messages? or attributes? methods.  This is particularly useful when coding on the fly in irb.  In the following example, I requested the list of attributes for the oscillator in the previous example:

>> osc.attributes?
=> ["gain", "mode", "size", "processInPlace", "maxNumChannels", "frequency", "mute", "interpolation", "sr", "bypass"]

How It Operates

If you create a visual data-flow diagram of the objects in a graph, like you would see in Max or Pure Data, then you get a good sense of how audio starts at the top and works its way through various filters until it gets to the bottom.  The same is true for a Jamoma Audio Graph.  However, what is happening under the surface is exactly the opposite.

Pull I/O Model

Multicore Graph Flow

The flow of a Jamoma Audio Graph.

Jamoma Audio Graph is based on a “Pull” I/O Model.  Some other examples of audio graph solutions using a similar model include ChucK and Apple’s AUGraph.  In this model a destination, sink, or terminal node object sits at the bottom of any given graph — and this is the object driving the whole operation.  In Max, on the other hand, messages (e.g. a ‘bang’ from a metro) begin at the top of the graph and push down through the objects in the chain.

The image to the left visualizes the operation of the audio graph.  Let’s assume that the destination object is an interface to your computer’s DAC.  The DAC will request blocks of samples (vectors) every so often as it needs them.  To keep it simple, we’ll say that we are processing at a sample rate of 44.1 kHz with a block size of 512 samples.  In this case, roughly every 11.6 milliseconds the DAC will tell our destination object that it needs a block of samples and the process begins.

The process flows through the light blue lines.  The destination asks the limiter for a block of samples, which then asks the overdrive for a block of samples, which then asks both the source and the multitap delay for samples, and then the multitap delay asks the source for a block of samples.  To summarize: each object receives a request for a block of samples, and in response it needs to produce that block of sample values, possibly pulling blocks of samples from additional objects in the process.
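The pull sequence described above can be sketched in plain C.  Everything here is hypothetical — the t_node struct, the node_pull() function, and the toy processing routines are stand-ins for the actual Jamoma classes, not their real API:

```c
#include <stddef.h>

#define BLOCK_SIZE 4

typedef struct _node {
	struct _node *source;	/* the upstream object we pull from (NULL at the top) */
	void (*process)(struct _node *self, float *out, size_t n);
} t_node;

/* Pull a block: first ask upstream for its block, then apply our own processing. */
void node_pull(t_node *self, float *out, size_t n)
{
	if (self->source)
		node_pull(self->source, out, n);	/* the request recurses toward the top of the graph */
	self->process(self, out, n);			/* samples flow back down through us */
}

/* A toy source that fills the block with a constant signal... */
void source_process(t_node *self, float *out, size_t n)
{
	for (size_t i = 0; i < n; i++)
		out[i] = 1.0f;
}

/* ...and a toy gain stage that attenuates whatever it pulled. */
void gain_process(t_node *self, float *out, size_t n)
{
	for (size_t i = 0; i < n; i++)
		out[i] *= 0.5f;
}
```

The destination would call node_pull() on the bottom node each time the DAC wants a block; the request propagates up the chain and the processed samples come back down.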

One Object At A Time

To understand in finer detail what happens in each object, the graphic below zooms in on a single instance from the graphic above.  Here we can see that we have the actual unit generator, which is a Jamoma DSP object, and then a host of other objects that work together to make up the interface for the audio graph.

Anatomy of a Multicore Object

Jamoma Audio Graph class structure

The text in the graphic explains each of the classes contained in a Jamoma Audio Graph object.  Implied in both figures is the ability to handle “fanning” connections, where many inlets are connected to an outlet, or an inlet is connected to many outlets.

In essence, the outlets are only buffers storing samples produced by the unit generator.  Each time a block is processed the unit generator is invoked only once.  Subsequent requests for the object’s samples then simply access the samples already stored in the outlet buffers.

As explained in the graphic, the inlets have more work to do, as they need to sum the signals that are connected.  And remember, each connection can have zero or more channels!
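The two ideas above — outlet buffers that are computed only once per block, and inlets that sum every connection feeding them — can be sketched in plain C.  This is a hypothetical illustration (the t_outlet struct, outlet_pull(), and inlet_sum() are invented names, and a single summed channel stands in for the real N-channel signals):

```c
#include <stddef.h>

#define BLOCK 4

typedef struct _outlet {
	float	buffer[BLOCK];	/* samples produced by the unit generator */
	long	last_block;		/* id of the block we last computed */
} t_outlet;

/* Invoke the unit generator only once per block id; later callers reuse the buffer. */
float *outlet_pull(t_outlet *o, long block_id)
{
	if (o->last_block != block_id) {
		for (size_t i = 0; i < BLOCK; i++)
			o->buffer[i] = 1.0f;	/* stand-in for the actual unit generator */
		o->last_block = block_id;
	}
	return o->buffer;
}

/* An inlet sums the signals of all the outlets connected to it. */
void inlet_sum(t_outlet **sources, size_t count, long block_id, float *out)
{
	for (size_t i = 0; i < BLOCK; i++)
		out[i] = 0.0f;
	for (size_t c = 0; c < count; c++) {
		float *in = outlet_pull(sources[c], block_id);
		for (size_t i = 0; i < BLOCK; i++)
			out[i] += in[i];
	}
}
```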

Benefits

The most obvious benefit is the ability to easily handle multiple channels in a single connection.  So imagine that you create a Max patcher for mono operation.  It can then function in stereo or 8-channel or 32-channel operation without a single modification.

But there’s a lot more than that here.  The number of channels is dynamic and can change at any time.  One place this is valuable is in ambisonic encoding and decoding where the order of the encoding can dramatically alter the number of channels required for the encoded signal.  If you want to try changing the ambisonic order on-the-fly, which changes the number of channels passed, you can.

Likewise, the vector size can be altered dynamically on a per-signal basis.  The benefit here may not be immediately obvious, but for granular synthesis, spectral work, and analysis based on the wavelength of an audio signal (e.g. the kinds of things in IRCAM’s Gabor) this can be a huge win.

Writing the objects is also very simple.  If you write a Jamoma DSP object, then all you have to do to make it available in Jamoma Audio Graph is…

Nothing!

That’s right.  In Ruby, for example, all Jamoma DSP classes are made available with no extra work.  If you want to make a Max external for a particular object then you can use a class wrapper (1 line of code) to create the Max external.

Interested in joining the fun?  Come find us!

Custom Data-Types in Max Part 3: Binding to Symbols

When people design systems in Max that are composed of multiple objects that share data, they have a problem: how do you share the data between objects? The coll object, for example, can share its data among multiple coll objects with the same name.  The buffer~ object can also do this, and other objects like play~ and poke~ can also access this data.  These objects share their data by binding themselves to symbols.

This is the third article in a series about working with custom data types in Max.  In the first two articles we laid the groundwork for the various methods by discussing how we wrap the data that we want to pass.  The next several articles will be focusing on actually passing the custom data between objects in various ways.  In this series:

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Multicore)
  5. Hash-based reference system (similar to Jitter)

The Symbol Table

Before we can talk about binding objects to symbols, we should review what a symbol is and how Max’s symbol table works. First, let’s consider the definition of a t_symbol from the Max API:

typedef struct _symbol {
    char      *s_name;
    t_object  *s_thing;
} t_symbol;

So the t_symbol has two members: a pointer to a standard C-string and a pointer to a Max object instance.  We never actually create, free, or otherwise manage t_symbols directly.  This is a function of the Max kernel.  What we do instead is call the gensym() function and Max will give us a pointer to a t_symbol that is resident in memory, like so:

t_symbol *s;

s = gensym("ribbit");

What is it that gensym() does exactly?  I’m glad you asked…  The gensym() function looks in a table maintained by Max to see if this symbol exists in that table.  If it does already exist, then it returns a pointer to that t_symbol.  If it does not already exist, then gensym() creates the symbol, adds it to the table, and returns the pointer.  In essence, it is a hash table that maps C-strings to t_symbol struct instances.
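To make the idea concrete, here is a minimal sketch of what an interning function like gensym() does.  This is not the actual Max implementation (toy_gensym() is an invented name, and a linked list stands in for Max’s real hash table); it only shows the essential behavior — the same string always yields the same pointer:

```c
#include <stdlib.h>
#include <string.h>

typedef struct _symbol {
	char           *s_name;
	void           *s_thing;
	struct _symbol *s_next;	/* our toy table is a linked list, not a hash table */
} t_symbol;

static t_symbol *symbol_table = NULL;

/* Look the string up; create and insert it only if it is not already there. */
t_symbol *toy_gensym(const char *name)
{
	t_symbol *s;

	for (s = symbol_table; s; s = s->s_next)
		if (strcmp(s->s_name, name) == 0)
			return s;			/* already interned: return the existing pointer */

	s = malloc(sizeof(t_symbol));
	s->s_name = malloc(strlen(name) + 1);
	strcpy(s->s_name, name);
	s->s_thing = NULL;
	s->s_next = symbol_table;	/* prepend to the table */
	symbol_table = s;
	return s;
}
```

Because the table owns the storage, two calls with equal strings return the identical pointer — which is exactly why comparing two symbols reduces to a single pointer comparison instead of a character-by-character strcmp().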

As a side note, one of the really fantastic improvements introduced in Max 5 is dramatically faster performance from the gensym() function and the symbol table.  It’s not a sexy feature you will see on marketing materials, but it is one of the significant under-the-hood features that make Max 5 such a big upgrade.

As seasoned developers with the Max or Pd APIs will know, this makes it extremely fast to compare textual tidbits.  If you simply try to match strings with strcmp(), then each character of the two strings you are comparing will need to be evaluated to see if they match.  This is not a fast process, and Max is trying to do things in real time with a huge number of textual messages being passed between objects.  Using the symbol table, you can simply compare two t_symbol pointers for equality.  One equality check and you are done.

The symbol table is persistent throughout Max’s life cycle, so every symbol gensym()’d into existence will be present in the symbol table until Max quits.  This has the benefit that you can cache t_symbol pointers for future comparisons without worrying about a pointer’s future validity.

There’s s_thing You Need to Know

So we have seen that Max maintains a table of t_symbols, and that we can get pointers to t_symbols in the table by calling the gensym() function.  Furthermore, we have seen that this is a handy and fast way to deal with strings that we will be re-using and comparing frequently.  That string is the s_name member of the t_symbol.  Great!

Now let’s think about the problem we are trying to solve.  In the first part of this series we established that we want to have a custom data structure, which we called a ‘frog’.  In the second part of this series we implemented that custom data structure as a boxless class, which is to say it is a Max object.  And now we need a way to access our object and share it between other objects.

You are probably looking at the s_thing member of the t_symbol and thinking, “I’ve got it!”  Well, maybe.  Let’s imagine that we simply charge ahead and start manipulating the s_thing member of our t_symbol.  If we did, our code might look like this:

t_symbol *s;
t_object *o;

s = gensym("ribbit");
o = object_new_typed(_sym_nobox, gensym("frog"), 0, NULL);
s->s_thing = o;

Now, in some other code in some other object, anywhere in Max, you could have code that looks like this:

t_symbol *s;
t_object *o;

s = gensym("ribbit");
o = s->s_thing;

// o is now a pointer to an instance of our frog
// which is created in another object

Looks good, right? That’s what we want. Except that we’ve made a lot of assumptions:

  1. We aren’t checking the s->s_thing before we assign it our frog object instance.  What if it already has a value?  Remember that the symbol table is global to all of Max.  If there is a buffer~, or a coll, or a table, or a detonate object, (etc.) bound to the name “ribbit” then we just broke something.
  2. In the second example, where we assign the s_thing to the o variable, we don’t check that the s_thing actually is anything.  It could be NULL.  It could be an object other than the frog object that we think it is.
  3. What happens if we assign the pointer to o in the second example and then the object is freed immediately afterwards in another thread before we actually start dereferencing our frog’s member data?  This thread-safety issue is not academic – events might be triggered by the scheduler in another thread or by the audio thread.

So clearly we need to do more.

Doing More

Some basic sanity checks are in order, so let’s see what the Max API has to offer us:

  1. First, we should check if the s_thing is NULL.  Most symbols in Max will probably have a NULL s_thing, because most symbols won’t have objects bound to them.
  2. If it is something, there is no guarantee that the pointer is pointing to a valid object.  You can use the NOGOOD macro defined in ext_mess.h to find out.  If you pass a pointer to NOGOOD then it will return true if the pointer is, um, no good.  Otherwise it returns false – in which case you are safe.
  3. If you want to be safe in the event that you have multiple objects accessing your object, then you may want to incorporate some sort of reference counting or locking of your object.  This will most likely involve adding a member to your struct which is zero when nothing is accessing your object (in our case the frog), and non-zero when something is accessing it.  You can use ATOMIC_INCREMENT and ATOMIC_DECREMENT (defined in ext_atomic.h) to modify that member in a thread-safe manner.
  4. Finally, there is the “globalsymbol” approach, demonstrated in an article about safely accessing buffer~ data that appeared here a few weeks ago.
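Putting the first and third of those checks together, a hypothetical acquire/release pair might look like the following sketch.  This is generic C11, not Max SDK code: stdatomic.h stands in for Max’s ATOMIC_INCREMENT/ATOMIC_DECREMENT, the NOGOOD test is omitted (it needs the Max kernel), and frog_acquire()/frog_release() are invented names:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct _frog {
	/* in a real Max object, a t_object ob member would come first */
	long        num_flies;
	atomic_int  usage;	/* zero when nothing is accessing the frog */
} t_frog;

/* Try to acquire the frog bound to a symbol-like slot; NULL means "not safe to use". */
t_frog *frog_acquire(void *s_thing)
{
	t_frog *frog = (t_frog *)s_thing;

	if (!frog)			/* nothing bound to this symbol */
		return NULL;
	atomic_fetch_add(&frog->usage, 1);	/* mark the frog in-use before touching its data */
	return frog;
}

void frog_release(t_frog *frog)
{
	if (frog)
		atomic_fetch_sub(&frog->usage, 1);
}
```

The owning object would then refuse to free the frog (or defer freeing it) while usage is non-zero, which closes the race described in point 3 of the previous list.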

Alternative Approaches

There are some alternative approaches that don’t involve binding to symbols.  For example, you could have a class that instead implements a hash table that maps symbols to objects.  This would allow you to have named objects in a system that does not need to concern itself with conflicts in the global namespace.  This is common practice in the Jamoma Modular codebase.

The next article in this series will look at a very different approach where the custom data is passed through a chain of objects using inlets and outlets. Stay tuned…

Custom Data-Types in Max Part 2: Nobox Classes

In this series I am offering a few answers to the question “what’s the best way to have multiple Max objects referring to a custom data structure?”  Another variation on that question is “I want to define a class that will never become a full-blown external for instantiation in a Max patcher, but will be instantiated invisibly as a (possibly singleton) object that can serve some functions for other objects.”  In essence, the answer to both of these questions begins with the creation of ‘nobox’ objects.

This is the second of a multi-part series. Over the next several weeks I will be writing about several different approaches to passing custom data types in Max, and I’ll be using some real-world examples to demonstrate how and why these various strategies are effective.

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Multicore)
  5. Hash-based reference system (similar to Jitter)

Boxless Objects

When people use Max they typically think about objects, created in little ‘boxes’ in a patcher document.  These boxes are then connected with patch cords.

In the first part of this series I introduced a new data type called a ‘frog’.

The frog class could be defined as a C++ object, a C struct, or some other way. We will define our custom ‘frog’ type as a Max class.  In Max there are two common ways to define a class.  The first is a “box” class, which is to say that it is an object that can be instantiated in a box that is in a patcher.  Most objects are box classes.

The second way is to create a “nobox” class.  A nobox is a class that cannot be created directly in a Max patcher. Instead this is a class that exists solely under-the-hood to be used by other classes, or by Max itself. We will create our ‘frog’ data type as a ‘nobox’ class.

One example of a nobox class that is defined internally to Max is the t_atomarray in the Max API.  Let’s consider its definition from ext_atomarray.h:

typedef struct _atomarray {
	t_object	ob;
	long		ac;
	t_atom		*av;
} t_atomarray;

The atomarray is simply an object that manages an array of atoms.  However, it is a proper object: you can instantiate it by calling object_new() and free it by calling object_free().  It has the typical Max methods, which can be invoked by sending messages to the object using object_method(), object_method_typed(), and other similar functions.

If you poke around the Max SDK you will probably notice a number of these nobox classes.  They include t_linklist, t_hashtab, t_dictionary, t_symobject, etc.  Even the Max object, the one that you send cryptic messages to using message boxes that say things like “;max sortpatcherdictonsave 1”, is a nobox object.

Defining a Frog

If we define our frog as a nobox class, we may have a struct like this:

typedef struct _frog {
	t_object	ob;
	long		num_flies;
 	t_atom		*flies;
} t_frog;

This is basically the same thing as an atomarray, but we will make it ourselves from scratch.  And we can define some more whimsical names for its methods.

Just like any other class, we need to cache our class definition in a static or global toward the top of our file.  So we can simply do that like usual:

t_class *s_frog_class = NULL;

Then we can get to the class definition, which once again will look pretty familiar to anyone who has written a few objects using the Max 5 SDK.

int main(void)
{
	common_symbols_init();

	s_frog_class = class_new("frog",
				(method)frog_new,
				(method)frog_free,
				sizeof(t_frog),
				(method)NULL,
				A_GIMME,
				0L);

	class_addmethod(s_frog_class, (method)frog_getflies, 	"getflies", A_CANT, 0);
	class_addmethod(s_frog_class, (method)frog_appendfly, 	"appendfly", A_CANT, 0);
	class_addmethod(s_frog_class, (method)frog_getnumflies,	"getnumflies", 0);
	class_addmethod(s_frog_class, (method)frog_clear,	"clear", 0);

	class_register(_sym_nobox, s_frog_class);
	return 0;
}

Essentially:

  1. we initialize commonsyms — this means we can refer to a whole slew of pre-defined symbols without having to make computationally expensive gensym() calls.  For example, we can use _sym_nobox instead of gensym("nobox").
  2. we define the class itself, which includes providing the instance create and destroy methods, the size of the object’s data, and what kind of arguments the creation method expects.
  3. we add some message bindings so that we can call the methods using object_method() and friends.  One aspect of these messages is that we gave a couple of them A_CANT types.  This is uncommon for normal box classes, but quite common for nobox classes.  It essentially indicates that Max “can’t” typecheck the arguments.  This allows us to bind the message to a method with virtually any prototype we want.
  4. we register the class as a nobox object

Take special note of that last step.  Instead of registering the class in the “box” namespace, we register it in the “nobox” namespace.

We could also define attributes for our class, but for the sake of simplicity we are just using messages in this example.

The Froggy Lifecycle

When we go to use our frog class we will expect to be able to do the following:

t_object *myfroggy;
myfroggy = object_new_typed(_sym_nobox, gensym("frog"), 0, NULL);

// do a bunch of stuff
// snap up some flies
// sit around the pond and talk about how the mud was in the good ole days...

object_free(myfroggy);

Notice that once again we have to specify the correct namespace for the object, _sym_nobox, in our call to object_new_typed().  We used object_new_typed() because we defined the class to take arguments in the A_GIMME form.  If we used object_new() instead of object_new_typed(), the arguments passed to our instance-creation routine would point to bogus memory (and we definitely do not want that – unless you are a crash-loving masochist).

Speaking of the object creation routine, it can be pretty simple:

t_frog* frog_new(t_symbol *name, long argc, t_atom *argv)
{
	t_frog	*x;

	x = (t_frog*)object_alloc(s_frog_class);
	if (x) {
		// in Max 5 our whole struct is zeroed by object_alloc()
		// ... so we don't need to do that manually

		// handle attribute arguments.
		// we don't have any attributes now, but we might add some later...
		attr_args_process(x, argc, argv);
	}
	return x;
}

In addition to the things noted in the method’s code, I’ll point out the obvious fact that we don’t need to worry about creating inlets or outlets — our object will never be visible in a box in a patcher, and thus will never have patch cords connected to it.

Our free method is also quite simple.  We just call the clear method.

void frog_free(t_frog *x)
{
	frog_clear(x);
}

Sending Messages to a Frog

At the beginning of the previous section we created an instance of the frog object with object_new_typed().  We probably didn’t do this just to free the object again.  We want to send some messages to get our frog to do something – like collect flies.

Let’s define the four methods we specified above:

void frog_getflies(t_frog *x, long *numflies, t_atom **flies)
{
	if (numflies && flies) {
		*numflies = x->num_flies;
		*flies = x->flies;
	}
}

void frog_appendfly(t_frog *x, t_atom *newfly)
{
	if (x->num_flies == 0) {
		x->num_flies = 1;
		x->flies = (t_atom*)sysmem_newptr(x->num_flies * sizeof(t_atom));
	}
	else {
		x->num_flies++;
		x->flies = (t_atom*)sysmem_resizeptr(x->flies, x->num_flies * sizeof(t_atom));
	}
	// sysmem_copyptr() copies the whole atom, including its a_type
	sysmem_copyptr(newfly, x->flies + (x->num_flies - 1), sizeof(t_atom));
}

long frog_getnumflies(t_frog *x)
{
	return x->num_flies;
}

void frog_clear(t_frog *x)
{
	if (x->num_flies && x->flies) {
		sysmem_freeptr(x->flies);
		x->flies = NULL;
		x->num_flies = 0;
	}
}
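Stripped of the Max SDK allocators, frog_appendfly is the classic grow-and-copy array pattern.  Here is a minimal plain-C sketch of the same logic, using realloc in place of sysmem_newptr/sysmem_resizeptr and a made-up fake_atom element type (nothing here is from the SDK):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical stand-in for t_atom: any fixed-size element works */
typedef struct { int type; double value; } fake_atom;

typedef struct {
    long       num_flies;
    fake_atom *flies;
} fake_frog;

/* same grow-and-copy pattern as frog_appendfly; realloc(NULL, n)
   behaves like malloc(n), so one call covers both branches */
void fake_appendfly(fake_frog *x, const fake_atom *newfly)
{
    fake_atom *grown = (fake_atom *)realloc(x->flies,
                                            (x->num_flies + 1) * sizeof(fake_atom));
    if (!grown)
        return;  /* allocation failed; leave the array untouched */
    x->flies = grown;
    memcpy(x->flies + x->num_flies, newfly, sizeof(fake_atom));
    x->num_flies++;
}
```

Using realloc on a NULL pointer also removes the need for the separate first-append branch in the original.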

None of the messages are defined with argument types of A_GIMME, so we will use object_method() to send messages, and not object_method_typed() or its immediate descendants.  Usage of these methods might then look like this:

t_atom	mosquito;
t_atom	bee;
long	n;
long	ac = 0;
t_atom	*av = NULL;

atom_setsym(&mosquito, gensym("eeeeyeyeeyyeyyyyeeyyye"));
atom_setsym(&bee, gensym("bzzzzzzzz"));
// here we have the frog snap up the flies using one of its A_CANT methods
object_method(myfroggy, gensym("appendfly"), &mosquito);
object_method(myfroggy, gensym("appendfly"), &bee);

// this call returns a value -- we have to cast it, but that's okay
n = (long)object_method(myfroggy, gensym("getnumflies"));
// another A_CANT method, passing two pointer args
object_method(myfroggy, gensym("getflies"), &ac, &av);

// we're all done and the froggy has a bowel movement
object_method(myfroggy, gensym("clear"));
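The reason the A_CANT methods can take arbitrary prototypes is that object_method() dispatches on the message symbol and simply trusts the caller to know the method's real signature.  A toy sketch of that untyped dispatch, with entirely hypothetical names (this is not the Max SDK's implementation):

```c
#include <assert.h>
#include <string.h>

/* a method is stored as a generic function pointer; the caller must
   know the real prototype and cast the return value, just as with
   Max's A_CANT methods and object_method() */
typedef void *(*generic_method)(void *self);

typedef struct {
    const char     *name;
    generic_method  fn;
} method_entry;

typedef struct { long num_flies; } toy_frog;

static void *toy_getnumflies(void *self)
{
    /* smuggle the count through the void* return, as object_method()
       does; the caller casts it back to long */
    return (void *)((toy_frog *)self)->num_flies;
}

static method_entry toy_methods[] = {
    { "getnumflies", toy_getnumflies },
};

/* look up the message by name and invoke it untyped */
void *toy_object_method(void *self, const char *msg)
{
    for (size_t i = 0; i < sizeof(toy_methods) / sizeof(toy_methods[0]); i++)
        if (strcmp(toy_methods[i].name, msg) == 0)
            return toy_methods[i].fn(self);
    return NULL;
}
```

The real thing is richer (typed argument lists, attribute accessors, and so on), but the message-to-function-pointer lookup is the essential mechanism.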

Wrap Up

So we have a boxless class now.  It isn’t all that different from a regular class, but there are always people asking me about example code that shows how to do this sort of thing.  And this information lays the foundation for the upcoming articles in this series.

If you have any questions, please leave a comment!

Custom Data-Types in Max Part 1: Introduction

The Max API makes it easy to pass a handful of standard data types in Max: ints, floats, symbols, lists of the aforementioned.  But what happens when you want to pass a frog from one object to the next?  A frog is not a standard data type.  Instead it is something made up that we want to send hopping through our Max patch from one green object box to the next.

Where do we start?

Before we can pass the frog from one object to another, we first need to define the frog type.  What is it?  Is it an object (meaning a proper Max class with a t_object as its first member)?  Or is it a naked struct or C++ class?  Or something else entirely?  Are we passing the data by value, or by reference (meaning a pointer)?

That last question may be more difficult than it seems at first glance.  Answering it may help to determine the answers to the other questions.  If we pass by value then we have a certain amount of simplicity, but for anything other than rudimentary types it quickly becomes a very computationally expensive situation.  So the obvious answer here is to pass by pointer, right?  Not so fast…  Consider the following patcher topology:

simple-patcher

If we pass by value from the first number object, then we get the results that are shown.  If we simply pass a pointer to the value (pass by reference) without some sort of management in place then we will get very different results.  The result could be the following:

  1. The address of the data (2) is passed to the [+ 5] object.
  2. 5 is added to 2, the data now has a value of 7 and this new value is passed to the lower-right number box.
  3. The address of the data (which now has the value 7!) is now passed to the [+ 7] object.
  4. 7 is added to 7, the data now has a value of 14(!) and this new value is passed to the lower-left number box.

Indeed.  A subtle problem with real life consequences.  In our example the problem may seem trivial, but when you are operating on more complex structures (e.g. Jitter or FTM) then there needs to be a system in place that allows for the graph to bifurcate without downstream operations corrupting the output of other operations happening ‘in parallel’.
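The failure mode described above is easy to reproduce in plain C: hand both branches the same address instead of their own copies, and the second branch operates on the first branch's result rather than on the original value.

```c
#include <assert.h>

/* pass by reference: both branches mutate the same storage */
void branch_by_reference(int *data, int amount)
{
    *data += amount;
}

/* pass by value: each branch works on its own copy */
int branch_by_value(int data, int amount)
{
    return data + amount;
}
```

With a shared pointer and no copy-on-write or similar management, the [+ 7] branch sees 7 instead of 2 and produces 14; with pass-by-value each branch sees the original 2.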

Series Overview

This introduction to the problem is the first of a multi-part series. Over the next several weeks I will be writing about several different approaches to passing custom data types in Max, and I’ll be using some real-world examples to demonstrate how and why these various strategies are effective.

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Multicore)
  5. Hash-based reference system (similar to Jitter)

Accessing buffer~ Objects in Max5

One thing that has always been a bit tricky, and perhaps a bit under-documented, has been writing good code for accessing the contents of a buffer~ object in Max.  What has made the situation a bit more confusing is that the API has changed slowly over a number of versions of Max to make the system more robust and easier to use.  This is certainly true of Max 5, and the most recent version of the Max 5 Software Developer Kit makes these new facilities available.

I’ll be showing the favored way to access buffer~ objects for Max 5 in the context of a real object: tap.buffer.peak~ from Tap.Tools.  I’ll show how it should be done now, and in some places I’ll show how it was done in the past for reference.

Getting a Pointer

The first thing we need to do is get a pointer to the buffer~ bound to a given name.  If you know that there is a buffer~ object with the name “foo” then you could simply do this:

t_symbol* s = gensym("foo");
t_buffer* b = s->s_thing;

However, there are some problems here.  What if “foo” is the name of a table and not a buffer~?  What if there is a buffer~ named foo in the patcher, but when the patcher is loaded the buffer~ is instantiated after your object?  What if you execute the above code and then the user deletes the buffer~ from their patch?  These are a few of the scenarios that happen regularly.

A new header in Max 5 includes a facility for elegantly handling these scenarios:

#include "ext_globalsymbol.h"

Having included that header, you can now implement a ‘set’ method for your buffer~-accessing object like so:

// Set Buffer Method
void peak_set(t_peak *x, t_symbol *s)
{
	if(s != x->sym){
		x->buf = (t_buffer*)globalsymbol_reference((t_object*)x, s->s_name, "buffer~");
		if(x->sym)
			globalsymbol_dereference((t_object*)x, x->sym->s_name, "buffer~");
		x->sym = s;
		x->changed = true;
	}
}

By calling globalsymbol_reference(), we will bind to the named buffer~ when it gets created or otherwise we will attach to an existing buffer.  And when I say attached, I mean it.  Internally this function calls object_attach() and our object, in this case tap.buffer.peak~, will receive notifications from the buffer~ object.  To respond to these notifications we need to setup a message binding:

class_addmethod(c, (method)peak_notify,		"notify",		A_CANT,	0);

And then we need to implement the notify method:

t_max_err peak_notify(t_peak *x, t_symbol *s, t_symbol *msg, void *sender, void *data)
{
	if (msg == ps_globalsymbol_binding)
		x->buf = (t_buffer*)x->sym->s_thing;
	else if (msg == ps_globalsymbol_unbinding)
		x->buf = NULL;
	else if (msg == ps_buffer_modified)
		x->changed = true;

	return MAX_ERR_NONE;
}

As you may have deduced, the notify method is called any time a buffer~ is bound to the symbol we specified, unbound from the symbol, or any time the contents of the buffer~ are modified.  For example, this is how the waveform~ object in MSP knows to update its display when the buffer~ contents change.

Accessing the Contents

Now that you have a pointer to a buffer~ object (the t_buffer*), you want to access the contents.  Having the pointer to the buffer~ is not enough, because if you simply start reading or writing to the buffer’s b_samples member you will not be guaranteed thread-safety, meaning that all manner of subtle (and sometimes not so subtle) problems may ensue at the most inopportune moment.

In Max 4 you might have used code that looked like the following before and after you accessed a buffer~’s contents:

    saveinuse = b->b_inuse;
    b->b_inuse = true;

    // access buffer contents here

    b->b_inuse = saveinuse;
    object_method((t_object*)b, gensym("dirty"));

The problem is that the above code is not entirely up to the task.  There’s a new sheriff in town, and in Max 5 the above code should be rewritten as:

    ATOMIC_INCREMENT((int32_t*)&b->b_inuse);
    // access buffer contents here
    ATOMIC_DECREMENT((int32_t*)&b->b_inuse);
    object_method((t_object*)b, gensym("dirty"));

This is truly threadsafe. (Note that you only need to call the dirty method on the buffer to tell that it changed if you wrote to the buffer).
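ATOMIC_INCREMENT and ATOMIC_DECREMENT are Max SDK macros, but the pattern they implement is an ordinary atomic use-count guard.  A portable C11 sketch of the same idea, with hypothetical names standing in for the t_buffer fields (this is an illustration, not the SDK's code):

```c
#include <assert.h>
#include <stdatomic.h>

/* minimal stand-in for the fields we touch on t_buffer */
typedef struct {
    atomic_int b_inuse;     /* readers/writers currently inside the buffer */
    float      b_samples[4];
} toy_buffer;

/* bracket any access to b_samples with the atomic use count,
   mirroring the ATOMIC_INCREMENT / ATOMIC_DECREMENT pair above */
float toy_read_first_sample(toy_buffer *b)
{
    atomic_fetch_add(&b->b_inuse, 1);
    float value = b->b_samples[0];
    atomic_fetch_sub(&b->b_inuse, 1);
    return value;
}
```

The increment/decrement pair is lock-free, so it is safe to use from the audio thread, which is exactly why it replaced the old read-modify-restore dance on b_inuse.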

Here is the code from tap.buffer.peak~ that accesses the buffer~’s contents to find the hottest sample in the buffer:

{
	t_buffer	*b = x->buf;		// Our Buffer
	float		*tab;		        // Will point to our buffer's values
	long		i, chan;
	double		current_samp = 0.0;	// current sample value

	ATOMIC_INCREMENT((int32_t*)&b->b_inuse);
	if (!x->buf->b_valid) {
		ATOMIC_DECREMENT((int32_t*)&b->b_inuse);
		return;
	}

	// FIND PEAK VALUE
	tab = b->b_samples;			// point tab to our sample values
	for(chan=0; chan < b->b_nchans; chan++){
		for(i=0; i < b->b_frames; i++){
			// samples are interleaved: sample i of channel chan
			// is at (i * b_nchans) + chan
			if(fabs(tab[(i * b->b_nchans) + chan]) > current_samp){
				current_samp = fabs(tab[(i * b->b_nchans) + chan]);
				x->index = (i * b->b_nchans) + chan;
			}
		}
	}

	ATOMIC_DECREMENT((int32_t*)&b->b_inuse);
}
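Buffer~ samples are stored interleaved: sample i of channel chan lives at (i * b_nchans) + chan.  The same peak search can be exercised outside Max on a plain interleaved array (a hypothetical standalone helper, not from Tap.Tools):

```c
#include <assert.h>
#include <math.h>

/* return the index (into the interleaved array) of the sample with
   the largest absolute value; layout matches buffer~: frame-interleaved,
   sample i of channel c at (i * nchans) + c */
long peak_index(const float *samples, long nframes, long nchans)
{
    long   peak = 0;
    double peak_val = 0.0;

    for (long c = 0; c < nchans; c++) {
        for (long i = 0; i < nframes; i++) {
            long idx = (i * nchans) + c;
            if (fabs(samples[idx]) > peak_val) {
                peak_val = fabs(samples[idx]);
                peak = idx;
            }
        }
    }
    return peak;
}
```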

Reflections on ObjectiveMax

objective-max-screenshot

A couple of years ago I got on an Objective-C kick.  In the process I created ObjectiveMax, an open-source framework for writing Max externals using Objective-C.  The goal was to create the easiest way possible to write objects for Max/MSP with minimal amount of “decoration code”.

My feeling is that it has been under-used by the community as a whole.  Reflecting, I think I know (at least partially) why…

I had intended (and even started) writing all of Tap.Tools 3 using ObjectiveMax.  The problem was that getting it to work on Windows, and then keeping it working on Windows, was beyond onerous.  I eventually decided that my personal sanity could not be tied to the inner workings of the GNUStep project.  In the end only the tap.applescript object is using ObjectiveMax.

A lot of people writing Max objects think about cross-platform compatibility.  Objective-C, and thus ObjectiveMax, do not easily provide this.  So that’s the first problem.

The second problem is that ObjectiveMax was initially licensed in a fairly unfriendly manner.  It took a dual-licensing approach similar to the one used by JUCE: GNU GPL for open source people, and pay money for a closed-source license.  While not an impossible license, there is enough in there to annoy and offend everyone.

There is a third problem too, ultimately caused by the first two: a lack of adoption begets a further lack of adoption.  No one uses it because no one else is using it.

If it’s Broke, Fix it.

I’m convinced that for Mac users this is the easiest way possible to write objects for Max/MSP.  So how should these concerns be addressed?

First, I just changed the licensing of ObjectiveMax to a single new BSD license.  This license was first suggested to me for ObjectiveMax about a year ago by Niklas Saers.  The more I’ve thought about it, the more sense it makes.  This makes the framework free and open-source to everyone for virtually all purposes, including commercial purposes.

Second, I don’t plan on rushing in to making ObjectiveMax work on Windows.  But I’m more than happy to help anyone who does want to get it working.  I’ve been keeping an eye on the Cocotron project for a while now, and I’d be very pleased to see people join the ObjectiveMax GoogleCode project to work on using Cocotron to get a version working for Windows.

And the third problem?  Well, it would be futile to try and control the adoption of the framework.  It’s open-source, and if people think it is worthwhile then they will use it.  Hopefully people will join the project and contribute to it.  One thing that is true is that I have not promoted ObjectiveMax much, and I’ve never taught any seminars using it.  This type of activity would be almost certain to increase the project’s luster and adoption.

… and if You Can’t Beat ‘em …

As previously mentioned, I ended up giving up on using ObjectiveMax for Tap.Tools 3, but solely for the reason of cross-platform compatibility.  A lot of what I learned, and learned to love, about Objective-C made it into the C++ library that Tap.Tools 3 and Jamoma use.

The TTBlue project (a.k.a the Jamoma DSP Library) incorporates reflection, message passing, and other aspects of Objective-C but in a C++ context that functions on both the Mac and Windows.  This project is very active, and the resulting code with which you write objects is getting progressively cleaner.

Or, as a card from Brian Eno’s Oblique Strategies states: “When faced with a choice, do both.”

CreativeSynth Interview

We’re opening a time capsule here.  It is now 2009 and a lot of things have changed — and the excellent CreativeSynth.com is no longer with us.  I obtained permission from Mr. CreativeSynth himself, Darwin Grosse, to re-publish this interview from November 2002.

Back in 2002 (7 years ago!) I was fairly ‘green’ as a coder / designer / artist / entrepreneur / musician and was feeling my way around with Tap.Tools, which was also in a very different place than it is today.  And Jade, which has since evolved into Jamoma, had just been released.


on-longs-peak

Tim Place, creator of the Tap.Tools and Jade development environment, allowed himself to be grilled by your intrepid editor. Since Tim is a bit hesitant to blow his own horn too greatly, I’ll say it – his development tools are great, and Jade is a serious music environment that needs to be examined by anyone doing serious Max/MSP work.

This interview was done via email – thanks to Tim for his willingness to go through the process…

Tim, why don’t you give us a quick overview of your background?

My primary activity is composition. The doctoral diploma I’m working on will say ‘Composition,’ when I’m done with the degree here at the University of Missouri – Kansas City. I think most people who know me though, know that I’m not here to just write traditional pieces for orchestras and chamber groups and whatsuch.

My father is a very creative electrical engineer, which I would characterize as [possessing] values such as creative problem solving, practical innovation, and inventive spirit. They apply to my general approach to things in life, but most significantly to my music. And while there are times that my musical vision can be realized fully within the acoustic world (usually with fairly extended avant-garde techniques), typically I find adding another element to be critical to communicating my musical vision.

How well accepted is your “vision”? Do you find the academic community willing to embrace this perspective?

Well, academia is an odd place. I’m extremely fortunate to be where I am because I do have a great deal of flexibility in how I approach my degree – much more so than at other institutions. Given that however, it can still be an uphill battle at times.

In the U.S. it is generally perceived that the serious composer may incorporate electronics, but the bulk of their work will be with purely acoustic forces. So if I’m hoping to get a composer/composition gig at an academic institution, there is concern that a body of works which mostly involves electronics will reduce my ‘marketability.’ I noted this in some of the attitudes of faculty when I was selecting a school for my doctorate – even though all of my works with electronics involve live performers, most of which are playing orchestral instruments! In Europe there seems to be a much more open-minded attitude about this.

My mentors at UMKC want to see me succeed, and knowing the academic market, try to encourage me to balance my portfolio, etc. So they’ll challenge me at the outset of a project – I think that is good – but then no matter what I choose to do, they fully support me in every imaginable way once I get going on something. Like I said earlier, I’m extremely fortunate to be here.

It sounds like a great environment. Now, you’ve been an active Max/MSP developer – creating tools like the Tap.Tools and the new Jade development environment. How do you maintain the balance between academic work and commercial development?

Ask me again in 6 months. I’m not really sure… The big unknown is Jade. I’ve tried to be thorough with the documentation, but there are probably ways it can improve – which I won’t know about until people tell me about it. Also, since it has just been released I can’t really gauge how many people will be interested enough to buy it. I guess Tap.Tools doesn’t worry me too much because I’ve been supporting that for nearly two years as public alpha and beta versions.

tap-fft-list

The Tap.Tools are a popular set of objects for Max/MSP users. Tell us about their development.

In attempting to bring my interactive music to life there have been a few stages of development. The first is just learning Max/MSP. I more or less had to learn Max/MSP on my own – and it took a good 6 months before I actually made it get through a piece of music. Where I am now (UMKC) we teach Max/MSP as a course, but for most students it is still unreasonable to expect them to be able to grapple with all of the issues involved in creating time-based art with the software. It has been my experience that it still takes another semester for them to really make it fly.

One of the things that I found helped this in my own development was downloading objects others had built to do x, y, or z. I could use it out of the box, but I could open them up and modify them too. This was very helpful, but still there was no accessible (i.e. free or cheap) set of higher level stuff – pitch shifting, compression, reverb, etc. So I ended up basically building all of this stuff for my music. Somewhere along the way I guess I picked up enough C to start building externals to do a few of the things I wanted.

After a couple of years, I finally felt like I could do something with Max – in part because I had built this little arsenal of tools. So after gaining so much by lurking on the Max Listserve and downloading others’ work, I finally had the opportunity to give back and make my objects available.

Well, that makes it sound like it’s a hodge-podge of objects, when in practice it “feels” much more coherent than that. What are the major categories of objects that you provide – and what was the impetus in making them?

tap-jit-motion-bball

[Laughing] I’ve been exposed! The tap.tools really did just start as a hodge-podge of objects. I guess they’ve developed quite a bit since then. One of my chief concerns is working with audio, so there is an emphasis on that. Within that there are ‘high-level’ objects (effects, processors) and ‘low-level’ objects that are typically the building blocks I use to create the high-level processes.

Here is an example. I wanted a reverb to use in a piece. It couldn’t really be a VST plugin because it would be potentially illegal to distribute the plugin with my score and software. So I did some digging and decided that I wanted to combine a few algorithms based on one by J. A. Moorer. The problem was that it needed a comb filter with a low-pass filter in the feedback loop. So I made an external, [tap.comb~] (with some generous help from David Zicarelli), and then made a patch for reverb, tap.verb~, which is built around the external.

Another object is tap.crossfade~, essential for creating a wet/dry mix control. This one I could have done as a patch, but the external is faster and more flexible. The same thing with tap.pan~. When you use over a hundred of these in a project, that speed really adds up.

So beyond the lower level building blocks and the higher-level effects and processors there are objects I built for control purposes. This might be to take an audio signal and generate a control (tap.sift~ or tap.bink~ for example), or to take a video signal using Jitter to do the same (tap.jit.motion+ for example). Some additional objects just help me manage that control stream.

Some objects were actually written for other people. Paul Rudy, a composer here in Kansas City, was working on this piece for Bass Clarinet and MSP and was running into a nightmare of problems trying to manage several dynamic hierarchies of gain structure. So I looked at it and thought it would be much simpler if there was an object to do x, and then build the patch around that. So I created tap.elixir~ to help with gain structure management. It has come in handy several times since…

Lately, licensing has been a bit of a hot-spot in the Max/MSP/Jitter community. What is the license that the tap.tools works under, how did you choose this approach and how do you think this affects its use?

Well I’ve just moved Tap.Tools out of beta and up to 1.0, so I took the opportunity to re-examine the licensing. The licensing of Tap.Tools has always been a perplexing situation for me. I truly desire for the objects to be accessible (meaning cheap and/or free), and to be educational (meaning they are open source and well commented), but at the same time I don’t really want others to go running off with my work and making a fortune on it without my benefit (or reimbursement, depending on how you look at it).

This combines with the fact that developing the Tap.Tools comes at a personal expense. My upgrade of CodeWarrior to keep the Tap.Tools up for OS X will be several hundred dollars – not to mention the time spent making help files (which I obviously don’t [need] for myself), responding to questions that folks have, and just supporting the package and paying for server space, etc.

Tap.Tools are now shareware. I am aware of over 200 people using them pretty regularly. I was hoping that maybe in the first week I could bring in enough to afford the needed Code Warrior upgrade, but only 4 people actually registered in the past two weeks (compared with 162 downloads of it). I figured that most people will take Shareware as freeware, but I still figured I’d have about 10% pay the nominal ($45) fee. Guess I was wrong…

I still think shareware is the way to go with it. Because it is now shareware I felt freer with the license to let people do anything they darn well please with it. They can make a million dollars with the Tap.Tools and that is just fine provided they gave me $45 of it for the license. Some people will be ticked, and morally opposed, etc. Oh well. That’s for their conscience to grapple with, not mine.

A tougher license to make reasonable was the one for Jade. Jade is also a version of shareware (I guess), but if you don’t pay for it there are significant restrictions built into the software – unlike Tap.Tools, which simply uses the honor system. It will be interesting to see the two methods side-by-side.

sp_df1

Can you tell us more about your just-released project – Jade?

Jade is my solution to everything. Okay, so maybe that is a little over-the-top. But seriously, Jade basically bundles together solutions to my most common needs when creating a composition or installation. I can frame Jade by presenting the problems that I think it helps to resolve:

As I said earlier, Max/MSP is hard for a lot of musicians and really requires you to pay your dues before you start doing things with it. This is not helped by some of the idiosyncrasies of the software. I think Max/MSP excels like no other when it comes to building instruments and effects processors. But when it comes to structuring a piece over time, a lot of folks just sort of sit and stare at the computer screen wondering what to do. How do I time my events? How do I automate events? How do I keep track of these hundreds of parameters when I want to change the order things happen in? It is a pretty complex issue.

There is also the issue of reusability of components. Object-oriented programming constantly tries to promise that if you do something once you can just call it and re-use it. But it isn’t that simple, especially in Max. You need to have enough structure that you know how to develop an object so that it can be reused over and over. Max gives you no constraints – but you need some, even if you develop them yourself. After my first couple of pieces/projects with Max I found myself frustrated at how long it would take me to try and take a piece of a previous project and incorporate it into a new project.

Then there is the issue of distribution. If you want to send a Max-built project to a performer (who doesn’t own Max) there are two possibilities: send the project with the Max Runtime or create a standalone app. A common problem though, even among experienced Max users, is that once on location for a performance or installation some of the variables or parameters need some adjustments. This can be really frustrating, especially if the patch wasn’t written to allow all of the variables to be controlled.

jadespace4

These are some of the issues that I’ve dealt with in creating the interactive component for my pieces. I’m obviously biased, but I think Jade deals with them admirably. I know people who haven’t ever used Max, but have spent a few weeks with Jade and created a piece of music using modules that I have pre-packaged with the software. I think that speaks rather loudly, though Jade is the most powerful when used in conjunction with Max to build your own modules for the system.

Jade also deals with perennial Max/MSP issues like saving/loading presets, managing CPU easily and effectively, etc. All of my music runs in Jade now. Because of the structural framework it forces me to build my patches so that they will be re-usable in other projects, which is a big bonus in the long run. I could go on and on, but at some point you will probably fall asleep (if you haven’t already)…

Not at all – can you give us a brief “walk-through” of making a simple Jade-based composition? Sometimes, products like this seem so complex that it is difficult to get a “vision” of using it.

Sure. I think what can initially overwhelm someone is that there is a lot to look at and a bunch of files, etc. It’s like when I introduce a class to Pro Tools or Digital Performer, every last ounce of the screen is filled with buttons and doodads that do something. What I think helps with Jade is understanding the paradigm. I like to use the paradigm of doing a gig with a bunch of hardware boxes.

If you are going to do a gig with hardware boxes, the first thing you do is select the gear you want. ‘I want a reverb, a delay unit, a couple of CD players, a compressor, and a mixer.’ Once you’ve selected the gear (and decided where you want to put it / how you want to stack it) then you have to wire it up. Finally, you will probably want to label the mixer so you remember what is plugged into where.

What I just described is a text file used by Jade called the Configure Script. Like a script used in a play, this script contains instructions on what pieces of gear (in Jade they are called modules) to use and how to hook it up and label it. If you tell Jade to create a new performance setup it will load a set of default scripts with, for example, a VST plugin.

Jade also has two other scripts. One sets all of the knobs and sliders to the correct position when Jade starts up, or when you manually tell it to Initialize. The other script is an event list which can be triggered by other processes in Jade, by shortcut keys, MIDI triggers, video analysis, etc.

If you deal with each chunk/module/unit on the screen one at a time I think the interface becomes a little easier to digest. Also the online help system can answer questions quickly. I tried to create lots of ways to get at information about modules quickly. That means bookmarks in the PDF documentation, getting an HTML page from any module’s menu, etc.

Well, now that you’ve developed a solution for everything – what next on the horizon? Do you see yourself getting more involved with Jitter? And how much work will OS X (and potentially Windows) represent for you?

Hah! Now I’ve got someone else saying I’ve developed a solution for everything – next we take over the world! On a more serious note, I’ve got about 10 pages of lists of ideas for improvements and additions to Jade. Some of this is just making more modules to process audio. Some are ways to automagically generate the scripts.

One of those things is better support for Jitter. You can use Jade to do all sorts of stuff for video just like you can audio. But it isn’t very well documented, and there aren’t very many pre-built modules, or libraries to help with building the modules. So that is the next thing. In a way Jade will probably never be ‘finished’ – there will always be things I (or others) can add. It is stable, and the framework is in place so that I can write music more efficiently. And that was the initial purpose. As an example, the [technical side of an] installation I created for ICMC this past year (with Jesse Allison) took two weeks to create from scratch. For me that is lightning pace – and it is because of Jade.

tim-2002-quandrypeak

Probably before I get to really clean up the Jitter side of Jade I will be working on the port to OS X. I love OS X. Besides all of the stability stuff people always cite, I like that you can really get in and tweak your system and see all of the background processes, etc. The programming tools are all well documented, many are open-source, etc. I think it will be terrific to have Max/MSP/Jitter in OS X.

As far as Windows is concerned, I haven’t made up my mind. I used to be a hard-core Windows guy – hated the Mac, etc. Back in 1998 I bought a PowerBook so that I could run Max, and I’ve totally flipped on the issue. I think back to the problems I have had, and that I have to help others with still, and it just doesn’t seem like Windows will be a great platform for interactive music. ‘Computers get stage-fright too,’ is something I like to say. I guess I’m more comfortable with a Mac (especially OS X) that has stage-fright than a Windows machine that has stage-fright. It seems like it would be nice to get Tap.Tools running for Windows. Right now it would be hard to justify the expense though…

Thanks for spending the time answering these questions. Here’s for some blatant self-promotion – GO!

Thanks for giving me the opportunity! I love your site, and read various columns frequently. I don’t have a lot of blatant self-promotion to do… Other than to say that I’ll have an interactive piece, Dandelions for Alto Sax and computer, on an upcoming Centaur Release in the CDCM series (expected release later this fall).

Thanks Darwin!

Timothy Place’s various work can be found at [http://blog.74objects.com and http://electrotap.com], and is highly recommended by practically everyone who has used it.