Custom Data-Types in Max Part 4: Passing Object Pointers

How do you pass data between objects in Max?  If the data is a simple number or a symbol then the answer is easy.  What happens when you are trying to pass around audio vectors, dictionaries, images, or some other kind of object?  The implementation of Jamoma Multicore for Max deals with these issues head-on, and it provides an illustration of how this problem can be tackled.

This is the fourth article in a series about working with custom data types in Max.  In the first two articles we laid the groundwork for the various methods by discussing how we wrap the data that we want to pass.  The third article demonstrated the use of Max’s symbol binding as a means to pass custom data between objects.  This article will show an example of passing pointers directly between objects without using the symbol table.  In this series:

  1. Introduction
  2. Creating “nobox” classes
  3. Binding to symbols (e.g. table, buffer~, coll, etc.)
  4. Passing objects directly (e.g. Jamoma Audio Graph)
  5. Hash-based reference system (similar to Jitter)

A Peer Object System

Jamoma Audio Graph for Max is implemented as what might be called a Peer Object System.  By this we mean that for every object that a user creates and manipulates in a Max patcher, there is a matching object that exists in a parallel system.  As detailed in Designing an Audio Graph, a Jamoma Audio Graph object has inlets and outlets and maintains connections to other objects to create a graph for processing audio through the objects.  The implementation of Jamoma Audio Graph for Max then has the task of creating and destroying these objects, sending them messages, and making the connections between them.  Once the objects are connected, Jamoma Audio Graph will take care of itself.  The end result is that no audio processing actually happens in the Max objects for Jamoma Audio Graph — instead the Max objects are a thin façade that helps to set up the relationships between the objects as they exist in something akin to a parallel universe.

A Patcher

A Jamoma Multicore patcher in Max

For context, let’s take a look at a Max patcher using Jamoma Audio Graph.  In this patcher we have 4 Jamoma Audio Graph objects, identified by the ≈ symbol at the tail of the object name.  Each of these Max objects has a peer Audio Graph object internal to itself.  Each Audio Graph object in turn contains a Jamoma DSP object that performs the actual signal processing.  For example, the jcom.overdrive≈ object contains a pointer to a Jamoma Audio Graph object that contains an instance of the Jamoma DSP overdrive class.  The attributes of the overdrive class, such as bypass, mute, and drive, are then exposed as Max attributes so that they can be set in the patcher.  Remember that each connection may carry N channels of audio.  The jcom.oscil≈ is, in this case, producing a stereo signal which is then propagated through the processing graph down to the jcom.dac≈ object.

Configuring the Graph

The exciting work doesn’t begin until the start message is sent to the jcom.dac≈ object.  As with all Jamoma Audio Graph externals, the jcom.dac≈ Max external has a peer object.  In this case the peer object that it wraps is the multicore.output object.  This is the same multicore.output object that is shown in the Ruby examples in the Designing an Audio Graph article.  When the start message is sent, the jcom.dac≈ object performs the following sequence:

  1. Send a multicore.reset message to all objects in the patcher.  This message sends a reset message to the peer objects underneath, which tells them to forget all of their previous connections.
  2. Send a multicore.setup message to all objects in the patcher.  This message tells the objects to try and connect to any object below it in the patcher.
  3. Tell the audio driver to start running.  When it is running it will periodically request blocks of samples from us, which in turn means that we will ask the other objects in the graph to process.

The processing happens completely within the Jamoma Multicore objects, thus not involving the Max objects at all.  It is the setup of the network of objects in the graph (steps 1 and 2) that involves our passing of custom data types in Max.

Diving into the code

For a full source listing of the jcom.dac≈ object, you can find the code in the Jamoma Audio Graph source code repository.  We’ll extract the important parts from that code below.  Let’s start with the method that is executed when the start message is sent:

TTErr DacStart(DacPtr self)
{
	MaxErr			err;
	ObjectPtr		patcher = NULL;
	long			vectorSize;
	long			result = 0;
	TTAudioGraphInitData	initData;

	self->multicoreObject->mUnitGenerator->getAttributeValue(TT("vectorSize"), vectorSize);

	err = object_obex_lookup(self, gensym("#P"), &patcher);
	object_method(patcher, gensym("iterate"), (method)DacIterateResetCallback, self, PI_DEEP, &result);
	object_method(patcher, gensym("iterate"), (method)DacIterateSetupCallback, self, PI_DEEP, &result);

	initData.vectorSize = vectorSize;
	return self->multicoreObject->mUnitGenerator->sendMessage(TT("start"));
}

As previously discussed, the last thing we do is send a start message to our peer object, the multicore.output, so that the audio driver will start pulling audio vectors from us.  Prior to that we iterate the Max patcher recursively (so the messages go to subpatchers too) to send the multicore.reset and multicore.setup messages.  To do this, we send the iterate message to the patcher and pass it a pointer to a method we define.  Those two methods are defined as follows.

void DacIterateResetCallback(DacPtr self, ObjectPtr obj)
{
	TTUInt32	vectorSize;
	method		multicoreResetMethod = zgetfn(obj, gensym("multicore.reset"));

	if (multicoreResetMethod) {
		self->multicoreObject->mUnitGenerator->getAttributeValue(TT("vectorSize"), vectorSize);
		multicoreResetMethod(obj, vectorSize);
	}
}

void DacIterateSetupCallback(DacPtr self, ObjectPtr obj)
{
	method multicoreSetupMethod = zgetfn(obj, gensym("multicore.setup"));

	if (multicoreSetupMethod)
		multicoreSetupMethod(obj);
}

These functions are called on every object in the patcher.  If we start with the last function, we can see that we first call zgetfn() on the object, obj, which is passed to us.  If that object possesses a multicore.setup method then we receive a pointer to that method; otherwise we receive NULL.  If the method exists, we call it.  The multicore.reset method works the same way.  The only difference is that the method takes an additional argument: the vector size at which the jcom.dac≈ is processing.

The Other End

At the other end of this calling sequence are the remaining objects in the patcher.  The full jcom.oscil≈ source code will show how this Max object is implemented.  In brief, we have two message bindings in the main function:

	class_addmethod(c, (method)OscilReset, "multicore.reset",	A_CANT, 0);
	class_addmethod(c, (method)OscilSetup, "multicore.setup",	A_CANT,	0);

These two methods respond to the messages sent by the jcom.dac≈ object.  They both have an A_CANT argument signature, which is how you define messages in Max that use function prototypes different from the standard method prototypes.  These messages can’t be called directly by the user, and they are not listed in the object assistance, but we can send them from other parts of Max such as our jcom.dac≈ object.  The reset message (for forgetting about all previous connections) is simply passed on to the oscillator’s Multicore peer object:

TTErr OscilReset(OscilPtr self)
{
	return self->multicoreObject->reset();
}

The setup method, as we discussed, tells our object that we need to try and make a connection to any object below us in the patcher. To do this we wrap our peer Multicore object’s pointer up into a Max atom.  That, together with the outlet number (zero), are passed as arguments to the multicore.connect message which is sent out our outlet.

TTErr OscilSetup(OscilPtr self)
{
	Atom a[2];

	atom_setobj(a+0, ObjectPtr(self->multicoreObject));
	atom_setlong(a+1, 0);
	outlet_anything(self->multicoreOutlet, gensym("multicore.connect"), 2, a);
	return kTTErrNone;
}
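The mechanism at work here is worth pausing on: atom_setobj() stores a raw object pointer in a Max atom, and the receiver gets the very same pointer back, with no copy of the pointed-to object ever being made.  A minimal sketch of that idea follows, using a simplified tagged union in place of the real t_atom (these names are illustrative, not the actual Max SDK types):

```cpp
#include <cassert>

// A simplified stand-in for Max's t_atom: a type tag plus a value union.
enum class AtomType { Long, Float, Object };

struct Atom {
    AtomType type;
    union {
        long    l;
        double  f;
        void*   obj;   // an opaque object pointer, as stored by atom_setobj()
    } value;
};

// Analogues of atom_setobj()/atom_getobj(): the pointer travels verbatim.
inline void atomSetObj(Atom* a, void* obj) {
    a->type = AtomType::Object;
    a->value.obj = obj;
}

inline void* atomGetObj(const Atom* a) {
    return (a->type == AtomType::Object) ? a->value.obj : nullptr;
}

// A stand-in for a peer audio-graph object that we want to hand off.
struct AudioGraphObject {
    int numOutlets = 1;
};
```

The receiving object simply casts the pointer back to the peer type, which is why both ends must agree on what kind of object is being passed.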

One More Time…

That took care of the jcom.oscil≈ object.  Once it sends the multicore.connect message out its outlet, its work is done.  But what happens with that message when it is received?

In our example it is going to a jcom.overdrive≈ object.  The source code for jcom.overdrive≈ isn’t going to be very helpful though.  It uses a magic class-wrapper that wraps any Jamoma DSP object as a Multicore object using 1 line of code.  That’s really convenient for coding, but not for seeing how all of the parts communicate.  So for our discussion, we will look at the jcom.dcblocker≈ source code instead — beginning with the main() function.

	class_addmethod(c, (method)DCBlockerReset,	"multicore.reset",	A_CANT, 0);
	class_addmethod(c, (method)DCBlockerSetup,	"multicore.setup",	A_CANT, 0);
	class_addmethod(c, (method)DCBlockerConnect,	"multicore.connect",	A_OBJ, A_LONG, 0);

You should recognize the multicore.reset and multicore.setup messages.  Those are exactly the same as they were for our oscillator.  We now also have a multicore.connect message.  The oscillator was generating a signal but has no signal inputs, so it had no need for a multicore.connect message.  Any object that requires an input, however, will require this message binding.  How is that method implemented?

TTErr DCBlockerConnect(DCBlockerPtr self, TTMulticoreObjectPtr audioSourceObject, long sourceOutletNumber)
{
	return self->multicoreObject->connect(audioSourceObject, sourceOutletNumber);
}

We simply wrap a call to our peer object’s connect method, sending the audioSourceObject (which is the peer object that the jcom.oscil≈ object sent us), and the outlet number from which that object was sent.  If you compare this to the connect message from the Ruby example in Designing an Audio Graph, it may illuminate the process.

Some Final Details

The example code that we’ve seen from Jamoma Audio Graph demonstrates the passing of custom data (pointers to C++ objects) from one object to the next through the multicore.connect message.  Because we are sending this custom data type, and not all inlets of all objects will understand it, it would be nice if we could protect users from connecting the objects in a way that will not function.  For this task, Max makes it possible to give outlets type information.  When the type of an outlet is specified, a user will not be able to connect a patch cord to any inlet that doesn’t accept the specified message.  To get this functionality, in DCBlockerNew(), we create our outlet like this:

	self->multicoreOutlet = outlet_new(self, "multicore.connect");

So instead of the customary NULL for the argument to outlet_new(), we specify that this outlet will be sending only multicore.connect messages.

Surfacing for Air

Jamoma Audio Graph provides a fairly intense example of passing custom data types in Max.  Hopefully it has provided not just the basics of how to pass a pointer, but also the context for why you might want to pass a custom type, and a real-world example of what you can do with it.

Designing an Audio Graph

In previous articles about the Jamoma Platform and the Jamoma DSP Library, there have been references to Jamoma Audio Graph (also previously known as Jamoma Multicore).  Up to this point, Jamoma Audio Graph has not been significantly documented or written about.  The authoritative information has been an Electrotap blog post showing the initial prototype in 2008.

At a workshop in Albi in 2009 we attempted to further expand Jamoma Audio Graph — and we failed.  The architecture was not able to handle N multichannel inputs and M multichannel outputs.  So we had to redesign a major portion of the inner workings.  Get out your pipe wrench; it’s time to take a look at some plumbing…

What Is Jamoma Audio Graph ?

Let’s back up for a moment to get the big picture.  The Jamoma Platform is essentially a layered architecture implementing various processes for interactive art, research, music, etc.  At the lowest level, the Jamoma Foundation delivers basic components for creating objects, passing values, storing values in lookup-tables, etc.  The Jamoma DSP library then extends the Foundation classes and provides a set of pre-built objects for audio signal processing.

Jamoma Audio Graph then gives us the ability to create Jamoma DSP objects and combine them into a graph.  In other words, we can connect the objects together like you might connect modules together on a Moog synthesizer.

A Moog Modular patch. Photo: Maschinenraum


Unlike the Moog synthesizers of old, however, we can do a few new tricks.  Instead of sending a single channel of audio through a connection, we can send any number of channels through a connection.  While Jamoma Audio Graph does not currently implement any particular features for parallel processing on multiple cores/processors, the design of the system is well suited to such parallelization in the future.

The Audio Graph In Action

At the time of this writing, Jamoma Audio Graph has bridges making it available in the Max and Ruby environments.  Most of the work has also been done to make it available in Pd (though if you are really interested in this then let us know so we can put you to work!).

In Ruby, you can code scripts that are executed in a sequence.  This provides a static interface to Jamoma Audio Graph even though all of the synthesis and processing is typically happening in real-time.  Alternatively, the irb environment allows you to type and execute commands interactively.  Jamoma Audio Graph, together with irb, then functions much like the ChucK environment for live coding performance.


If you’ve been jonesin’ for an Atari/Amiga/Commodore fix then this might be your perfect example of Jamoma Audio Graph in Ruby:

# This is the standard require for the Jamoma Platform's Ruby bindings
require 'TTRuby'

# Create a couple of objects:
dac = TTAudio.new "multicore.output"
osc = TTAudio.new "wavetable"

# connect the oscillator to the dac
dac.connect_audio osc

# turn on the dac
dac.send "start"

# play a little tune...
osc.set "frequency", 220.0
sleep 1.0
osc.set "frequency", 440.0
sleep 1.0
osc.set "frequency", 330.0
sleep 0.5
osc.set "frequency", 220.0
sleep 2.0

# all done
dac.send "stop"

It’s a pretty cheesy example, but it should give you a quick taste.  If you want a flashback to the kinds of music you could make with MS-DOS, be sure to set the oscillator to use a square waveform.

After creating a couple of objects, you connect two objects by passing the source object to the destination object using a connect message.  If you provide no further arguments, then the connection is made between the first outlet of the source object and the first inlet of the destination object.  The inlets and outlets are numbered from zero, so the connect message in our example could also have been written as

dac.connect osc, 0, 0

The sleep commands are standard Ruby.  They tell Ruby to pause execution for the specified number of seconds.  Everything else is performed with the basic Jamoma Ruby bindings.  These provide the send method for sending messages and the set method for setting attribute values.

If you want to know the messages or attributes that an object possesses, you can use the messages? or attributes? methods.  This is particularly useful when coding on the fly in irb.  In the following example, I requested the list of attributes for the oscillator in the previous example:

>> osc.attributes?
=> ["gain", "mode", "size", "processInPlace", "maxNumChannels", "frequency", "mute", "interpolation", "sr", "bypass"]

How It Operates

If you create a visual data-flow diagram of the objects in a graph, like you would see in Max or PureData, then you would get a good sense of how audio starts at the top and works its way through various filters until it gets to the bottom.  The same is true for a Jamoma Audio Graph.  However, what is happening under the surface is exactly the opposite.

Pull I/O Model

Multicore Graph Flow

The flow of a Jamoma Audio Graph.

Jamoma Audio Graph is based on a “Pull” I/O Model.  Some other examples of audio graph solutions using a similar model include ChucK and Apple’s AUGraph.  In this model a destination, sink, or terminal node object sits at the bottom of any given graph — and this is the object driving the whole operation.  In Max, on the other hand, messages (e.g. a ‘bang’ from a metro) begin at the top of the graph and push down through the objects in the chain.

The image to the left visualizes the operation of the audio graph.  Let’s assume that the destination object is an interface to your computer’s DAC.  The DAC will request blocks of samples (vectors) every so often as it needs them.  To keep it simple, we’ll say that we are processing at a sample rate of 44.1 kHz with a block size of 512 samples.  In this case, roughly every 11.6 milliseconds (512 ÷ 44100 seconds) the DAC will tell our destination object that it needs a block of samples and the process begins.

The process flows through the light blue lines.  The destination asks the limiter for a block of samples, which then asks the overdrive for a block of samples, which then asks both the source and the multitap delay for samples, and then the multitap delay asks the source for a block of samples.  To summarize: each object receives a request for a block of samples, and in response it needs to produce that block of sample values, possibly pulling blocks of samples from additional objects in the process.
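The pull model described above can be sketched as a tiny graph in which asking a node for a block makes it first ask everything upstream of it for theirs.  This is a simplified illustration of the control flow, not the actual Jamoma classes:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal pull-model node: requesting a block from a node causes it
// to recursively pull blocks from all of its upstream nodes first.
struct Node {
    std::vector<Node*> upstream;                  // the nodes we pull from

    std::vector<float> process(std::size_t blockSize) {
        std::vector<float> out(blockSize, 0.0f);
        for (Node* n : upstream) {                // pull from sources first
            std::vector<float> in = n->process(blockSize);
            for (std::size_t i = 0; i < blockSize; ++i)
                out[i] += in[i];                  // mix the pulled blocks
        }
        generate(out);                            // then apply our own processing
        return out;
    }

    virtual void generate(std::vector<float>&) {} // identity by default
    virtual ~Node() = default;
};

// A source emitting a constant value, standing in for an oscillator.
struct ConstSource : Node {
    float value;
    explicit ConstSource(float v) : value(v) {}
    void generate(std::vector<float>& out) override {
        for (float& s : out) s = value;
    }
};
```

A single call on the sink node at the bottom of the graph is enough to drive every node above it, which is exactly the behavior the destination object exploits when the DAC asks it for samples.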

One Object At A Time

To understand in finer detail what happens in each object, the graphic below zooms in on a single instance from the graphic above.  Here we can see that we have the actual unit generator, which is a Jamoma DSP object, and then a host of other objects that work together to form the interface for the audio graph.

Anatomy of a Multicore Object

Jamoma Audio Graph class structure

The text in the graphic explains each of the classes contained in a Jamoma Audio Graph object.  Implied in both of the figures is the ability to handle “fanning” connections, where many inlets are connected to one outlet, or one inlet is connected to many outlets.

In essence, the outlets are only buffers storing samples produced by the unit generator.  Each time a block is processed the unit generator is invoked only once.  Subsequent requests for the object’s samples then simply access the samples already stored in the outlet buffers.
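One way to picture that once-per-block behavior is to tag each processing pass with a block number and have the outlet recompute only when it sees a new one.  This is an illustrative sketch of the caching idea, not the actual implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// An outlet that caches the samples produced for the current block.
// The unit generator runs only when a new block number arrives; further
// requests within the same block are served straight from the cache,
// which is what makes fan-out connections cheap.
struct CachingOutlet {
    std::vector<float> buffer;        // samples produced for the current block
    std::uint64_t lastBlockId = 0;    // block number we last computed for
    int timesComputed = 0;            // diagnostic: how often the ugen ran

    const std::vector<float>& getBlock(std::uint64_t blockId, std::size_t n) {
        if (blockId != lastBlockId) { // first request this block: compute
            buffer.assign(n, 0.0f);
            computeSamples(buffer);
            lastBlockId = blockId;
            ++timesComputed;
        }
        return buffer;                // later requests: cached samples
    }

    // Stand-in for invoking the unit generator (block ids start at 1 here).
    void computeSamples(std::vector<float>& out) {
        for (float& s : out) s = 1.0f;
    }
};
```

With this scheme, two downstream objects pulling from the same outlet within one block trigger only a single run of the unit generator.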

As explained in the graphic, the inlets have more work to do, as they need to sum the signals that are connected.  And remember, each connection can have zero or more channels!
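That summing step can be sketched as follows, again with illustrative types rather than the real ones: each connection delivers some number of channels, and the inlet sums them channel by channel, growing to the widest connection it sees.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

using Signal = std::vector<std::vector<float>>;  // [channel][sample]

// Sum any number of multichannel connections into one signal.  The result
// has as many channels as the widest input; narrower inputs simply
// contribute nothing to the channels they lack.
Signal sumConnections(const std::vector<Signal>& connections, std::size_t blockSize) {
    std::size_t maxChannels = 0;
    for (const Signal& s : connections)
        maxChannels = std::max(maxChannels, s.size());

    Signal out(maxChannels, std::vector<float>(blockSize, 0.0f));
    for (const Signal& s : connections)
        for (std::size_t ch = 0; ch < s.size(); ++ch)
            for (std::size_t i = 0; i < blockSize; ++i)
                out[ch][i] += s[ch][i];
    return out;
}
```

Because the channel count is discovered per block, a mono and a stereo connection can feed the same inlet without any special casing.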


The Benefits

The most obvious benefit is the ability to easily handle multiple channels in a single connection.  So imagine that you create a Max patcher for mono operation.  It can then function in stereo or 8-channel or 32-channel operation without a single modification.

But there’s a lot more than that here.  The number of channels is dynamic and can change at any time.  One place this is valuable is in ambisonic encoding and decoding where the order of the encoding can dramatically alter the number of channels required for the encoded signal.  If you want to try changing the ambisonic order on-the-fly, which changes the number of channels passed, you can.

Likewise, the vector size can be altered dynamically on a per-signal basis.  The benefit here may not be immediately obvious, but for granular synthesis, spectral work, and analysis based on the wave length of an audio signal (e.g. the kinds of things in IRCAM’s Gabor) this can be a huge win.

Writing the objects is also very simple.  If you write a Jamoma DSP object, then all you have to do to make it available in Jamoma Audio Graph is… nothing.

That’s right.  In Ruby, for example, all Jamoma DSP classes are made available with no extra work.  If you want to make a Max external for a particular object then you can use a class wrapper (1 line of code) to create the Max external.

Interested in joining the fun?  Come find us!

Writing DSP Objects

In my last article I talked about the structure of the Jamoma Platform.  That’s a bit abstract to be of much direct use.  A primer on how to write a DSP object seems to be in order.

So… let’s imagine we want to write a simple unit generator for audio processing.  One of the simplest filters we can write is a one-pole lowpass filter.  In pseudo code, it might look like this:

static float previous_output = 0.0;
static float feedback_coefficient = 0.5; // default is half way between 0 Hz and Nyquist

float processOneSample(float input)
{
    float output = (previous_output * feedback_coefficient) + ((1.0 - feedback_coefficient) * input);
    previous_output = output;
    return output;
}

Simple, right?  Like most simple questions, the answer is only simple until you start asking more questions…  Let’s brainstorm a few practical questions about this simple filter:

  • How do we set the coefficient?
  • How do we efficiently process blocks of samples instead of one sample at a time?
  • How do we handle multiple channels?
  • What if the number of channels changes on the fly?
  • Is the audio processed on a different thread than the one the object is created and deleted on?  How do we handle thread safety?
  • Do we want a “bypass” built in so we can audition the effect?
  • How do we wrap this for a host environment like Max/MSP?
  • How do we wrap this as an AudioUnit plug-in?
  • What if we want to swap this unit generator out for another in real-time, without having to recompile any code?
  • How do we handle denormals and other similar gremlins that can cause performance headaches in realtime DSP code?

One more question: how do you get all of this without it sucking the life and love out of making cool DSP code?  Funny you should ask, because that’s the very reason for the Jamoma DSP framework.  Let’s look at how we would write this object using Jamoma DSP.

Example Class: TTLowpassOnePole

First, the header file, TTLowpassOnePole.h:

#include "TTDSP.h"

class TTLowpassOnePole : public TTAudioObject {
	TTCLASS_SETUP(TTLowpassOnePole)

	TTFloat64		mFrequency;	///< filter cutoff frequency
	TTFloat64		mCoefficient;	///< filter coefficient
	TTSampleVector		mFeedback;	///< previous output sample for each channel

	TTErr updateMaxNumChannels(const TTValue& oldMaxNumChannels);
	TTErr updateSr();
	TTErr clear();
	TTErr setFrequency(const TTValue& value);
	inline TTErr calculateValue(const TTFloat64& x, TTFloat64& y, TTPtrSizedInt channel);
	TTErr processAudio(TTAudioSignalArrayPtr inputs, TTAudioSignalArrayPtr outputs);
};
The TTDSP.h header includes everything needed to create a subclass of TTAudioObject. We will see some of the magical joy of TTAudioObject shortly. In the class definition there is a macro called TTCLASS_SETUP. This creates prototypes for the constructor, destructor, and glue code for class registration, etc.

The class implementation then follows:

#include "TTLowpassOnePole.h"

#define thisTTClass		TTLowpassOnePole
#define thisTTClassName		"lowpass.1"
#define thisTTClassTags		"audio, processor, filter, lowpass"

TT_AUDIO_CONSTRUCTOR
{
	addAttributeWithSetter(Frequency,	kTypeFloat64);
	addAttributeProperty(Frequency, range, TTValue(2.0, sr*0.475));
	addAttributeProperty(Frequency, rangeChecking, TT("clip"));

	addMessage(clear);
	addUpdates(Sr);
	addUpdates(MaxNumChannels);

	// Set Defaults...
	setAttributeValue(TT("maxNumChannels"), arguments); // This attribute is inherited
	setAttributeValue(TT("frequency"), 1000.0);

	setProcessMethod(processAudio);
	setCalculateMethod(calculateValue);
}

TTLowpassOnePole::~TTLowpassOnePole()
{
	; // Nothing special to do for this class
}

TTErr TTLowpassOnePole::updateMaxNumChannels(const TTValue& oldMaxNumChannels)
{
	mFeedback.resize(maxNumChannels);
	clear();
	return kTTErrNone;
}

TTErr TTLowpassOnePole::updateSr()
{
	TTValue	v(mFrequency);
	return setFrequency(v);
}

TTErr TTLowpassOnePole::clear()
{
	mFeedback.assign(maxNumChannels, 0.0);
	return kTTErrNone;
}

TTErr TTLowpassOnePole::setFrequency(const TTValue& newValue)
{
	TTFloat64	radians;

	mFrequency = newValue;
	radians = hertzToRadians(mFrequency);
	mCoefficient = TTClip(radians / kTTPi, 0.0, 1.0);
	return kTTErrNone;
}

inline TTErr TTLowpassOnePole::calculateValue(const TTFloat64& x, TTFloat64& y, TTPtrSizedInt channel)
{
	y = mFeedback[channel] = TTAntiDenormal((x * mCoefficient) + (mFeedback[channel] * (1.0 - mCoefficient)));
	return kTTErrNone;
}

TTErr TTLowpassOnePole::processAudio(TTAudioSignalArrayPtr inputs, TTAudioSignalArrayPtr outputs)
{
	TT_WRAP_CALCULATE_METHOD(calculateValue);
}

Breaking it Down

To understand what’s happening here, let’s start at the bottom and work our way back up toward the top.


processAudio()

This method accepts an input and an output.  The input and output arguments are arrays of multichannel audio signals.  That is to say, each of the input and output arrays can contain zero or more multichannel signals, and each of those signals may have zero or more channels.  Each audio signal has a vector size which indicates how many samples are contained for each channel that is present.

In most cases an object is only functioning on one multichannel input signal and one multichannel output signal.  Also, in most cases, the number of input channels and output channels are the same (e.g. 2 inputs and 2 outputs).  Furthermore, it is quite common that each channel is processed in parallel, and can be considered independent of the other channels.

Given this somewhat common set of assumptions, we can avoid the work of handling all of this audio processing machinery and just call the TT_WRAP_CALCULATE_METHOD macro.  Calling that macro causes the named calculation method to be used for processing one sample on one channel of one signal at a time.  The calculate method is inlined, so we do not give up the performance benefits of processing by vector.
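The essence of what a macro like TT_WRAP_CALCULATE_METHOD generates can be sketched as a loop over channels and samples that delegates each value to the inlined per-sample function.  This is a hand-written approximation of the pattern, not the macro’s actual expansion:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using AudioSignal = std::vector<std::vector<double>>;  // [channel][sample]

struct Gain {
    double factor = 0.5;

    // The per-sample calculation: y = f(x).
    inline void calculateValue(const double& x, double& y, std::size_t /*channel*/) {
        y = x * factor;
    }

    // What the wrapper macro effectively writes for us: iterate every
    // channel and every sample, handing each value to calculateValue().
    void processAudio(const AudioSignal& in, AudioSignal& out) {
        out.resize(in.size());
        for (std::size_t ch = 0; ch < in.size(); ++ch) {
            out[ch].resize(in[ch].size());
            for (std::size_t i = 0; i < in[ch].size(); ++i)
                calculateValue(in[ch][i], out[ch][i], ch);
        }
    }
};
```

Because calculateValue() is inlined, the compiler can fold it into the inner loop, so the per-sample abstraction costs essentially nothing over a hand-rolled vector loop.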


calculateValue()

As previously alluded to, this method calculates one output value for one input value.  You can think of this method in mathematical terms as

y = f(x)

This method may be called directly or, as just discussed, called to crunch numbers for the vector-based audio processing method.


setFrequency()

As we will see shortly, attributes can be set using a default setter method that works most of the time.  In this case we need to do some customized work when the “Frequency” attribute is set: namely, we need to calculate the feedback coefficient.  We do that here so that the coefficient isn’t recalculated every time our audio processing method is called.

This is the first time we’ve seen the TTValue data type, but we’ll be seeing a lot more of it.  This is the standard way of passing values.  TTValue can contain zero or more of any common data type (ints, floats, pointers) or special types defined in the Jamoma Foundation (symbols, objects, strings, etc.).
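To make the setter’s arithmetic concrete: converting the cutoff to radians per sample and normalizing by π amounts to coefficient = clamp(2·f/sr, 0, 1).  The sketch below mirrors that logic; the hertzToRadians() conversion here is an assumption based on the standard 2πf/sr formula, not the library’s exact source:

```cpp
#include <algorithm>
#include <cassert>

const double kPi = 3.14159265358979323846;

// Assumed conversion: cutoff in Hz -> angular frequency in radians/sample.
double hertzToRadians(double hz, double sampleRate) {
    return hz * (2.0 * kPi) / sampleRate;
}

// Mirror of the setter's logic: normalize by pi and clip into [0, 1].
double frequencyToCoefficient(double hz, double sampleRate) {
    double radians = hertzToRadians(hz, sampleRate);
    return std::min(1.0, std::max(0.0, radians / kPi));
}
```

For the default cutoff of 1000 Hz at 44.1 kHz this yields roughly 0.045, i.e. a gentle lowpass; cutoffs at or above Nyquist clip to 1.0.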


clear()

This method is quite simple: it resets the feedback sample for each audio channel to zero.  It can be invoked by a user if the filter ‘blows up’.


updateSr()

This method is slightly special.  Just as we have a “Frequency” attribute, we have an “sr” attribute, which is the sample rate of the object.  The trick is that we inherit the “sr” attribute from TTAudioObject.

Some objects may ignore the sample rate, or will function properly when the sample rate changes by virtue of the fact that the member variable changed value.  In our case we need to take further action because our coefficient needs to be recalculated.  The updateSr() method is a notification that we receive from our base class when the base class’ “sr” attribute is modified.


updateMaxNumChannels()

Just like the updateSr() method, this method is a notification sent to us by our base class.  In this case, the notification is sent when the base class has a change in its “maxNumChannels” attribute.

The “maxNumChannels” attribute is an indicator of the maximum number of channels the object should be prepared to process in the audio processing method.  As such, we use this notification to take care of memory allocation for anything in our instance that is related to the number of channels we process.

The Destructor

As the comment says, we don’t have anything special to take care of in this case.  We still define the destructor so that we can be explicit about what is happening regarding object life-cycle.

The Constructor

Obviously, to experienced C++ programmers anyway, the constructor is what gets called when a new instance of our class is created.  But what we do in this constructor is what makes everything else we’ve been through actually work.

First, we use a macro to define the function prototype.  We do this because it is the same every single time, and this ensures that we don’t screw up the initialization (or lack of initialization) of members or super-classes.

Next, we define attributes.  In our case we have only one attribute, and that attribute has a custom setter method (the setFrequency() method).  It is represented by the mFrequency member variable.  Attributes can be given properties.  In this case we limit the range of the values for our attribute to things that will actually work.

In addition to attributes, which have a state and are represented by data members, we have messages.  These are stateless methods that will be invoked any time our object receives the corresponding message.  Messages might have no arguments, as in the case of the “sr” and “clear” messages.  If they do have arguments, the arguments will be passed as a TTValue reference, as in the case of the “updateMaxNumChannels” method.

Finally we set defaults.  This means default attribute values, but it also means the initial audio processing and value calculation methods.  These methods may be changed on the fly during operation, though in our case we only have one of each.

Gift Wrap

To summarize, we now have an object with the following features from our original list:

  • We set the coefficient using an attribute for cutoff frequency, which is automatically updated when the sample rate changes.
  • We efficiently process in blocks of samples (instead of one sample at a time) using the processAudio method.
  • processAudio also handles N channels of input and output transparently.
  • It is no problem if the number of channels changes on the fly; this is all handled properly.
  • The audio may be processed in a different thread than the one on which the object is created and deleted.  Thread safety for this circumstance is handled by the environment.
  • We did not discuss it, but we do have a “bypass” attribute that we inherited, among others, so we got this functionality for free.
  • We can swap any object inheriting from TTAudioObject for another in real-time.  The attributes and messages are called by dynamically bound symbols, so there are no linking problems or related concerns.
  • We did not discuss it but the calculateValue method handles denormals using a library function.
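The runtime-swapping point rests on dynamic, name-based dispatch: because messages and attributes are looked up by symbol rather than linked at compile time, any object exposing the same names can stand in for another.  A toy sketch of that idea, illustrative only and far simpler than the Foundation’s real message system:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// A message table keyed by symbol: the caller knows only the name
// "frequency", not the concrete class, so any object registering that
// name can be swapped in at runtime without relinking.
struct DynamicObject {
    std::map<std::string, std::function<void(double)>> setters;

    void set(const std::string& name, double value) {
        auto it = setters.find(name);
        if (it != setters.end())
            it->second(value);        // dispatch dynamically by name
    }
};

// Two unrelated "unit generators" exposing the same attribute name.
DynamicObject makeOscillator(double& freqStore) {
    DynamicObject o;
    o.setters["frequency"] = [&freqStore](double v) { freqStore = v; };
    return o;
}

DynamicObject makeFilter(double& cutoffStore) {
    DynamicObject o;
    o.setters["frequency"] = [&cutoffStore](double v) { cutoffStore = v; };
    return o;
}
```

The caller’s code is identical before and after the swap, which is what makes hot-swapping unit generators possible without recompiling anything that talks to them.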

So now we just need to use the object.  TTAudioObject classes have been used directly and in combinations with each other to create Max/MSP objects, Pd objects, VST plug-ins, AudioUnit plug-ins, etc.  Some examples of these can be found in the Jamoma DSP Github repository.  Others include the Tap.Tools, sold by Electrotap.

The Magic Wand

One of the benefits of our dynamically-bound, message-passing TTAudioObjects is that we can use introspection on objects to find out about them at runtime.  That means we can load an object by name, ask what attributes it has and what types they are, and then create a new interface or adapter to the object.  One manifestation of this is a class wrapper for Cycling ’74’s Max environment.

Given our TTAudioObject that implements a onepole lowpass filter, all that is required to make a full-blown Max/MSP object complete with Max attributes is this:

#include "TTClassWrapperMax.h"

int main(void)
{
	TTDSPInit();
	return wrapTTClassAsMaxClass(TT("lowpass.1"), "jcom.onepole~", NULL);
}

The first argument we pass is the symbol name of the TTAudioObject class.  The second argument is the name of the Max class we generate.  It really is this easy.

At the time of this writing, no one that I’m aware of has written a similar class wrapper for PureData, SuperCollider, AudioUnits, etc., but there is no reason that this kind of wrapper couldn’t work for any of those target environments.

It’s fun stuff!  As the Jamoma Foundation and DSP projects have evolved over the last six years the code for classes has become increasingly flexible and also increasingly clear.  It’s possible to really focus on the task in the code without having to worry about all of the glue and filler typically involved in writing audio code with C and C++ APIs.

Less is Less

This month’s issue of Inc. Magazine features a profile of Jason Fried, founder of 37Signals.  The part that caught my attention was the opening:

You could sum up Jason Fried’s philosophy as “less is more.” Except that he hates that expression, because, he says, it still “implies that more is better.”

More clearly isn’t better.  I wrote a bit about the ideas of Sarah Susanka a few months ago.  Carried to an extreme, the idea of smaller houses results in the work of Jay Shafer, as in this video (via the 37Signals blog):

A happy coincidence occurred: I saw the above video during the same week that I saw the video that follows, an etude for piano and electronics by fellow Jamoma developer Alexander Refsum Jensenius.  As Alexander describes it:

Many performances of live electronics is based on large amounts of electronic equipment, cables, sound cards, large PA-speakers, etc. One problem with this is that the visual appearance of the setup looks chaotic. Another is that the potential for things that can go wrong seems to increase exponentially with the amount of equipment being used. The largest problem, though, at least based on my own experience of performing with live electronics, is that much effort is spent on making sure that everything is working properly at the same time. This leaves less mental capacity to focus on the performance itself, and sonic output.

I am currently exploring simplicity in performance, i.e. simplicity in both setup and musical scope.

I can attest to the problems Alexander relates, and I think the musical results he achieves are incredibly beautiful – in part because using less helps to focus the musical expression and make it more concise.

Making things simple, concise, and expressive is incredibly difficult to do, whether it be music, prose, code, business, architecture, or hardware.  It’s great to see examples of people finding the sweet spot.

Not So Big…

This past week I received a gift.  It was a DVD called “The Not So Big House” by Sarah Susanka.  She has also written a couple of books, though I haven’t read them (or at least not yet).  It doesn’t say it so bluntly, but it essentially provides a foil to the bankruptcy of architectural trends in the U.S. urban-sprawl markets (which is to say, most of the U.S.).  There is an interview with her in the Washington Post (though it is more nuts-and-bolts than philosophical).

We often get caught in the trap of scale.  We want a ‘big’ orchestra.  We want to create a ‘large’ or ‘significant’ work, like a concerto or symphony.  Or a giant installation vs. a small sculpture.  This is often encouraged by our academic and accrediting institutions.  It is much easier to judge based on the quantity of music or art rather than subtle issues of a work’s quality.  The same is true of houses — is bigger better?  Most everyone will tell you ‘yes’ without giving much thought to the various qualities that may affect the persons living in the house.

While I don’t have any earth-shattering conclusions to share, I have been thinking about applications of this architectural philosophy to software design.  There are very superficial ways to apply it (using small focused tools, etc.), but I think there are deeper applications which even impact the structural aspects of code-bases.

Architectural patterns and issues are among the most fascinating subjects.  As an artist I find the same approaches to design showing up in my artistic output as well as my code and hardware development.  How I approach building furniture with hand tools, sketch an idea for remodeling a room in the house, shape the flower beds for landscaping, craft contrapuntal lines in my orchestration, and pattern software are all expressions of the same essence and character.

And now?  Now it is time to go design a meal to enjoy.  Yum!

The Jamoma Platform

In the series Custom Data-Types in Max there is frequent reference to Jamoma.  Jamoma is “A Platform for Interactive Art-based Research and Performance”.  Sounds great, right?  But what does it mean by “A Platform”?  How is it structured?  What are the design considerations behind Jamoma’s architecture?  Many people are aware of some of Jamoma’s history or what it was in the past, but it has come a long way in the last couple of years.

The Jamoma Platform

The Jamoma Platform comprises a group of projects addressing the needs of composers, performers, artists, and researchers.  These projects are orchestrated in a number of layers with each layer dependent on the layers below it, but the layers below not dependent upon the layers above them.


Some layers, such as the modular framework, are built on top of the Max environment, while others are completely independent of Max.  For example, the Jamoma DSP layer is used to write objects for Pd and SuperCollider, plug-ins in the VST and AU formats, and standalone C++ applications, in addition to creating objects for Max for use by the Jamoma Modular Framework.

The modular layer also bypasses some intermediary layers, which is indicated in this graphic with the lines that directly connect the layers.

Let’s take a look at each of these layers (bypassing the System Layer).

Jamoma DSP Layer

At the bottom of the stack is the Jamoma DSP Layer, also known as TTBlue for historical reasons.  The DSP layer, logically enough, is where all signal processing code for Jamoma is written in C++.  There is a library of processing blocks and utilities from which to draw.  The library is extensible and can load third-party extensions to the system dynamically or at start-up.  Finally, the DSP Layer is more than just a bunch of DSP processing blocks: it includes an entire reflective OO environment in which to create the processing blocks and send them messages.

All by itself the Jamoma DSP Library doesn’t actually do anything, because it is completely agnostic about the target environment.  The Jamoma DSP repository includes example projects that can wrap or use the DSP library in Max/MSP, Pd, SuperCollider, VST and AU plug-ins, etc.  In some cases there are class wrappers that will do this in one line of code.  In all of these examples, the DSP library is used, but no other part of Jamoma is required, nor will it ever be required, as we keep a clear and firm firewall between the different layers.

Jamoma Multicore Layer

Jamoma Multicore, hereafter simply ‘Multicore’, is built on top of the DSP layer.  Multicore creates and manages graphs of Jamoma DSP objects to produce signal processing chains.  One can visualize this as an MSP patcher with lots of boxes connected to each other, patchcords fanning and combining, generator objects feeding processing objects, etc.  Multicore does not, however, provide any user interface or visual representation; it creates the signal processing graph in memory and performs the actual operations ‘under the hood’.
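To give a rough idea of what a processing graph in memory looks like, here is a sketch of a pull-based node (the names are hypothetical illustrations, not the actual Multicore API): each node holds pointers to its upstream sources and pulls samples through the chain on demand:

```cpp
#include <vector>

// Hypothetical sketch of a pull-based audio graph node
// (illustrative names, not the actual Jamoma Multicore API).
struct AudioNode {
    std::vector<AudioNode*> inlets;  // upstream connections

    virtual ~AudioNode() {}

    // Pull one sample: sum the upstream outputs, then process.
    double pull()
    {
        double input = 0.0;
        for (AudioNode* source : inlets)
            input += source->pull();
        return process(input);
    }

    virtual double process(double input) { return input; }
};

// A generator node: constant source, standing in for an oscillator.
struct ConstantNode : AudioNode {
    double value;
    explicit ConstantNode(double v) : value(v) {}
    double process(double input) override { return input + value; }
};

// A processing node: simple gain stage.
struct GainNode : AudioNode {
    double gain;
    explicit GainNode(double g) : gain(g) {}
    double process(double input) override { return input * gain; }
};
```

Calling pull() on the last node walks the graph upstream, which is essentially how a patcher-like chain can execute with no visual representation at all.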

At this time I would describe the status of the Multicore layer as “pre-alpha” – meaning it is not very stable and is in need of further research and development to fulfill its vision.

Jamoma Modular

When most people say Jamoma, they typically are referring to the Jamoma Modular Layer, and more specifically the Jamoma Modular Framework.  The Modular framework provides a structured context for fully leveraging the power of the Max/MSP environment.  The modular layer consists of both the modular framework and a set of components (external objects and abstractions).  The components are useful both with and without the modular framework.

To exemplify the Modular Components, we can consider the jcom.dataspace external.  This is an object that converts between different units of representation for a given number (e.g. decibels, linear gain, midi gain, etc.).  This is a useful component in Max/MSP regardless of whether the modular framework is being used for patcher construction or management.
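The decibel-to-linear conversion at the heart of such a component is standard math; a minimal sketch (not the actual jcom.dataspace source, which supports many more units) might look like this:

```cpp
#include <cmath>

// Standard amplitude conversions of the kind a gain dataspace performs.
// (A sketch only; the actual Jamoma implementation covers more units.)
double dbToLinear(double db)
{
    return std::pow(10.0, db / 20.0);
}

double linearToDb(double linear)
{
    return 20.0 * std::log10(linear);
}
```

Conceptually, a dataspace object generalizes this by converting each registered unit to and from a common neutral representation, so any unit can be translated into any other.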

The Modular Framework, on the other hand, is a system of objects and conventions for structuring the interface of a Max patcher – both the user interface and the messaging interface.


The screenshot above (from Spatial sound rendering in Max/MSP with ViMiC by N. Peters, T. Matthews, J. Braasch & S. McAdams) demonstrates the Jamoma framework in action.  There are a number of modules connected together in a graph to render virtual microphone placements in a virtual space.  The module labeled ‘/cuelist’ communicates remotely with the other modules to automate their behavior.

Digging Deeper

In future articles I’ll be treating the architecture of each of these layers in more detail.  I also will be demoing Jamoma at the Expo ’74 Science Fair next week.  If you are going to be at Expo ’74, be sure to stop by and say hello.

The Hemisphere as Architecture


I first experienced the hemisphere loudspeakers in 1998 at the SEAMUS national conference held at Dartmouth College.  At that conference I saw an amazing performance by Curtis Bahn and Dan Trueman in a genre/style/practice to which I had never before been exposed.  I saw (and heard) them again at a performance at the Peabody Conservatory in the Spring of 1999.  The speakers sound amazing in the right context, not because of the quality of the drivers, but because of how they engage the acoustics of the space.

Fast forward a couple of years, and my good friend Stephan Moore, at this time studying with Curtis at RPI, became involved with producing a fair number of a new generation of Hemispheres for use in installations and performances.  At the 2002 SEAMUS Conference at the University of Iowa, Curtis and Stephan presented their work with the Hemispheres.  Their work included experiments with different sound dispersion paradigms.  One example is laying the speakers throughout a space on the floor and distributing the sound material amongst the loudspeaker arrays using the Boids algorithm.

Later that year, in the middle of a very hot July in Upstate New York, I helped Stephan build 43 of the third-generation Hemispheres.  The Hemispheres have evolved a few more times since then.  The fifth-generation Hemispheres now sold by Electrotap are produced through a joint effort of Stephan and master furniture maker Ken Malz.


Incidentally, Stephan has been touring this Spring with the sixth generation Hemisphere — a powered version that has the amplifiers built into the cabinet.

A couple months ago an inquiry came in to Electrotap’s support asking about different ways to mount or suspend the Hemispheres in a gallery.  In the process, Stephan sent me a couple of the photos you see in this post (and gave me permission to post them).


These photos are from his installation “Outside Information”  that was shown July-November 2008 at the Mandeville Gallery of Union College in Schenectady, NY, in the Nott Memorial.  From the photos I would say the architecture of the building is pretty fascinating.  Here is Stephan’s artist statement from the exhibition:
Depending on how you listen to it, Outside Information is a decorative soundscape for an already highly-decorated space, or a means of listening to and navigating a complicated acoustic environment.  The eight Hemisphere speakers suspended in the giant column of air carry layers of small, shifting sounds to all parts of the Nott Memorial, activating the space’s acoustics and providing opportunities to explore its sonic eccentricities.  The small sounds in the speakers create a wash of sound in the space, which can resolve into high, unexpected detail when a speaker is approached closely.  Every point on each of the floors provides a different perspective on these sounds.

The title is inspired by the Jason Martin song “Inside Information”, which was written about the Nott Memorial and the mysteries (and potential conspiracies) surrounding its geometry and decoration.  In Martin’s song, he describes trying to discover the secret meanings of the Nott, suggesting that “If you ask, no one will tell you/but you should ask anyway.”  Outside Information, by contrast, supplies the space with an extrinsic layer of activity, geometry, decoration, and meaning, no less mysterious, but gradually yielding to investigation and exploration.

In many ways, this piece expands upon my 2007 Steepings series of sound installations, which were made to flood smaller spaces with intimate, shifting sounds that varied based on a set of simple rules.  Outside Information uses similarly-conceived custom sound software to generate algorithmic sound with both greater momentary variability and the capacity for long-term drift.  As the resulting environment can sound quite different from hour to hour, day to day, and will interact with changes in the air conditioning’s rumble and human activity in the space, my hope is that it will reward repeated visits from the members of the Union College community that encounter it daily or weekly.


Unlike many loudspeakers, the Hemispheres work visually with the space and the concept of the art.  They become a part of the artistic expression itself rather than a force acting in contradiction (or in orthogonality) to the creative concept.

These loudspeakers seem to lend themselves well to this in a number of different contexts.  A couple of months ago I saw a video of a work by Michael Theodore, who used hemispherical speaker arrays for this particular work; they appear as mounds of earth rising from the surface.  The Princeton Laptop Orchestra also situates the speakers on the floor next to a person who is also sitting on the floor, invoking a ritualistic image which perfectly reinforces the context of the performance ecosystem created by PLOrk.

I had a conversation with Trond Lossius while walking back to BEK from the Landmark in Bergen a few years ago, following a sound installation exhibition.  Trond was interested in using the Hemisphere not just for its radiant acoustic qualities, but for its visual quality: two Hemispheres placed back-to-back to create a sphere, on top of a pole about five or six feet tall.  The resulting visual invokes a human image.  How then do we interact with the sound in space that emanates from this creature/sculpture/environment?