
Jonathan Marmor at the Ontological Theater + Interview

By Bijan Rezvani
Thursday, April 15th, 2010


Quentin Tolimieri, Devin Maxwell, Isabel Martin, Jonathan Marmor, and Michael Pisaro at Listen/Space, Brooklyn - February 3, 2008


On Saturday night I made it to the Ontological-Hysteric Theater at St. Mark’s Church for a new piece of music by computer-aided algorithmic composer Jonathan Marmor. Marmor conducted 10 human beings through a deceptively lovely alien song cycle. Without the romantic flourishes typical of our pledge-time heroes, the piece used shifting sound combinations patterned with long silences to warp the temporal experience.

To learn more about the composer and his unique piece, which is streamed below, I had Jonathan Marmor answer a few questions:

Bijan: What’s the name of the piece, and when did you write it?

Jonathan Marmor: The piece doesn’t have a name. You’re the first person to ask. I wrote it between December and a week before the concert. However, both the construction of the piece and the software used to make it are just the latest variation in a string of related pieces.

B: How did you get into making computer music?

JM: Since I was a teenager I’ve been writing ‘algorithmic’ music, in which all or most of the events are governed by some simple systematic process. My musical training from age 14 was in North Indian Classical music, which frequently uses very clear logical patterns to construct phrases and forms. As a foreigner I didn’t have an intuitive understanding of the musical structures, learning process, or folk tunes that make up Hindustani music, so I think I had a tendency to over-emphasize the importance of systematic processes.

I started writing music that consisted of one simple process. I’d set up some process just to hear what all the different combinations sounded like. The interesting part of listening to these experiments was hearing the unexpected results that came from uncommon combinations or sequences of otherwise pretty standard material.

One liberating aspect of experimental music in the tradition of John Cage is that it encourages you to appreciate music by simply observing its unique shape. A common way to make such music is to decide the content of a piece using some procedure with random results, such as flipping a coin. So when I started studying the music of John Cage, and the generations of musicians who were influenced by his music and ideas, I had a realization about my experience listening to algorithmic music: I didn’t need a clear logical process to get to the unusual combinations of material I was interested in, I could just use randomness.

The next several pieces I wrote employed increasingly complex webs of decisions made with a random number generator. Following the advice of my brother, I started using the Python programming language to generate huge lists of all the possible combinations and permutations of little patterns of musical material. I was still making one decision at a time, choosing from the lists of options, then notating the music manually.

A couple years ago I was asked to write some music for some friends coming to town to play a concert. Using this process I managed to generate the data for a piece that was about ten times bigger than I could notate before the concert. I missed my deadline and was totally embarrassed. So I decided I needed to build two tools: 1) a standardized representation, or model, of a piece of music in Python data structures that could be customized to create a new piece, and 2) a wrapper for the popular notation typesetting library Lilypond that could take my Python representation of a piece and automatically make beautiful sheet music. The piece performed last Saturday was the second piece I’ve written using these tools.
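The pipeline he describes (generate exhaustive lists of combinations in Python, make random choices among them, then hand the result to Lilypond for typesetting) is easy to sketch. Below is a minimal illustration of that shape, not Marmor’s actual code; the pattern vocabulary, durations, and file name are invented for the example:

    # Sketch: permute a small vocabulary of pitch patterns, pick one
    # ordering at random, and emit Lilypond source for typesetting.
    import itertools
    import random

    # A tiny, made-up vocabulary of pitch patterns (Lilypond note names).
    patterns = [("c'", "e'"), ("d'", "f'"), ("e'", "g'")]

    # Enumerate every possible ordering of the vocabulary.
    orderings = list(itertools.permutations(patterns))

    # One decision made with a random number generator: pick an ordering.
    chosen = random.choice(orderings)

    # Flatten into a note list, attaching a quarter-note duration to each.
    notes = " ".join(pitch + "4" for pattern in chosen for pitch in pattern)

    # Write a minimal Lilypond file; running `lilypond piece.ly` typesets it.
    with open("piece.ly", "w") as f:
        f.write("{ %s }\n" % notes)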

There is another path I took to electronic music. In 1997 I downloaded a free trial copy of Noteworthy Composer, music notation software that appeared to be written by people who had a very strange and seemingly faulty conception of how music behaves. It could be used as a sequencer triggering the amazing Roland Sound Canvas GM/GS Sound Set that came built in as a part of Windows (think pan flutes and steel pans). Noteworthy Composer had some unusual capabilities, which I exploited: the tempo could be set to dotted half note equals 750 beats per minute, you could write 128th notes, and you could change the tempo at any point abruptly or gradually; the pitch of each track could be tuned to 8192 divisions of a half step and could be changed on the fly; individual tracks could contain loops of any duration that did not affect the other tracks and loops could be nested. I made roughly 1000 little studies using this tool between 1997 and 2003, in the spirit of Conlon Nancarrow’s player piano studies. Check out track three of my old experimental pop album for a sample: http://www.archive.org/details/JonathanMarmor_FantasticDischarge.
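A hedged aside on that tuning figure: 8192 divisions of a half step is exactly the resolution of 14-bit MIDI pitch bend when the bend range is set to plus or minus one semitone (16384 values spread across two semitones), which is plausibly the mechanism behind Noteworthy’s per-track tuning. A small sketch of the arithmetic; the function name and default range are my own assumptions:

    # Convert a detuning in cents to a 14-bit MIDI pitch-bend value,
    # assuming a +/-1 semitone bend range (8192 steps per half step).
    def cents_to_bend(cents, bend_range_semitones=1):
        center = 8192  # 0x2000: no detuning
        steps_per_semitone = 8192 / bend_range_semitones
        value = center + round(cents / 100 * steps_per_semitone)
        return max(0, min(16383, value))  # clamp to the 14-bit range

    print(cents_to_bend(50))    # a quarter tone up -> 12288
    print(cents_to_bend(-100))  # a half step down -> 0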

B: Describe the creative process for this piece.

JM: It’s possible to think of musical genre as a set of rules and tendencies that govern how musical material is organized. The rules are defined by the sum of the genre’s body of work. Most genres are the accumulated contributions of hundreds or thousands of diverse musicians spanning decades or centuries. This has led most genres to obey a handful of nearly universal rules, such as pitch class equivalence at the octave (middle C is the same note as C in any other octave), or the idea that some element of the music must repeat. In all my recent music I have tried to create an original set of rules and tendencies based on a skewed or faulty conception of the nature of music. Each piece embraces some collection of traditional or made-up rules and relentlessly sticks to them. Other common rules are completely ignored. The hope is that this results in an internally consistent piece which is related to other music only by coincidence.
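To make the octave-equivalence rule concrete: in MIDI terms, two note numbers name the same pitch class exactly when they agree modulo 12. A tiny illustration (mine, not from the piece):

    # Octave equivalence: MIDI notes 12 apart share a pitch class.
    def same_pitch_class(a, b):
        return a % 12 == b % 12

    print(same_pitch_class(60, 72))  # middle C vs. the C above: True
    print(same_pitch_class(60, 62))  # C vs. D: False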

B: How long is the longest silence?

JM: Only about two and a half minutes. Surprising, right?

[note: I'm amazed. I would have guessed 10 minutes.]

One of the purposes of putting periods of silence in a piece of music is to let the listener’s mind wander. However, the first 50 or so times a normal listener goes to a concert with a lot of silence in it, his mind is going to wander to rage! He’ll be really uncomfortable, trying not to breathe. He’ll be self-conscious. He won’t know what he’s supposed to be doing or thinking or listening to. He might think he’s doing something wrong. He’ll certainly think that silence isn’t music, that there isn’t music happening during the silence, that the composer is a self-righteous idiot, and that the concert is bad. Some of the time, however, this is not the case. If you are open to listening carefully and letting your mind wander, you may find all sorts of nice things to enjoy.

B: Tell me about the lyrics.

JM: I wrote a little program that makes nonsense poetry. You give it an arbitrary pattern of stressed and unstressed syllables and a rhyme scheme, and it will grab random individual words from the lyrics of Bob Dylan, Steely Dan, The Eagles, Elvis Costello, Billy Joel, The Band, Tom Waits, and Rufus Wainwright that match the meter and rhyme scheme.

For example, just now I gave it a rhyme scheme of AABBA and this pattern of unstressed (u) and stressed (S) syllables:

uSuuSuuSu
uSuuSuuSu
uuSuuS
uuSuuS
uSuuSuuSu

and it spit out this limerick:

The Wrongfully Showdown Y’all Sounding
Reporters The Reading In Bounding
Ayatollah’s A Slot
Inconceivable Bought
A Callin’ Coincidence Pounding

It uses the Carnegie Mellon Pronouncing Dictionary to match rhymes and syllable stresses. It always ends up sounding like total nonsense but follows the meter and rhyme scheme very strictly. It doesn’t use any kind of natural language processing to make the order of words similar to English. For this piece, I made it tend to pick words with more syllables first, then fill in the gaps with shorter words, which gives it a certain sound.
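The meter-matching half of such a tool is straightforward to sketch against the same CMU dictionary, here via Allison Parrish’s pronouncing library (pip install pronouncing) rather than whatever Marmor’s program uses. The greedy longest-word-first fill and the u/S-to-stress-digit mapping are my assumptions, and rhyme handling is omitted:

    # Sketch: fill a u/S meter pattern with random CMU-dictionary words.
    import random
    import pronouncing

    def fill_meter(pattern):
        line = []
        while pattern:
            # Try long words first, echoing the "more syllables first"
            # tendency described in the interview.
            for n in range(len(pattern), 0, -1):
                # u -> unstressed (0); S -> primary or secondary stress.
                regex = "".join("0" if c == "u" else "[12]"
                                for c in pattern[:n])
                words = pronouncing.search_stresses("^%s$" % regex)
                if words:
                    line.append(random.choice(words))
                    pattern = pattern[n:]
                    break
            else:
                return None  # nothing fits; a real tool would backtrack
        return " ".join(line)

    print(fill_meter("uSuuSuuSu"))
    print(fill_meter("uuSuuS"))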

In this piece, after a melody’s rhythm is selected, a corresponding poem is made to match. Because the melody rhythms were written with no consideration for the lyric rhythms, the meter and rhyme scheme of the lyrics aren’t really apparent. I’ll probably write another vocal piece in the future that more deliberately exploits this tool.

B: One of the most distinctive things about your instrumentation was the weak sound coming out of the keyboards. What were those sounds?

JM: I love the sounds that come with consumer keyboards. They’re beautiful and funny. The choice to use layered synthesizer sounds along with an otherwise acoustic ensemble was made purely because I like how it sounds.

B: Did you know early on what your instruments would be, did the computer determine this, or did you decide after you had a composition?

JM: Picking the instruments was a back and forth between an idea I had for a sound and figuring out which of my very talented musician friends were available. The specific sounds used by the synthesizers were chosen randomly from a list that I ranked intuitively.

B: Describe the process of working with the musicians. Were there any challenges?

JM: We only had four rehearsals and never had the whole group together until the first note of the concert was played. It’s an hour and 20 minutes of pretty non-idiomatic music. I was very happy with the way the concert came out, but there were a few trainwrecks.

B: How did the performance end up at the Ontological-Hysteric Theater?

JM: Composer Travis Just curates a monthly experimental music series at the Ontological Theater. He is familiar with my music from the period when we were both graduate students at CalArts in Los Angeles.

Here’s a link to a recording of the performance:

http://www.archive.org/details/April102010

One hour 17 minutes

Erin Flannery – Voice
Laura Bohn – Voice
Quentin Tolimieri – Synthesizer/Voice
Phil Rodriguez – Synthesizer/Voice
Will Donnelly – Synthesizer
Matt Wrobel – Acoustic Guitar/Voice
Ian Wolff – Acoustic Guitar/Voice
Katie Porter – Clarinet
Beth Schenck – Alto Saxophone
Jason Mears – Alto Saxophone

These are links to two pieces that were made using the same basic approach and software:

http://www.archive.org/details/CattleInTheWoods

9 minutes, featuring the fantastic New Zealand violinist Johnny Chang

http://www.archive.org/details/TheStoneNovember15th2010junesFirstBirthday

45 minutes, featuring the incomparable clarinetist Katie Porter

Next weekend: Double Andriessen, Sonic Groove 20-Year (!)