Intra Muros is a project born during the quarantine nights of March 2020, through a remote collaboration (Milan > Turin).
The eleven sound-objects, improvised day by day in a sort of creative ping-pong between the minds of Matteo and Enrico, overcome the physical barriers of their confinement and recompose themselves into apparently stable, yet remarkably articulated sound sculptures.
During the quarantine, the apartments became simultaneously protective shells and static prisons that vibrate, perturbed by ambivalent feelings of calm and tension.
The tracks symbolize these movements of the soul, in constant balance between fever pitch and the search for a new interior peace, giving life to immersive and iridescent sound textures.
Kyma Ambiences vol.2 contains 100 abstract and evolving ambiences created with our beloved Kyma sound design workstation.
Due to the increasing demand for spatialized content for game and VR/AR projects, this inspiring collection has been designed entirely as a spherical representation of sound, natively in 3rd Order Ambisonics: all the Sounds were generated in Kyma and mastered in Pro Tools through a custom analog setup.
Create a truly immersive and dystopian atmosphere with the 2nd volume of the classic Kyma Ambiences sound effects library.
Mel Wesson received a multi-platinum award for his contributions to The Verve's 'Urban Hymns' album and the anthemic single 'Bitter Sweet Symphony'. It was this album Hans Zimmer was listening to in the New Year of 2000 when he spotted Mel's credit and invited him to work on the score for Mission Impossible 2.
Since that time Mel has created his own niche within the movie score genre as 'Ambient Music Designer'. This area of atmospheric sound has woven its way through many of Hans' scores, including Ridley Scott's 'Hannibal' and 'Black Hawk Down', Christopher Nolan's 'Batman' trilogy and, most recently, Ron Howard's 'Rush', among others.
[photo courtesy of Mel Wesson]
Matteo Milani: Mel, how did you get involved in working on motion pictures?
Mel Wesson: Aside from a few piano lessons as a child I had no real formal music training; I learnt most about my approach to music at Art College... I learnt about keeping an open mind, freedom of expression, things that have stayed with me all my life. I spent my youth playing with bands, touring and recording. I started getting offers of session work, one thing led to another... I'd known Hans Zimmer since my late teens and he got me involved in a few projects with his mentor Stanley Myers, as well as some of his own early musical projects. Hans and I drifted apart for a few years when he moved to LA and I got more involved in working with recording artists, but eventually Hans called out of the blue and asked me to get involved in the score of 'Mission Impossible 2'. That was the start of a new chapter for me.
MM: Would you like to describe your collaborations with composers James Newton Howard and Hans Zimmer?
MW: They're both very different. Hans prefers everything to be in a constant state of flux, whereas James has a more structured approach. I seem to have an approach that works with them both; I'm pretty flexible, but I do like to make progress through a picture, so often I'll leap ahead of wherever they are in the movie and then feed ideas back at them. James often likes to arrive at a cue and find something in place: it might be a soundscape consisting of many cues, or perhaps a rhythmic idea, maybe even a map. Hans tends to assimilate my ideas in his own way; a lot of the time things come together at the mix as opposed to the more traditional point of composition. That works for him, as he never considers a cue finished until it's in the theatre!
MM: Please explain your role as 'Ambient Music Designer' when working with the composer and other members of the sound editorial.
MW: Well, I really should know the answer to that one but I'm still working on it! The AMD role came about through a few conversations with Hans on 'Hannibal'. People read things into that title, but it's really just a phrase we cooked up to give me a credit on that movie and it stuck! The important thing is I just didn't want to go down the orchestral route (despite what people may think, I DO write conventional music occasionally!) and Hans gave me the opportunity to experiment with sound in a way that crosses the boundaries of music and sound design. A lot of my work is to do with atmospheres, creating a presence, emotions, sometimes through rhythm too. I create bespoke sounds but that's only a part of what I do. I use those sounds to work with picture, that's the real challenge here. The word 'Ambient' can cover a lot of ground, in the same way the word 'Orchestral' covers a range of options... Back to the question: occasionally a composer will have something specific in mind, but a lot of the time I'm left to my own devices and we see what occurs... I enjoy the freedom; it would be rather pointless for everybody if I didn't play a creative role.
MM: How do you deal with the everlasting collision between sound effect and music?
MW: It's all noise... some noises work together, some don't. I try not to distinguish too much between violins and helicopters, everything has its place... On Ron Howard's 'Rush', for example, there are the most amazing-sounding race cars... they're the sound of the movie, the heart and soul of the story, yet they'll carve through any music... which is fine by me, I'd far sooner listen to them than an orchestra! Most recently I've been playing with the band 'Node', with producers Flood and Ed Buller, plus electronic artist Dave Bessell. What we do is all about sound, and it's all live too, no overdubs, no mix process. The music we're creating crosses a lot of boundaries; there really is no conflict between sound and music in that environment, and no one would draw a line between what we do and music. For me it's the ideal band to play with, and I'm very excited about our album.
[photo courtesy of Mel Wesson]
MM: The sounds you've done for 'Inception: the App': are these totally synthesised, or is there any usage of reprocessed actual sounds?
MW: Nearly all of my work on Inception, the score and therefore the App, was based on reprocessed sounds, mostly from one sample session, but you'd struggle to recognise the source of the sounds in about 90% of cases. It was a session of natural instruments resonating through a piano soundboard, so we had an amazing amount of harmonics to work with, plus the room itself. Then I took my ideas back to Air Lyndhurst and replayed them into the hall and re-recorded them again... we got some amazing material out of that session. There was some live recording in the app too... like rain on my studio window that became part of the environment. There's been a Dark Knight App since that one too, by the same team, but based around a more interactive way of playing with the score in the real world.
MM: For 'Inception: the Soundscape', you're credited as composer. It's your first art installation work. Do you have any additional anecdotes to share?
MW: That was played on the walk between Grauman's Chinese Theatre in Hollywood (the US Inception premiere) and the after-show venue, where we played a live concert. It was a constructed walkway a few hundred metres long, and a really interesting project. I used a lot of ambience from the film, plus some sounds I got from sound designer Richard King, things like the train coming down the centre of the road, waves on the beach, etc. It was a lot of fun, I'd love to do more... but really more long-term, exhibition-based... and yes, I'm open to offers! Venice Biennale anyone?
MM: Tell us about the software and hardware production tools in your arsenal...
MW: My studio's based around Logic Pro, for now at least... I love working in Logic but it's way overdue an update, so I'm looking at alternatives. So... within the computer world I have a few favourite toys: MetaSynth has served me well in the past, and Reaktor is probably still my favourite plugin; it's just so forward-thinking, flexible, original and, most importantly, it sounds good... that's everything you want from a plugin. I use a lot of sound manipulation devices, things from iZotope, Audio Damage, etc. I like the Waldorf plugins too, the Wave 3 is very good, but the most exciting thing I've seen in a long time is the PPG WaveGenerator; it's an iPad app, and again it sounds great and it's innovative. I sometimes process sounds through my modular or the Synthi, but I don't use a lot of outboard FX, although I've started using a Kemper Profiling Amp. Obviously that's designed with guitars in mind, but there are no rules. I use a lot of vintage synths too, partly because nothing sounds as good and partly because the interface makes you think differently. I'm fortunate to have a large analog modular system that's part Moog 3C, part PPG 300; other 'go to' synths include a Synthi A and a PPG Wave 2. It's not just about the sound, though; it's how these devices make you think. I just bought a guitar too. I don't really play, but again, I come up with things I'd never think of on a keyboard or computer.
[photo courtesy of Mel Wesson]
MM: What are your favourite titles you've worked on, and what kinds of sounds did you design for them?
MW: Well, all my sounds are organised within folders, so there's 'BatFlaps' for the Batman trilogy, 'Rage' for the Joker, 'Ice Brass' for Inception, 'Gotham Metal', 'BanePane'... it's a long list. Of course we had 'MetaPiggies' on Hannibal!
MM: Can you reveal the 'making of' a very special sound effect or sound sequence from 'Green Lantern'?
MW: Ah, Green Lantern... we had a vast library of sounds; unusually for me, all of my work was created with synthesisers. There were the Green Energy sounds for the good guys, and the sound of Yellow Light, which was the sound of fear... We did record the extraordinary voice of Grant Gershon, and I processed his voice to create various sounds and textures. We had so much fun on that movie, although it was pretty much universally slated, which was a shame, as the team was great, as was the experience.
MM: How do you deliver your sound elements to the mixing stage?
MW: I master everything in quad, but really all I do is give options; the real work in surround takes place on the dub stage. They have far more to deal with in terms of effects and dialog beyond the music, and that team are experts in bringing it all together. My work is all part of the score; it's a common misunderstanding that I deliver stand-alone sounds, but for me the sounds are only a part of the process. Once I have my palette I start to use those sounds to compose with, so the work is delivered as mixes and stems, the same as any other cue would be. That said, Chris Nolan is very keen to have my material as wide as possible, as what we called 'Melements'; he likes to swap things around on the dub and experiment. For me it's always an exciting time and I love the way the process is constantly evolving.
MM: What projects are you currently working on?
MW: Well... I'm a rarity in this industry in that I actively dislike having more than one project ongoing at the same time... in an ideal world at least! This year got off to a crazy start. I've been working with a couple of old friends, Trevor Morris on 'Olympus Has Fallen' and Ramin Djawadi on 'Pacific Rim'; these have been more electronic arrangements and ideas as opposed to ambient work. I've also worked on 'The Secret Life of Walter Mitty' with Teddy Shapiro, plus a few bits for Henry Jackman on 'Captain Phillips' and Chris Bacon's score for 'Bates Motel'. The important thing is that at times like that everybody respects everyone else's projects and space... it's not like I have a team around me; I can't and won't delegate, which makes things difficult at times. A nice Margaux helps though...
DREAM is an EU-funded project aimed at preserving, reconstructing, and exhibiting the devices and the music of the Studio di Fonologia Musicale della Rai di Milano.
During the 1950s and 1960s, this was one of the leading centres in Europe for the production of electroacoustic music, together with Paris and Cologne.
During the project, part of the Studio's equipment (oscillators and non-linear filters) has been virtually reconstructed and will become part of the permanent exhibit at the Museum of Musical Instruments in Milan.
The aim of this one-day symposium is to present to the public the main results of the DREAM project, including the installation that recreates part of the original devices of the Studio di Fonologia di Milano della Rai, as well as the book “The Studio di Fonologia – A musical journey”, edited by Maria Maddalena Novati and John Dack, and published by Ricordi.
The event is comprised of two parts.
The morning will be devoted to the workshop Conservare, mostrare, interagire: per un museo da toccare [Preserve, exhibit, interact: for a tangible museum]. During the workshop, DREAM researchers and invited speakers will discuss applications of novel interactive technologies to museum exhibits, with particular reference to music and musical instruments museums.
The afternoon session will present the results of the DREAM project to the general public, through the movie Avevamo 9 oscillatori [We used to have 9 oscillators], additional talks by DREAM researchers, and two musical performances that make use of sonic materials produced at the Studio di Fonologia.
Hologram Room is the first bundle of the abstract Sound Design Collection produced by sound designers and composers Matteo Milani and Federico Placidi (aka U.S.O. Project).
These two gigabytes of “ready to use” original sound elements are designed to help you sweeten and enhance your sound productions. The whole library is organized into eight main folders: Active Drones, Alarms, Blips, Buttons, Communications, Ignitions, Telemetries, Transitions. It provides a selection of out-of-this-world drones and ambiences, futuristic sound effects and electronic tools.
We spent hours composing, editing and mixing these categories in Symbolic Sound Corporation's Kyma and Avid Pro Tools. All of the audio files have been embedded with metadata for detailed and accurate searches in your asset management software.
A note about the mastering: the library has not been peak-normalized but loudness-normalized, based on the recommendation of the European Broadcasting Union. What does that mean? When you audition the samples, they will play back at the same loudness level through your monitors.
This work has been made possible with the aid of LevelOne, a program developed by Grimm Audio.
EBU TECHNICAL provides all kinds of information about the EBU R128 loudness recommendation. The official R128 documents and guidelines can be found online, as well as introduction papers and videos.
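To illustrate the idea, here is a minimal sketch of loudness (rather than peak) normalization. EBU R128 specifies a −23 LUFS target measured with K-weighting and gating; this simplified example substitutes a plain RMS level for the K-weighted measurement, so it only demonstrates the principle of matching perceived level instead of peaks:

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of float samples in [-1, 1], in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def loudness_normalize(samples, target_dbfs=-23.0):
    """Apply a single gain so the block's RMS level matches the target."""
    gain = 10 ** ((target_dbfs - rms_dbfs(samples)) / 20)
    return [s * gain for s in samples]

# A full-scale 440 Hz sine has an RMS level of about -3 dBFS;
# after normalization its RMS level sits at the -23 dB target.
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
quiet = loudness_normalize(sine, -23.0)
```

Two files normalized this way play back at comparable loudness even if their peak levels differ, which is exactly the auditioning behaviour described above.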
Observation’s Pod is a new section on Synesthesia Recordings where we share most of our research output with the collective. It works as a permanent laboratory where the products of our creative and experimental activity with sound are freely open to the public in raw form.
These three small works are based on the improvisational exploration of a specific configuration of the modules of a Serge Modular synthesizer.
The synthesis model implemented is complex feedback frequency modulation, as shown in the artwork image: two oscillators recursively modulating each other, building a dynamic non-linear system that exhibits chaotic behaviour.
In order to obtain a high timbral complexity, the waveforms generated by each oscillator are dynamically varied through the use of waveshaping modules.
All the material was created using only the patch described above, without any filter or other editing/mixing procedure.
The three short works were created in order to intuitively explore a dynamic system, combining its output by analogy with three well-defined poetic abstractions.
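As a rough digital analogue of the patch described above (this is not the Serge configuration itself: the frequencies, modulation indices, and the tanh waveshaper are invented for illustration), two cross-modulating oscillators with waveshaping can be sketched as:

```python
import math

def cross_fm(duration=0.01, sr=48000, f1=200.0, f2=301.0,
             index1=800.0, index2=500.0, shape=2.5):
    """Two sine oscillators, each modulating the other's frequency
    (recursive FM), with tanh waveshaping varying each waveform."""
    phase1 = phase2 = 0.0
    out1 = out2 = 0.0
    mix = []
    for _ in range(int(duration * sr)):
        # Each oscillator's instantaneous frequency is offset by the
        # other's (waveshaped) output: the recursive modulation path
        # that makes the system non-linear and potentially chaotic.
        phase1 += 2 * math.pi * (f1 + index1 * out2) / sr
        phase2 += 2 * math.pi * (f2 + index2 * out1) / sr
        out1 = math.tanh(shape * math.sin(phase1))
        out2 = math.tanh(shape * math.sin(phase2))
        mix.append(0.5 * (out1 + out2))
    return mix

samples = cross_fm()
```

Raising `shape` pushes the waveshaper harder, adding harmonics to each oscillator before it modulates the other, which is where much of the timbral complexity comes from.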
Richard Beggs, sound designer and re-recording mixer, has worked in his career with directors like Francis Coppola, Ivan Reitman, Mel Brooks, Barry Levinson, Kathryn Bigelow, Sofia Coppola (including her latest "Somewhere" - Golden Lion for best picture at the Venice Film Festival 2010), and Alfonso Cuarón, among others.
He won an Academy Award for Best Sound for Apocalypse Now (1979) and has received many Golden Reel Award nominations as sound designer and mixer for Harry Potter and the Prisoner of Azkaban (2004), The Chronicles of Narnia: The Lion, the Witch and the Wardrobe (2005), Children of Men (2006).
Matteo Milani: I'm very curious about your sound education background, and how you entered the movie industry. Can you tell me about it?
Richard Beggs: I have no formal film or audio education. I had no professional aspirations in this regard. I'm self-taught. During the sixties I was, in the parlance of the time, a "hi-fi bug" and had a consuming passion for classical music that tended to the modern and contemporary. These two interests helped prepare the way for what would eventually become a career in film sound. My vocation and training was to become a painter. My formal education consists of a bachelor’s and master’s degree in fine arts.
During the mid-sixties, my enthusiasm for audio technology and music led to a period of volunteer field recording for KPFA radio, a listener-supported radio station in Berkeley, California. By this time my hi-fi system had come to include an inordinately expensive, semi-pro 1/2 track recorder and four AKG condenser microphones. I recorded lectures and concerts, whatever the station assigned me and I had time for. I really had little understanding of what it was I was about, but KPFA was a nonprofit public radio station and so it dealt with my learning curve, which was often painful for both of us. It was during this period that I was asked to record a chamber music concert independently of the radio station. A fee was mentioned by the musicians and I realized that people would actually pay me to indulge my passion for music and audio. I could support my painting and my family by working as a recording engineer on a part-time freelance basis and devote the bulk of my time to painting. Or so I thought.
I was fortunate in having a technical mentor during those years who taught me the audio engineering principles and practices that allowed me to eventually become professionally viable. My skills with a soldering iron and audio schematics developed along with razor blade editing and microphone technique. Business is not my strong point so client relations and a crude, business sensibility were slow in developing. I ran more on enthusiasm than common sense. A small, garage recording studio in my home was the result. Rock and roll was how I spent most of my time along with field recordings of chamber groups and the occasional symphony orchestra but over time advertising work and documentary scores became increasingly important. These more “fiscally viable” clients often provided more creative latitude and problem-solving opportunities along with a habit of regular payment for services. The idea that there were clients to whom the idea of paying was not an annoying hindrance to creativity but instead a professional obligation was a revelation. The issue of painter versus sound designer remains unresolved.
I eventually moved the studio from the home garage environment to a more professional location. In 1972, together with a partner, I leased the old Kingston Trio Studio in the basement of the Sentinel Building in San Francisco's North Beach. Over several years time, the studio developed a clientele. A part-time audio engineering job I held at Cal State University Hayward’s music department soon gave way to full time demands at my studio. My partner went to NY to pursue fame and fortune, and I hired my first employee. It was my first experience with delegation and surrendering complete control of what till then had been a totally personal, one man undertaking. It was difficult and still is.
In 1974 the director Francis Coppola bought the Sentinel Building and moved in with his film production company, American Zoetrope. In 1976 Francis purchased a controlling interest in my studio and we formed Beggs/AZ. The partnership moved the studio overnight from 4-track capability to 16 and eventually 24 tracks. The studio continued with its original clientele for several years. Along with a lot of head-banging rock and roll and other album projects, I was doing work for Sesame Street and national and local radio and TV advertising.
When post for The Godfather Part II started, the studio began a fitful transition from music and commercials to feature film. In 1977 Francis bought my interest in Beggs/AZ and put me to work on Apocalypse Now, my first full-tilt foray into film. He talked to me in abstract generalities about the sound of the picture and the areas he would like me to work in. I was so enthused I generated a slightly corny but earnestly felt 20-minute "tone impression" mood piece that expressed my idea of what Apocalypse Now might sound like. It was an unsolicited demo. I don't remember him ever saying anything about it. The studio was four-walled for 2 years and I became fully committed to film sound.
[YouTube - “Heard Any Good Movies Lately? The Sound Design of Apocalypse Now” featurette]
MM: What did Apocalypse Now represent in your sound career?
RB: Apocalypse Now marked the beginning of my career in film. Few are fortunate enough to have a picture like that as a first film. Apocalypse Now set the bar very high. That experience and its fallout opened many doors and created many opportunities. My hope is that I’ve been able to follow through and have lived up to my initial success. The entire experience was transformative. Apocalypse was like dropping acid and it went on for two years.
[Beggs was not only the music re-recording mixer, he also recorded the narration (by Martin Sheen) and was one of the six synthesists who “realized” the score by Coppola and his father, Carmine. The first ghost helicopter, the classic sound effect that opens the film, was created by Richard Beggs on a Moog synthesizer.]
MM: How did you fabricate the "Ghost Helicopter" sound of the title sequence?
RB: There were many helicopter sounds in the film. They came in two forms, organic and synthesized. I created all of the synthesized helicopters from scratch with pure synthesis. These synthesized elements could stand on their own or were used to sweeten "real" effects. They would often transition between real and synthesized. The synthesized ceiling fan in the title sequence that becomes the Huey landing in downtown Saigon is a synth to real transition.
At the time, Moog synthesizers had no storage capabilities. No samples. Sounds were created from raw waveforms or white noise or played through a synthesizer and manipulated but not stored. I made the helicopter sounds in real time performance, utilizing white noise, envelope generators, filters and a clock driven sequencer to create the modulated, rhythmic patterns of the rotors.
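By way of illustration only (this is not Beggs's actual Moog patch: the rotor rate, filter cutoff, and envelope curve here are invented), the same recipe of white noise, a filter, and a clock-driven envelope can be sketched digitally:

```python
import math
import random

def ghost_rotor(duration=0.5, sr=48000, rotor_hz=8.0, cutoff=900.0):
    """White noise through a one-pole low-pass filter, amplitude-gated
    by a clock-driven decaying envelope to mimic a rotor's rhythmic thump."""
    random.seed(0)                              # deterministic noise source
    a = math.exp(-2 * math.pi * cutoff / sr)    # one-pole low-pass coefficient
    lp = 0.0
    out = []
    for n in range(int(duration * sr)):
        noise = random.uniform(-1.0, 1.0)
        lp = (1 - a) * noise + a * lp           # filtered white noise
        t = (n * rotor_hz / sr) % 1.0           # clock phase, restarts each "blade"
        env = math.exp(-6.0 * t)                # envelope retriggered by the clock
        out.append(lp * env)
    return out
```

Slowing `rotor_hz` or sweeping `cutoff` over time would correspond to riding the sequencer clock and filter knobs in a real-time performance, as described above.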
[R. Beggs and R. Thom discuss and examine the sound design of the opening shot of the movie]
[When Willard finally arrives at Kurtz's compound, he is met by an army of natives who shoot a barrage of arrows at him. How was this sound created?
Richard Beggs recorded a stick of willow that could be whipped through the air. It was then dumped onto a twenty-four-track multitrack and repeated over and over, creating layers of sound on different tracks staggered out of sync: twenty-four tracks of whoosh, whoosh, whoosh, whoosh, whoosh, placed in the space.]
MM: Is your job like a calling, a passion you had when you were young?
RB: At its best, my job is a calling, to use your phrase. The irrational dedication and inhuman schedules of the early days have been muted a bit, but if I'm working on a good picture with a good crew and have that fundamental creative identification with the project, it's a work of passion. Maybe my approach is more tempered and measured these days, but I still feel that rush when a scene’s working, when I've managed to connect with the essence of the project and I think to myself, "I made that happen." In this business, a potential downside to this consuming creative commitment is that it contributes to making one an easy target for financial exploitation. Enough exploitation and passion falters. Art and finance are uncomfortable bed-mates.
MM: You're a pioneer, together with other Bay Area "Mavericks": you were in the right place at the right time to do the right thing. Do you consider yourself lucky to have lived through that time in movie history?
RB: Yes, I consider myself lucky. My introduction to the industry was served with a silver spoon. I was brought into the business, I didn’t seek it. I was in league with a brilliant, forward-looking director who placed tremendous stock in creativity and innovation. Francis made a point of demanding these virtues in the creation of his sound tracks. My colleagues and co-workers on Apocalypse Now were accomplished, dedicated players. It was a first film experience that defied convention. I had no standard of comparison and no preconceptions. I knew though, that I was in an exceptional situation with a group of exceptional people. All movies couldn’t be made like this. To my knowledge, multi-track capabilities and skills that I had learned in my music studio had never been applied to feature film production before. To me it made perfect sense and I plunged in, way over my head.
My first serious bruising was when I was nonchalantly presented change notes. I had never seen a change note in my life. When I saw my first and it was explained to me, the pain began. All the fluidity and speed of the 24-track recording format suddenly bogged down in the quicksand of conventional film post-production. The advantages of my native working format, which made sprocketed film technology appear like a vestige of the stone age, were now accompanied by serious liabilities. I won’t bore you with the gory details. It turned out that the technology was at a turning point and it was worth pushing and pursuing despite the inefficiencies. Over the next several years the system came to work and pay dividends in its own clumsy, maddening way. If I had come to the project with a practical, traditionally grounded background in film post I may have dismissed the entire notion out of hand. I would have missed out on a preliminary phase of what ended up as the digital revolution. How many sound editors could prepare, mix and audition 22 tracks simultaneously in their cutting room in 1977? Few if any. You could do more than cut sound, you could design sound, assuming a certain amount of patience.
Another advantage of entering the industry in the ‘70s was the exposure to both the pinnacle of analog technology and the arrival of digital and all its possibilities. I know where I came from and I know what I’m doing… most of the time. Film-based analog protocols still exist in the digital film world, most with good reason. The reasons were obvious in the analog era. Now they are often an abstraction, and their relevance and necessity are a harder sell for a digital generation of filmmakers new to the craft.
MM: Is there a technological achievement that has radically changed your artistic approach?
RB: No, there isn't any. I could speculate on how my work has been affected by new technologies, but I don’t think it’s that interesting. I feel my approach has always been fundamentally the same. It’s been the same as my approach to painting. My interactions with the track and the process by which ideas occur and develop is very similar to my interaction with the canvas as a painter. Light, dark, mass, line, contrast, color, texture, objects advance or recede, these visual properties all have sonic equivalents. These qualities, when used successfully, contribute to an emotional or expressive state that advances the story.
MM: Recently, have you preferred to work in your own editing room and then transfer your material to a bigger facility?
RB: My first cutting room was a recording studio control room. I could not make a useful distinction between cutting and mixing. The technology was there so I did both from the beginning. Initially, mixing in the “small room” was constrained by the inability to efficiently provide flexible and accessible elements to the mixing stage. I was 2” tape, they were 35mm mag. Eq and reverb were major issues. Over time, advances in DAW technology have made it possible to increase the amount of mixing operations that can be performed successfully before going into the “big room,” without limiting mix stage decisions. This is becoming an industry trend that is not without complication.
As a sound designer/mixer, I’m unwilling in general to delegate mixing decisions to the sound editorial department. Mixing decisions are often creative, interpretive decisions that I prefer to make by myself or in concert with fellow mixers and the director. These decisions are made with all the elements of the track on line simultaneously and reasonably accessible. The creative and efficient preparation of tracks and presenting them in the best possible light with the least restriction of flexibility during the final mix, is a good thing. My worry is that our tools allow us to slip into a formulaic, albeit “efficient” methodology. The technology that originally offered a tremendous expansion of creative potential, can, in the wrong hands, become the provider of the pre-assembled, “it mixes itself” mix. Where the line is drawn between mixing and editing is an open question. For some the distinction is very clear, for some it’s a blur, for others the line no longer exists.
I like to think that I’m hired to make a unique contribution with a singular point of view to a film that has a singular point of view. I was once referred to as “the last of the art school sound designers.” While not intended as a compliment, I liked the “art school” part. The notion of being the last, if true, would bode ill for the kind of films I like to see and work on.
MM: Do you have a lot of work proposals? How do you choose the next work? Do you have a trusted team you rely on?
RB: My project calendar is usually determined by directors with whom I have sustained relationships, and those projects seem to arrive in cycles, the “when it rains, it pours” syndrome. I recently had a period of about five years that was breakneck, project to project. The last two years have been relatively quiet. Part of that time was a voluntary hiatus on my part; the rest was waiting for the phone to ring while doing smaller scale and independent or personal projects. Time has allowed me to become more selective as well. That’s a luxury I still have to learn how to enjoy and use well. Now things appear to be picking up again. We’ll see. There are no guarantees.
[The Making of Ghostbusters - 1984]
MM: The first motion picture 'sound' work I remember from my childhood is the funny Ghostmobile "siren" you created for Ghostbusters (1984). Would you like to describe how you did it?
RB: I made the “Ectomobile” siren sound from a leopard snarl that was edited, pitched and otherwise processed, all in the analog domain. An Ampex AG-440 recorder, ¼ inch tape, VSO (Variable Speed Operation), a razor blade and an Editall splicing block were the tools.
[the distinctive Ectomobile's siren wail - via hprops.com]
Several years ago, a “technically sophisticated” young man visited my old Zoetrope studio. I was there, digging around in archive sound effects, and he heard something I was playing that I had created for Ghostbusters and asked what “program” I used to create the effect. He was incredulous when I told him “none.” He said he hadn’t thought it possible.
MM: How did you manage to design the fantastic sound-world of Children of Men? Did the screenplay help you imagine the sound before gathering and editing the sound itself?
RB: The sound idea of Children of Men is the result of a long collaboration with Alfonso Cuarón, the director. In part, its sound was almost foreordained. I read the script and felt in tune with its sensibility. The script provides the particular set of motivators that determine the style of the track. I will scribble ideas in the margin as I read. That’s the beginning, then I dive in. The most interesting and successful ideas evolve organically, often after long and sometimes tedious manipulation of the practical, prosaic aspects of the track. Sometimes ideas will present themselves because of peculiar juxtapositions of disparate elements or a particularly unique recording of a sound. There is a balance between calculated preconception and the more spontaneous act of recognizing a possibility that, through serendipity, presents itself. For me, this is an intuitive and somewhat vague area not lending itself to logic or explanation.
One of the germinal ideas for the track of Children of Men was Alfonso’s desire to have the lead character experience the internal ringing sensation of tinnitus after an explosion. This occurs early, during the title sequence; I expanded on the idea and it became a recurring motif throughout the film. This idea wasn’t planned, it grew out of the context of the project.
A major part of the equation is a crew that is responsive to, and excited by, the possibilities of the track. When the materials generated by editors who are in tune come together, either in the cutting room or on the dubbing stage, the possibility of interesting and effective sound solutions is a given. This was absolutely the case with Children of Men.
MM: Did you have the chance to travel around the world with your equipment to record your custom sound library, in an era when commercial libraries were not available?
RB: I’ve had the privilege of being able to do a fair amount of my own effects recording, often in unique or bizarre locations. I pride myself in using as many production location specific effects as possible. While no picture of mine is 100% original recordings, some come very close. Lost in Translation, Marie Antoinette and Somewhere are good examples. I spent a lot of time in Tokyo, Versailles and the Chateau Marmont hotel in L.A. and its environs compiling libraries for those pics. The bottom of Vancouver Harbor in British Columbia in a three-person submarine was interesting, as was two weeks in Jodhpur, India. The advantage in doing my own recording is that I know what I want and it’s very specific, but just as often it’s a mood or sensation, an abstraction. These are ideas that are difficult to communicate. Being on the location stimulates ideas and realizations that wouldn’t occur to me by simply watching the cut. The subtleties are difficult to express to another recordist. I often don’t recognize what I need until I hear it. I can’t put it on a list for someone else to gather. A large portion of my library consists of sounds un-contemplated. A location reveals itself in unexpected ways and provides many pleasant surprises.
An example would be a scene in Marie Antoinette. It’s an evening garden party that would appear to require a straightforward, conventional background. While hanging around on the set during a night shoot I wandered off into the field behind Le Petit Trianon. What do I hear? A very loud chorus of frogs. Frogs like in a swamp. If I hadn’t heard it, it would never have occurred to me. My preconception had settled around the formality and refinement of Louis XVI’s court, and there was no call for a raspy, exotic frog chorus. Apparently, they’ve been there since before Versailles was built. Of course they play a part in the scene. They impart a nervous, contemporary energy to the proceedings because they don’t conform to our expectations.
When I haven’t been able to do my own recordings I’ve had the talents of others to rely on. Randy Thom’s field recordings for Rumble Fish years ago were a marvel. He could see (in this case hear) beyond the obvious. John Fasal in L.A. was key in recording the Ferrari for "Somewhere".
MM: I loved your sound work on Somewhere: I especially appreciated the reflections of the car engine under the bridges on the highway. Do you have a favourite sound or scene? Would you like to briefly describe how you built it?
RB: The opening and closing scenes of Somewhere were the first I constructed. Those two sequences dictated the arc of all the sound that came between. The two portentous, low frequency pulses near the end when the car goes under viaducts were created by manipulating the rush of a bullwhip through the air. I created the extended musical drone that underlays the Ferrari sound in that scene from multitrack elements of "Love Like a Sunset" which were supplied by Phoenix, the band who provided the score. Johnny's lonely helicopter ride is also a composed, manipulated section. Those are the only "special" sound effects in the picture. The opening sequence of the Ferrari on the track is the product of an inordinate amount of editing of recordings I made on the set during the filming of the shot. Essentially all of the effects in the picture are from recordings made at scene specific shooting locations. I spent two days and nights in Johnny's suite at the Chateau Marmont to acquire the hotel library. Susumu Tokunow's production tracks carried much of the intimate, pointillistic sound in many scenes, with some help from foley. It's not an elaborate, showy track but it has its satisfactions. More than enough to make me happy.
MM: What's your thinking about the transformation of the independent industry in the last 30 years, due to the work of Lucas, Pixar, Dreamworks, Zaentz?
RB: I don’t have any particular thoughts on the “transformation of the independent film industry”. At least nothing that hasn’t already been said, and whoever said it, said it better than I would anyway. I will say that I don’t think independent film is as healthy as it was during its halcyon days. For the most part, I believe industry values and practices have moved in a negative direction. But I do hold out for the possibility of another “golden age.” There are significant films being made by young (and some not so young) filmmakers that I believe spring from a new, more relevant sensibility than the bulk of the industry’s “product” would indicate. Let’s hope.
Thanks Richard, it has been a pleasure. — Matteo Milani
"The RAI Studio of Musical Phonology is the outcome of the encounter between music and the possible new means of analyzing and processing sound" - Luciano Berio
More than fifty years after the birth of analog magnetic recording, on the 17th of September 2008 the inauguration of a new space dedicated to the RAI Studio of Musical Phonology - "musical instrument of the 20th Century, extension of human thought" - took place at the Castello Sforzesco's Museum of Musical Instruments in Milan. The event was made possible thanks to the MITO International Music Festival, in collaboration with the Civic Museum of Musical Instruments and RAI.
Maddalena Novati, RAI Radiophonic Production's musical consultant and responsible for the Phonology archive - thanks to the decisive contribution of Doctor Massimo Ferrario, Director of the RAI TV Production Centre (Milan) - was able to transfer all the Studio equipment from RAI Turin to the Milan headquarters. This is the very first plan for the recovery, storage and refurbishment of electrophonic musical instruments.
Maddalena Novati does in fact describe this niche of the Museum as the "20th Century lute shop". The idea of conceiving this space as a single instrument in its entirety is moving: so many experiences are enclosed in those devices that it is still possible to perceive the residual energy that characterized the entire handcraft process of sound-writing. The Milan Studio of Musical Phonology, designed by physicist Alfredo Lietti, was created in June 1955 by Luciano Berio and Bruno Maderna at the RAI headquarters in Corso Sempione 27. During those years, Milan was on the verge of becoming a pivotal point in the international post-war electroacoustic music scene, through a new expressive language which was a synthesis of the concrete and electronic experiences happening in Europe at the Studio für Elektronische Musik (WDR) in Cologne and at the Groupe de Recherches Musicales (GRM) in Paris. Among the experimental electronic music productions at the Studio of Phonology, we must mention works such as Visage by Luciano Berio, Notturno by Bruno Maderna, Fontana Mix by John Cage and Omaggio ad Emilio Vedova, the only entirely electronic work created by Luigi Nono.
"...in the opaque Milan of the '50s, Berio and Maderna found a hostile, apathetic environment, while opening the Phonology Studio. In a completely different situation from the newborn Studio in Cologne, the two masters, built their ideas on the strong basis of their French experiences, through a different technical method which was free and imaginative. Creatively, it has been the most relevant experience in the whole Old continent…" - Giacomo Manzoni
I always close my eyes and try to jump back in time, into that Studio, imagining the noises, dialogues and sounds coming out of the loudspeakers: living and breathing the miracles that happened in that distant age. We now have the chance to touch first-hand what I had only ever seen in photographs and videos, and what so many protagonists of that time lived through (eminent musicians like Luigi Nono, Giacomo Manzoni, Aldo Clementi, Henri Pousseur, John Cage). In fact, in room XXXVI of the Museum of Musical Instruments, thanks to a glass structure designed by the architect Michele De Lucchi, the backs of the eight famous frames containing the circuits are open for everyone to enjoy, allowing spectators to get to the heart of the analog technologies with a 360° view. Based on original pictures and videos from the time, the atmosphere of those years has been recreated.
Further information about the sounds that characterized the second half of the last century is available to the public of devotees and researchers, thanks to four computer stations with multimedia applications and a digital library of photographs, footage, sound examples and scores (curated by the LIM Laboratory of the Università Statale di Milano).
The Studio, a heritage fundamental to understanding electroacoustic music writing, was at first experienced by composers as a means of emancipating themselves from traditional instruments, with its 9 oscillators, noise generators, various modulators, filters and the Tempophon (a device with rotating heads that made it possible to vary the playback duration of a previously recorded sound while maintaining its original pitch).
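The Tempophon's trick of decoupling duration from pitch survives today in granular time-stretching, where short overlapping "grains" of the recording stand in for the device's rotating heads. As a rough illustration of the principle only (not the Tempophon's actual mechanism; the function and parameter names below are hypothetical), a minimal Python/NumPy sketch:

```python
import numpy as np

def granular_stretch(signal, stretch, grain=1024):
    """Change duration by `stretch` without changing pitch: read short
    windowed grains from the source at the original rate, but advance
    through the source more slowly (stretch > 1) or faster (stretch < 1)
    than we advance through the output, then overlap-add the grains."""
    hop_out = grain // 2                    # output hop: 50% overlap
    hop_in = int(hop_out / stretch)         # input hop, scaled by stretch
    window = np.hanning(grain)              # fade each grain in and out
    n_grains = max(1, (len(signal) - grain) // hop_in)
    out = np.zeros(n_grains * hop_out + grain)
    for i in range(n_grains):
        g = signal[i * hop_in : i * hop_in + grain] * window
        out[i * hop_out : i * hop_out + grain] += g
    return out

# A one-second 440 Hz tone stretched to roughly twice its duration,
# still at 440 Hz (each grain plays back at the original speed):
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
stretched = granular_stretch(tone, stretch=2.0)
print(round(len(stretched) / len(tone), 1))  # → 2.0
```

The rotating-head Tempophon did the analog equivalent: several heads scanning the same stretch of tape repeatedly, each pass acting as one "grain".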
Those were the times of technicians in white lab coats, yet one particular person changed this professional profile: Marino Zuccheri. Born on the 28th of February 1923, he was hired by EIAR in 1942; the following year he left his job because of the war, but was re-hired a few years later by the newly founded RAI.
"... I like remembering Marino in his Phonology Studio, master among masters, master of sound among masters of music, because sound held no secrets for him, since he was trained in auditoriums while working for the Radio together with the most famous directors of the time. He would always recall how he began working in Phonology by chance, but it is certain that it wasn't because of chance that he continued over the years, considering he was the sole keeper of the Studio from when it was created (1955) until it closed down (1983)." - Giovanni Belletti, "Marino Zuccheri in Fonologia", 2008
He had no obligation to offer advice, contributions or suggestions, yet musicians would follow his instructions on how to realize their compositions: without him, much of the last century's music would never have been born.
"... All the protagonists of the Neue Musik passed through the Studio, and it is fair to say this: many of them were in Milan on scholarships and had to present a final composition at the end of their term, and sometimes the stay had not been long enough to master the nine oscillators' secrets, so the great Marino Zuccheri would put together an acceptable composition with a few touches; thus many of electronic music's "incunabula" are his works, and not of those who signed them." - Umberto Eco, La Repubblica, 29 October 2008
It was an amazing adventure for many years, until 1983 to be precise, the year of his retirement (Zuccheri passed away in Milan on the 10th of March 2005).
"Marino Zuccheri's demise was a great loss, not only for what he meant for Contemporary Music, but moreover for what he still could have done: he was to be involved in an important project by RAI, in order to catalog tapes (as himself defined the work) that would have given us a fundamental technical, artistic, musical and cultural insight on the history of the Phonology adventure (another of his own definitions), but not to restore the sound itself (which could be done by others when needed), rather to revive the ideas and technical intuitions that made possible the creation of that sound: Marino (along with the composers) was the only one that could help us!" - Giovanni Belletti, "Marino Zuccheri in Fonologia", 2008
"[..] Two of the first electronic works in my record collection - Berio's Visage from 1961, and John Cage's Fontana Mix from 1958 - were created there with Zuccheri. Even today, both of these pieces sound impressively vivid and dynamic, and what we should now recognise is that such qualities should be attributed to the technician as much as to the composer. [..] Zuccheri appears to fit the profile: Parete 1967, composed for painter Emilio Vedova for the Italian Pavilion at Montreal Expo, 1967, was his only known work. [..] Luigi Nono was his first choice as composer, but Nono's schedule prevented that, so Zuccheri stepped in to assemble a 30 minute continuous work using previously recorded sounds built up from long intersecting tape loops. Zuccheri's modest opinion of himself was that he was no composer. Certainly there's very little sense of form in Parete 1967, but the dramatic contrasts of harsh noise, perhaps sourced from piano strings and struck metal, and shifting, modulating drones suggestive of vocal choruses, have something in common with the ritualistic side of Iannis Xenakis, or the best horror movie soundtracks. To the regret of his label Die Schachtel, who have produced another of their sumptuous limited edition vinyl releases here, Zuccheri died before seeing the publication of his only record." - David Toop, The Wire, 2008
After the Studio closed down, the equipment was disassembled and exhibited in Venice for a short period, on the occasion of the temporary exhibition Nuova Atlantide, organized in 1986 by the Biennale (with the collaboration of Roberto Doati and Alvise Vidolin), and in Milan for I piaceri della città - Iconografia delle sensazioni urbane in 2001, where Il risveglio di una città was evoked through music thanks to the futurist composition of the same name by Luigi Russolo, and to Ritratto di città, the first electroacoustic composition of the Fifties (voice and tapes by Luciano Berio and Bruno Maderna, lyrics by Roberto Leydi), created in order to convince the RAI management to found the Studio.
At the end of the Eighties there was still no awareness of what the Phonology Studio had meant historically, to the point that all the documentation had been deposited (packed and cataloged) in a storage room at the RAI Museo della Radio in Turin, together with all sorts of disused equipment such as video cameras, tape recorders, record players and microphones, with no plans for it to be restored or rebuilt.
Thanks to Maddalena Novati's interest, in 1996 the Studio devices were displayed in Turin's Music Salon and in 2003 they were brought back to RAI in Milan and located in a room on the fifth floor which is adjacent to the one where the Phonology Studio originally was. On June 20, 2008 they were officially transferred to the Castello Sforzesco.
It was a great pleasure for me to interview Maddalena Novati to talk about what the unique experience of the Milan Institute of Musical Phonology was and meant, and to understand what consequences and developments this process of recovery could lead to.
Matteo Milani: Maddalena Novati, why, in your opinion, did the Studio become a myth?
Maddalena Novati: In 1955, owning 9 oscillators tuned to different frequencies, as opposed to the single one in Cologne, was like having a "whole orchestra" at your disposal that could generate many sounds simultaneously, as in a chord. This way you already had a handful, a palette of sounds, which drastically cut production times. The human ear was the only judge of whether a sound was good or not; after many attempts and mistakes, interesting tapes would be recorded and stored, and this process would continue until a result was achieved. Berio would broadcast each new composition from other studios around Europe on the radio in order to spread the repertoire. The 11 TV episodes of C'è Musica e Musica (1972), on the history and ways of making music, sound fascinating with explanations by Luciano (Berio, ed.). He was a great teacher on top of being a great composer, a person who could communicate and bring music to large audiences.
Berio carried out an intense teaching activity in the United States and Europe, offering composition courses at Tanglewood (1960 and '62), the Dartington Summer School (1961 and '62), Mills College in California (1962 and '63), Darmstadt, Cologne, Harvard University and, from 1965 to '72, the Juilliard School of Music in New York. From 1974 to '79 he collaborated with IRCAM in Paris. Berio's "Un ricordo al futuro - Lezioni americane", published by Einaudi, is a beautiful book which collects the lectures on aesthetics he gave in the US.
MM: When and how did the Phonology Studio's period of decline begin?
MN: After the Sixties, at the beginning of the Seventies, radio was no longer the core of research (no longer being an experimental medium). Computers came along, and research centers moved elsewhere: the large mainframes were owned by universities, and musicians depended on the physics and science departments. The computing machines had to run day and night, and only at night, when the physicists left their workplaces, could musicians take their place to perform the calculations for their compositions. Radio was no longer involved, as the broadcasting media did not yet use computers and the two paths had diverged. Due to the lack of updates to its equipment and technology, the Phonology Studio was attended by composers less and less. Moreover, the departure of some big names (Berio moved to IRCAM in Paris, Luigi Nono to Freiburg, Maderna passed away prematurely in 1973) contributed to an inevitable decline.
Active Conservation
It is again Maddalena Novati who tells us that today the archives (original master copies) of the Phonology Studio hold 391 ¼-inch audio tapes with one or two tracks, as well as one-inch tapes with four tracks, plus 232 digital copies from acquisitions (copies of works coming from other studios and centers for electronic music, and recordings of concerts or performances by the main authors and interpreters who frequented the Studio during its active years). Since 1995 the custody of the tapes has remained her primary objective, together with their cataloging and digitization in cooperation with Casa Ricordi (the oldest and most important Italian publisher), the central Nastroteca of RAI and the Mirage Laboratory of the Gorizia University.
No real decay of the tapes has occurred, thanks to the good quality of the BASF tape stock, which was selected over other brands like Scotch or Agfa. Survival of the audio documents is possible only by separating them from their physical support and periodically transferring them to new carriers. The second digital transfer, at 96 kHz/24 bit, is currently under way at the Centro di Produzione TV and Produzione Radiofonia of RAI Milano, in cooperation with the Mirage laboratory.
As for restoration, the most common task is the removal of noise from the analog tapes; depending on the tape's condition and the specific nature of the music, other interventions are decided on a case-by-case basis. As a start, one performs an initial search to detect the possible existence of other copies of the same music: it is then possible to exploit the best parts of each copy and, with suitable techniques, improve the effectiveness of the cleaning work.
In many works of electronic music, the original tape contains scratches and joints glued with adhesive tape whose glue loses its adhesive properties after a few years. Therefore it often happens that one has to re-paste various pieces of tape before copying it to a new support. On some tapes there are detached magnetic fragments, while on others parts of the material or protective film may have melted and deposited on the playback head, blocking the progress of the tape. On top of these mechanical problems there can be electromagnetic problems, or problems due to the type of equalization used during recording and to the fact that the oldest tapes could only hold a proportionally smaller amount of magnetization.
Reconstructing the laboratories
As I have already had occasion to point out, the production of electronic music is very often not linked to a specific instrument, as happens with traditional music, but rather to an ensemble of equipment commonly called a system. Hence the conservation of a single element of a system does not fully testify to the modus operandi of a musician in a given period. Without doubt, the most effective solution is to reconstruct a lab where one can reproduce all the phases of the production process of a musical work. In Cologne, for instance, they reconstructed and brought back into operation an electronic music lab with the same configuration it had in the Fifties. Something similar is happening at the museum in The Hague for the studio of the Institute of Sonology of Utrecht University as it was in the Sixties. In Paris, at the Parc de la Villette, they are setting up a large section dedicated to electrophonic musical instruments, up to the real-time computer music experiences of the Eighties.
[Vidolin A., "Conservazione e restauro dei beni musicali elettronici", in Le fonti musicali in Italia - Studi e Ricerche, CIDIM, year 6, pp. 151-168, 1992]
[the italian version of the article is also available @Digimag 55]
[listen: Ricordo di Marino Zuccheri | ITA - via Radio3 Suite]
"Synthesisers interest me for two reasons. One is because they do introduce new sounds into the world, and the other is because in working with them, I learn a lot about how sounds are made up. The DX7 has been very useful for that; I use it almost as much as a research tool for seeing how a sound is made. [...] My solution has been to make the equipment unreliable in various ways. I used to like the old synthesisers because they were like that. My first synthesisers - the EMS, the AKS and the early Minimoog - were all fairly unstable and they had a certain character. Character has really to do with deviations, not with regularity. And then, of course, I used to feed them through all sorts of devices that also had a lot of character: that were themselves in various ways unpredictable. The interaction of a lot of these things started to create sounds that had an organic, uneven sound to me."
[Music Technology interview, 1988 - pdf | via EnoWeb]
Quotes from some of the conversations recorded for the BBC4 Arena documentary series, dedicated to the life and work of Brian Eno:
"One of the important things about the synthesiser was that it came without any baggage. A piano comes with a whole history of music. There are all sorts of cultural conventions built into traditional instruments that tell you where and when that instrument comes from. When you play an instrument that does not have any such historical background you are designing sound basically. You're designing a new instrument. That's what a synthesiser is essentially. It's a constantly unfinished instrument. You finish it when you tweak it, and play around with it, and decide how to use it. You can combine a number of cultural references into one new thing. [...] They are a very new instrument. They are constantly renewing so people do not have time to build long relationships with them. So you tend to hear more of the technology and less of the rapport. It can sound less human. However! That is changing. And there is a prediction that I made a few years ago that I'm very pleased to see is coming true – synthesisers that have inconsistency built into them. I have always wanted them to be less consistent. I like it that one note can be louder than the note next to it."
On the naming of things:
"My interest in making music has been to create something that does not exist that I would like to listen to, not because I wanted a job as a musician. I wanted to hear music that had not yet happened, by putting together things that suggested a new thing which did not yet exist. It's like having a ready-made formula if you are able to read it. One of the innovations of ambient music was leaving out the idea that there should be melody or words or a beat… so in a way that was music designed by leaving things out – that can be a form of innovation, knowing what to leave out. All the signs were in the air all around with ambient music in the mid 1970s, and other people were doing a similar thing. I just gave it a name. Which is exactly what it needed. A name. A name. Giving something a name can be just the same as inventing it. By naming something you create a difference. You say that this is now real. Names are very important."
On the end of an era:
"I think records were just a little bubble through time and those who made a living from them for a while were lucky. There is no reason why anyone should have made so much money from selling records except that everything was right for this period of time. I always knew it would run out sooner or later. It couldn't last, and now it's running out. I don't particularly care that it is and like the way things are going. The record age was just a blip. It was a bit like if you had a source of whale blubber in the 1840s and it could be used as fuel. Before gas came along, if you traded in whale blubber, you were the richest man on Earth. Then gas came along and you'd be stuck with your whale blubber. Sorry mate – history's moving along. Recorded music equals whale blubber. Eventually, something else will replace it."
The University of Illinois Experimental Music Studios were founded in 1958 by Lejaren Hiller and were among the first of their kind in the western hemisphere. Faculty members and students working in these studios have been responsible for many developments in electroacoustic music over the years, including the first developments in computer sound synthesis by Lejaren Hiller, the Harmonic Tone Generator by James Beauchamp, expanded gestural computer synthesis by Herbert Brün, the creation of the Sal-Mar Construction by Salvatore Martirano, and the acousmatic sound diffusion and multi-channel immersive techniques researched and applied by Scott Wyatt in electroacoustic music and performance. Today the facility continues as an active and productive center for electroacoustic and computer music composition, education and research.
EMS - Experimental Music Studios | 50th Anniversary CD Set
Carla Scaletti: excerpt from Cyclonic (binaural mix of a multichannel version, 2008) [.mp3 - 8:22]
Taking its name from the rotational motion associated with powerful meteorological events, Cyclonic was inspired by the awesome power of the weather in east central Illinois and plays at the edges between events as recorded, events as experienced, events as remembered, and events as imagined.
Pitches were derived from the frequencies in the National Weather Service alert signal, and the concept of a Cycle is abstracted in various ways ranging from an endlessly accelerating pan to endless (cyclic) increases in the pitches of synthetically generated sirens and filterbanks processing synthetic wind.
Apart from rain, thunder, and wind sounds recorded in downtown Champaign, the entire piece was synthesized in Kyma.
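Cyclonic itself was realized in Kyma, but the "endless (cyclic) increases" in pitch it describes correspond to a well-known synthesis idea, the Risset glissando: several partials spaced an octave apart all sweep upward, each wrapping back to the bottom of the range under a spectral envelope that fades it out at the top and in at the bottom, so the rise never resolves. A hypothetical Python/NumPy sketch of that principle (not Scaletti's actual Kyma patch; all names and parameters here are illustrative):

```python
import numpy as np

def risset_glissando(dur=8.0, sr=22050, n_partials=6, f_low=55.0):
    """Endlessly rising glissando: n_partials sines, one octave apart,
    sweep upward through an n_partials-octave range, each wrapping back
    to f_low; a raised-cosine envelope over the (cyclic) log-frequency
    position hides the wrap, so the ear hears a perpetual rise."""
    n = int(dur * sr)
    t = np.arange(n) / sr
    octaves = n_partials                     # total range in octaves
    out = np.zeros(n)
    for k in range(n_partials):
        # cyclic position of partial k in the log-frequency range
        pos = (k / n_partials + t / dur) % 1.0
        freq = f_low * 2.0 ** (pos * octaves)
        amp = 0.5 * (1 - np.cos(2 * np.pi * pos))   # ~0 at the wrap points
        phase = 2 * np.pi * np.cumsum(freq) / sr    # integrate freq -> phase
        out += amp * np.sin(phase)
    return out / n_partials
```

Because each partial's amplitude passes through zero exactly where its frequency wraps, the jump is inaudible; the same trick, run downward, yields an endlessly falling siren.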
I came to Illinois because of a book I found in the Texas Tech University library: Music By Computers, edited by Heinz von Foerster and James Beauchamp. I had known for several years that I wanted to make music with computers, so when I found this book and noticed that most of the authors were at the University of Illinois, I immediately applied and was accepted into the doctoral program. I arrived a week before classes to take the entrance exams and asked Scott Wyatt if I could help him get the studios ready for the fall semester, and was delighted when he put me to work soldering patch cords.
Illinois was an environment where virtually everyone--whether faculty, student, or staff--was actively experimenting and creating software, hardware, and music. Faculty members did not act as mentors but as colleagues who, by actively engaging in their own creative work, served as examples of artists questioning the status quo and postulating alternative solutions. Sal Martirano was just learning to program in C in preparation for his YahaSalMaMac and would hold after-hours seminars on combinatorial pitch theory in his home studio, where we read articles by Donald Martino over glasses of wine and freshly sliced watermelon, seated right next to the SalMar Construction: one of the earliest examples of a complex system for music composition and digital sound synthesis. Herbert Brün had written the beautifully algebraic SAWDUST language and was using it to compose I toLD YOU so. Jim Beauchamp had just finished the PLACOMP hybrid synthesizer, was doing research in time-varying spectral analysis of music tones (and, contemporaneously with Robert Moog, had built one of the first voltage-controlled analog synthesizers: the Harmonic Tone Generator). Sever Tipei was writing his own stochastic composition software, and John Melby was using FORTRAN to manipulate/generate scores for Music 360 (the predecessor to Csound). And in an abandoned World War II radar research loft perched atop the Computer-based Education Research Laboratory (home of PLATO), Lippold Haken and Kurt Hebel were designing their own digital synthesizer (the IMS) that eventually evolved into a microcodable DSP (prior to the advent of the first Motorola 56000 DSP chip). The CERL Sound Group's LIME software was among the first music notation/printing programs; I saw it demonstrated at the annual Engineering Open House and asked if I could use it to print the score for my dissertation piece Lysogeny.
I practically begged Scott Wyatt to let me work as his graduate assistant in the Experimental Music Studios and, thanks to Scott, Nelson Mandrell and I had an opportunity to help build a studio: Studio D (at that time, the Synclavier Studio, now the studio where Kyma is installed), as well as experience the Buchla voltage controlled synthesizer and the joy of cutting & splicing tape. All of these experiences plus my explorations of Music 360, PLACOMP, and the CERL Sound Group's IMS and Platypus microcode, fed into the creation of Kyma. When my adviser John Melby won a Guggenheim award and took a year's leave of absence, I had the opportunity, as a visiting assistant professor, to teach his computer music course and to establish a Friday seminar series on computer music research.
Because the School of Music is part of a world-class university, Illinois afforded me opportunities for study and research that I would not have found elsewhere. It meant that I could play harp in the pit orchestra for an Opera Theatre production of Madame Butterfly and, the next day, run an experiment in Tino Trahiotis' psychoacoustics lab course in my minor, Speech and Hearing Science. It meant that I could go on tour with the New Music Ensemble led by Paul Zonn or David Liptak, that I could study mathematics, that I could do spectral analyses of the harp, that I could also get a degree in computer science and learn Smalltalk from Ralph Johnson after finishing my doctorate in music, and it meant that I could do some of the early work in data sonification with Alan Craig at the National Center for Supercomputing Applications. These experiences, along with the computer science courses in abstract data structures, computer languages, automata, and discrete mathematics, also fed into Kyma.
For me, Illinois was the perfect environment for exploration, and my work with Kyma is a direct outgrowth of those experiences as well as a continuation of several threads of interest that can be traced back to my graduate work at the University of Illinois.
While I was a graduate assistant, my office mates were Chuck Mason and Paul Koonce; the day that Chuck defended his dissertation and accepted a position in Birmingham, he taped a piece of notebook paper to the wall of our office with the heading "Famous Inhabitants of this Office" followed by all three of our names. I remember this act of optimism with great fondness, and I've heard that the list is still in the office (and that it has grown a lot longer by now).
Carla Scaletti, DMA in composition, University of Illinois; President, Symbolic Sound Corporation