
Friday, November 06, 2015

GRM pt.3: What made Syter original

(Continued from pt.2)

an excerpt from the booklet about the Syter system at INA - GRM | Archives GRM (CD 4) - by Daniel Teruggi

Looking back on it now, it is not easy to describe just how original the tool was for the time. It was at this time that the first "black boxes" were beginning to appear in the shops, at very high prices, enabling users to do a limited amount of processing to sounds. It was impossible to programme these devices. They had a certain number of parameters which were determined in advance and could be controlled using buttons and potentiometers to simulate analogue devices. At the same time, the first samplers were appearing, enabling users to record a sound just a few seconds long into memory and then to replay it, by transposing it and modifying certain parameters.
Syter was all of that and much more besides: processing and synthesis tools, rapid memories, the possibility of reading and recording sounds in real time on a hard disk (500 Mb, which was considered to be absolutely fabulous at the time) and above all, the possibility of reprogramming the processing tools and building new ones to your heart's desire, using a modular programming approach. Syter was the potential book of magic on the basis of which all the existing processing and synthesis methods could be rethought and new processes imagined and designed. All of this had a cost, and the price of the system was such that only one institution was able to buy it (although it was only about 10 times the cost of a synthesiser or a digital processing box at the time), and it required maintenance engineers to keep it running.
The originality came from the fact that processing methods that had come from studio work, and which had been used from the outset for GRM concrete music, were made readily available, without the need to learn programming languages or to have an assistant constantly on hand. In other words, the real originality was to be found in the algorithms and the interfaces.
Concrete music and the use of electroacoustic studios had stabilised and modelled a certain number of sound-related operations on the basis of perception-based concepts. For example, an extremely powerful analogue studio process, "micro-editing", involved cutting minute fragments of sound from magnetic tape (using scissors!), which were then stuck end to end to create a new continuity. This principle was very successfully applied by the deferred time software and by Syter, making it possible to reorganise the material into new coherent sequences. This became known as "brewing". But brewing is not the end of the story, because the difficulty lies in controlling the way the brew comes together. Graphical interfaces, which these days are at the very heart of all computer technology, but which at the time were practically unheard of, were used to visualise the sound and the control parameters, and there was even an interpolation screen for exploring the intermediary terrain between two processing states.
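The "brewing" principle described above - cutting a sound into minute fragments and splicing them back in a new order - can be sketched in a few lines. This is an illustrative reconstruction, not GRM code; the function name and fragment size are invented for the example:

```python
import numpy as np

def brew(signal, fragment_len, rng=None):
    """Cut `signal` into fragments of `fragment_len` samples and splice
    them back together in a random order -- a digital analogue of the
    scissors-and-tape "micro-editing" described above."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(signal) // fragment_len
    fragments = [signal[i * fragment_len:(i + 1) * fragment_len]
                 for i in range(n)]
    order = rng.permutation(n)
    return np.concatenate([fragments[i] for i in order])

# a one-second 440 Hz tone at 8 kHz, brewed into 50 ms fragments
sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
brewed = brew(tone, fragment_len=sr // 20)
```

Because the fragments are simply permuted, the brewed sound contains exactly the same material as the source, only reorganised into a new continuity; a real system would also crossfade the splice points to avoid clicks.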
Syter was a hit with musicians, both for studio work and instrumental work. In the studio, it could be easily built into the existing environment and breathed new life into the palette of processing possibilities. The system was essentially used for the processing of sound, meaning that the composer would record sounds and then modify them using the processing tools that were already built in, or by creating his own tools. In so doing, he would be faithful to the GRM tradition of processed sound, even though many hybrid processing techniques (between recorded sounds and synthesised sounds) provided entirely new kinds of sound. This material would then become (whether or not mixed with other sounds from other sources) the basis on which the composer would build his work.
Furthermore, at the time there was a unique relationship between composers and technical designers, who, thanks to the modular programming techniques and their user-friendliness, could quickly build the tools necessary for creative work. A number of models that were later to become GRM Tools were a result of this experimental relationship (in particular Doppler and Pitch Accum). Once they had been built, these algorithms were simple to implement, and joined the whole palette of processing tools available in the system (around 40 different algorithms were designed, along with 150 variants of these basic algorithms).

An approach founded in pedagogy

The philosophy of the GRM has always been that the creator should work independently on his own process of composition, without the assistance of anybody else. Most composers had the training necessary to handle the techniques, to understand and work the analogue studio, and only in very rare cases were they assisted by the technician-musician. There was such a great interest in the deferred time software and the Syter system, and they were aimed at musicians of such varied backgrounds, that a training programme had to be set up in order to help them come to grips with the different systems. Many of these composers came from an electroacoustic background; many others were not familiar with studio techniques but wished to become acquainted with them and develop projects bringing together instrumental and electroacoustic techniques. There were many other professionals from other fields: artists, radio and sound technicians, teachers or musicologists.
Week-long courses with small groups of trainees began to be organised 2 or 3 times a year, involving generally 6 to 8 participants (a total of 20 courses between 1985 and 1993). During these courses, the system was explained and the participants had the chance to experiment and play with sounds. The objectives of these courses were manifold: the first was to provide composers with the training necessary for them to be autonomous in their work and to enable them to develop a project.
Another objective was to test the system with users. Because it was such an innovative system, using original approaches with regard to algorithms and interfaces, it had to demonstrate that it was up to the task and that the composers could use it easily and efficiently. Around 120 people followed these courses, and 80 works were composed, sometimes several by the same composer.

From pedagogy to production and concert presentation

Many of the composers were attracted by the possibilities offered in terms of the real time processing of acoustic sounds, and embarked on projects that brought together live instrumentalists, real time processing and recorded sounds. Others used the system in the studio, for acousmatic works, either to complement other existing studio technologies and tools, or sometimes as the sole production tool.
I was personally involved in this pedagogical and production aspect of the Syter system for some ten years. When it was first presented in-house in 1984, everybody underlined the technical prowess it had taken to develop a system of that kind, but there was little enthusiasm on the part of the GRM composers, in light of the small number of existing algorithms and the fact that there were no instructions for use. I was fascinated by this approach and I proposed to Jean-François Allouis that I would help him in his project, in particular by explaining to composers how the system worked and by writing up a manual. We then organised the first training sessions in August 1985 and August 1986, and thereafter I took charge of the courses and production associated with the system and the development of variants of the instruments, in response to requests made by composers. I was therefore able to meet everybody who participated in the courses and I followed everything that was produced using Syter. I also played a great many works that involved Syter for the real time processing of instrumental sound (a task that we became particularly involved in with Richard Bulski, the system technician, especially for moving the system and setting it up for concerts).
I was able to gain an extensive and in-depth knowledge of how the system functioned, so much so that I was able to write my PhD dissertation on Syter (The Syter system, its history, development, musical production and implication in contemporary electroacoustic language, presented in December 1998 at the University of Paris VIII). I composed ten pieces on the system, some of which were with instruments, using the system only to produce electroacoustic sound, and others which were acousmatic, where a great deal of the sound creation work was done on Syter from start to finish. I began to move away from the system in around 1993, when it was beginning to become obsolete and when the first versions of GRM Tools were becoming available on Macintosh, designed and built by Hugues Vinet, who took much of his inspiration from the algorithms of Syter. I also realised, in 1993, that my life had been too wrapped up in the system, when a composer asked me seriously whether Syter was an acronym for System Teruggi!

inagrm.com/grmtools


Monday, June 04, 2012

OnMedia - GRM Tools Workshop

Milan - Saturday, June 9th 10:00 to 12:00 and 13:00 to 17:00

Fifth round of the cycle, 'European centers of research on sound and new media'

Focus FRANCE: Ina-GRM_Groupe de Recherches Musicales, Paris
Guest speakers: Emmanuel Favreau (Chief Engineer for the Development of GRM Tools), Francois Bonnet (Research, Teaching and Curating activities)

[Pierre Schaeffer and Bernard Parmegiani, courtesy of Ina-GRM]

Ina-GRM (Institut National de l'Audiovisuel - Groupe de Recherches Musicales) in Paris is a pioneering organization for musique concrète, acousmatic and electro-acoustic music, whose history dates back to the '50s, when it was founded by Pierre Schaeffer. Always engaged in creative activity, research, preservation and dissemination in the field of music and recorded sound, the GRM is an experimental laboratory unique in the world. In response to the expectations and needs of musicians, composers and sound designers, it is highly specialized in the development of a range of innovative tools to process and represent sound: the GRM Tools and the Acousmographe. The activities of music creation and production are mainly grouped at Studio 116 in the Maison de la Radio in Paris.

GRM Tools Workshop
Up to 10 participants.
Bring your own laptop and headphones.
The workshop is free and is held in Italian by Emmanuel Favreau along with Francois Bonnet. During the seminar, after outlining a brief history, Emmanuel Favreau will explore the possibilities of digital sound processing with the latest Tools developed by GRM; he will also deal with issues related to how the tools interact with the musician.
These notions will be illustrated by demonstrations in real time and musical examples from the repertory of electroacoustic music. After the workshop Francois Bonnet will present the lecture 'Music and sound in space, an introduction to multichannel compositions'. The meeting is open to the public, and will present the research activities of the Paris centre through spatialized listening sessions and projections.

For information and registration: [email protected]

OnMedia is focused - from September 2011 and throughout 2012 - on a wide range of events, including a series of conferences dedicated to the most important European centers for research on multimedia sound, art and technology, a series of workshops on subversive listening, presentations of international visual artists and authors who use different media and languages, and concerts and performances.

[More info: on-o.org]


Friday, March 09, 2012

GRM pt.2: the birth of a concept

Daniel Teruggi wrote an interesting article about the Syter system at INA - GRM in the booklet for Archives GRM (CD 4). This whole CD comprises works created with Syter.

"To mark and celebrate the thirty years of the INA (Institut National de l'Audiovisuel), the GRM (Groupe de Recherches Musicales) has chosen to bring together an exceptional set of five compact discs, illustrating some of its most remarkable musical archives. These original works, which are often previously unpublished or have been dispersed throughout a host of other publications, are important because of the originality and audacity they testify to in the second half of the 20th century. Some listeners will be pleased to see that there are a number of illustrious composers here who, in the 1950s, frequented the studio of Pierre Schaeffer, and others will discover numerous musicians whose enthusiasm enabled this innovative musical genre to last throughout the following decades."
Emmanuel Hoog, Chairman and CEO of the Ina

Daniel Teruggi - The time of real time

From the very beginning, music, whether vocal or instrumental, improvised or written, and up until the invention of recording processes, was listened to at the precise moment it was produced. The twentieth century changed all that: first of all with the appearance of recording media, which made it possible to listen to sound in a place and at a time other than those at which it was originally produced; then by the widespread use of electricity, which made it possible to invent new instruments and new ways of imagining and making music. Concrete music, electronic music, electroacoustic music, acousmatic music or contemporary electronic musics are all testimony to the same ambition: using electrical, electronic and computer-based technologies to invent the sounds of music. The invention of sounds is the invention of new forms of music, of new ways of looking at music, and is the logical consequence of the new opportunities that technology continues to provide us with. Musicians began to use computer systems a long time ago (1958) in order to synthesise sounds and to develop computer programmes that would enable them to combine sounds into musical works. Progressively it became possible to record these sounds, to process them or to hybridise them with synthetic sounds.
Musical computer technology did not develop quickly and was dependent on the way processors and data storage systems evolved; in 1958, a large computer in a research centre was necessary in order to produce a simple synthesised melody, which it was not even possible to record in the memory. These initial technical difficulties brought about the appearance of two concepts which can be described in a historical perspective, but which are often presented as if they were antagonistic: deferred time and real time. Deferred time described the way that the first computer systems were unable to produce an instantaneous result.
Between the moment at which the intention was expressed and the moment when its result became an audible phenomenon, there was always a certain lapse of time.
The user programmed a sound using software, defining its various parameters and timbre, and then the computer calculated the sound and, depending on the complexity of the calculation, produced the result after a given interval. The listening time was deferred with respect to intention time.
It was logical that the next technological objective was real time, a concept that describes the possibility of hearing a sound at precisely the same time as the intention to make it is expressed.
Moving over to real time required changes to the command tools. Deferred time was the result of a programming system whereby the user defined, using written language, the result he wished to obtain; moving over to real time made it possible to define the intentions instantaneously and to modify the result as it was being listened to.
Now, most sound production and generation systems work in real time, enabling the user, thanks to various interaction tools (keyboards, mice, screens) to control and modify the sounds created and heard. Nevertheless, in the field of musical creation, and for a relatively long time, this technological evolution was opposed on methodological grounds. Real time obliges the operator to act and react, depending on the result, in a way that is similar to that of the instrumentalist. For many composers, deferred time, because it separated the moment of conception from the moment of listening, created a distance that was necessary for reflection, a situation that is similar to instrumental composition, between the writing of a piece on paper, and its being played.

[Daniel Teruggi @ Sonic Acts 2010 - courtesy Rosa Menkman]

  
Deferred time and real time in the GRM

At the beginning of the 1970s, the Groupe de Recherches Musicales began to experiment with computer technologies. At the time, the Group already had 20 years of experience, a major repertoire of musical works, a tradition of profound reflection on music and perception as well as innovative technological research. Little by little, therefore, work was undertaken to look at the possibilities that this new domain, which was already strong in the United States, could offer in France, where it was comparatively little known. Two projects were to follow one another, and then coexist, between 1975 and 1993: the first, from 1975 to 1987, concerned the development of deferred time sound processing tools, the "Studio 123 software programmes", developments that are dealt with in CD 3 of the GRM Archives set. The second project, the Syter system, was a major technological development for musical computer programming, so original that its impact can still be felt in the development of processing tools today.
These two projects were vitally important in opening electroacoustic music up to composers from the instrumental world. Their main successes were to bring electroacoustic music out of the studio, making computer technology accessible without the need for programming skills, and making processing reliable and reproducible. The range of things it was possible to do to sound was considerably widened, using original and unheard-of sound processing techniques. These two projects marked a unique period for the GRM: the studios opened up to welcome composers with other ideas, concepts and points of view, the dialogue was rich and fruitful, and the understanding and analysis of the music being written there were enhanced.

The Syter project 

With the advent of computer technology, the first idea was to imagine a parametric control of machines using digital tools. For example, synthesisers, while remaining analogue in the way that the sound is generated, could be controlled by digital systems that would provide a greater precision in terms of frequency than traditional rotary buttons. It was thus that the first Syter was born, an acronym for Synthèse en temps réel (real time synthesis), the objective of which was to build up a digital synthesis system based on a set of oscillators, controlled in real time by specialised gesture-based access or by external signals.
The first prototype that was built was relatively simple, since its only function was to control, in real time, the movements of a sound source between a number of loudspeakers. This prototype, with its delicate control system and laborious programming, was used in concert on 16 March 1977 for the creation of Cristal by Francois Bayle.
The designer of this tool and of its following versions was Jean-François Allouis, an engineer who arrived at the GRM in 1974, who was fascinated by the potential of computer technology as applied to sound and music, and who had an uncanny inventiveness when it came to finding solutions to new problems and designing original systems. For this first concert, the acronym Syter became Système temps réel (real time system), and this was the starting point for a whole 5-year period of development during which Jean-François Allouis contributed to the setting up of the first GRM computer, oversaw the implementation of the deferred time processing system, built the Syter real-time sound processor and the input and output converters, developed programming software for the processor, built one of the first interactive real-time parameter control systems and programmed the first processing tools. In conjunction with computer scientist Jean-Yves Bernier and computer technician Richard Bulski, he needed to build and rebuild the system several times before the first full system was complete, in 1984. The system underwent very few modifications and additions subsequent to that. Eight systems were built and sold, up until 1988. The software continued to evolve up until 1989, in particular thanks to the impetus of Hugues Vinet, who designed a digital mixing tool, providing the system with all the functions of a studio. Two systems were in operation at the GRM until 1995, and around 100 works were composed in part or in whole using the system.


Tuesday, May 17, 2011

GRM Tools - pt.1: an interview with Emmanuel Favreau

by Matteo Milani - U.S.O. Project, May 2011 

GRM Tools is the result of more than 50 years of cutting-edge research and experimentation at the Groupe de Recherches Musicales de l'Institut National de l'Audiovisuel in Paris.
These plug-ins were realized by a succession of hardware and software engineers, who formulated the algorithms for the original GRM Tools in the 1990s. Over the years the GRM has focused on developing a range of innovative tools to process and represent sound.
The new GRM Tools Evolution is the latest powerful and imaginative bundle of new algorithms for sound processing. Three new instruments are available: Evolution, Fusion and Grinder. All three work in the frequency domain and provide powerful ways to manipulate audio in real time. I had the privilege of interviewing Emmanuel Favreau, software developer at INA - GRM. Here we go!


Matteo Milani: How many people are part of the GRM development team at INA?

Emmanuel Favreau: We are two people working full-time. Adrien Lefevre handles the Acousmographe; I'm on GRM Tools. We also regularly welcome students.


MM: Can you tell us a brief history of the GRM Tools from the origin until now?

EF: The first version of the GRM Tools was created by Hugues Vinet, who is now scientific director of IRCAM in Paris. This stand-alone version offered a couple of algorithms, using the Digidesign SoundAccelerator/Audiomedia III card. The user interface was made with HyperCard. When I arrived at the GRM in 1994, we took the decision to convert the processing available in the stand-alone version of GRM Tools into TDM plug-ins for Digidesign Pro Tools III. Treatments were rearranged, some modified, others abandoned. The original GRM Tools Classic bundle dates from this era. Later, the evolution of the treatments closely followed the evolution of technology: when processors became powerful enough for real-time processing, Steinberg introduced the VST architecture and Digidesign the RTAS Pro Tools format. And finally, we developed the ST version - Spectral Transform - when computer processing power allowed us to calculate several simultaneous FFTs in real time.

 
[...] Jean-Francois Allouis and Denis Valette pioneered the hardware development of SYTER (SYsteme TEmps Reel / Realtime System) with a series of prototypes produced during the late 1970s, leading in due course to the construction of a complete preproduction version in 1984. Commercial manufacture of this digital synthesizer commenced in 1985, and by the end of the decade a number of these systems had been sold to academic institutions.
Benedict Mailliard developed the original software for SYTER. By the end of the decade, however, it was becoming clear that the processing power of personal computers was escalating at such a rate that many of the SYTER functions could now be run in real-time using a purely software-driven environment. As a result, a selection of these were modified by Hugues Vinet to create a suite of stand-alone signal processing programs. Finally, in 1993, the commercial version of this software, GRM Tools, was released for use with the Apple Macintosh.
The prototypes for SYTER accommodated both synthesis and signal processing facilities, and additive synthesis facilities were retained for the hardware production versions of the system. The aims and objectives of GRM, however, were geared very much toward the processing of naturally generated source material. As a consequence, particular attention was paid to the development of signal processing tools, not only in terms of conventional filtering and reverberation facilities but also more novel techniques such as pitch shifting and time stretching.

[via Electronic and Computer Music by Peter Manning]


MM: About GUI - 2DController. What is the origin of this pioneering, intuitive, but simple performer-instrument "link"?

EF: This type of interface was widely used at the time of SYTER, during the '80s. It allowed us to regain an "analog" kind of access to a digital instrument. Indeed, even the manipulation of a slider with a mouse requires some attention (clicking in the right place, moving vertically or horizontally without a mechanical guide, etc.). With the 2D interface, the entire surface of the screen becomes a controller: you obtain a result as soon as you click, and precision of movement becomes necessary only if you want fine tuning.


MM: The mapping of parameters on multi-touch control surfaces free us from the use of a mouse and gives us an expressiveness never achieved before. What do you think of this new generation of controllers?

EF: Of course, these interfaces allow an overall, "analog" kind of control which is not possible with the mouse (although the 2D knob mode or the "elastic" mode are possible solutions to overcome the single-pointer limitation). Since the design of SYTER we have offered a system of "interpolator balls" to interpolate between different sets of parameters arranged in a two-dimensional space. Multi-point control of such a device is natural: we need both hands to shape and transform the space.
 "Interpol" control screen of SYTER
[via DAFX: Digital Audio Effects - Udo Zölzer, Xavier Amatriain]
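The "interpolator balls" idea - parameter presets placed in a two-dimensional space and blended according to the pointer position - can be sketched as inverse-distance weighting. The weighting law below is an assumption chosen for illustration, not SYTER's actual formula:

```python
import numpy as np

def interpolate_presets(xy, presets):
    """Blend parameter presets placed at 2-D positions, weighting each
    preset by the inverse squared distance to the pointer `xy`."""
    xy = np.asarray(xy, dtype=float)
    positions = np.array([p for p, _ in presets], dtype=float)
    values = np.array([v for _, v in presets], dtype=float)
    d = np.linalg.norm(positions - xy, axis=1)
    if np.any(d < 1e-9):            # pointer sits exactly on a ball
        return values[np.argmin(d)]
    w = 1.0 / d ** 2
    return (w[:, None] * values).sum(axis=0) / w.sum()

# two "balls", each holding a hypothetical (cutoff_hz, feedback) preset
presets = [((0.0, 0.0), (200.0, 0.1)),
           ((1.0, 0.0), (2000.0, 0.9))]
mid = interpolate_presets((0.5, 0.0), presets)  # halfway between the balls
```

Moving the pointer (or several fingers, on a multi-touch surface) continuously explores the terrain between processing states, which is the behaviour the "Interpol" screen exposed.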


MM: Is the SYTER still in use today in Paris?

EF: No, SYTER no longer works. It was composed of several elements (a PDP-11, large hard drives, a vector graphics terminal) which cannot be maintained today.


MM: Host-based tools vs. custom DSP engines: will there be a winner, or will they continue to coexist peacefully in the business?

EF: For the type of tool that we develop, the winner is clearly host-based processing. For very large sessions with dozens of tracks and hundreds of plug-ins, DSP engines are currently the best choice, but they could disappear with the spread of multi-core processors.


MM: How long did the Classic Bundle take to get ported from TDM to RTAS?

EF: It's hard to say, because it was not done directly. I first made the VST version, and then adapted it to RTAS. The algorithmic part posed no particular problems; the difficulties were rather on the side of the interface between the various plug-ins and hosts.


MM: How much research was needed to create the Spectral Transform bundle?

EF: The prototypes of the Spectral Transform were fairly quick to achieve. The basic algorithm is the phase vocoder, which has been well known for a long time. What took time was the interface design, the choice of parameters and their mutual consistency, and overall stability and robustness (i.e. avoiding audio clicks and saturation in the values of some parameters).
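The phase vocoder Favreau names is well documented; a minimal time-stretch sketch (illustrative only, not GRM's implementation) analyses overlapping FFT frames, reads them back at a different rate, and accumulates phase per bin so that pitch is preserved:

```python
import numpy as np

def pv_stretch(x, factor, n_fft=1024, hop=256):
    """Minimal phase-vocoder time stretch: analyse overlapping frames,
    read them back at rate 1/factor, accumulate phase per bin."""
    win = np.hanning(n_fft)
    frames = (len(x) - n_fft) // hop
    S = np.array([np.fft.rfft(win * x[i * hop:i * hop + n_fft])
                  for i in range(frames)])
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    steps = np.arange(0, frames - 1, 1.0 / factor)  # fractional read head
    phase = np.angle(S[0])
    out = np.zeros(n_fft + len(steps) * hop)
    for i, t in enumerate(steps):
        j = int(t)
        frac = t - j
        # interpolate magnitude between neighbouring analysis frames
        mag = (1 - frac) * np.abs(S[j]) + frac * np.abs(S[j + 1])
        # per-hop phase advance, unwrapped around each bin's frequency
        dphi = np.angle(S[j + 1]) - np.angle(S[j]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase = phase + omega + dphi
        out[i * hop:i * hop + n_fft] += win * np.fft.irfft(mag * np.exp(1j * phase))
    return out

# double the length of a one-second 440 Hz tone at 8 kHz
sr = 8000
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
y = pv_stretch(x, 2.0)
```

Stretching by a factor of 2 roughly doubles the duration while the dominant pitch stays put; the parameter consistency and click avoidance Favreau describes are the hard engineering work layered on top of this core.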


MM: What's the technology behind the bundles?

EF: If we leave aside TDM, where the processing code is written in 56000 assembly language, all plug-ins are written in C++. The processing code is fully compatible between Mac and PC. In addition, the portability of the user interface is guaranteed by Juce. All development is done on Mac; PC adaptation is virtually automatic and requires minimal work.


MM: A description of version 3 and its new features: what goals have you achieved during this long period of software development?

EF: Having redesigned the interface and rewritten all the code allowed us to add some new features: resizing the window, MIDI control with automatic learning, agitation mode.
Agitation is a generalization of Randomize: it can apply random variations, with amplitude and rate controls, to all parameters. Now all the GRM Tools are also available as standalone applications. This makes it easy to process individual sounds, run quick tests and become familiar with the treatments without having to use a host DAW or sequencer.
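As a rough illustration of what an agitation control of this kind might do - random variation with a depth and a rate per parameter - the sketch below is a guess at the idea, not GRM's algorithm:

```python
import numpy as np

def agitate(base, depth, rate_hz, duration_s, sr=1000, rng=None):
    """Drive a parameter with random motion: targets drawn every
    1/rate_hz seconds within +/- depth of `base`, linearly interpolated."""
    if rng is None:
        rng = np.random.default_rng(7)
    n = int(duration_s * sr)
    n_targets = max(2, int(duration_s * rate_hz) + 1)
    targets = base + depth * rng.uniform(-1.0, 1.0, n_targets)
    t = np.linspace(0.0, n_targets - 1.0, n)
    return np.interp(t, np.arange(n_targets), targets)

# agitate a hypothetical filter cutoff around 1000 Hz, +/- 200 Hz, at 5 Hz, for 2 s
cutoff = agitate(1000.0, 200.0, 5.0, 2.0)
```

The depth bounds how far the parameter strays and the rate sets how fast new random targets arrive, which matches the "amplitude and frequency" controls mentioned above.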


MM: How do you manage feedback from musicians and sound designers to improve sound quality and the graphical interface?

EF: The user feedback comes from various forums and from discussions with users and composers here at the GRM. In response to suggestions, plug-ins are changed, some features are added (but always in small numbers to ensure compatibility), or a new treatment is created that may ultimately prove quite different from the original suggestion. This is what happened with Evolution, which grew out of improving the freeze effect that could be achieved with FreqWarp.

[GRM Tools Evolution @ Qwartz 7 - courtesy Alexandra Lebon]


MM: What are the most efficient methods of applications against piracy?

EF: There is none. Whatever the methods, they will be bypassed one day or another. We must find a solution that is not too heavy for the users, while allowing a minimum of protection. We chose the system of Pace iLok because it is very common in musical applications. The recently announced changes should make it more flexible to use.


Thanks for your time Emmanuel, keep up the good work!


[...] Any transformation, no matter how powerful, will never equal or surpass synthesis, if it fails to maintain a causal relationship between the sound resulting from the transformation and the source sound. The practice of sound transformation is not to create a new sound of some type by a fortunate or haphazard modification of a source, but to generate families of correlated sounds, revealing persistent strings of properties, and to compare them with the altered or disappeared properties.
In synthesis, the formalisation of the devices and the resulting memorisable abstraction offer a stable set of references which can be easily transposed from one environment to another. In sound transformation, no abstraction of the available results is possible and neither is generalisation. The result of an experiment is always the product of an operation and a particular sound to which this operation is applied. The composer must be able to add to the sum of knowledge by reproducing a previously proven experiment.
What makes the wealth and functionality of a system is the assembly and convergence of the whole, its ability at any moment to answer the questions imagined. Specific tools built for a single experiment, no matter how prestigious, are sterile if they cannot be applied to other purposes. - Yann Geslin




References:

[Digital Audio Workstation by Colby Leider]
[sounDesign, a blog dedicated to the world of Sound and Audio Design]
[On GRM Tools 3, Part 1 - via designingsound.org]
[GRM Tools 3 review: a classic reborn]
[The GRM: landmarks on a historic route]
[GRM's current team]
[GRM Tools Store]

You can also read my interviews and reviews on Computer Music Studio (Italian only), a monthly magazine by Tecniche Nuove Editore. - Matteo Milani

Wednesday, October 07, 2009

An interview with Christian Zanesi, pt. 2

by Matteo Milani and Federico Placidi, U.S.O. Project
English translation: Valeria Grillo
(Continued from Page 1)


USO: Did mass-culture and consumerism create a lack of “listening attention”?

CZ: I think the ear is intact; it is the demand that has to be discovered and awakened. Even if there is a mass culture, it establishes a dominant way of hearing. But the border between the dominant thought and the adventure is very narrow; we can cross this border very quickly, we only have to open ourselves and to create bridges. There are also very serious issues here: for example, the dominant idea of rock - let's take this example - has built a peculiar "ear", and we can recover this ear by using experimental musics. So the world is not made of cases separated by inviolable borders; on the contrary, there is no break in continuity between sounds. I am very attentive to popular music, because I know that there are cases of recycling, of experimentation, and also of a genre that can be experimented with. We have to be attentive, and above all we have to avoid being dogmatic, fundamentalist or categorical. On the contrary, we have to observe what happens, try to detect in every practice, in every thing, something different from the variable design of music, the variable design of humans, and see how this can be transcended.



USO: What has been the role of physical space in the sound projection practice in GRM history?

CZ: In the 1950s, when Schaeffer invented this music, musique concrète, with his collaborators, they immediately asked themselves how to present it to an audience in a public space, in a concert hall. Schaeffer said: if we have to go to Carnegie Hall, what do we do? We cannot use a single loudspeaker on the stage; that would be a little ridiculous. So the dimension of the concert, of how to present this music to the public, was created little by little. The cinema industry also played an extremely important role: by the 1960s the Dolby company had already imagined surround sound, following the invention of CinemaScope, which forced them to position many loudspeakers across the screen and many more surround loudspeakers all around. So we have, at the same time, people's desire to experience sound on a bigger scale and the technology that built that ear. Today we continue this process, and people are happy to come to a concert, because we think a concert is going to be a particular and privileged moment, perhaps a show, and that there is a difference between experiencing music in a concert hall and at home; there is something really spectacular in that moment which justifies the concert. So there are many ways to approach the problem; we chose one, but there are others.



USO: Can you describe your compositional process? How do you go from the idea to the sound, to the elaboration, to the structure, to the shape?

CZ: This is a personal question; if you asked another composer, the answer would be different. I don't go from idea to sound, but the opposite: I go from sound to idea. That is, I need to find the sound that touches me for different reasons (if I had the time I would explain these reasons and give examples, but I don't), and inside this sound there is a potential; the idea is born from the sound, not the sound subjected to an idea. We can consider the sound like a kind of fugue subject, which means there is potential for development. To work, I need to find a sound which will be the base, because there is an emotional shock, something really imposing, which will be the source of my work. At that moment we can listen to the sound in depth, not for what it is, but for what it can become; we hear the sound as a promising element of a musical structure. But this is a personal answer.


USO: Do you work with stereophonic material and perform live spatialization, or do you prefer to design spatial trajectories/positions in the studio?

CZ: I mainly work in stereo, because my studio is not multiphonic, but I have a lot of experience with sound projection in auditoriums; with a stereo source you can give the illusion of a multiphonic work. I do both live concerts and acousmatic works. In the end you never know in advance what system you will have, where it is, the space, or the kind of concert you have been asked for; but what we experience nowadays is that live practices and acousmatic practices coexist peacefully, and they enhance each other depending on the place, the time and the project. Musique concrète was born in the 1950s, and after all these years it is about to create a new branch: the performance version, the live version of this music. This evolution started around the year 2000, since today's systems (computers) are much faster than 20 years ago and the sound-processing tools allow it. But these tools developed because there was a strong desire to develop them. No technology arrives spontaneously; we look back into history, we observe, and, as sometimes happens with dreams, there are dreams and conjunctions that make things possible. That is what is going to happen over the next ten years or so.


USO: What are your working tools for non-linear editing and sound signal processing?

CZ: At the GRM there are more or less all the programs (Pro Tools, Digital Performer), but there is a kind of consensus on the tools to use. We can quickly observe that today's programs are actually a direct-line consequence of the techniques of the 1950s: with tape or disc we could go from one sound to another, create discontinuity, superpose sounds and create verticality, harmony; we had mastery of time, and thus of counterpoint and polyphony; we could increase or decrease the volume. We need to check that a program meets these minimal conditions to be usable, and looking at today's programs, we have continuity. With any program we can do everything: montage, superposition, regulating the volume, regulating the space; in the end you do the same things as with a tape recorder. These are the fundamental operations of the world of music.



[Prev | 1 | 2 ]

Special Thanks:
Laure de Lestrange
Christian Zanesi
www.ina.fr/grm

[Listen to Magnetic Landscapes - Christian Zanesi | via deezer.com]

Related Posts: Daniel Teruggi (GRM, Paris): The novelty of concrete music.

An interview with Christian Zanési, pt. 1

by Matteo Milani and Federico Placidi, U.S.O. Project
English translation: Valeria Grillo


Christian Zanési is a French composer and head of musical programming at the Groupe de Recherches Musicales. Zanési studied music at the Université de Pau, then in Paris at the CNSMDP under Pierre Schaeffer and Guy Reibel. Since 1977 he has been a member of the GRM, in charge in particular of producing radio programmes for France Musique and France Culture. He was recently awarded the Special Qwartz at the Qwartz Electronic Music Awards (Edition 5).
We met Christian Zanési at 104 for the fifth Présences Électronique Festival, organized in co-production with Radio France, which explores the link between the concrete music of Schaeffer and new experiments in electronic music.
Since its founding in 1958, GRM has been a unique place for creation, research and conservation in the fields of electroacoustic music and recorded sound. The systematic analysis of sound material that Schaeffer proposed to his collaborators resulted in the publication of a seven-hundred-page reference book, the "Treaty of Musical Objects", published in 1966.
Michel Chion’s Guide To Sound Objects (PDF, English translation) is a very useful introduction to this voluminous work (via ears.dmu.ac.uk).
In this very complete essay, Pierre Schaeffer develops the main part of his new musical theory of the sound object. It is based on two principal ideas, making and listening, and explores the first two levels of this theory: typo-morphology and classification of sounds. He uses various disciplines such as acoustics, semiotics and cognitive science to demonstrate his explanation of sound and, in particular, the musical object within musique concrète.


U.S.O. Project: You often mention that sound is the prime matter. How can we “orient” and recognize sounds we’ve already heard, without the cause-effect relationship?

Christian Zanési: The sound has to be considered in a musical context. It is not about recognizing the sound; it is an expressive matter. Every composer, every musician has a kind of personal sound, a signature sound, so when he transcribes it into his work, into an artistic project, the idea is not to play a guessing game ("this is this sound, that is that sound"); there is a higher level, and that level is the level of musical relations, expressive relations. When you listen to a concerto for cello and orchestra, you are not pointing out at each instant "this is a cello sound"; you listen to music. And it is the same with sound.
 So, there is a second level which is a little more complex. It is true that some sounds evoke an image; from time to time there can be ambiguity, and this too is part of a rhetoric, of an ancient model. It is possible that this provokes a kind of curiosity and at the same time gets mixed into the expression. Composers are very strange.


USO: In the first 50 years of electronic music, has the absence of visual elements during performances limited or enhanced the listening experience?

CZ: Well, naturally we, at our festival Présences Électroniques, prefer to bring attention to the sound. Sometimes there are audiovisual experiences, but our nature, our physiology, makes the image take priority over the sound. Since we have developed a very sophisticated projection system, we prefer that kind of listening experience; it is more complete, we don't need anything else. It is an orientation, and it is one of the strengths of our festival: to offer artists the best possible sound projection system, the best definition, and we think that is complete. We are very music-oriented. I have often seen musicians doing an audio-video performance, and I have rarely been convinced; sometimes yes, but it becomes something else, we move to another level, and we want to explore the purely musical universe, sound and its programming.



USO: Can you give us a more in-depth definition of the concept of "l'écoute réduite" (reduced listening) by Pierre Schaeffer?

CZ: The first time that... well, in a few words it is difficult... In the 1950s there were no tape recorders; we recorded on discs. When the stylus reached the end of a disc, the last groove repeated indefinitely, at full modulation. On all vinyls today there is a closed groove with silence, to prevent the stylus from running off the turntable. On all the discs of that time, the equivalent of today's vinyls, the record ended on a closed groove, and Schaeffer listened to this phenomenon; everybody could hear it, and it was not silence. The fact that the sound repeated itself made him understand that we were finally listening to the sound itself. We would no longer say only "this sound is close or not, it starts this way"; he imagined that we could listen inside the recorded sound, that we could verify, validate, objectivize the phenomenon by hearing the particular characteristics of the sound: how it starts and how it ends, in which tessitura it sits, whether it is light or dark, loud, close or far. This is to objectivize listening; this is "l'écoute réduite".
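Schaeffer's closed groove is easy to simulate digitally: play the recording once, then keep re-reading its last fraction, the way the stylus kept retracing the final groove. A minimal sketch in plain Python (the function name and parameters are illustrative, not any GRM tool):

```python
def locked_groove(samples, groove_len, total_len):
    """Play `samples` once, then repeat the last `groove_len`
    samples indefinitely, up to `total_len` output samples,
    like a stylus caught in a disc's closed final groove."""
    out = []
    loop = samples[-groove_len:]  # the "closed groove" segment
    for i in range(total_len):
        if i < len(samples):
            out.append(samples[i])  # normal playback
        else:
            # past the end: cycle through the groove forever
            out.append(loop[(i - len(samples)) % groove_len])
    return out

# A short "recording" whose two-sample tail loops forever:
sound = [0, 1, 2, 3, 4, 5]
print(locked_groove(sound, groove_len=2, total_len=10))
# → [0, 1, 2, 3, 4, 5, 4, 5, 4, 5]
```

Hearing the same fragment return identically, stripped of its original continuation, is exactly what let Schaeffer attend to the sound's own morphology rather than its cause.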



USO: Can you discuss your vision of the principles of sound organization? The definition of the sound object, objective and subjective aspects, articulation, iteration, quality of information, musical language.

CZ: In classical music, the constituent elements of a work function organically; that is, there are consequences. When you put one sound together with another, this creates a harmony, for example a chord, and if you add another sound the chord completely changes in timbre; this is an organic principle. We can do the same with the sounds the composer chooses, imagining music sometimes as a vertical relation, sometimes as a horizontal relation. This is an organic system, and an organic system is a biological system, in some way copied from the living; and since we ourselves function according to the principles of the living, we are particularly able, specialized, to recognize an organic system. In that moment something starts in our brains and there is a kind of communion. This is what happens: how to organize the sound, why put this with that, and why, after this, add that; these are organic relations.



USO: What do you think of hidden causality (between what the audience sees and hears)?

CZ: Listening is a very complex phenomenon, so when a composer works it is difficult to be conscious of everything; an artist works largely in an intuitive manner, and intuition is a super-computer that gives us the answer immediately. But if we talk about the genesis, about why we made certain choices, it takes an eternity. So, sound is not received by a single "ear". There is a very ancient ear, for example: the ear we use to cross the street, the hearing of danger. Hearing is a prolongation of the sense of touch. Imagine being in the forest: when a predator hunts, if it touches you it is too late, you are dead. So hearing is a prolongation of touch; it allows you to foresee the arrival of a predator. There are various hearings: one that detects aesthetic information, one that detects information on the behavior of others; so when you work with sound you alternately activate many "ears". When you begin a piece of music by having a sound turn around the listener, you awaken the ancient, primitive "ear", the danger hearing, since in nature something that circles around you is something trying to catch you, and a composer is in some way conscious of this mechanism and plays with it. It is very difficult to answer this question, because the ear is very complex and the reality around us is very complex. But we are all naturally gifted at hearing: without it we could not cross the street, we could not hear something behind us, we could not "see" around us through sound.


[ 1 | 2 | Next ]

Saturday, April 11, 2009

Bernard Parmegiani, a sound master

Organized by Groupe de Recherches Musicales (G.R.M.) and jointly produced with Radio France, the Présences Électronique festival explores the link between the concrete music of Pierre Schaeffer and new experiments in electronic music. One of this event’s special features is that it offers the public and performers a unique “spatialized” broadcasting and listening system in the Acousmonium.
This year, for this fifth event, Présences Electronique has moved out of the Maison de Radio France and spent three days in the various rooms of 104, the new multi-disciplinary cultural centre in Paris’s 19th arrondissement.

We were there, listening in darkness to 'De Natura Sonorum' (1974-1975), one of the finest of Parmegiani's works in terms of technique, sound, harmony and tone.

Bernard Parmegiani (b. 1927) met Pierre Schaeffer, who encouraged him to attend a training course in electro-acoustic music in 1959. He joined the Groupe de Recherches Musicales the following year, remaining a full member until 1992. Pierre Schaeffer put Bernard Parmegiani in charge of the Music/Image unit of the ORTF's Research Department, where he went on to compose the music for both full-length and short films. This proved to be a first-class training ground for learning how to deal with the problems of musical form as they relate to time, and how to overcome the constraints imposed by the medium of cinema. He also wrote the music for several jingles, as well as songs and music for television, ballet and theatre. There then followed 40 years of uninterrupted research and musical creation, built out of an ongoing quest that led him to regard bodies of sound as living bodies. He took a keen interest in the areas where the improvisation techniques of jazz musicians meet electro-acoustic music. Parmegiani's own output, primarily made up of sounds recorded on tape, includes more than 70 concert pieces. Apart from some mixed pieces, his work as a whole takes the form of music for « fixed sounds », coming within the scope of the large repertoire of electro-acoustic music.


Some excerpts from the interview by Évelyne Gayou, published in full in the book "Portrait Polychromes: Bernard Parmegiani":


Can the Parmegiani sound be defined? Some people speak of "organic sounds"...

In the past, people used to talk about a "Parmegiani sound", a little too much for my liking, and it bothered me a lot. People would say: "Oh! Parmé, what beautiful sounds you make!!!" It's good to make nice sounds, but really, we don't compose music to produce nice sounds; we compose from an idea. I'm not trying to seduce anyone with my music; I'm trying to get people interested. That's why I'm obsessed with constantly renewing myself musically. I can only exist by continuously exploring new territories; otherwise one gets bored with one's own music. The risk is to do 'Parmegiani in the style of Parmegiani' and so on. If I must define what the "Parmegiani sound" is, then it's a kind of movement, a kind of colour, a way of starting and a way of fading the sound, a way of bringing life into it. I do consider sounds as living things. So there is, indeed, something organic, skin deep, but it's always difficult for me to define my music; what we perceive from within isn't always understood by others in the same way. We recognize ourselves in the mirror others hold up for us, to a certain extent; it's a game between the inner and outer realms.
[...] When I start a piece, I create a sound bank; I include new sounds, never used before, that might fit my intention and reworked old sounds. I listen to them and create detailed inventories; it is essential and imposed by my working method. For example, for De Natura Sonorum, I made lists of sounds classified by shape, subject, colour, etc. according to the TOM (Treaty of Musical Objects)'s typology. I like to set the sound material in my ear first, so that I can then work with these sounds to express what I want to say [...]


When performing your music in concerts, how do you see the spatial aspects? Do you want to create a show or is it a mere experience?

I'm not very happy with the word "show" because of its demonstrative character. I prefer it to be an "experience", because I never project the sounds the same way twice. When I'm in a concert, standing at the sound projection desk, I intentionally send the sound to specific speakers; I pan to the left, to the right, along the sides or behind, and I associate pairs of speakers. The sound can follow a pre-defined trajectory, or remain static in one area of speakers or even in a single stereo pair. Some composers, especially when they start out, turn all the potentiometers up and don't vary the levels of the speakers much, and the result is imperceptible. Worse than that, the sound is hindered in all directions because it is everywhere at once. Depending on the acoustics of the concert hall, you might even get reverberation or interference phenomena, and then the audience can't hear any subtlety.
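What Parmegiani describes at the projection desk, sending the sound to chosen speakers rather than turning every potentiometer up, is in essence a pan law. A minimal sketch of constant-power panning between the two speakers of a pair, in Python (the function name and the 0-to-1 position convention are illustrative assumptions, not GRM practice):

```python
import math

def constant_power_pan(sample, position):
    """Pan a mono sample between two speakers.
    position: 0.0 = fully in the first speaker, 1.0 = fully in the second.
    Constant-power law: the two gains are cos/sin of a quarter turn,
    so gain_a**2 + gain_b**2 == 1 and loudness stays even across the pan."""
    angle = position * math.pi / 2
    gain_a = math.cos(angle)
    gain_b = math.sin(angle)
    return sample * gain_a, sample * gain_b

# Sweep a unit sample across the pair:
for pos in (0.0, 0.5, 1.0):
    a, b = constant_power_pan(1.0, pos)
    print(f"pos={pos}: A={a:.3f} B={b:.3f}")
```

Because the squared gains always sum to one, the perceived loudness stays roughly constant as the sound travels, which is why diffusion systems prefer this over a simple linear crossfade (whose midpoint dips audibly).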


You've gone through the digital revolution, what do you think these new tools have brought to your music?

I was probably the first person at the GRM with a personal digital studio. So I had to learn how to use the digital equipment by myself. By switching from the scissors to the mouse, we've improved a few things, but we've lost out on others. [...] The time it takes to put an idea into practice has shortened and, consequently, we're closer to the compositional act.

[the review of Parmegiani 12-CD box set | by Caleb Deupree]

Wednesday, March 18, 2009

Sound Junction 2009 - Special Guest: François Bayle

University of Sheffield Drama Studio
Sheffield (UK - map)

François Bayle has been at the forefront of acousmatic music for over 40 years. In 1958-60, François Bayle joined Pierre Schaeffer’s Groupe de Recherches Musicales in Paris, and between 1959-62 worked with Olivier Messiaen and Karlheinz Stockhausen. In 1966, Pierre Schaeffer put him in charge of the GRM which, in 1975, became an integral department of the French National Audiovisual Institute (INA). He maintained this position until 1997.
In addition, it was François Bayle’s idea to create the Acousmonium (1974).
Upon leaving the GRM in 1997, he created his own studio and the record label Magison.

5th June
Concert II - François Bayle
  • Eros bleu - L'infini du bruit (Erosphère /2) - 14' - stereo (1980 / revision 2009)
  • Eros noir - Toupie dans le ciel (Erosphère /3) - 25' stereo (1980 / revision 2009)
  • Métaphore + Lignes et points, Journal (L'Expérience Acoustique 6/7) - 15' four channel + video (1966 / 71) with images by Piotr Kamler
  • Univers nerveux, in memoriam K. Stockhausen - octophonic - 22' (2005 / 07)

6th June
Concert III - 'International'

Sound Junction concludes with a 'classic' by Christian Zanesi (Profil-Désir, 1988), who worked alongside François Bayle for many years at the GRM.

[via shef.ac.uk]
[Press release - pdf]
[magison.org]

Wednesday, January 07, 2009

Michel Chion: Audition Musicale

Michel Chion will present an audition musicale and a talk about his work at the University of Edinburgh on 21st January 2009.

Michel Chion was born in 1947 in Creil (France). After literary and musical studies, he began working in 1970 for the ORTF (French Radio and Television Organization) Service de la recherche, where he was assistant to Pierre Schaeffer at the Conservatoire national de musique in Paris, a producer of broadcasts for the GRM, and publications director for the Ina-GRM, of which he was a member from 1971 to 1976.

He also works as a theoretician in a new area, the systematic study of audio-visual relationships, which he teaches at several centres (notably the Université de Paris III, where he is an Associate Professor) and film schools (ESEC, Paris; DAVI, Lausanne), and which he has developed in a series of five books. He has also written on Pierre Henry, François Bayle, Charlie Chaplin, Jacques Tati, David Lynch, and diverse subjects in music and film.
After having dedicated his Guide des objets sonores to the ideas of Schaeffer, in 1991 he published "L'art des sons fixés", in which he proposes, in order to properly designate this music, a return to the term 'musique concrète' in its initial, non-causal sense. His redefinition insists upon the effects particular to the fixation of sound, a term he proposes in place of 'recording'.

[via sd.caad.ed.ac.uk]
[michelchion.com]

An interview with Michel Chion

[excerpt from paristransatlantic.com]


When did you join the GRM yourself?

In 1971, not as a student but as a member. My first job was to be Schaeffer's assistant for his classes at the Conservatoire. It was a very original class which didn't only focus on electroacoustic music, but all forms of music. I prepared his lessons, taught some of the classes, and set assignments for the students, which Schaeffer graded. Composition assignments, exercises in montage. The course was about music in general, including non-Western musics and music therapy, and I thought it was quite original. Schaeffer wanted it that way. He wanted students to ask questions on the music's background, its social origins and function. In class it was more a question of participating and debating. So I was his assistant for a year, and then someone else took over.


What led you to join the GRM in the first place?

I'd already read Schaeffer's Traité des Objets Musicaux, and found it more honest, direct and relevant than certain books by Boulez, which I thought avoided a lot of questions relating to perception. But Schaeffer's book was 700 pages long, so to make it more widely known I wrote a kind of abridged version called Guide des Objets Sonores. That's why I joined the GRM. Schaeffer's book still hasn't been translated into English, by the way, but mine has.


You referred to "electroacoustic music" a minute ago. What name do you prefer for this music?

Well, as you know, the terminology has changed. At the beginning of the 70s "electroacoustic" meant something on magnetic tape. But live electronic music already existed, and more and more composers started adding live instruments. Then you had people like Jean-Michel Jarre saying he was making electroacoustic music too, and people started thinking electroacoustic music had to be live. Anyone doing it on tape was kind of retarded. That was the ideology at IRCAM in the mid 70s, saying people who made music on magnetic tape (or later on a computer) were somehow lagging behind, or didn't understand that it could be done live. Well, I've always been of the opinion that there are things you can't do live, or rather, things you can do better on tape. It's like someone who doesn't understand that cinema is the art of fixing things, and tries to make a live film, with actors acting live in front of people and being filmed at the same time. Obviously that's absurd. In the same way I think there are many pieces of live electronic music that don't make sense. So after the mid 1970s, the terminology changed, and François Bayle came up with the idea that we should find a term specifically to describe music on tape. The problem for me was that he found a word – "acousmatic" – which was understood by only a handful of people in France and by nobody else in the world. When I mentioned acousmatic music outside France, nobody had a clue what I was talking about. They couldn't find the word in the dictionary. It didn't exist. You still can't find it in any French dictionary today. In the 1980s I suggested we return to the term "musique concrète", because it's known throughout the world. It's in all the dictionaries. Musique concrète, the art of fixed sounds – I wrote a book on the subject. I thought it was important that members of the public should understand what a work of musique concrète consisted of. 
So, yes, I still call it musique concrète, and that applies to François Bayle as well as Karlheinz Stockhausen.


When did you begin writing about cinema yourself?

That started in about 1980, when I was 33. Pierre Schaeffer told me about an offer he'd had to lecture on sound in cinema at a film school, which at the time was called IDHEC, Institut de Hautes Etudes Cinématographiques (now called FEMIS). He declined, but told them Michel Chion could do it! I'd already written an article on the relationship between sound and image, so I agreed. It was right about that time that something very important happened, with the arrival of videotape. Prior to that, if you wanted to study a film it was difficult, because you had to borrow a copy of a print and sit in a cutting room for three days taking notes. But by 1980 video recorders had appeared, and you could record a film from the television. Which I did. The first thing I analysed was a film by Bresson I recorded from the television.


Which one?

Un condamné à mort s'est échappé, which is magnificent, and very good for showing off-screen sound. It's the story of a man in prison, and we hear the sounds as he hears the sounds. It's a real lesson in off-screen sound, and a very beautiful film.
So with a VCR you can stop the image, analyse the sound, listen to it alone or watch the image without it. Until then, few people had studied film like that. When I started, it was an area in which there were few books published, and most of those were by people coming from a technical or literary background. I'm one of only a few writers in France who came from a musical background, and I think you have to understand music to be able to talk about the use of sound in cinema. So I started writing articles and ended up at the Cahiers du Cinéma. I suppose I'm best known as a writer. But it all started because of Pierre Schaeffer.

[read the full interview - via paristransatlantic.com]

Wednesday, November 12, 2008

Daniel Teruggi: GRM at 50

On Friday, November 14, Electronic Music Foundation will honor the 50th anniversary of the Paris-based Groupe de Recherches Musicales (Music Research Group), one of the world's major pioneering organizations in electronic music, with guests like Daniel Teruggi, director of GRM, François Bayle, past director, and Marc Battier, professor at the Sorbonne.

Teruggi @ Tempo Reale, 17th May 2008


The story of GRM begins 60 years ago, in April 1948, when Pierre Schaeffer, an engineer for the French Broadcasting Company, took a sound truck to a railroad switching yard at Batignolles, in Paris, to record the sounds of steam locomotives, wheels, and whistles. He used the sounds to compose Etude aux Chemins de Fer (Railroad Study) and coined the term 'musique concrète' to mean music composed with sounds (as against symbols for sounds). Following an enormously successful broadcast of a Concert de Bruits (Concert of Noises) in October 1948, he initiated a succession of research projects that culminated in 1958 in the official formation of the Groupe de Recherches Musicales with François Bayle, Luc Ferrari, Iannis Xenakis, and several other composers. The goal was to explore the new creative possibilities in using all sounds in music. The result was the beginning of a musical revolution.

[via Suzanne Thorpe - Arts Electric]

[Listen to Daniel Teruggi speak at Arts Electric]
[more on the concert]

Monday, June 16, 2008

Daniel Teruggi (GRM, Paris): The novelty of concrete music.


Daniel Teruggi (GRM, Paris): The novelty of concrete music. from U.S.O. Project on Vimeo.

Tempo Reale/Musical encounters
Florence - RAI/Studio C, 17th May 2008

DT: The GRM (Groupe de Recherches Musicales), the institution I represent, which celebrates its 50th anniversary this year (2008), is an accident: it should not exist, and, unfortunately, it exists!
Pierre Schaeffer, who created the GRM in 1958, said that it was fundamental to invent “institutions which were useless, but necessary”. I think that this definition perfectly applies to the group with which I work.
Unfortunately, my work is larger than pure music composition. I always say that one day I'll stop everything and do only my own compositions. That day is not coming; I am now 56, and maybe when I quit I will finally compose as much as I want. In the meantime, however, I continue to make music regularly.
The GRM, a group of 13 people, sits inside a research and testing department of 60 people, which is itself part of the bigger INA (Institut National de l'Audiovisuel), where 940 employees work. This shows how small our group is within the whole institution.
INA, born in 1975, even though it deals with the audiovisual in general, not music, gives us the necessary confidence: it provides all the funds we need to develop our activities and gives us the missions we have to fulfil.
Many structures similar to ours that developed in the fifties in Europe and the United States have disappeared. But this "historic mistake" called GRM has strangely survived.
To survive is always a challenge, particularly in a world that does not easily understand us, a shifting world driven by economic interests, a world where the idea of creating a space for composers to build the sound and music of tomorrow is, I would say, almost exceptional.
We struggle for the uniqueness of our group; we fight for this mission, which for us is historical. But it is written down: our activity is not invented, not decided by us. We have contracts with radio stations, some of them very important.
We have an agreement with Radio France, saying that we have to provide 20 to 25 new music pieces, to produce 10 to 16 concerts, and to broadcast 50 to 80 hours of radio programs per year.
So this is our activity base, and everything about it is positive: doing those three things is what we love. Radio France pays INA to do these jobs, and we (the GRM) do them.
Thirty years ago, the radio did not want to deal with music involving technology.
They therefore asked the GRM to take on this activity: to create radio programs presenting music made with new technologies.
On our side, we faced the challenge starting from the end: you need a radio program? It would be interesting to have original music, so let's build a recording studio for production. And if we make original music, airing it directly on the radio without it ever being heard in concert would not be very credible, so we decided to hold public concerts before going on air.
It would be interesting, for the composers working in the studio, to have original technologies, so we started researching special tools to obtain the unique GRM sound.
After all this there is new music, and how do we handle all this music? So we began to investigate perception, listening and musical analysis, to understand the meanings of this music.
CDs, books and the internet finally came, all from a small, distant beginning: the radio program we had to produce.
This is a snapshot of the GRM today.

[an interview with Daniel Teruggi is available here - via Radio Papesse - mp3/Italian only]

Friday, May 16, 2008

Daniel Teruggi: Et voilà, la musica concreta

Tempo Reale/Musical encounters
Florence - RAI/Studio C, 17th May, h 5pm

Daniel Teruggi (GRM, Paris)
The novelty of concrete music
Music by Ferrari, Malec, Schaeffer/Henry, Teruggi


It is sixty years since Pierre Schaeffer started "musique concrète" in Paris. Does it make any sense to go on speaking of concrete music today? Who are the composers who perform it? A reply to these two questions comes from the two musical encounters of Tempo Reale, encounters headed by two important composers from the GRM of Paris, the institution founded by Schaeffer himself.

Daniel Teruggi will lead listeners on a journey through the present day and the history of a musical genre which has had no mean influence on contemporary languages. Historical and present-day sound experiences are thus compared in a particularly refined technological-musical context: Studio C of the RAI in Tuscany.

[fabbricaeuropa.ffeac.org]