Music composition process:
The muscular activity of the participants' smiles is recorded as graphs. The Y-axis (vertical) reflects the intensity of the muscular activity, and the X-axis (horizontal) the time over which that activity takes place. This height/time relationship is also the basis of any piece of music: we could define a piece of music as a series of specific values (pitches), each with a specific duration, unfolding over a span of time.
To compose the music, I used the values recorded in the graphs provided by the campaign (the facial muscle movements of a person and their caregiver interacting). It is these recordings that determine the pitch of the notes, other variables related to the color of the sound, the duration of those values, and their development over time.
How do the muscle recordings determine pitch, duration and the other musical variables?
I have created a hybrid analog-digital instrument that translates these values into musical notes and modulations of the timbre of the sound. The instrument consists of a microcomputer that I programmed in Pure Data, a visual programming language for music. With that microcomputer and the program, I read the values of the graphs at a given tempo and send them, scaled, via MIDI and via control voltage to the Moog Mother-32 semi-modular analog synthesizer.
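The scaling step described above can be sketched in code. This is a hypothetical illustration, not the actual Pure Data patch: the function name, input range and note range (here two octaves, C3 to C5) are assumptions made for the example.

```python
# Hypothetical sketch: graph samples (arbitrary units) are read at a fixed
# tempo and linearly mapped to integer MIDI note numbers within a chosen
# range. All names and ranges are illustrative.

def scale_to_midi(value, in_min, in_max, note_min=48, note_max=72):
    """Linearly map one graph sample to an integer MIDI note number."""
    normalized = (value - in_min) / (in_max - in_min)
    return round(note_min + normalized * (note_max - note_min))

# Example muscle-activity samples, one per beat at the chosen tempo
samples = [0.0, 0.2, 0.5, 0.9, 0.4]
notes = [scale_to_midi(v, 0.0, 1.0) for v in samples]
print(notes)  # each sample becomes a note between C3 (48) and C5 (72)
```

In the real instrument the equivalent mapping is done inside Pure Data, and the resulting values are sent out as MIDI messages to the synthesizer.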
Each graph reflects two participants interacting. I decided to use the recording of one participant to set the pitch of the notes (via MIDI) and the recording of the other to modulate the color of the sound (via control voltage, modulating the cutoff of the synthesizer's filter, i.e. equalizing the sound and thus changing its timbre). In this way, while one participant determines, for example, the note C, the other participant makes that note sound with a specific timbre. The key and tempo of the music, as well as the range of the filter sweep, are my own decisions.
The end result is two melodies sounding together, each created as explained above but with the participants' roles reversed between them: in melody 1, participant A determines the notes and participant B the timbre of the sound; in melody 2, participant B determines the notes and participant A the timbre. The two melodies are, of course, played at the same tempo from the same synchronized starting point; their interaction is strictly the one that comes from the graphs. I then selected which of the three pieces of music resulting from the three graphs to use, as well as which part of it (each piece lasts about 3 minutes, and the campaign spot about 30 seconds).
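The pairing of the two data streams and the role reversal can be sketched as follows. Again this is an illustrative sketch, not the actual patch: the participant data, the pitch range and the cutoff range (kept away from the extremes, as the text notes the filter range was a compositional decision) are all assumptions.

```python
# Hypothetical sketch of the two role-reversed melodies. Each participant's
# graph is a list of 0-1 samples; one stream selects pitches, the other is
# mapped into a restricted filter-cutoff range (here a 0-1 control value).

def to_pitch(value, note_min=48, note_max=72):
    """Map a 0-1 graph sample to an integer MIDI note."""
    return round(note_min + value * (note_max - note_min))

def to_cutoff(value, lo=0.2, hi=0.8):
    """Map a 0-1 graph sample into a limited filter-cutoff range."""
    return lo + value * (hi - lo)

participant_a = [0.1, 0.6, 0.3]  # illustrative data
participant_b = [0.8, 0.2, 0.5]

# Melody 1: A determines the notes, B the timbre.
melody_1 = [(to_pitch(a), to_cutoff(b))
            for a, b in zip(participant_a, participant_b)]
# Melody 2: the roles reversed, B determines the notes, A the timbre.
melody_2 = [(to_pitch(b), to_cutoff(a))
            for a, b in zip(participant_a, participant_b)]
```

Both melodies step through the same synchronized time axis, so their interaction comes only from the graph data itself, as in the piece.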
I have also added a high-pitched beep, evoking what would generally be recognized as the sound of a machine monitoring vital signs in a clinical environment; it, too, is generated strictly from the rhythm of the campaign's graphs. The piece begins with that beep alone; the melodies corresponding to the smiles then join in, interacting with each other and with the beep, and the piece ends with the isolated beep again.
Final mixing and mastering of the music recording was done by Arturo Ramón.
Commissioned by the Government of Navarra (Spain) from Brandok (communication agency, Pamplona). María Llorens, creative director and creator of the concept.
https://brandok.es/proyectos/gobierno-de-navarra-las-emociones-se-pueden-ver-y-escuchar/
Visualization of the development of the graphs of two melodies together (tuned and with tonal filter, without timbre modulation). Pure Data on Laptop.
Part of the recording of one of the final melodies. Melody determined by one graph with timbre modulated by another graph, without any kind of manipulation added during the recording. Pure Data on microcomputer + Moog Mother-32.
Selection of the favorite recording: two melodies together corresponding to the graph “Patricia_MAngeles_2_Toma_de_la_risa_al_final”, modulated, with the beep added.
Conversion of graph photo into data
First translation of a muscle recording into notes (without tuning or tonal filter). Pure Data on Laptop.
Example of two melodies together (tuned and with tonal filter, without timbre modulation). Pure Data on Laptop.
How the graph to be used in the microcomputer is formed. Pure Data on Laptop.
First approximation to a melody determined by one graph with timbre modulated by another graph (with synchronized delay, which I ultimately discarded). Pure Data on microcomputer, Moog Mother-32, Roland Demora (semi-modular delay).
Exploration/improvisation by changing parameters (attack, tempo, octaves, ...) with a melody determined by one graph and timbre modulated by another graph (with synchronized delay, which I ultimately discarded, as I also discarded any extra manipulation on my part during the recording). Pure Data on microcomputer, Moog Mother-32, Roland Demora (semi-modular delay).
Complete recording of one of the final melodies. Melody determined by one graph with timbre modulated by another graph, without any manipulation added during the recording. Pure Data on microcomputer + Moog Mother-32.