Research


Research is a systematic investigation of some aspect of thought or reality which leads to transferable knowledge. 
In artistic research, this knowledge, embedded in compositional or performative work, may be expressed through diverse media, including but not confined to written text.

 

Laura Agnusdei (1st-year master’s)

Combining Wind Instruments and Electronics within the Timbral Domain

My research at the Institute of Sonology is focused on composing with wind instruments and electronics. The starting point is my background as a saxophone player, and my compositional process aims to enhance the timbral possibilities of my instrument while still preserving its recognisability. In line with my artistic interest in blurring the perceptual difference between acoustic and electronic sounds, I process acoustic sounds from my instrument using digital software such as SPEAR, CDP and Cecilia – carefully selecting procedures based on analysis and resynthesis techniques.
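As a minimal illustration of what such analysis- and resynthesis-based treatment can mean in practice, the following SuperCollider sketch applies a spectral freeze to a recorded saxophone sound. The file path is a placeholder, and the SPEAR/CDP/Cecilia workflows named above are of course far richer:

```supercollider
// Minimal analysis/resynthesis sketch: spectral freeze of a saxophone sample.
// The sound file path is a placeholder; dedicated tools offer far richer control.
(
b = Buffer.read(s, "sax.aiff");  // hypothetical source recording

SynthDef(\specFreeze, { |buf, freeze = 0|
    var sig, chain;
    sig = PlayBuf.ar(1, buf, BufRateScale.kr(buf), loop: 1);
    chain = FFT(LocalBuf(2048), sig);
    chain = PV_MagFreeze(chain, freeze);   // hold the current spectrum while freeze > 0
    Out.ar(0, IFFT(chain).dup * 0.5);
}).add;
)

x = Synth(\specFreeze, [\buf, b]);
x.set(\freeze, 1);  // freeze the spectral frame, blurring acoustic and electronic
```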

My musical points of reference are drawn from different contexts – such as free jazz, electroacoustic composition and experimental rock – and my interest in timbre has also encouraged me to explore many extended techniques on my instrument. Additionally, when composing for the saxophone I consider the particular position the instrument occupies in music history, straddling influences from African-American culture, pop music and contemporary classical music. Aside from the saxophone, however, I also plan to work with other wind instruments, studying their timbral peculiarities and how to use their sonorities in a personal way.

More precisely, I am interested in working with acoustic instruments whose sonic qualities can be translated into the electronic domain. My research will therefore take place in the studio as well as in live performance, and the final outcome of my studies should be a body of work incorporating these discoveries. Moreover, in my improvisational practice (both solo and in groups) I want to expand my research by combining live-processed sounds with purely acoustic ones, as well as by experimenting with different amplification techniques.

 

Görkem Arikan (1st-year master's Instruments & Interfaces)

Interactive Chair: A Physical Interface for Live Electronic Music Performance

In live computer music, many remarkable works have been created that utilise new sound sources and digital sound synthesis algorithms. However, live electronic music concerts often suffer from a lack of visual/physical feedback between the musical output and the performer, since a performer looking at a screen conveys little to an audience about their actions. Therefore, in my concerts I have been looking for ways to minimise the need to look at the screen by using various kinds of setups, largely those consisting of MIDI controllers and sensor-based systems that transform physical acts into sound.

At the Institute of Sonology, I am focusing on building an interactive system consisting of an office chair equipped with various sensors that allow the performer to turn their movements – specifically tilting, rotating and bouncing – into musical control. Inevitably, a crucial part of the project is how I interpret these movements, which relies on mapping strategies as well as on the quality and functionality of the sound engine. Overall, my goal is to design an interface mediating between the system and the performer in order to enable an expressive performance.
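A minimal sketch of the mapping layer this implies is given below in SuperCollider, assuming the chair's sensors reach the computer as OSC messages. The addresses, value ranges and the simple one-to-one mappings are illustrative assumptions, not the final design:

```supercollider
// Hypothetical mapping sketch: chair sensor data arrives via OSC and is
// mapped to synthesis parameters. Addresses and ranges are assumptions.
(
SynthDef(\chairVoice, { |freq = 110, cutoff = 800, amp = 0.1|
    var sig = RLPF.ar(Saw.ar(freq), cutoff, 0.3);
    Out.ar(0, (sig * amp).dup);
}).add;
)

x = Synth(\chairVoice);

// Tilt angle (-1..1) -> filter cutoff; rotation (0..360) -> pitch;
// bounce intensity (0..1) -> amplitude. One-to-one mappings, for clarity only.
OSCdef(\tilt,   { |msg| x.set(\cutoff, msg[1].linexp(-1, 1, 200, 4000)) }, '/chair/tilt');
OSCdef(\rotate, { |msg| x.set(\freq,   msg[1].linexp(0, 360, 60, 480))  }, '/chair/rotation');
OSCdef(\bounce, { |msg| x.set(\amp,    msg[1].linlin(0, 1, 0, 0.4))     }, '/chair/bounce');
```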

During my research I will explore prior studies and methodologies including sound synthesis techniques, digital signal processing algorithms, mapping strategies, micro-controller programming, wireless data-transfer techniques and devices, as well as the potential use of compatible sensors. Following the realisation of such a system I will conduct a user study and then discuss the results in the written part of my thesis (including its entire construction, a user evaluation, and future implementations of the system). 

The system will be presented in a public performance by a musician familiar with the system, in addition to an exhibition where the audience will have the opportunity to physically experiment with the system simply by sitting and moving in the chair.

 

Matthias Hurtl (1st-year master’s)

DROWNING IN ÆTHER
software-defined radio – a tool for exploring live performance and composition

In my practice I am often fascinated by activities happening in outer space. Currently, I am interested in the countless signals and indeterminate messages from the many man-made objects discreetly surrounding us. Multitudes of satellites transmit different rhythms and frequencies, spreading inaudible and encoded messages into what was once known as the æther. Whether we hear them or not, such signals undeniably inhabit the air all around us. FM radio, WiFi, GPS, cell-phone conversations: all of these radio waves remain unheard by human ears as they infiltrate our cities and natural surroundings. Occasionally, though, such signals are heard by accident, emerging as a ghostly resonance, a silent foreign voice, or as interference in our hi-fi systems. Yet aside from these accidental occurrences, tuning into these frequencies on purpose requires a range of tools, such as FM radios, smartphones, wireless routers and navigation systems.

Presently, my research at the Institute of Sonology involves placing machine transmissions in a musical context, exploring what inhabits the mysterious and abstract substance once referred to as æther. This exploration fundamentally delves into how one might capture these bodiless sounds in a tangible system, so that they can be transformed and used like an oscillator or any other noise source. Additionally, I employ methods of indeterminacy: a methodology assisting the emergence of unforeseeable outcomes. Specifically, this includes using external parameters that engage with chance and affect the details or the overall form of my work.
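As a minimal sketch of this, assuming demodulated radio audio from an SDR receiver is routed into the computer's first sound input (the routing, filter ranges and chance values are all assumptions), the captured signal can be treated in SuperCollider like any other noise source, with chance operations steering the result:

```supercollider
// Sketch: treat demodulated radio noise (assumed to arrive on the first
// hardware input, e.g. routed from SDR software) as a raw sound source,
// with chance-based parameters shaping the result. All values are assumptions.
(
SynthDef(\aether, { |rate = 1|
    var sig = SoundIn.ar(0);   // SDR audio, routed externally
    sig = BPF.ar(sig, LFNoise0.kr(rate).exprange(100, 8000), 0.1);
    Out.ar(0, sig.dup * 2);
}).add;
)

x = Synth(\aether);

// Indeterminacy: a routine that re-tunes the scanning rate by coin tosses.
Routine {
    loop {
        if (0.5.coin) { x.set(\rate, [0.2, 1, 8].choose) };
        rrand(2.0, 10.0).wait;
    }
}.play;
```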

Principally, this research project will focus on grabbing sound out of thin air and using it in a performative setup or within a composition. I expect this to involve revealing signals that would otherwise remain concealed, finding methods for listening to patterns in the static noise, and using these tools to generate sound from bursts of coded or scrambled signals.

 

Slavo Krekovic (1st-year master's Instruments & Interfaces)

An Interactive Musical Instrument for Expressive Algorithmic Improvisation

The aim of this research is to explore design strategies for a touch-controlled hybrid (digital-analogue) interactive instrument for algorithmic improvisation. In order to achieve structural and timbral complexity in the resulting sound output while maintaining a great degree of expressiveness and intuitive ‘playability’, I will examine simulations of complex systems inspired by natural systems, and their external influence via touch-sensor input. The sound engine should take advantage of the specific timbral qualities of a modular hardware system, with an intermediate software layer capable of generating complex, organic behaviour, influenced in real time by touch-based input from the player. The system should embody an ongoing search for balance between deterministic and more chaotic algorithmic sound generation in a live performance situation.
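One minimal way to picture such an intermediate layer, sketched in SuperCollider: a chaotic generator (here the built-in Lorenz oscillator, standing in for a richer complex-system simulation) whose regime is nudged by a single touch value. The touch source and the mapping range are assumptions:

```supercollider
// One possible reading of the idea: a chaotic system (SC's built-in Lorenz
// generator, standing in for the "complex system") whose parameters are
// nudged by a touch value. The touch source and mapping are assumptions.
(
SynthDef(\lorenzVoice, { |touch = 0.5|
    // touch (0..1) bends the Lorenz 'r' parameter between tamer and wilder regimes
    var sig = LorenzL.ar(SampleRate.ir, 10, touch.linlin(0, 1, 20, 38), 2.667);
    Out.ar(0, LeakDC.ar(sig).dup * 0.2);
}).add;
)

x = Synth(\lorenzVoice);
x.set(\touch, 0.9);  // e.g. driven by a touch-sensor value arriving over OSC/MIDI
```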

The research focuses on the following questions: What are the best strategies for designing a ‘composed instrument’ that is capable of autonomous behaviour and at the same time responsive to external input? How can the limitations of traditional parameter control of hardware synthesizers be overcome? How can the deterministic and more unpredictable attributes of a gesture-controlled interactive music system be balanced for expressive improvised performance? The goal is to use the specific characteristics of various sound-generation approaches while pushing their possibilities beyond the common one-to-one parameter-mapping paradigm, thus allowing more advanced control leading to interesting musical results.

 

Hibiki Mukai (1st-year master’s)

An Interactive and Generative System Based on Traditional Japanese Music Theory 

Notation in western classical music originated from the need for a score in a form that was easy for the general public to understand and that could be passed on to future generations. Since then, this western notation system has been widely used as a universal language throughout the world. However, its popularity does not mean that no important sonic information is lost. In contrast, most traditional Japanese music was handed down to the next generation orally, but was also accompanied by original scores that conveyed subtle musical expressions. These 'companion scores' were written with graphical drawings and Japanese characters.

In my research at the Institute of Sonology, I plan to reinterpret traditional notation systems of Japan (i.e. Shōmyō 声明 and Gagaku 雅楽) and to design real-time interactive systems that analyse the relationships between these scores and the vocal sounds made by performers. In doing this, I plan to generate a score that exists in digital form. This will allow me to control parameters such as pitch, rhythm, dynamics and articulation by analysing intervals in real time. Furthermore, this research will culminate in using this system to realise a series of pieces for voice, western musical instruments (e.g. piano, guitar, harp) and live electronics.
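A minimal sketch of the analysis side in SuperCollider, assuming a live voice on the first sound input; the pitch tracker, the reporting rate and the semitone-interval calculation are illustrative assumptions rather than the final system:

```supercollider
// Minimal sketch of the analysis side: track the pitch of a live voice and
// derive interval data that could feed a generative score. Details are assumed.
(
SynthDef(\voiceTrack, {
    var in = SoundIn.ar(0);
    var freq, hasFreq;
    # freq, hasFreq = Pitch.kr(in);
    // send the detected pitch to the language for interval analysis
    SendReply.kr(Impulse.kr(10), '/voicePitch', [freq, hasFreq]);
}).add;
)

~prev = 0;  // previous pitch in MIDI note numbers (0 until first detection)
OSCdef(\pitch, { |msg|
    var freq = msg[3], voiced = msg[4];
    if (voiced > 0.9) {
        var interval = (freq.cpsmidi - ~prev).round(1);  // interval in semitones
        interval.postln;   // here: drive pitch/rhythm/dynamics/articulation decisions
        ~prev = freq.cpsmidi;
    };
}, '/voicePitch');

x = Synth(\voiceTrack);
```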

Overall, I believe it is possible to adapt this traditional Japanese music theory to a western system by using electronics as an intermediary that processes data. In addition, by re-inventing traditional Japanese notation I expect its expressive ideas to become easier to access from within western notation. Using this type of score, I also aim to extend many of the techniques of western musicians and composers – designing interactive relationships between instruments, the human voice and the computer.

 

Yannis Patoukas (1st-year master’s)

Exploring Connections Between Electroacoustic Music and Experimental Rock Music of the Late 60s and Early 70s

From the late 60s until the late 70s, the boundaries between musical genres and styles, including popular music, were blurred. This situation resulted from rapid and numerous socio-economic changes and an unpredicted upheaval in the music industry. Rock music, later labelled “progressive rock” or “art rock”, separated itself aesthetically from “mass pop” and managed to blend art, avant-garde and experimental genres into one style. Also, the shift from merely capturing a performance to using the recording as a compositional tool led to increasing experimentation that created many new possibilities for rock musicians.

Undoubtedly, many bands were aware of the experiments in electroacoustic music of the 1950s (elektronische Musik, musique concrète) and drew upon the compositional techniques of avant-garde and experimental composers. However, many questions arise: first, why and how was art rock connected to experimental and avant-garde electroacoustic music; and secondly, is it possible to trace common aesthetic approaches and production techniques between the two genres?

My intention during my research at the Institute of Sonology is to elucidate and exemplify possible intersections between experimental rock and the field of electroacoustic music, especially in terms of production techniques and aesthetic approaches. The framework of this research will include a historical overview of the context from which experimental rock emerged, an exploration of why and how certain production techniques were used in rock music at that time, and an investigation into whether the aesthetic outcome of these techniques relates to experiments in the field of electronic music.

Parallel to this theoretical research, I also plan to reconstruct some of these production techniques, which I will explore for the sake of developing my own aesthetic and compositional work. This reconstruction will involve exploring tape manipulation, voltage-control techniques, and their application to different contexts (such as fixed-media pieces, live electronics and free improvisation).
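As a small illustration of what such a reconstruction can look like in the digital domain, here is a SuperCollider sketch of two classic tape gestures: varispeed (including reversed playback) and an echo-like recirculation. The source file is a placeholder and the parameter values are assumptions:

```supercollider
// Illustrative stand-in for classic tape techniques: varispeed playback and
// a feedback "tape echo", realised digitally. The source file is hypothetical.
(
b = Buffer.read(s, "source.wav");  // placeholder path

SynthDef(\tapePlay, { |buf, rate = 1, fb = 0.5|
    var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), loop: 1);
    var echo = CombC.ar(sig, 1.0, 0.37, fb * 8);   // echo-like recirculation
    Out.ar(0, (sig + echo).dup * 0.5);
}).add;
)

x = Synth(\tapePlay, [\buf, b]);
x.set(\rate, -0.5);   // half speed, reversed: a basic tape transposition trick
```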

 

Orestis Zafiriou (1st-year master's)

Mental Images Mediated Through Sound

I am interested in researching correlations between physical and perceptual space in the process of composing music. Concentrating on the ability of human perception to create images and representations of the phenomena it encounters, I propose a compositional methodology in which the behaviour and the spatiotemporal properties of matter are translated into musical processes and represented through electronic music.

Sounds in my work represent objects and events, as well as their movement in space-time. This implies that a musical space is formed by a succession of mental images unfolding in the perceptual space of the composer when encountering these events. With this method I also aim to point out the importance of music as a means of communicating objective physical and social processes through a subjective filter, both from the standpoint of the composer and from the perception of the listener.

Orestis Zafeiriou studied mechanical engineering at the Technical University of Crete, completing his thesis on acoustics, specifically the physical properties of sound and its movement through different media (active noise control using the finite-element method). Alongside his studies, he also actively composes music with the band Playgrounded, who released their first album, Athens, in 2013 and their second, In Time with Gravity, in October 2017.

 

Katrina Burch Joosten (2nd-year master's)

Diagrammatic Blue Deeps: Transforming Sound for Poetry

My compositional work at the Institute of Sonology combines electroacoustic music, extended vocalism, digital processing, poetry, and abstract theatre. Using digital sound transformation tools, I explore the voice and poetry to investigate how phononormativity is shaped by the temporality of language. My research considers the role of what I call ‘erotic abduction’ and of symbolic language in the formation of impermanent, sounding mental images. My research opens from a feminist perspective: I seek to question how and why the de-reification of sound can become a starting point for the deracination of the listening subject. I probe feminist writings to map out philosophies of (dis)embodiment, intuitive rationality, and anthropologies of alienation and transformation. Interweaving these philosophical threads and complex memory systems sustains my compositional practice and situates my work within a broader social milieu. Finally, I seek to intersect differing starting points for subjectivity, through voice and poetry, in order to generate sonic myths centred around emancipatory notions of technological futurity.

 

Chris Loupis (2nd-year master's)

Bridging Isles: Dynamically Coupled Feedback Systems for Electronic Music

My research at the Institute of Sonology has been principally concerned with the investigation of coupled and recursive processes, with the final goal of applying the derived techniques to modular synthesis, through the (re)implementation – or appropriation – of circuits. Audio and control feedback systems present a non-hierarchical way of interacting with an emergent, unforeseeable and unrepeatable output. In such systems, individual components share energy mutually; they can therefore be considered coupled, their resulting sonic behaviour being one of synergetic or conflictual relations. The central theme this project builds upon is the idea of musical systems designed to operate – or better yet, be operated – within reciprocally affecting and variably intertwined structures.

Specifically, the above ideas trace an investigative trajectory through the mechanics of coupling in feedback control systems, which arrived at the Van der Pol equation (Balthasar van der Pol, 1927).
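For reference, a minimal statement of the equation in its standard and driven forms; the forcing term F(t) is where coupling to external signals enters:

```latex
% Van der Pol oscillator: \mu controls the non-linear damping
\ddot{x} - \mu \left(1 - x^{2}\right) \dot{x} + x = 0
% driven (or coupled) form, with an external signal F(t)
\ddot{x} - \mu \left(1 - x^{2}\right) \dot{x} + x = F(t)
```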

This oscillator has significant musical value, as it is characterised by behavioural versatility and by rhythmic and timbral complexity and richness; its non-linear response to external signals is also particularly interesting. While well documented in physics and mathematics, the Van der Pol oscillator has not been widely adopted as an analogue model in modular synthesis. With these observations in mind, I am currently developing an extended implementation of the circuit as a set of coupled analogue Van der Pol electronic oscillators. This will culminate in a series of modules to be used for the composition and performance of electronic music, in the studio and live.

In parallel, and alongside the notion of the user as an integrated part of the musical system, my work has also focused on redesigning the interface of a computer-aided switch matrix mixer developed by Lex van den Broek in 2016, known as the Complex. This version allows users to actively and dynamically participate in the recursive architecture of sensitive interwoven systems. 

 

Riccardo Marogna (2nd-year master's)

CABOTO: A Graphic Notation-based Instrument for Live Electronics

The main idea behind my research project is to explore ways of composing and performing electronic music by means of scanning graphic scores. Drawing inspiration from the historical experiments on optical sound by Arseny Avraamov, Oskar Fischinger, Daphne Oram and Norman McLaren, and from the computer-based interface conceived by Xenakis (UPIC, 1977), the project has evolved into an instrument/interface for live electronics called CABOTO. In CABOTO, a graphic score sketched on a canvas is scanned by a computer vision system. The graphic elements are then recognised following a hybrid symbolic/raw approach: they are interpreted by a symbolic classifier (according to a vocabulary) but also read as waveforms and raw optical signals. All this information is mapped onto the synthesis engine. The score is viewed according to a map metaphor, and a set of independent explorers is defined, which traverse the score-map along paths generated in real time. In this way I have a kind of macro-control over how the composition develops, while the explorers are programmed to exhibit semi-autonomous behaviour. CABOTO tries to challenge the boundaries between the concepts of composition, score, performance, and instrument.
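A toy rendering of the map/explorer idea in SuperCollider (not CABOTO's actual implementation): the 8x8 grid of random frequencies and the drunken-walk path are stand-ins for the scanned score and the real path generators:

```supercollider
// Toy version of the score-map and one explorer. The 8x8 grid of random
// frequencies stands in for scanned graphic data.
(
SynthDef(\cell, { |freq = 440, amp = 0.2|
    var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
    Out.ar(0, sig.dup * amp);
}).add;
~map = Array.fill(8, { Array.fill(8, { exprand(200, 2000) }) });
)

// An explorer traverses the map semi-autonomously, sounding each cell it visits.
(
Routine {
    var x = 0, y = 0;
    loop {
        Synth(\cell, [\freq, ~map[y][x]]);
        x = (x + #[-1, 0, 1].choose).clip(0, 7);
        y = (y + #[-1, 0, 1].choose).clip(0, 7);
        0.2.wait;
    }
}.play;
)
```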

 

Sohrab Motabar (2nd-year master’s)

Non-standard Synthesis and Non-standard Structure

The starting point of my research is to explore the structural possibilities of sound material generated by non-standard synthesis, namely the jey.noise~ object in Max. My research also proceeds from a consideration of the technique used by Dick Raaijmakers in Canon 1, where the time interval between two simple impulses becomes the fundamental parameter on which the music is composed. I have investigated the possibilities of replacing these impulses with more complex materials, and these microscopic time intervals with different timescales.
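A minimal sketch of the Canon 1 idea in SuperCollider: pairs of impulses whose separation dt is the composed parameter, here swept from the microscopic into the rhythmic range. The values and sweep are assumptions, and substituting more complex material for the impulses would be the next step:

```supercollider
// Impulse pairs separated by dt; dt itself is the composed parameter.
(
SynthDef(\pulsePair, { |dt = 0.001|
    var imp = Impulse.ar(0);                  // a single impulse
    var sig = imp + DelayN.ar(imp, 1.0, dt);  // its partner, dt seconds later
    Line.kr(0, 0, 1.5, doneAction: 2);        // free the synth afterwards
    Out.ar(0, sig.dup * 0.5);
}).add;
)

// Sweep dt from micro-time (heard as timbre) into rhythmic time (two events).
// The sweep values are illustrative.
Routine {
    var dt = 0.0005;
    while { dt < 0.5 } {
        Synth(\pulsePair, [\dt, dt]);
        dt = dt * 1.3;
        (dt + 0.3).wait;
    };
}.play;
```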

 

Julius Raskevičius (2nd-year master's)

PolyTop: Map-oriented Sound Design and Playback Application for Android

My work is currently focused on creating an Android application that can act as a universal translator of a parameter space into a meaningful 2D map of sounds. The application can control virtual instruments in SuperCollider or any other program that accepts MIDI as parametric control. The goal of this research is to create an app that positions a musical piece as a type of network, enabling gradual gestural transformation of material. This research was prompted by the fact that the majority of touchscreen instruments have inherited the looks of older-generation mouse-oriented interfaces, and thus still rely on the old paradigm of pointing and clicking. Consequently, users of this type of traditional mouse-based interface have been limited to a single interaction with a given virtual instrument at a time. Simply put, gestures involving multiple fingers have not been commonly found in professional sound design programs, even with the widespread adoption of touchpads as a primary input device for portable personal computers.

However, with the growing computing power of touchscreen devices and acceptance of touch as a mode of interaction, new sound-design possibilities are emerging. These developments, combined with visual and sonic input, are allowing touch to become a powerful way of intuitively generating sound. Additionally, the continuous nature of touch gestures promotes the design of sounds encouraging uninterrupted modulation. This also promotes a holistic perspective on sound design, by way of using multi-touch to make possible the simultaneous adjustments of sonic details.  

Overall, this suggests that the possibilities of touch input can be combined with 2D maps of sound parameters. Similar to the way a scrollable digital map represents a geographical area, a 2D map on a tablet can represent all possible permutations of a virtual instrument's parameters. Given that such parametric combinations are nearly infinite, the user takes on the role of an explorer: wandering through the space of parameters and pursuing directions that lead to interesting sonic results, or conversely, avoiding areas on the map that are less intriguing. Multi-touch gestures can speed up and simplify this process, largely by adding intuitive control to the randomisation of parameters.
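A toy model of the map idea, sketched in SuperCollider rather than on Android: four parameter presets sit at the corners of a unit square and a touch position (x, y) interpolates between them. The two parameters and their preset values are assumptions:

```supercollider
// Four corner presets of [freq, cutoff] span a unit-square "map";
// a touch position (x, y) interpolates between them bilinearly.
// The parameters and preset values are assumptions.
(
~corners = [
    [[110, 400],  [880, 400]],     // y = 0 row: left and right presets
    [[110, 4000], [880, 4000]]     // y = 1 row: left and right presets
];

~lookup = { |x, y|
    var top = ~corners[0][0].blend(~corners[0][1], x);
    var bottom = ~corners[1][0].blend(~corners[1][1], x);
    top.blend(bottom, y)           // -> interpolated [freq, cutoff]
};

~lookup.(0.25, 0.5).postln;        // e.g. [302.5, 2200]
)
```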

 

Edgars Rubenis (2nd-year master's)

Adventures in Temporal Field 
pushing past clock-based notions of temporality and the self

While sounding material has always been of central interest in my musical practice, in the course of this master's project I am directing my attention towards the experiential side of musical interactions – the various types of perceptual material that are being received while engaging in the act of listening to music. 

Being interested in music that exists on its own terms, free from obligations towards the listener, I am also aware that in any act of musical listening an inevitable overlapping of worlds takes place. In the course of a musical event our human sphere enters into relations with, and becomes affected by, the principles of the musical world. In some cases it can even be said that these worlds temporarily merge.

Therefore, the focus of this research is on raising awareness of how the “thing that I interact with” not only fills my perceptual space but also shapes its boundaries. Considering that our perception informs us of who/what we are, such musical experiences shape our notions of what our human realm is.

Building strongly on my bachelor's thesis “Use of Extended Duration in Music Composition” (which focused on works of Éliane Radigue, La Monte Young and Morton Feldman) and on earlier personal musical practice of a related kind, I am currently gathering insights into how musical experiences shape our notions of who we are – how they draw the borders of our humanness and legitimise certain types of experiences and states over others.

My notions of perception and temporality are informed by a reading of Edmund Husserl’s On the Phenomenology of the Consciousness of Internal Time and related academic texts.

 

Timothy S.H. Tan (2nd-year master’s)

Spatialising Chaotic Maps with Live Audio Particle Systems

Chaotic maps have already been used to control many parameters in algorithmic music, but have very rarely been applied to spatialisation. Furthermore, chaotic maps are sensitive to tiny changes while still retaining distinctive shapes, thus providing strong gestures and metaphors. This allows for effective control of spatialisation during real-time performances.

Particle systems, on the other hand, provide a novel and effective means for sound design. In computer graphics, they combine regular and random shapes to create visual effects such as smoke, clouds and liquids. However, up to now chaotic maps have not been incorporated into particle systems, and together the two hold promising potential for the choreography of sounds. In my research, I seek to explore this crossroads between chaotic spatialisation and audio particle systems. This involves probing and evaluating the use of chaos and particle systems in music, then spatialising selected chaotic maps with particle systems in upcoming works for performance, and finally documenting my findings.
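A minimal sketch of the first half of this idea in SuperCollider: the Hénon map (x' = 1 − ax² + y, y' = bx, with the classic values a = 1.4, b = 0.3) steering the position of short grains. Stereo panning stands in here for full multichannel spatialisation, and the grain content and rate are assumptions:

```supercollider
// The Hénon map steering position. Stereo pan stands in for
// full multichannel spatialisation; grain content is an assumption.
(
SynthDef(\grain, { |pan = 0|
    var sig = SinOsc.ar(660) * EnvGen.kr(Env.perc(0.005, 0.12), doneAction: 2);
    Out.ar(0, Pan2.ar(sig * 0.2, pan));
}).add;
)

Routine {
    var a = 1.4, b = 0.3, x = 0.1, y = 0.1, xNew;
    loop {
        xNew = 1 - (a * x * x) + y;   // x' = 1 - a*x^2 + y
        y = b * x;                    // y' = b*x
        x = xNew;
        Synth(\grain, [\pan, x.clip(-1.5, 1.5) / 1.5]);  // map state -> position
        0.1.wait;
    }
}.play;
```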

 

Vladimir Vlaev (2nd-year master's)

Real-time Processing of Instrumental Sound and Limited Source Material Composition

My research at the Institute of Sonology focuses on real-time digital processing of acoustic instrumental sound. It is an extension of my background as a composer and an instrumentalist and it is a result of my interest in applying these two activities in the real-time electroacoustic domain. 

At the core of my project lies a compositional approach I call “limited source material composition”. In this approach, ‘material’ or ‘sound material’ has the broad meaning of pitch, timbre or rhythm. I have applied this principle extensively in many of my previous works, and indeed it is a concept whose implementations can be traced from early polyphonic music to certain examples of contemporary instrumental and electronic music. My aim is to bring this non-real-time compositional approach into real time, so that a concept once used for composing a score now serves as a performative and improvisational tool. Time thereby also becomes one of the parameters subject to limitation or restriction. I have sought to accomplish this by designing a real-time sound processing system which uses instrumental sound as its source or ‘material’. In other words, I compose certain digital sampling processes, which then treat the acoustic sound in real time during a performance in order to create an ‘instant composition’. Additionally, this computer-based interface is intended as a tool for both scored composition and live electronic improvisation.

Therefore, in order to accomplish these ideas, I distinguish two main directions in my work:

1) Composing a piece for a solo instrument (prepared piano) and live electronics in which I apply the above-mentioned principles of constraint.

2) Another implementation of the proposed real-time DSP system involves instrument building as an additional activity and is based on the use of a guitar with a hexaphonic pickup as an acoustic sound source with multichannel output. The ability to apply individual processing to each string of the guitar, and thus to create complex polyphonic textures, is one of the major advantages of this implementation.

More specifically, the desired interface is a set of modules, each representing a real-time audio process: ring modulation, delay lines, granulation, filtering, pitch shifting, distortion, a buffer module, etc. Each module has a number of parameters whose values determine its behaviour. The signal flow between the distinct modules is flexible: the system is capable of switching, adding or removing processes from the chain during performance, as well as reversing or changing their order. Achieving smooth control over the parameters is also an essential task of this project.
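A minimal sketch of such a reorderable module chain, using SuperCollider's JITLib; the specific processes and slot indices are illustrative stand-ins for the module set described above:

```supercollider
// Live input passes through indexed processing slots that can be added,
// replaced or removed while playing. Slot order determines the signal flow.
// The chosen processes are illustrative.
(
Ndef(\inst, { SoundIn.ar(0) });                           // instrumental source
Ndef(\inst).filter(1, { |in| in * SinOsc.ar(220) });      // slot 1: ring modulation
Ndef(\inst).filter(2, { |in| CombC.ar(in, 1, 0.25, 3) }); // slot 2: delay line
Ndef(\inst).play;
)

// Re-composing the chain during performance:
Ndef(\inst)[2] = nil;                              // remove the delay
Ndef(\inst).filter(3, { |in| (in * 8).tanh });     // add soft distortion
```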