Lost Oscillations

Lost Oscillations, a recent collaboration between Jim Murphy, Mo H. Zareei, and myself, is a sound installation that requires the human touch – literally – of its audience to reactivate and feel through the layered sonic archaeology of Christchurch: the city’s contemporary and historic soundscapes and its ever-shifting spatial character.

In Lost Oscillations, the immediacy of listening and touch embeds the participant in a field of phantom sound drawn forth from the city by their touch, sonically and emotionally colouring the cityscape surrounding the installation.

The installation was commissioned by the 2015 Audacious Festival of Sonic Art. Jim and I were interviewed by Eva Radich on Radio NZ Concert’s Upbeat show. You can listen to the interview here.

Let x = [Binaural version] (IEM #9)

I’ve just uploaded a binaural version of Let x = (on SoundCloud) for icosahedral loudspeaker (ICO) and 24-channel loudspeaker hemisphere, composed while I was 2014 composer-in-residence at IEM (Graz, Austria). The binaural version combines recordings of the ICO, made using a Schoeps KFM 6 mic, with mix-downs from the 24-channel ambisonic audio. The result isn’t the same as hearing the piece in situ – the verticality of the piece is lost and the degree of immersion is reduced – but it gives some sense of its spatiality and I hope also conveys the ICO’s spatialisation capabilities, which I described in an earlier post.

Just so you know what you’re listening to, the piece is:

In 5 sections, which alternate and combine use of the ICO and hemisphere: 1. ICO > hemisphere; 2. ICO; 3. hemisphere; 4. ICO; 5. hemisphere, ICO/hemisphere > hemisphere. A wide range of tools were used in composing the work, but the most significant were certainly Matthias Kronlachner’s ambix and mcfx plug-in suites, which made the task of mixing and spatialising for both the ICO and the hemisphere wonderfully straightforward (a sketch of the basic ambisonic panning idea follows below, after this description).

The first section of a larger work-in-progress based on the transformation of speech into sounding objects with carnal, cultural and environmental resonances. The texts are metaphors, in multiple languages, coupling the human body and the natural environment, aiming to dissolve “the barrier between ‘over here’ and ‘over there,’… the illusory boundary between ‘inside and outside’” (Timothy Morton). There’s more to read about my compositional intentions and materials, and my creative process, in earlier posts.
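
For readers curious about what the ambix-style workflow mentioned above is actually doing, here is a minimal Python/numpy sketch of the underlying idea: encode a mono source into first-order ambisonics, then decode by sampling the same spherical harmonics at each loudspeaker position. This is a toy illustration of the principle only – the hypothetical 8-speaker horizontal ring, source signal and angles are all my own assumptions; the actual plug-ins work at higher orders, with proper decoder design and with elevation (for the hemisphere).

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                    # one second of audio
mono = 0.5 * np.sin(2 * np.pi * 220 * t)  # placeholder source material

# First-order ambisonic encoding (ACN channel order: W, Y, Z, X) of a mono
# source at azimuth theta (radians, anticlockwise from front), elevation 0.
theta = np.pi / 4
b_format = np.stack([
    mono,                  # W: omnidirectional component
    mono * np.sin(theta),  # Y: left-right
    mono * 0.0,            # Z: up-down (zero for a horizontal-plane source)
    mono * np.cos(theta),  # X: front-back
])

# Naive "sampling" decoder for a hypothetical ring of 8 loudspeakers:
# evaluate the same harmonics in each loudspeaker direction.
spk_az = np.linspace(0, 2 * np.pi, 8, endpoint=False)
decoder = np.stack(
    [np.ones(8), np.sin(spk_az), np.zeros(8), np.cos(spk_az)], axis=1) / 8
speaker_feeds = decoder @ b_format  # shape: (8 speakers, samples)
```

Spatial movement, in this scheme, is just theta varying over time; multichannel filtering, gain and delay of the kind the mcfx suite provides then sits between the decode and the physical outputs.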

Here’s the programme note (nice and concise):

Let x = (2014-)

Kaki Langit – Foot of the Sky

 “Flesh = Earth, Bone = Stone, Blood = Water, Eyes = Sun, Mind = Moon, Brain = Cloud, Head = Heaven, Breath = Wind” (Adams & Mallory, Encyclopedia of Indo-European Culture).

Another way of saying this kind of thing comes from Levi Bryant:

[E]cology must be rescued from green ecology, or that perspective that approaches it as a restricted domain of investigation, pertaining only to rain forests and coral reefs. Ecology is a name of being tout court. It signifies not nature, but relation. To think ecologically is to think beings in relation; regardless of whether that being be the puffer fish, economy, or a literary text. Everything is ecological. Above all, we must think culture and society as ecologies embedded in a broader ecology.


Broken Magic: the Liveness of Loudspeakers (IEM #8)

Further to my experiences working with IEM’s hemisphere and ICO systems in 2014, here’s a draft chapter on the liveness of loudspeakers, and music written for loudspeakers, which may (fingers crossed) appear in print sometime in the coming year.

This reflection attends to the role of the loudspeaker in the creation and experience of liveness in electronic music in the context of immersive loudspeaker environments, such as the Hemisphere at the Institut für Elektronische Musik und Akustik (Graz, Austria), BEAST (Birmingham Electroacoustic Sound Theatre, UK) and 4DSound. These sound systems are allied through their potential to create vibrant aesthetic experiences in the absence of live performers and any significant visual element. Such acousmatic contexts, while not live in a conventional sense, use sonic immersion, dynamic spatial articulation of sound, and the experience of sound as invisible matter, as means to create a unique form of liveness. As a composer of fixed-media electronic music who works within the acousmatic domain, I find this enthralling and somatically powerful, yet highly fragile, as its auditory objects (both real and virtual) are contradicted by the visible physicality of the objects that give rise to them – loudspeakers.

To begin with, it is worth pointing out that loudspeaker listening is ubiquitous. More music is experienced via loudspeakers (including headphones) and acousmatically (without a visual element) than in an unmediated form. Even in live contexts, irrespective of whether music is realised physically-acoustically (traditional performance) or quasi-physically (by operator-musicians interfacing with electronics), most music is mediated by loudspeakers, from classical Indian music through to stadium rock. Even performances of acoustic music in the Western classical tradition are often mediated via loudspeakers, using sound reinforcement systems designed to compensate for acoustic deficiencies in the room or deployed simply to increase the acoustic presence and impact of the music.

Perhaps due to its ubiquity, this technologisation of music is rarely noted. In the case of the sound reinforcement of music in the Western classical tradition, this is unsurprising as such sound systems are designed to be visually and acoustically transparent. For other acoustic musics (jazz or singer-songwriter forms, for example) transparency is also desired, but this often rubs against the requirement of making intimate low amplitude music audible in a larger space. In such cases, as the music is usually reinforced using stereo PA systems with speakers either side of the stage, a spatial division between visual source (musicians, the actual source of sound) and sonic source (loudspeakers) is introduced. This phenomenon is a kind of schizophonia.[1] For an alert listener, this split will be noticeable, but equally the phenomenon of audiovisual magnetisation tends to result in the perceptual gravity of the visual source drawing in sound such that listeners hear it as emanating from the visual source.[2] This phenomenon reaches an extreme in high amplitude performances, such as in clubs or large stadiums, where loudness is often so extreme as to create a pervasive and directionless sonic field. Here the magnetisation effect still applies, often supported by large-scale video projections that create a virtual visual source which may well not be located near the actual visual source. Similarly, in cinemas equipped with surround sound systems, the screen is the experiential locus and any sound which strays from the front and centre will be either magnetised to the screen or, in the case of off-screen sound, explained by on-screen events (as is often encountered in genres involving intense action).

The naturalisation of schizophonic listening in mediated live performance can be contrasted with the aural environments created by composers of fixed-media electronic music using the kind of immersive loudspeaker systems mentioned earlier. In such systems, regardless of the technical approach taken (arrayed stereo, surround, ambisonic, wavefield synthesis), there is a general concern to create a cohesive and credible sonic field or image that is heard as not projecting from the real source of the sound – loudspeakers. Here the trompe l’oreille (the aural equivalent of the trompe l’oeil) is paramount, emerging not from a disavowal of loudspeakers (despite their obvious presence) but from the active effort of the composer-engineer to render them sonically transparent.[3] Such immersive sound systems, in which sound is encountered as a distributed and multiplicitous omnispherical field, are akin to real-world listening – as analysed by Don Ihde[4] – in which sound will be heard from many different spatial locations simultaneously. This affords immersive loudspeaker systems an environmental quality not encountered in live-mediated performance (in which spatial magnetisation is in effect). Sound is always immersive (as Stefan Helmreich has argued) to the extent that Tim Ingold describes sound – like light – as a medium in which we exist (we are “ensounded”).[5] Immersive systems intensify the ensoundment of the listener, deploying sonic space as a core parameter rather than regarding it as a secondary field emanating from the sonic source, much as the interior of a cinema is inevitably lit by reflected light from the screen. Affectively, this intensification can be unnerving and/or exhilarating, as listeners encounter sound as sound, in forms of ambiguous provenance, heard in the absence of visual cues. This redoubling of sonic experience, noted by the blind and exploited by artists working in the acousmatic domain (notably Francisco Lopez, who blindfolds his listeners), is also a feature of environmental audition in which the listener must actively engage in making sense of audial experience that is not an a posteriori residue of visual phenomena nor determined by the conventions of musical cultures and systems.[6]

It is interesting to observe, then, that in mediated live performance audiovisual schizophonia is barely noticed, while in the presentation of fixed-media electronic music – even when this offers a highly naturalistic and unified sound-image – the non-liveness introduced by its non-visuality and the temporal splitting of sound and source draws attention. It appears that the fixed-media acousmatic object (for all its virtual realism), in contrast to the live performance event (no matter how simulated it may be), represents a mode of aesthetic production and reception which falls outside normative understanding of what music is.[7] Indeed, a recurrent theme in electronic music circles is how to overcome the experiential difficulties presented to audiences by loudspeakers, where the provenance of sound is opaque even as its actual technological source is plainly visible.

That loudspeakers, and the media-specific music made for them, place many listeners in an interpretative quandary seems odd. After all, the loudspeaker long ago reshaped our expectation of how music should sound live, just as the recording has warped our expectations of live performance.[8] The sonic presence, grain, balance, and amplitude of music are now inseparable from mediation via loudspeakers, such that listeners will very likely be disappointed by the sonic qualities of purely acoustic or insufficiently reinforced music. No one questions the magnification of sonic scale that takes place, for example, when an electric guitarist gently picks a string, exciting an acoustic response that is barely audible at 10 metres but unleashing an electroacoustic object of enormous intensity (think here of Pink Floyd’s David Gilmour, or the even more restrained Fennesz). Loudspeakers render massively exponential the relationship between input gesture and sonic output, despite the rockist efforts of many performers to take physical ownership of this input/output disparity.

A key observation to make here is that the performer-loudspeaker assemblage is an entirely naturalised assemblage, remaining so even when the performer is not a physical agent but a virtual one, implied in the music and understood as present by the listener. The loudspeaker is granted liveness by the actual or virtual presence of the performer, a presence which also renders the loudspeaker invisible. The performer, on the other hand, is always perceived as live, even when heavily mediated, as is the case in much simulated music (music in which performance is fabricated). When no performer or performance can be readily heard in mediated music, there is no longer liveness. When the human body is not involved, or perceived as involved, in the active production or reception of mediated music, the presence of technology is foregrounded. Thus this assemblage is a tool that breaks when a central component – human agency – is not perceptibly present. Remove the performer and the assemblage is denaturalised, the music broken, the loudspeakers starkly apparent as inorganic technological things. Remove the loudspeakers and you are left with a rather less impressive performance which is nevertheless musical. This also means the assemblage is asymmetrical, formed of unequally weighted components, for on its own the loudspeaker has no liveness, despite its role in generating detailed and/or massive sonic presence, such as is exploited for the sheer somatic impact of Jamaican and club sound systems, or in the 4DSound system’s fourth (low frequency) dimension – underfloor subwoofers – which corporeally enliven the system’s audience.

For the listener, when the ontological shift from performative presence to absence is noticed, there is often an accompanying epistemological shift: music moves towards noise or soundscape (environmental sound) – non-music in any case. Put coarsely, this means that for many listeners there is no music when people are not clearly involved in its sounding form. This is an understandably anthropocentric view. After all, until very recently music has been an exclusively human art, involving technologies that are more like traditional tools, which do nothing without constant human input. Listeners versed in music that is rooted in instrumental or vocal performance (and its historically determined materials and organisational systems) tend to find that technologically grounded music lacks “soul” or “spirit” or sounds “like it was made by a machine” (sometimes it is). This is the Memorex trope (“Is it live or is it Memorex?”, i.e. technology), given contemporary expression in the high value placed on detailed simulations of human performance (in sample library-based film soundtracks, for example), or in this statement from electroacoustic music theorist Denis Smalley: “music which does not take some account of the cultural imbedding of [performative] gesture will appear to most listeners a very cold, difficult, even sterile music.”[9]

This points to the ontological ambiguity of loudspeaker music (“What are you?” as Batman was asked in 1989), produced by a different form of schizophonia than that found in live-mediated music, where live visual source and audial sound source are spatially split but temporally (more or less) synchronous. Rather it is one in which there is sonic-spatial unity – the trompe l’oreille in which the listener is ensounded – with a temporal split between sound and source. This is unavoidable in any fixed-media music, for what is heard is not happening live but is a reproduction of earlier events or a fabricated event which may have no correlate in reality.[10] This temporal split introduces the space in which interpretation must occur, a necessity intensified by the fact that there’s nothing to see (except loudspeakers). Audiovisual experience involves not only the spatial magnetisation of sound to image, but also the warping of sound into visually determined forms (ontologically, epistemologically and affectively). By contrast, the monomodality of loudspeaker music, the hermeneutic gap it exists in, and the form’s intensification through immersive sound environments create significant uncertainty for the listener, radically focusing them on sonic experience and its multivalence.

Multivalence is of course a feature of any (musical) experience. As Jean-Jacques Nattiez has it, “[What] horizons of experience might the musical work invoke? […] these horizons are immense, numerous and heterogeneous.” But these horizons, when not visually-anthropomorphically determined (as most music is), are magnified still further. Loudspeaker music shifts the centre of gravity away from the performer and towards the listener, reconstituting liveness as listener-determined. By way of example, consider the following. The ontological dimension of music, when detached from real or virtual human sources, becomes unsettled and labile, and can only be stabilised through listener interpretation: Alva Noë asks “What would disembodied music even be?”, concluding that there is no such thing as music without bodies to make it, even if we can’t see them; loudspeaker music complicates this through its profusion of unknown and unfamiliar bodies. There is also the somatic or corporeal dimension of sound and its physical-affective stimulation of the listener’s body, often coupling sound with the resonant spaces of the body to produce affects which can be pleasurable (bass music), painful (sonic weapons), or stimulating (Maryanne Amacher’s works involving the physiological response of the ear itself).[11] Similarly, there are the empathetic-affective responses of listeners to sonic corporeality: the involuntary response at hearing the noises of another person’s body (as analysed by Stacey Sewell)[12], or the sounds of physical bodies and processes – physis – which embed and implicate the listener’s body in the mesh of an acoustic hyperobject that is no longer tied to the performing bodies of human beings.[13]

The liveness of loudspeaker music, then, particularly in immersive sonic environments, emerges in the interaction of sound, space and the somatic, affective and interpretative activity of the listener. This can only happen in the absence of performer and performance, and in the presence of the loudspeaker. Such liveness is both singular and radical, particularly when considered within a contemporary cultural context dominated by multimedia, whether spectacular or mundane. Yet the loudspeaker is always a broken tool, its visual-physical presence undermining the very audial-immaterial – but corporeal – experiences it creates, even as this deficient object propagates qualitative abundance, ontological ambiguity and somatic immediacy.

[1] Introduced as a pejorative term by R. Murray Schafer to describe the technological splitting of sound from source in recording. The term has since been used in a more positive sense by ethnomusicologist Stephen Feld.

[2] Chion, M. (1994). Audio-vision: Sound on screen. New York: Columbia University Press.

[3] Batchelor, P. (2007). Really Hearing the Thing: An Investigation of the Creative Possibilities of Trompe L’Oreille and the Fabrication of Aural Landscapes. Electroacoustic Music Studies conference proceedings 2007.

[4] Ihde, D. (2007). Listening and voice: A phenomenology of sound. Buffalo: SUNY Press.

[5] Helmreich, S. (2007). An anthropologist underwater: Immersive soundscapes, submarine cyborgs, and transductive ethnography. American Ethnologist, Vol. 34, No. 4, pp. 621–641; Ingold, T. (2007). Against Soundscape. In Autumn leaves: Sound and the environment in artistic practice, Carlyle, A. (ed.). Paris, France: Association Double-Entendre in association with CRISAP.

[6] For a discussion of auditory perception in the blind, see Blesser, B., & Salter, L. (2007). Spaces speak, are you listening?: Experiencing aural architecture. Cambridge, Mass: MIT Press. Environmental audition is discussed in Fisher, J. (1998). What the Hills Are Alive with: In Defense of the Sounds of Nature. The Journal of Aesthetics and Art Criticism, Vol. 56, No. 2: 167-179.

[7] Object is used here to denote something concrete and fixed, in contrast to the use of event to denote something indeterminate, in flux. Object can of course be used as a blanket term for anything that is perceived, as in philosopher Graham Harman’s phenomenological use of the term.

[8] Katz, M. (2004). Capturing sound: How technology has changed music. Berkeley: University of California Press.

[9] Smalley, Denis (1997). “Spectromorphology: explaining sound-shapes”. Organised Sound 2(2): 107–26.

[10] See Dellaira, M. (1995). Some Recorded Thoughts on Recorded Objects. Perspectives of New Music, vol. 33, 1 & 2: 192-207.

[11] Goodman, S. (2010). Sonic warfare: Sound, affect, and the ecology of fear. Cambridge, Mass: MIT Press; Amacher, M. (1999). Sound characters (making the third ear). New York: Tzadik.

[12] Sewell, S. (2010). Listening Inside Out: Notes on an embodied analysis. Performance Research: A Journal of the Performing Arts, vol. 15, 3: 60-65.

[13] Morton, T. (2013). Hyperobjects: Philosophy and ecology after the end of the world. Minneapolis: University of Minnesota Press.

Situated and non-situated spatial composition (IEM #7)

What do I think I mean by space in spatial composition? In this post I want to outline a distinction between situated and non-situated spatial composition, at least as it applies to my use of the IEM icosahedral loudspeaker (ICO) and 24-channel hemisphere array. This distinction extends upon an earlier post arguing for the ICO as an instrument for electronic chamber music.

In multichannel composition, including ambisonic works (and the very few wavefield synthesis compositions that exist), space is generally understood as the acoustic space, as a virtual sound field (VSF) and its shaping over time, created within the listening space, whether this be via headphones or a loudspeaker array of some sort in a room. The VSF is a composed space, which may consist of multiple other kinds of spaces (outlined in considerable detail by Denis Smalley, PDF), either real or virtual in provenance, but which ultimately are all virtual spaces, regardless of this provenance. A microphone recording of a real-world scene (a field recording) is a real space rendered virtual through the decontextualisation of recording and recontextualisation within the VSF of the musical work. In some cases the spatial resolution of the recording and its reproduction (recontextualisation) may be very high indeed, such that it might pass a blind listening test (an odd idea, but it has a history). In other instances the recording might be of a lower resolution such that the listener can identify the scene captured in (not by) the recording but would acknowledge that a spatial transformation has been enacted (stereo recordings “flatten” real acoustic space even as they convey a strong sense of space). Where the VSF is created through entirely synthetic means, say additive synthesis with multiple delay lines creating a sense of the sound coming into being within a space (implying reflections and reverb), the VSF is not real but has qualities which are heard as spatial through reference to real-world acoustic spaces. Between the fully virtual space heard as having real-world (i.e. physical) qualities and the real-world space that is rendered virtual through recording, there are hybrid spaces created by digital audio processing which may transform the spatiality of recordings or impart new spatial qualities upon any sonic materials (the VSF created by impulse response reverberation is paradigmatic here). Such digital events and interactions are of course virtual, even if they convey qualities that impart a sense of the real world. [Another way to say all this, but quickly: the spatial attributes of sound within the VSF are a product of reference to the physical-acoustic spatiality of the real world, regardless of the actual provenance of the VSF.]
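
Since I’ve called impulse response reverberation paradigmatic, a minimal sketch may make the point concrete: the spatial signature of a real room is captured as an impulse response and imposed on arbitrary material by convolution, yielding a VSF with real-world provenance but virtual existence. The file names here are placeholders, and mono signals are assumed.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, fs = sf.read("dry_source.wav")       # any mono recording (placeholder)
ir, fs_ir = sf.read("room_impulse.wav")   # measured mono room IR (placeholder)
assert fs == fs_ir, "resample first if the sample rates differ"

# Convolution imposes the captured room acoustic on the dry material:
# a real space, rendered virtual, recontextualised within the work's VSF.
wet = fftconvolve(dry, ir)
wet /= np.abs(wet).max()                  # normalise to avoid clipping
sf.write("dry_source_in_room.wav", wet, fs)
```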

Spatial composition engages with the VSF alone unless it engages directly with the sonic space that contains the VSF – the room – and with the interactions between the VSF and the space of its reproduction or audition (the real physical space in which sound is reproduced and heard) in a way that transforms the VSF itself. The room itself is usually not taken into account, or even noticed, unless it in some way interferes with the VSF. (One could think of the room in a Heideggerian sense: it is a tool for presenting music and isn’t noticed until it itself speaks, which in this case means speaks out of turn, interferes.) Typically, such interference is something to be counteracted so as to ensure that the VSF is compromised to the least possible extent. A composer or sound artist working within this understanding is engaging in non-situated spatial composition. The musical work here is self-contained and can be transposed from one space to another without transformation. However, if the real space is brought into deliberate interaction with the VSF, or at an extreme is entirely integrated with it, then we are talking about situated composition. The situated composition, like site-specific art, cannot be moved from the site for which it has been realised without transformation.

There’s a great deal to be said about this topic, and many examples that could be discussed, but here I’m focusing on the way that the ICO requires an approach that is situated (unless one uses it simply as another – albeit prolific – loudspeaker, which is to entirely ignore its full potential, something I discussed in an earlier post). The ICO affords and encourages a situated approach because one of its most interesting uses is as an instrument to “orchestrate reflecting surfaces” (Sharma and Zotter, PDF), and doing this of course requires that the VSF itself be brought into close relationship with the room acoustic. This means that the way in which one spatialises musical material with the ICO will have to be created anew, or at least adapted, if this material is shifted from one room to another, especially if the material itself has been created to allow space to be given emphasis in the composition (which seems to require a reduction in the complexity of the music itself). Furthermore, given that each room has its own acoustic qualities, not all materials will afford equal spatiality in all spaces (an upcoming post will explore this topic). At an extreme, this means that a situated work might truly be site-specific – to move the piece is to lose the piece (so to say). The IEM hemisphere, on the other hand, is simply yet another multichannel array. A carefully considered and constructed array, but more or less homogeneous with other arrays which have been designed to allow the realisation of non-situated VSFs (just like cinemas, which can be better or worse, but in the end are designed for cinematic experience, not the experience of a cinema).

Working with the ICO and the hemisphere together for Let x = required that I minimise the situated-composition potential of the ICO, constraining such usage to spatial effects (there must be a better word to use than this…) which either strongly contrasted with the VSF of the hemisphere, or which afforded smooth blending between the two. In the former case I tended to use the ICO solo, as a complex acoustic surface, with layered textural materials skittering and moving across its surface, emphasising direct sound (the half of the ICO oriented towards the audience) more than indirect. In the latter case I tended to use the ICO to create reflections in the front-left corner of the room (directly behind the ICO), which allowed blended transitions between ICO “solos” and sections in which the hemisphere predominated (this worked because it was difficult to distinguish between reflected sound from the ICO and direct sound from the hemisphere in this part of the room). Such usage was not necessary, and certainly there are many other possible approaches to combining the ICO and the hemisphere, but this approach was taken partly as a response to the site-specific limitations of using the ICO in IEM’s smallish (for a concert hall) Cube space. The size of this space means that the audience sits quite close to the ICO, reducing the possibilities for orchestrating reflections (for most of the audience there’s too little difference between direct and reflected sound for this to be compositionally useful, excepting the cases just described). In other words: in using the ICO I took an approach that was situated, perhaps even site-specific, but which didn’t fully explore the ICO’s potential as an instrument. So when Let x = is presented in another space it will require a fair amount of reworking (I’m ignoring the conundrum of what to do with music written for an instrument – the ICO – that only exists at IEM). In using the hemisphere I took a standard compositional approach, the creation of a VSF which can be realised using any similar multichannel system, and this approach was only minimally adapted to afford a certain set of interactions with the ICO.

What would be interesting as a next step is to move this hybrid (non-)situated piece into a larger space, to hear the ICO in interaction with such a space and listen to the ways it demands that my materials and spatialisation be transformed in order to remain effective. If this isn’t possible, then I’ll know that Let x = is a fully situated work.

Let x = technology (IEM #6)

It has often struck me that while there is a plethora of information, particularly online, about how to use audio technology, it is very rare to see a composer talk about their use of technology towards creative outcomes. Why is this? A cynical interpretation is that there’s a kind of anxiety around technology-based creativity, driven by concern not to be regarded as an insufficiently technical composer, or perhaps by the fear that creativity involving technology is not “real” creativity (you feed something into the black box and it spits out the finished work). I completely reject the former, citing the work of composers such as John Cage and New Zealander Douglas Lilburn, which, in very different ways, is not particularly sophisticated at a technical level but is quite singular in terms of its musical value. The latter is a (pseudo-)fallacy, but does point to the reality that the work of electronic musicians, and particularly composers of electronic music, is very often determined – to greater and lesser extents – by the engineers of the tools they are working with.

To go down the fully deterministic route, one could argue that you simply wouldn’t have electronic dance music – with its rigidly metronomic rhythms – without early sequencers (including drum machines). Early technology didn’t do “humanise”; it did 16-step (sometimes more) sequencing, with each step having precisely the same duration. At a certain point in musical history this was abhorrent, and then all of a sudden it was a style, entirely accepted as authentic music making. Put post-humanly, this means the musical machine afforded new ways not just of making music but also of hearing and understanding it (as celebrated in Kodwo Eshun’s afrofuturist paean More Brilliant than the Sun). In popular electronic music, a terribly vague but useful categorisation, this seems not to be a problem at all and in fact is integral to some of the many fleeting microgenres that Adam Harper celebrates (see his Fader article on the voice in the digital landscape).

By contrast, technological determinism, let alone technological influence, doesn’t go down at all well in electronic art music (EAM, another vague but useful category). I think here of Denis Smalley’s (1997) exhortations against technological listening and the purported sterility of music which does not feature qualities that are perceived on the human level of gesture and utterance. In EAM, much of which is made in a context attached to the tail end of Romanticism/Modernism (the difference is not so great), man masters machine and does so alone. But this old-school humanistic/anthropocentric approach is blind to the degree to which the composer is bound to the machine and to the engineer of that machine (digital or analogue). Another way to say this is that EAM is a collaborative practice, even when there’s just one person in the room. And yes, of course there are many artists who are also architects of their own machines (Michael Norris and his Soundmagic Spectral VST plugins, for example), yet the history of EAM is strewn with unacknowledged relationships between composers and the technicians/technologists who aided and abetted them (Sean Williams has researched the relationship between Karlheinz Stockhausen and the WDR studio technicians and technology, but I’ve yet to read his work). This does seem to be changing, and I’m looking forward to reading Williams’s chapter on King Tubby’s and Stockhausen’s use of analogue technology and the influence it had on their sound (the two are presumably considered independently, but what a fantastic pairing!). The notion that Stockhausen’s work has a sound is already an upsetter. Stockhausen made music, his music was sound, but did not have a sound (“the seductive, if abrasive, curves of [Studie II’s] mellow, noisy goodness”). Yes, it does, just like Barry Truax’s PODX-era granular work has a sound, and many works from the era of early FFT have a sound, and countless late musique concrète composers sound like GRM-Tools, and my work has a sound which sometimes sounds like FFT and sometimes like GRM-Tools etc etc.

This has gone a little off-target, but it does support my initial point: composers of EAM don’t like to talk about how they do what they do. They’ll tell you what they used to do it, but not what they did with it. Similarly, technologically adept artists will explain the tools they’ve developed, but not how these tools have been creatively applied. In either case this is a shame, as it limits the pace and depth at which the practice can evolve. If artists explain how they do what they do, other artists can learn from them, and apply a particular set of technical routines in their own ways. I don’t buy the argument that this might lead to a kind of technology-based plagiarism. There’s already enough sonic and aesthetic homogeneity in EAM. Opening up its creative-technological processes would, I imagine, lead to greater technical refinement and a wider creative palette, and – heaven forbid – perhaps even some criticism of aesthetic homogeneity where it is found. More than this, acknowledgement on the part of composers that they are using technology that has been designed and implemented by another human being might actually lead to establishing a stronger feedback loop between engineer and end-user. This is one of the real beauties of using the Ardour and Reaper DAWs – their design teams implement user feedback in an iterative design process – resulting in DAWs that are much, much easier and friendlier to use than just about any other I can think of. It also strikes me that what I’m outlining is different to the kind of DIY/Open Source culture that makes contemporary software and electronic art cultures so strong. I’m not talking about how to make analogue and digital stuff, but rather how to make musical stuff with it (and if this requires that both the technology and its creative deployment be discussed, all the better).

It is of course a fair point that the artist might not want to spend their time explaining how they do what they do (there’s already too little time in which to do it), but I do think practitioners should open up their laptops and outline the ways in which they achieve certain creative outcomes. If this simply reveals that they ran a recording of dry kelp (being cracked and ripped) through GRM-Tools Shuffling and pushed the parameter faders around until they got a sound (a sound!) they liked, that would be a start. This is just what I did almost 20 years ago when I first started seriously trying to make EAM. What I still haven’t done is explain to myself, or anyone else, why this combination of sound, DSP and parameter settings produced a result that made me feel there was musical value in what was coming out of the loudspeakers. The initial act may have been relatively simple (setting aside the design of the still wonderful GRM-Tools), but the process and outcomes are not. Untangling and explaining this, or indeed any (interesting) creative-technological method, could be a valuable and useful thing to do. So, this is a task I’m setting myself: in a future entry on this blog, hopefully the next one, I’ll attempt to dissect a particular technical method used in the composition of Let x = and also try to explain why the outcome of the process found musical application (i.e. had musical value to me in the context of the work-in-progress).
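
As a small down-payment on that promise, here is the general shape of the kelp experiment in code: a crude grain shuffler that chops a recording into short fragments and reorders them at random. I should stress this is a stand-in for the idea of shuffling, not GRM-Tools’ actual (and doubtless far more refined) algorithm; the file name and grain length are arbitrary.

```python
import numpy as np
import soundfile as sf

audio, fs = sf.read("dry_kelp.wav")     # placeholder for the source recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)          # fold to mono for simplicity

grain_len = int(0.080 * fs)             # 80 ms grains: one "fader" to push around
n_grains = len(audio) // grain_len
grains = audio[:n_grains * grain_len].reshape(n_grains, grain_len)

rng = np.random.default_rng()
shuffled = grains[rng.permutation(n_grains)]   # random reordering of grains

env = np.hanning(grain_len)             # envelope to soften grain boundaries
out = (shuffled * env).reshape(-1)
sf.write("kelp_shuffled.wav", out, fs)
```

Why such an operation produced something of musical value is, of course, exactly the question that remains to be answered.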

Icosahedral Loudspeaker: the ICO as instrument for electronic chamber music (IEM #5)

IEM’s ICO is a 20-channel loudspeaker developed by Franz Zotter at IEM. The ICO and its smaller spherical cousin were developed as part of Zotter’s PhD research into sound radiation synthesis as a tool for replicating the acoustic radiation of instruments and measuring the acoustic response of rooms: “This work demonstrates a comprehensive methodology for capture, analysis, manipulation, and reproduction of spatial sound-radiation. As the challenge herein, acoustic events need to be captured and reproduced not only in one but in a preferably complete multiplicity of directions” (Zotter, 2009). The ICO was developed primarily as a technical tool, but through collaborations between Zotter and composer / sound artist Gerriet Sharma it has found application as a creative tool, or indeed as an instrument. As Sharma and Zotter (2014) outline: “[The ICO] is capable of providing a correct and powerful simulation of musical instruments in their lower registers in all their 360◦ directional transmission range. The device is also suitable for the application of new room acoustic measurements in which controllable directivity is used to obtain a refined spatial characterization.” It is this “controlled directivity” that has primarily found artistic application. The “beamforming algorithm developed in [Zotter’s PhD research] allows strongly focused sound beams to be projected onto floors, ceilings, and walls… [This] allows to attenuate sounds [sic] from the ICO itself while sounds from acoustic reflections can be emphasized. Beams are not only freely adjustable in terms of their radiation angle, also different ones can be blended, or their beam width can be increased. A loose idea behind employing such sound beams in music is to orchestrate reflecting surfaces, maybe yielding useful effects in the perceived impression.”
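
Zotter’s actual beamforming is grounded in spherical-harmonic radiation synthesis and is far more rigorous than anything I could sketch here, but the underlying intuition of steering a beam by weighting drivers can be caricatured in a few lines: weight each driver by how closely its outward direction aligns with the target direction, with an exponent controlling beam width. The driver geometry below is random, a stand-in for the ICO’s real icosahedral layout.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Stand-in geometry: 20 outward unit vectors (NOT the real ICO layout).
rng = np.random.default_rng(0)
drivers = np.array([unit(v) for v in rng.normal(size=(20, 3))])

def beam_gains(beam_dir, sharpness=4.0):
    """Toy axisymmetric beam: gain per driver from its alignment with the
    target direction; a higher exponent narrows the beam."""
    alignment = drivers @ unit(beam_dir)          # cosine of angle to target
    gains = np.clip(alignment, 0.0, None) ** sharpness
    return gains / gains.max()                    # best-aligned driver -> 1.0

# Aim a beam at the ceiling, e.g. to excite an overhead reflection:
print(np.round(beam_gains([0.0, 0.0, 1.0]), 2))
```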

ICO 20-channel loudspeaker (IEM)

My work with the ICO, in combination with the IEM 24-channel hemisphere array, certainly confirmed that beam-forming can find artistic application, and indeed that the phenomena described by Zotter and Sharma are actual (hearing is believing). My exploration of the ICO’s propensities as a compact loudspeaker array is confirmed by the small-sample listener-response research presented in their 2014 paper. Using spatial controller plug-ins (VST) developed by Matthias Kronlachner, it is mercifully trivial to shape and control acoustic beams in terms of perceived size and movement, and to use beam-forming to create reflections on surfaces within the performance space.

It is the latter that is perhaps the most surprising and lively aspect of the ICO (although there’s much to be said for the capabilities of the ICO as an acoustic surface, see below). To my ears this is because it requires one to fully engage with the rich interactions of source material, loudspeaker and room response in ways which one tends not to do when using the hemispherical array (or indeed any other multichannel approach). This is simply because in such arrays one is concerned with creating a virtual sound image/space and concern for the room acoustic tends to be limited to minimising its impact on the qualities of audio reproduction or reinforcement (in the case of live electronic music). Using the ICO as an instrument to “orchestrate reflecting surfaces” on the other hand, requires engagement with the acoustic properties of the performance space as an integral aspect of the creative process, and also an awareness of the specific capabilities of the ICO. The outcomes of this are interesting:

The (electroacoustic) work itself can no longer be considered independent of the space in which it is to be performed. In composing Let x = (2014-) for the ICO and hemisphere, for example, many of the compositional decisions made in those sections of the work for the ICO alone were based on achieving results that may not be achievable in other spaces. (I hope I get the chance to find out!) When combining the ICO with the hemisphere this was not the case, as the latter masks the acoustic response of the IEM Cube space, making such sections or passages more readily transposable to other spaces. One of the really exciting things about working with the ICO is that you are required to closely tune in to the interaction of sound, space and movement, and often encounter results that are quite unexpected as the room response enacts spatial behaviours that could not be anticipated from the topology and morphology of source material projected from the ICO. The room itself is revealed as a sonic object integral to the work as a whole, and this raises the very question of what it is to compose spatially.

When one uses the ICO to orchestrate space, or more accurately to orchestrate perceived relationships between sound and space (sound and space being interdependent), one is orchestrating for a specific space, creating a relatively high (but far from total) degree of site-specificity. In moving a work from one space to another, the piece then needs to be spatially recomposed in order to work in a new acoustic context. This is entirely possible, as Sharma wonderfully demonstrated in his Signale portrait concert (11 Nov 2014) in the György-Ligeti-Saal in MUMUTH, which featured works not originally composed for this space. The question arises though: what kind of work, using the ICO and focused on sound-space orchestration, is more readily adapted for effective outcomes in different spaces, and indeed are all spaces suitable for the ICO and such application? (I hope to return to this topic in a later post, including the ways in which the ICO can be used in sound installation work, as it has been by Martin Rumori.)

The ICO, despite its unique affordances, is characterised by a number of limitations. That these are noticeable is due to recognition of what the ICO is as a musical object, rather than a response to the false expectation that the ICO is some kind of super-loudspeaker, possessed of sonic superpowers. In fact, such limitations should lead one to consider the ICO as an instrument which, as with any instrument used effectively, needs to be used in ways that exploit its strengths and are not compromised by its shortcomings (don’t ask a trumpet to do the rapid passage work of a flute, for example). Due to the driver size (16.5cm), the frequency response of the ICO is reduced below around 150Hz, and the power of each individual speaker is also limited. These limitations can be overcome by coupling loudspeakers to create greater loudness and improve bass response (the typical solution is to feed low-passed signal below 150Hz to all 20 loudspeakers). However, these solutions have acoustic-spatial implications. Bass material, even in the low-mid range, is clearly non-directional when an omni-source is created (as just described), which decouples this frequency range from directional beam-forming, producing occasionally quite unusual spatial effects (which Sharma has exploited). Similarly, beam-forming is compromised when loudspeakers are coupled to increase loudness, as the signal is spread over a larger area. Moreover, the icosahedron that houses the 20 loudspeakers is itself something of a resonating chamber which, although far from functioning like a resonator in a traditional instrument, has its own acoustic colour.
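
The omni-bass solution just described is, in effect, ordinary bass management. Here is a sketch of how it might be implemented offline, assuming a (drivers × samples) array of beamformed feeds and a fourth-order Butterworth crossover at 150Hz (both details are my assumptions, not IEM’s implementation):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
lp = butter(4, 150.0, btype="lowpass", fs=fs, output="sos")
hp = butter(4, 150.0, btype="highpass", fs=fs, output="sos")

def bass_manage(feeds):
    """feeds: (20, n_samples) array of per-driver signals. The band below
    150Hz is summed and shared by all drivers (an omnidirectional source,
    decoupled from beamforming); the rest stays directional."""
    low = sosfilt(lp, feeds.sum(axis=0)) / feeds.shape[0]  # crude level scaling
    high = sosfilt(hp, feeds, axis=-1)
    return high + low            # broadcast the shared bass to every driver

feeds = np.random.default_rng(1).normal(size=(20, fs))  # placeholder feeds
managed = bass_manage(feeds)
```

A real implementation would also need to consider driver coupling, phase behaviour around the crossover, and level compensation, but the decoupling of the low band from directionality is exactly the effect described above.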

Beam-forming and sound-spatial orchestration are two of the strengths of the ICO, but I shouldn’t forget that the ICO packs a lot of loudspeakers into a relatively compact object. This comes right down to the possibility of addressing each loudspeaker individually, creating a very distinct point source. While I was surprised by what can be done in working with the ICO as a tool for creating acoustic reflections, I was equally pleased to hear it realise my ideas for enacting complex acoustic surfaces: pointillistic textures skittering across its surface, sweeps of material oscillating between indirect and direct loudspeakers, clearly stratified layers of sound, all emanating from a discrete spatial field. As I hope was evident in Let x =, the ICO offers an abundance of spatial-compositional possibilities, even before using it to stimulate and shape room response or conjoining it with a more traditional array such as the IEM hemisphere.

Given the propensities and limitations just outlined, it’s my feeling that the ICO should not only be considered an instrument, but an instrument suited to chamber music. After all, in itself it is a chamber, a space with a certain resonance that gives it the quality of enclosing sound as much as projecting it. In other words, it has its own sound, its own sonic grain, as any instrument does. Through its frequency response and loudness it is best suited to small and medium-sized rooms, such as those in which chamber music is best heard, not so much because it cannot project sufficiently to be heard in large spaces, but because it requires listener proximity to allow perception of the detail it generates, both in itself and in the space it activates.

Guided by voices (IEM #4)

After having written below about the need to introduce constraints into the creative process, and then having fully immersed myself in the composition of the piece, what has become clear is that while it is useful to introduce pre-compositional constraints – establishing a work-concept that informs the creative process – these constraints themselves are only a scaffold, eventually replaced by the more substantial constraints the work itself gradually establishes the further one moves inside this emergent territory. For example, in working with vocal material for the newly completed Let x = I had presumed that the heterogeneity of voice, the fact that it resonates at causal, semantic, and reduced levels (voice as voice, meaning and sound), could be accommodated in a single work. Certainly it can be, and there are plenty of examples of works in which all these levels (and more) operate simultaneously (one of my favourites remains John Young’s Sju), but in Let x = the constraint that the work itself introduced was a product of exploring the spatial sound environments afforded by IEM’s Ikosaeder (20-channel loudspeaker) and ambisonic hemisphere (24 channels), and the combination of these. In investigating what can be done with these spatially, it quickly became clear that the work was veering away from voice as speech (semantic) and voice as voice (causal) (aside from in a delimited but structurally significant way, i.e. as a means to mark key structural moments in the piece), and that in fact it needed to, wanted to, was going to do so. The voice as sound, transformed but still vocalesque (voice-like), afforded enough sonic ambiguity and abstraction for sound materials to be utilised spatially without the histrionics of voices that behave like winged creatures or the schizophrenic effects of invisible conclaves, juries and choruses. The outcome is a work through which, at least in my reckoning (I don’t need to be reminded that “the author is dead”), I manage to achieve one of the initial aims, but which also guided itself in a direction I had not anticipated. In the vocally tinged spatial atmospheres, textures and trajectories of Let x = there is a commingling of voice and environment which I feel fulfils my stated aims of “the transformation of speech into objects resonating at embodied… and environmental levels” and the dissolution of “the barrier between ‘over here’ and ‘over there,’… the illusory boundary between ‘inside and outside’” (Tim Morton). Yet at the same time, the aim of deploying speech “as an ‘interface language’ between the kinaesthetic (nonverbal expression, including music, utterance, gesture and space) and the cenaesthetic (complex cognitive structures, including poetic language and the semantics of music)” has not really begun to be explored, simply because there is so little direct speech in the piece, and when speech is heard it is in forms difficult to understand unless one is Indonesian (“Kaki langit”, the foot of the sky, is the phrase that opens the piece) or capable of deciphering six languages at once (the closing moment of the piece simultaneously introduces the same phrase in English, German, Italian, Farsi, Bengali and Indonesian).
Recognising that the piece guided itself is an important thing for me, not only because it acknowledges the extent to which things themselves have their own propensities and powers to which we are always having to respond (Graham Harman’s Guerrilla Metaphysics is good on this topic), but also because it means one can feel good about letting one’s own work-concept fall away and trust that engagement with the complex thing-in-itself (shadowy though its being is) will produce an object which is cohesive in its own ways, irrespective of how closely it matches the hypothetical thing it was intended to become. One of the other very satisfying aspects of this is that I can still attempt to compose the piece I thought I was composing, and by concentrating less on spatiality and more on semantics, perhaps a new work will emerge which “[deploys] speech as an ‘interface language’ between the kinaesthetic… and the cenaesthetic”. Therefore the work itself is not finished. Let x =.