Eric Lyon

    Articulated noise is a computer-assisted strategy for applying various flavors of noise to the structuring of musical compositions. The approach applies equally well to creating electroacoustic music and instrumental scores, and makes no assumptions about musical style. Articulated noise is discussed here in terms of its precedents, philosophical and aesthetic basis, methods adopted by the author, and current compositional work employing the approach.

    1. ORIGIN OF ARTICULATED NOISE

    The impetus for articulated noise emerged from “Rose of the World,” a work of computer music I composed for the CD “Clairaudience: New Music from Electronic Voice Phenomena (EVP)” [4]. EVP is a scientifically unverified method of allegedly detecting spirit voices captured on blank media such as magnetic tape. Absent confirmation of supernatural causation, EVP may be regarded as an example of a statistical “type I error,” a false positive. My compositional approach in “Rose of the World” was to mix an EVP ...
    The Bregman Studio was founded in 1967 by Jon Appleton with a gift from Gerald Bregman '54, and was built around a Moog synthesizer and the tape recording technology available at the time. In 1974 the Bregman Electronic Music Studio moved to Dartmouth's Thayer School of Engineering to support a joint project in digital synthesis between Professor Appleton and Thayer research associate Sydney Alonso (later joined by undergraduate Cameron Jones), which became the Synclavier, the first commercially available portable digital synthesizer. In 1980 the studio moved to its current home in Hallgarten as part of the Hopkins Arts Center, shortly after the graduate program was inaugurated.
    In celebration of the 25th anniversary of Pure Data, this essay discusses the development of audio programming up to the present, and considers the role that Pd can continue to play in the computer music of the future.
    The Cube is a recently built facility that features a high-density loudspeaker array. The Cube is designed to support spatial computer music research and performance, art installations, immersive environments, scientific research, and all manner of experimental formats and projects. We recount here the design process, implementation, and initial projects undertaken in the Cube during the years 2013–2015.
    FFTease is a collection of Max/MSP objects implementing various forms of spectral processing, including cross synthesis, morphing, noise reduction, spectral compositing, and other unique and unusual transformations. This paper discusses the functionality of these processors and addresses architectural issues pertinent to spectral signal processing within the framework of Max/MSP.
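    One common form of cross synthesis combines the magnitude spectrum of one sound with the phase spectrum of another. The sketch below illustrates that idea for a single FFT frame using NumPy; the function name and the single-frame treatment are illustrative assumptions, not the FFTease objects' actual real-time implementations.

    import numpy as np

    def cross_synthesize_frame(frame_a, frame_b):
        """Combine the magnitude spectrum of frame_a with the phase spectrum of frame_b."""
        spec_a = np.fft.rfft(frame_a)
        spec_b = np.fft.rfft(frame_b)
        hybrid = np.abs(spec_a) * np.exp(1j * np.angle(spec_b))
        return np.fft.irfft(hybrid, n=len(frame_a))

    # Hypothetical usage: combine the magnitude of a sine tone with the phase of noise.
    sr, n = 44100, 1024
    t = np.arange(n) / sr
    tone = np.sin(2 * np.pi * 440.0 * t)                  # magnitude source
    noise = np.random.default_rng(0).standard_normal(n)   # phase source
    hybrid_frame = cross_synthesize_frame(tone, noise)
    print(hybrid_frame.shape)                              # (1024,)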
    This colloquy may be the first multi-perspective, in-depth look at a music video. We can imagine why there’s been such a paucity of music-video scholarship. It’s not only that, as Ann Kaplan has observed, music videos straddle a border between advertising and art, but also that the analyst must feel comfortable addressing the music, the image (including the moving bodies, cinematography, and editing), the lyrics, and the relations among them. (This might include looking at a dance gesture against a harmonic shift and an edit, and asking how these might relate to one another.) A collective approach is probably the best way to understand a clip and the genre, and it brings additional benefits. Music videos are open forms, and as each analyst charts his or her path through the video, we get a sense of a personal perspective (and readers can then more carefully track their own trajectories as well).

    Each of us takes on a different facet: Dani Oore writes on the song’s rhythm arrangement; Eric Lyon attends to rhythm and the song’s production features; Gabriel Ellis attends to the song’s multiply-stylized vocal performances; Maeve Sterbenz considers harmony and gesture; Gabrielle Lochard looks closely at race and the background figures; Dale Chapman attends to “APESH**T” in relation to other African American, opulent, art-inspired videos as well as their bonds to neoliberalism; Jason King considers larger contemporary phenomena, including other films, that turn to the museum as a historical repository that might help us solve what feels like humanity under threat; Kyra Gaunt describes how The Carters confront exclusionary regimes of power and other “ape-shit” through a mosaic of art, music, and media; and I offer an overview of music-video aesthetics, and some possible ways of finding a path through the video.

    We hope our tack will inspire a confederated approach, where art historians, dance scholars, media experts, and those who work on poetry and rap lyrics, costuming and architecture would write alongside us.
    Traditional software synthesis systems, such as Music V, use an instance model of computation in which each note instantiates a new copy of an instrument. An alternative is the resource model, exemplified by MIDI "mono mode," in which multiple updates can modify a sound continuously, and where multiple notes share a single instrument. We have developed a unified, general model for describing combinations of instances and resources. Our model is a hierarchy in which resource-instances at one level generate output, which is combined to form updates to the next level. The model can express complex system configurations in a natural way.
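    The contrast between the two models can be seen in a few lines of Python. This is a minimal sketch of the distinction the abstract describes, not the unified hierarchical model proposed in the paper; the class and names are illustrative assumptions.

    class Instrument:
        """One synthesis voice whose sound can be reshaped by successive updates."""
        def __init__(self, name):
            self.name = name
            self.pitch = None

        def update(self, pitch):
            # Continuous control of an already-sounding voice, as in MIDI mono mode.
            self.pitch = pitch
            print(f"{self.name}: sounding {pitch} Hz")

    # Instance model: every note instantiates a fresh copy of the instrument.
    for i, pitch in enumerate([220.0, 277.2, 329.6]):
        Instrument(f"voice-{i}").update(pitch)

    # Resource model: successive updates modify a single shared instrument.
    shared = Instrument("mono-voice")
    for pitch in [220.0, 277.2, 329.6]:
        shared.update(pitch)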
    POWERpv: a suite of sound processors. Author: Eric Lyon. In: Proceedings of the 1996 International Computer Music Conference, 1996, ISBN 962-85092-1-7, pp. 285-286.
    Spectral tuning is a sound processing method based on the phase vocoder. In this method, most or all partials are tuned to a fixed set of frequency values corresponding to a desired scale. If the input sound is a monophonic melody, the result is a kind of auto-tuning, providing greater flexibility in tuning schemas than with commercial auto-tuning programs. When
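    The core operation can be sketched independently of the phase vocoder: each detected partial frequency is moved to the nearest member of a fixed set of scale frequencies. The function and the example scale below are illustrative assumptions; the actual method performs this retuning bin by bin inside a phase vocoder.

    import numpy as np

    def snap_to_scale(partial_freqs, scale_freqs):
        """Move each partial frequency to the nearest frequency in the scale set."""
        partials = np.asarray(partial_freqs, dtype=float)
        scale = np.asarray(scale_freqs, dtype=float)
        nearest = np.argmin(np.abs(partials[:, None] - scale[None, :]), axis=1)
        return scale[nearest]

    # Hypothetical example: snap arbitrary partials to a small set of scale tones.
    scale = [220.0, 261.6, 293.7, 329.6, 392.0, 440.0]
    print(snap_to_scale([233.1, 300.0, 430.0], scale))  # -> [220.  293.7 440. ]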
    ABSTRACT The mapping problem is inherent to digital musical instruments (DMIs), which require, at the very least, an association between physical gestures and digital synthesis algorithms to transform human bodily performance into sound. This article considers the DMI mapping problem in the context of the creation and performance of a heterogeneous computer chamber music piece, a trio for violin, biosensors, and computer. Our discussion situates the DMI mapping problem within the broader set of interdependent musical interaction issues that surfaced during the composition and rehearsal of the trio. Through descriptions of the development of the piece, development of the hardware and software interfaces, lessons learned through rehearsal, and self-reporting by the participants, the rich musical possibilities and technical challenges of the integration of digital musical instruments into computer chamber music are demonstrated.
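    At its simplest, the association the abstract describes can be a one-to-one mapping from a normalized sensor reading to a synthesis parameter, as in the sketch below. The names and ranges are illustrative assumptions; the trio described here uses far richer, interdependent mappings.

    def map_linear(value, lo, hi):
        """Map a normalized sensor value in [0, 1] onto the range [lo, hi]."""
        clamped = max(0.0, min(1.0, value))
        return lo + clamped * (hi - lo)

    # Hypothetical example: a muscle-tension reading of 0.42 drives a filter cutoff in Hz.
    print(map_linear(0.42, 200.0, 4000.0))  # -> 1796.0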
    I use the term 'computer music' to denote any music for which computation is essential to its production. I use the term 'electroacoustic music' to describe music in which electricity is required for the synthesis or processing of the sound materials. Given the steep rise of the use of ...
    Eric Lyon: I think it's fair to say that without the work of today's speakers, computer music as we know it would not exist. Now that we have them all together, we have an unprecedented opportunity both to focus on their individual achievements and to assess the bigger picture of where ...
    The ambitious goal towards which this panel was convened is to “cover all needs and create a format which everyone is willing to use” [1]. While I am undecided about the need for an interchange format at this time, I am certain that it is time for the discussion that will be stimulated by ...
    A Sample Accurate Triggering System for Pd and Max/MSP. Eric Lyon, Music - School of Arts, Histories and Cultures, The University of Manchester, Martin Harris Building, Coupland Street, Manchester, M13 9PL, United Kingdom, eric.lyon@manchester.ac.uk. Abstract: A system ...
    This paper curates two collections of externals originally created for both Max/MSP and Pure Data (Pd) at a time before the coding protocols of the two programs started to significantly and increasingly diverge. The current distributions of these two packages for Pd were created to finally separate the Max/MSP code from the Pd code. We will focus on some of the functionalities of this software that are not easily obtained by combining other Pd objects. Some of the more recent tools are especially suited for spatial composition, through arbitrary panning schemes, or by spectrally fractionating an incoming sound, for spectral diffusion to multiple loudspeakers. Several different spectral processors are also introduced.
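    Spectral fractionation can be sketched as splitting the bins of an FFT frame across several output channels, so that different spectral regions can be diffused to different loudspeakers. The round-robin assignment below is an illustrative assumption, not necessarily how the Pd objects distribute bins, and a real-time processor would apply this frame by frame.

    import numpy as np

    def fractionate_frame(frame, n_channels):
        """Distribute the FFT bins of one frame round-robin over n_channels outputs."""
        spectrum = np.fft.rfft(frame)
        outputs = []
        for ch in range(n_channels):
            masked = np.zeros_like(spectrum)
            masked[ch::n_channels] = spectrum[ch::n_channels]  # keep every n_channels-th bin
            outputs.append(np.fft.irfft(masked, n=len(frame)))
        return outputs

    # Hypothetical check: the per-channel signals sum back to the original frame.
    frame = np.random.default_rng(1).standard_normal(1024)
    channels = fractionate_frame(frame, 4)
    print(np.allclose(sum(channels), frame))  # True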