
R. Causse

In this paper, we present a model of the oscillations of a semi-infinite jet that could be used for time-domain simulations of flute-like musical instruments. We first calculate the spatial amplification coefficient of an infinite jet for different piecewise linear velocity profiles. We then present a receptivity model that enables us to link the oscillation modes of an infinite jet to the flow at the jet formation points when a transverse acoustic field is applied. These results are then used to calculate the displacement of a jet with a top-hat velocity profile, for which our receptivity model seems well adapted.
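As a sketch of the standard spatial-stability formalism behind such a calculation (the notation here is ours, not necessarily the paper's), the transverse jet displacement is written as a wave with complex wavenumber:

```latex
\eta(x,t) = \eta_0 \, e^{i(kx - \omega t)}, \qquad k = k_r + i\,k_i ,
```

so that, for a real forcing frequency $\omega$, the envelope evolves downstream as $|\eta| \propto e^{-k_i x}$ and the spatial amplification coefficient $-k_i$ is found, for each piecewise linear profile $U(y)$, as a root of the dispersion relation of the Rayleigh equation,

```latex
(U - c)\left(\phi'' - k^2 \phi\right) - U'' \phi = 0, \qquad c = \omega / k .
```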
This article discusses progress and advances in the Acoustic-Aggregate-Synthesis project (first described in Proceedings from ICMC, 2012). Acoustic-Aggregate-Synthesis is a real-time performance tool which fuses synthetic and acoustic sound sources in order to achieve semi-acoustic re-synthesis of a pre-defined acoustic model. This technology functions most effectively, from a cognitive standpoint, when it is used with an instrument which has been modified to allow synthesis to emanate via the same channels as those from which the instrument's acoustic signal emanates (i.e. with electronic diffusion of synthesis taking place inside the instrument itself). Thus, the project comprises elements of both software development and instrument modification. At the heart of this initiative is the desire to maintain the acoustic amplification & diffusion patterns, attack/sustain/release characteristics, etc. of a given instrument whilst overriding its timbral characteristics in f...
This article discusses advances in Acoustic-Aggregate-Synthesis, as described in Acoustic-Aggregate Synthesis, ICMC Proceedings [2012]. Acoustic-Aggregate-Synthesis is a real-time performance tool which fuses synthetic and acoustic sound sources to achieve semi-acoustic resynthesis of a pre-defined acoustic model. This technology is intended for use with instruments which have been modified to allow the synthesis to emanate from the same channels as those of the instrument's acoustic signal (i.e. with integrated transducers). Thus, the project comprises elements of both software development and instrument extension. At the heart of this initiative is the desire to maintain the idiomatic qualities of a given instrument (acoustic amplification & diffusion patterns, attack/sustain/release characteristics, etc.) whilst 'overriding' its timbral characteristics.
Motivated both by theoretical physical interest and by potential musical considerations, we have experimented with electronic systems which interact with an actual physical string in a way that duplicates the action of a bow. With appropriate adjustment of parameters, the string can be made to break into spontaneous Helmholtz motion. Results using both digital and analog systems are reported, and
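As a hedged illustration of the underlying mechanism (not the authors' actual circuit), the bow-string interaction is commonly modeled with a friction force whose magnitude falls with slip speed; a digital system can evaluate such a curve from the measured string velocity and feed the resulting force back through an actuator. The function name and parameter values below are illustrative only:

```python
import math

def bow_force(v_string, v_bow=0.1, f_bow=1.0, v_c=0.05):
    """Hyperbolic friction model: force exerted on the string by the bow.

    v_string: measured string velocity at the bowing point
    v_bow:    bow velocity; f_bow: maximum (sticking) force
    v_c:      characteristic slip velocity of the falling friction curve
    """
    dv = v_string - v_bow          # relative (slip) velocity
    if abs(dv) < 1e-12:            # sticking: force at its maximum
        return f_bow
    # sliding: magnitude decays with slip speed, sign opposes the slip,
    # i.e. the force always pulls the string toward the bow velocity
    return -math.copysign(f_bow * v_c / (v_c + abs(dv)), dv)
```

The falling (negative-slope) sliding branch is what permits stick-slip oscillation, so with suitable parameters a feedback loop built on such a curve can let the string settle into Helmholtz motion.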
cote interne IRCAM: Helie11a
Introduction. In most human-computer interaction paradigms, variations in gesture execution are commonly treated as noise within a gesture class. This is not the case for musical performance: each variation should be regarded as a possible expression of a musical meaning [1]. Therefore, for music applications, the analysis of a given gesture can be considered a twofold process: the recognition of a gesture class, and the characterization of the nuance embodied in its execution. In addition to idiosyncratic ...
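A minimal sketch of this twofold analysis, assuming precomputed class templates over a toy feature space (all names and data here are hypothetical illustrations, not the authors' system):

```python
import math

# Stage 1 input: illustrative class prototypes (feature vectors)
TEMPLATES = {
    "bow_stroke": [1.0, 0.0, 0.5],
    "tap": [0.0, 1.0, 0.2],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def analyse(gesture):
    """Stage 1: recognize the class (nearest template).
    Stage 2: characterize the nuance as the deviation from it."""
    label = min(TEMPLATES, key=lambda k: distance(gesture, TEMPLATES[k]))
    nuance = [g - t for g, t in zip(gesture, TEMPLATES[label])]
    return label, nuance

label, nuance = analyse([0.9, 0.1, 0.6])
# `label` names the recognized class; `nuance` keeps the expressive
# deviation instead of discarding it as noise
```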
In order to have reproducible and controllable experiments, a robotized artificial mouth dedicated to brass instruments has been developed (CONSONNES project of the French National Research Agency). The actuators control: (A1) the airflow, (A2) the mouth position (monitoring the lip force applied to the mouthpiece), and (A3-4) the water volume in each lip (a water-filled latex chamber). The sensors measure: (S1-2) the pressure in the mouth and the mouthpiece, (S3: optical sensor) the area between the lips, (S4) the lip/mouthpiece force, (S5-6) the water pressure in each lip, and (S2bis-S4bis) the positions of the moving coils (A2-4). We present an open-loop control obtained from measurements, according to the following steps. First, a calibration of the lip control is performed by analyzing signals (S4-6) w.r.t. positions (S2bis-S4bis), with no airflow. Second, slowly time-varying calibrated commands (A2-4) are used to obtain quasi-stationary regimes (non-oscillating, quasi-periodic, etc.) for constant airflows. Third, a sound analysis is performed to generate maps of features (energy, fundamental frequency, etc.) w.r.t. the calibrated control parameters. This exploratory tool allows us to build a dictionary of relevant control parameters from which basic sequences of notes can be played, and during which all the signals (S1-6) can be recorded.
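The exploration step can be sketched schematically as a sweep that maps each calibrated command to sound features; the helper names (extract_features, explore, fake_play), the toy feature set, and the command encoding below are hypothetical illustrations, not the project's actual code:

```python
def extract_features(signal):
    """Placeholder feature extraction: energy and a crude pitch proxy
    (zero-crossing count) computed from a recorded sample list."""
    energy = sum(s * s for s in signal)
    zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {"energy": energy, "f0_proxy": zero_crossings}

def explore(commands, play):
    """Sweep calibrated commands at constant airflow and build the
    map (cartography): command -> sound features."""
    return {cmd: extract_features(play(cmd)) for cmd in commands}

# Toy stand-in for the instrument: signal depends on a (lip, mouth)
# command pair; a zero command yields a silent (non-oscillating) regime.
def fake_play(cmd):
    lip_water, mouth_pos = cmd
    return [((-1) ** i) * lip_water * mouth_pos for i in range(8)]

carto = explore([(0.0, 1.0), (0.5, 1.0), (1.0, 1.0)], fake_play)
# entries with zero energy correspond to non-oscillating regimes; the
# dictionary can then be searched for commands that yield target notes
```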