WO2007054285A1 - A method and system for sound reproduction, and a program product - Google Patents
A method and system for sound reproduction, and a program product
- Publication number
- WO2007054285A1 (PCT/EP2006/010704)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound reproduction
- mobile terminal
- audio content
- mapping
- leading mobile
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2205/00—Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
- H04R2205/024—Positioning of loudspeaker enclosures for spatial sound reproduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2420/00—Details of connection covered by H04R, not provided for in its groups
- H04R2420/05—Detection of connection of loudspeakers or headphones to amplifiers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2420/00—Details of connection covered by H04R, not provided for in its groups
- H04R2420/07—Applications of wireless loudspeakers or wireless microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/301—Automatic calibration of stereophonic sound system, e.g. with test microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/308—Electronic adaptation dependent on speaker or headphone connection
Abstract
A method for sound reproduction comprises the steps of: a) receiving or generating a mapping (30) from at least two instruments, from at least two frequency ranges, from at least two directional channels (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS), or from any combination thereof, to at least two sound reproduction devices (12, 13, 14, 25, 11) or to at least one sound reproduction device (12, 13, 14, 25) and a leading mobile terminal (11); b) receiving audio content; c) using said mapping (30) on the audio content to pass sound information describing an instrument, a frequency range, or a directional channel (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS) from said audio content to a corresponding sound reproduction device (12, 13, 14, 25, 11); and d) at a leading mobile terminal (11), synchronizing playback on at least one of said sound reproduction devices (12, 13, 14, 25, 11) over a wireless local connection.
Description
A method and system for sound reproduction, and a program product
Field of the invention The invention relates to methods, program products and systems for sound reproduction.
Background art
Not only home stereo systems and stereo televisions can be used to reproduce sound - especially music - but also mobile terminals, such as mobile telephones, portable computers, or portable music players that can receive audio content can be used for this purpose.
Because the acoustic properties of most mobile terminals have been modest at best, interest in using a mobile terminal for sound reproduction has mostly been limited to the reproduction of human speech. Thanks to recent engineering efforts, it can be expected that upcoming mobile terminal models will have improved audio properties.
Summary of the invention
It is an objective of the invention to provide a method, a program product and a system for sound reproduction, with which the user experience of sound reproduction may be further improved.
This objective can be met with a method as set out in claim 1, with a program product as set out in claim 9, or with a system as set out in claim 10.
The dependent claims describe various advantageous embodiments of the invention.
Advantages of the invention
If a method for sound reproduction comprises the steps of: a) receiving or generating a mapping from at least two instruments, from at least two frequency ranges, from at least two directional channels, or any combination thereof, to at least two sound reproduction devices or to at least one sound reproduction device and a leading mobile terminal; b) receiving audio content; c) using said mapping on the audio content to pass sound information describing an instrument, a frequency range, or a directional channel from said audio content to a corresponding sound reproduction device; and d) at a leading mobile terminal, synchronizing playback on at least one of the sound reproduction devices over a wireless local connection, then the audio content can be output via the sound reproduction devices, giving the impression that it is being reproduced by a surround speaker set or by a small orchestra.
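Method steps a) to c) can be sketched in code. This is a minimal illustration only: the function names `build_mapping` and `apply_mapping` are hypothetical, and the audio content is modeled as a dictionary of per-channel sample lists, which the patent does not specify.

```python
def build_mapping(channels, devices):
    """Step a): map each directional channel to a sound reproduction device."""
    if len(devices) < 2:
        raise ValueError("need at least two sound reproduction devices")
    return dict(zip(channels, devices))

def apply_mapping(mapping, audio_content):
    """Step c): pass each channel's sound information to its mapped device."""
    per_device = {}
    for channel, samples in audio_content.items():
        # route this channel's samples to the device assigned by the mapping
        per_device.setdefault(mapping[channel], []).append((channel, samples))
    return per_device

mapping = build_mapping(["L-FRONT", "R-FRONT"], ["device 12", "device 25"])
streams = apply_mapping(mapping, {"L-FRONT": [0.1, 0.2], "R-FRONT": [0.3, 0.4]})
```

Step d), the wireless synchronization, is omitted here; it is described separately below.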
If the method step d) is performed at the leading mobile terminal by transmitting a synchronization signal to at least one of the sound reproduction devices, the user control over the sound reproduction may be improved.
If said synchronization signal is transmitted as an optical signal, a gimmick in the form of a light show may be obtained, at least if the optical signal is at least partly within the visible light spectrum. Furthermore, the proper functioning of the sound reproduction devices may be checked in a less complex manner, even if one or more of the sound reproduction devices or the mobile terminal has been muted.
When the mapping is automatically adapted responsive to a change in the relative position of the sound reproduction devices from each other or from the leading mobile terminal, or responsive to a change in the availability of a sound reproduction device, the user experience may be improved further: if one of the sound reproduction devices is taken away, the missing instrument, frequency range, or directional channel can instead be output by at least one of the other sound reproduction devices.
If also the method steps a) to c) are performed at the leading mobile terminal, delays due to communication to and from the network can be minimized.
If the method steps a) to c) are performed at a network server, the required computing power at the leading mobile terminal can be reduced.
With a system for sound reproduction that comprises not only a leading mobile terminal with a program product adapted to carry out the method according to the invention when executed in a processing unit, but also at least one sound reproduction device with means adapted to receive, from the leading mobile terminal, sound information describing at least one instrument, at least one frequency range, or at least one directional channel, and to receive synchronization information over wireless local connection means, the user experience may be improved. Especially if at least one of the further sound reproduction devices is also a mobile terminal, the versatility of the mobile terminals can be improved, possibly with a gimmick effect.
List of drawings
In the following, the invention is described in more detail with reference to embodiments shown in the accompanying drawings in Figures 1 to 4, of which:

Figure 1 illustrates the idea of a mobile orchestra;

Figure 2 shows the conductor together with sound reproduction devices that together form a mobile orchestra;

Figure 3 illustrates how the mapping may be used on audio content; and

Figure 4 shows system architecture with a network server for using the mapping on audio content.

Same reference numerals refer to similar structural elements throughout the Figures.
Detailed description
Figure 1 illustrates the idea of a mobile orchestra. Sound reproduction devices 12, 13, 14, of which there may be deliberately many, form the mobile orchestra. The leading mobile terminal that conducts the orchestra may only direct the performance of the mobile orchestra or also be a part of it. In the following, the leading mobile terminal is referred to as conductor 11.
If the sound reproduction devices 12, 13, 14 are mobile terminals too, there may be graphics on the display of each mobile terminal illustrating the face of a musician, the face of a musician robot, or an instrument. The same applies to the conductor 11, but preferably a picture of a conductor is shown instead of a musician.
Figure 1 shows the conductor 11 from behind, illustrating the principle that if the leading mobile terminal has a display on both sides of its housing, the graphics on the display on the back side of the housing may differ from those on the front side, preferably showing the same object as the other display but from behind. The same principle can be used to implement graphics on the display or displays of the sound reproduction devices, especially if they are mobile terminals.
The looks of the musicians may be changed automatically responsive to the music style. Jazz or blues musicians may have some characteristics, such as clothing, showing a different style or ethnic background than that of musicians playing some other class of music, for example. Sets of images may be mapped to a given music style or performer, which may be recognized automatically through a genre, artist, or record identifier stored with the audio content.
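The genre-to-graphics selection described above can be sketched as a simple lookup. The genre tags, file names, and metadata layout below are invented for this illustration; the patent only states that an identifier stored with the audio content is used.

```python
# Hypothetical image sets keyed by genre identifier (assumed names).
IMAGE_SETS = {
    "jazz": ["jazz_sax.png", "jazz_bass.png"],
    "blues": ["blues_guitar.png", "blues_harmonica.png"],
}
DEFAULT_SET = ["generic_musician.png"]

def images_for(metadata):
    """Pick musician graphics from a genre identifier stored with the content."""
    # fall back to a generic musician when the genre is missing or unknown
    return IMAGE_SETS.get(metadata.get("genre", "").lower(), DEFAULT_SET)
```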
Figure 2 shows the conductor 11 together with sound reproduction devices 12, 13, 14, 25 that together form the mobile orchestra.
The conductor 11 receives or generates a mapping 30 from at least two instruments, from at least two frequency ranges, or from at least two directional channels to at least two sound reproduction devices 12, 13, 14, 25. The mapping 30 may further comprise information to map at least one instrument, at least one frequency range, or at least one directional channel to the conductor 11.
The conductor 11 receives audio content 31. Then the conductor 11 uses the mapping 30 on the audio content 31 to pass sound information describing an instrument, a frequency range, or a directional channel from the audio content 31 to a corresponding sound reproduction device 12, 13, 14, 25 or to itself. Then the conductor 11 preferably synchronizes playback on the sound reproduction devices 12, 13, 14, 25 over a wireless local connection, possibly together with the local playback on the conductor 11.
The individual tracks can be transferred from the conductor 11 to each of the sound reproduction devices 12, 13, 14, 25 via data cable, IrDA, or Bluetooth.
To keep all instruments, frequency ranges, or directional channels synchronized, the audio content preferably comprises control signals for synchronization. If the conductor 11 detects a control signal (Fig. 3, a synchronization mark), it signals this to the sound reproduction devices 12, 13, 14, 25, preferably by lighting up or flashing a light source, such as an LED flash. The light emitted by the light source is preferably at least partially within the visible light spectrum in order to have a visual effect.
The control signals, such as synchronization marks, can be placed in the audio content at constant time intervals, e.g. every 20 milliseconds. If the audio content is constant bit-rate audio content, the use of synchronization marks may not be necessary, since the number of buffers reproduced can then be synchronized with an internal timer at the respective sound reproduction device or in the conductor 11.
The sound reproduction devices 12, 13, 14, 25 detect the signaling, e.g. via their light sensors LS, and responsive to the detecting they may discard the rest of the stream buffer and start playing the next buffer which begins with the synchronization signal. By using this kind of synchronization method, the extent of quality degradation, such as jitter, as observed by human listeners can be minimized.
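Both synchronization variants described above can be sketched as follows. The function names and the buffer model are assumptions for illustration; the 20 ms interval is the example given in the text.

```python
SYNC_INTERVAL_MS = 20  # synchronization marks placed at constant time intervals

def resync_index(current_index, sync_detected):
    """On detecting the optical sync signal, discard the rest of the current
    buffer and continue from the next one, which starts at the sync mark."""
    return current_index + 1 if sync_detected else current_index

def expected_buffer_index(elapsed_ms, buffer_ms=SYNC_INTERVAL_MS):
    """For constant bit-rate content, an internal timer alone tells which
    buffer should be playing, so explicit marks may be unnecessary."""
    return elapsed_ms // buffer_ms
```

Jumping to the buffer that begins at the mark (rather than adjusting mid-buffer) is what bounds the audible jitter mentioned above.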
The mobile orchestra may give better stereo or surround sound than a single sound reproduction device, since the distances between the sound reproduction devices 12, 13, 14, 25 and the conductor 11 can be larger than those of normal wired speakers. Cables between the handset and speakers are not necessary; they are replaced by wireless communication.
Figure 3 illustrates how the mapping may be used on audio content. The mapping 30 is used on the audio content, such as that of an audio file 31, to pass sound information describing at least two instruments, frequency ranges, or directional channels to the corresponding sound reproduction devices.
In the example of Figure 2, the mapping 30 is a mapping from four directional channels to four sound reproduction devices 12, 13, 14, 25 and from one frequency range to the conductor 11. In the mapping 30, the directional channels and the sound reproduction devices assigned are L-FRONT (sound reproduction device 12), L-REAR (sound reproduction device 13), R-REAR (sound reproduction device 14), and R-FRONT (sound reproduction device 25). The frequency range BASS is assigned to the conductor 11.
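The example mapping of Figure 2 can be written out as a table. The dictionary form below is an illustration, not the patent's own data structure.

```python
# Mapping 30 of Figure 2: four directional channels to four devices,
# plus one frequency range assigned to the conductor.
MAPPING_30 = {
    "L-FRONT": "sound reproduction device 12",
    "L-REAR": "sound reproduction device 13",
    "R-REAR": "sound reproduction device 14",
    "R-FRONT": "sound reproduction device 25",
    "BASS": "conductor 11",
}
```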
The mapping 30 may be automatically adapted responsive to a change in the relative position of the sound reproduction devices 12, 13, 14, 25 from each other or from the leading mobile terminal 11.
Alternatively or in addition, the mapping 30 may be automatically adapted responsive to a change in the availability of a sound reproduction device 12, 13, 14, 25. If one sound reproduction device disappears, because of an empty battery or because its user takes the device away, the mapping 30 may be modified by mapping that device's part of the audio content to another sound reproduction device or to the conductor instead of the disappeared (or disappearing) device.
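The availability-driven adaptation above can be sketched as a remapping step. The fallback policy (reassigning orphaned channels to the conductor) is one possible choice, assumed here for illustration; the text allows reassignment to any remaining device.

```python
def remap_on_loss(mapping, lost_device, fallback="conductor 11"):
    """Reassign every channel mapped to a disappeared device to a fallback
    target, leaving all other assignments untouched."""
    return {channel: (fallback if device == lost_device else device)
            for channel, device in mapping.items()}

adapted = remap_on_loss({"L-FRONT": "device 12", "R-FRONT": "device 25"},
                        lost_device="device 25")
```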
Figure 4 shows system architecture with a network server 400 for using the mapping on audio content, such as an audio file 31. The mapping 30 resides in the network server 400, which uses it on the audio file and passes the resulting processed audio file 33, preferably through the Internet, to the conductor 11. The conductor 11 passes the processed audio file 33 as a whole, or only partially, to the sound reproduction devices 12, 13, 14, 25.
As already explained with reference to Figure 2, the audio file 31 may alternatively be converted directly at the conductor 11 from a music file into a desired number of partial audio files to be passed to the desired number of sound reproduction devices.
It is also possible that the audio content already provides partial audio files, i.e. tracks. In this case, the mapping 30 preferably comprises a mapping from each of the tracks to at least one sound reproduction device 12, 13, 14, 25 or to the conductor 11.
Alternatively or in addition, the audio content, especially the audio file 31, may be in the form of a MIDI file, containing information on the sound to be reproduced by different instruments. In this case, the mapping 30 preferably comprises a mapping from each instrument to at least one sound reproduction device (or to the conductor).
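The per-instrument routing for MIDI content can be sketched as follows. The event model — (instrument number, event) pairs — and the conductor fallback are assumptions for this sketch; the patent only states that the file contains per-instrument sound information.

```python
def split_by_instrument(events, instrument_map, conductor="conductor 11"):
    """Route each MIDI event to the device mapped to its instrument number;
    unmapped instruments fall back to the conductor (assumed policy)."""
    per_device = {}
    for instrument, event in events:
        device = instrument_map.get(instrument, conductor)
        per_device.setdefault(device, []).append(event)
    return per_device

parts = split_by_instrument(
    [(0, "piano-note"), (32, "bass-note")],
    {0: "device 12", 32: "device 13"},
)
```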
Claims
1. A method for sound reproduction, comprising the steps of:
- a) receiving or generating a mapping (30) from at least two instruments (11, 12, 13, 14), from at least two frequency ranges, from at least two directional channels (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS), or from any combination thereof, to at least two sound reproduction devices (12, 13, 14, 25, 11) or to at least one sound reproduction device (12, 13, 14, 25) and a leading mobile terminal (11);
- b) receiving audio content (31);
- c) using said mapping (30) on the audio content (31) to pass sound information describing an instrument (11, 12, 13, 14), a frequency range, or a directional channel (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS) from said audio content to a corresponding sound reproduction device (12, 13, 14, 25, 11); and
- d) at a leading mobile terminal (11), synchronizing playback on at least one of said sound reproduction devices (12, 13, 14, 25, 11) over a wireless local connection.
2. A method according to claim 1, wherein: the method step d) is performed at the leading mobile terminal (11), by transmitting a synchronization signal to at least one of said sound reproduction devices (12, 13, 14, 25).
3. A method according to claim 2, wherein: said synchronization signal is transmitted as an optical signal.
4. A method according to claim 2 or 3, wherein: the mapping (30) is automatically adapted responsive to a change in the relative position of the sound reproduction devices (12, 13, 14, 25) from each other or from the leading mobile terminal (11), or responsive to a change of availability of a sound reproduction device (12, 13, 14, 25).
5. A method according to claim 2, 3, or 4, wherein: also the method steps a) to c) are performed at the leading mobile terminal (11).
6. A method according to claim 2, 3, or 4, wherein: the method steps a) to c) are performed at a network server (400).
7. A method according to any one of the preceding claims, wherein: said audio content (31) is an audio file.
8. A method according to claim 7, wherein: said audio content is transmitted as streaming to the leading mobile terminal (11) from a network server (400).
9. A program product, comprising: software means adapted, when executed in a processing unit, to carry out at least the method steps c) and d) according to any one of the preceding claims.
10. A system for sound reproduction, comprising:
- a leading mobile terminal (11) comprising a program product according to claim 9; and
- at least one sound reproduction device (12, 13, 14, 25) comprising means (LS) adapted to: i) receive sound information describing at least one instrument (11, 12, 13, 14), at least one frequency range, or at least one directional channel (L-FRONT, L-REAR, R-REAR, R-FRONT, BASS) from said leading mobile terminal (11); and ii) receive synchronization information from said leading mobile terminal (11) over wireless local connection means (LS).
11. A system according to claim 10, wherein: at least one of the sound reproduction devices (12, 13, 14, 25) is a further mobile terminal.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05024347A EP1784049A1 (en) | 2005-11-08 | 2005-11-08 | A method and system for sound reproduction, and a program product |
EP05024347.6 | 2005-11-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007054285A1 true WO2007054285A1 (en) | 2007-05-18 |
Family
ID=36659810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2006/010704 WO2007054285A1 (en) | 2005-11-08 | 2006-11-08 | A method and system for sound reproduction, and a program product |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP1784049A1 (en) |
WO (1) | WO2007054285A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102009031995A1 (en) * | 2009-07-06 | 2011-01-13 | Neutrik Aktiengesellschaft | Method for the wireless real-time transmission of at least one audio signal |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090298420A1 (en) * | 2008-05-27 | 2009-12-03 | Sony Ericsson Mobile Communications Ab | Apparatus and methods for time synchronization of wireless audio data streams |
FR2970574B1 (en) | 2011-01-19 | 2013-10-04 | Devialet | AUDIO PROCESSING DEVICE |
CN103065658B (en) | 2012-12-18 | 2015-07-08 | 华为技术有限公司 | Control method and device of multi-terminal synchronized playing |
EP2804397B1 (en) * | 2013-05-15 | 2015-07-08 | Giga-Byte Technology Co., Ltd. | Speaker system with automatic sound channel switching |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000076272A1 (en) * | 1998-12-03 | 2000-12-14 | Audiologic, Incorporated | Digital wireless loudspeaker system |
WO2004023841A1 (en) * | 2002-09-09 | 2004-03-18 | Koninklijke Philips Electronics N.V. | Smart speakers |
US20040159219A1 (en) * | 2003-02-07 | 2004-08-19 | Nokia Corporation | Method and apparatus for combining processing power of MIDI-enabled mobile stations to increase polyphony |
-
2005
- 2005-11-08 EP EP05024347A patent/EP1784049A1/en not_active Withdrawn
-
2006
- 2006-11-08 WO PCT/EP2006/010704 patent/WO2007054285A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
EP1784049A1 (en) | 2007-05-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 06828963 Country of ref document: EP Kind code of ref document: A1 |