Disclosure of Invention
The sounding system, the control method, and the vehicle provided in this application help improve the flexibility of sounding-system design, make it easier for a user to modify the sounding system, and improve the user experience.
In a first aspect, the application provides a sounding system comprising N terminal units and M preset interfaces, where each of the N terminal units comprises a communication module and a sounding device, each of the M preset interfaces comprises a power supply interface and/or a communication interface, the power supply interface is configured to supply power to a terminal unit, the communication interface is configured to send signaling and/or data to a terminal unit through its communication module, N and M are positive integers, and M is greater than or equal to N.
In this technical solution, designing N terminal units and M preset interfaces in the sounding system allows the terminal units to be flexibly connected to the preset interfaces, so that a user can flexibly arrange the positions or the number of the terminal units and create different sound-effect environments. At the same time, the user can easily modify the sounding system, which improves the user experience.
In some possible implementations, the terminal unit may be a car audio device, and the preset interface may be a car body interface. In this case, the sounding system may be a car audio system.
The car audio device may be, for example, a speaker or a speaker box.
The car audio system comprises N car audio devices and M car body interfaces, where each car audio device comprises a communication module, each car body interface comprises a power supply interface and/or a communication interface, the power supply interface is configured to supply power to a car audio device, the communication interface is configured to send signaling and/or data to a car audio device through its communication module, N and M are positive integers, and M is greater than or equal to N.
In this technical solution, designing M car body interfaces in the vehicle allows the N car audio devices to be flexibly connected to the car body interfaces, so that a user can flexibly arrange the positions or the number of car audio devices in the cabin and create different sound-effect environments there. At the same time, the user can easily modify the in-vehicle audio system in the cabin, which improves the user experience. For a vehicle manufacturer, this also makes it convenient to equip the same vehicle model with sounding systems of different grades, simplifying the production and manufacturing process.
With reference to the first aspect, in certain implementation manners of the first aspect, the N in-vehicle audio devices include a first in-vehicle audio device including an active noise reduction controller configured to receive first audio data from a microphone and perform noise reduction processing on the first audio data.
Based on this technical solution, an active noise reduction controller is added to the first in-vehicle audio device to perform noise reduction on the audio data from the microphone. This improves the design flexibility of the in-vehicle audio device, which can then implement an active noise reduction function in addition to producing sound. Moreover, this design does not require a connection between the processor and the active noise reduction controller: the audio data obtained by the in-vehicle audio device through its communication module can be sent directly to the active noise reduction controller for active noise reduction, reducing wiring in the vehicle. In addition, a closed-loop active noise reduction system can be formed locally, improving the effect of local active noise reduction.
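As an illustrative sketch of the processing such a controller performs (function names here are hypothetical, not taken from the specification): the controller produces a phase-inverted anti-noise signal from the microphone samples, so that noise and anti-noise cancel when superimposed at the listener.

```python
def anti_noise(mic_samples):
    """Return the phase-inverted (anti-noise) signal for the captured noise."""
    return [-s for s in mic_samples]

def residual(mic_samples):
    """Superimpose noise and anti-noise; ideally the residual is zero."""
    return [n + a for n, a in zip(mic_samples, anti_noise(mic_samples))]
```

In this idealized sketch the residual is exactly zero; a real active noise reduction controller must additionally model the acoustic path between speaker and microphone (e.g., with an adaptive filter).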
With reference to the first aspect, in certain implementation manners of the first aspect, the N in-vehicle audio devices include a second in-vehicle audio device, where the second in-vehicle audio device includes the microphone, and the second in-vehicle audio device is configured to send the first audio data to a communication module corresponding to the first in-vehicle audio device through the communication module corresponding to the second in-vehicle audio device.
Based on the technical scheme, the active noise reduction controller and the microphone can be located in different vehicle-mounted audio devices, so that an active noise reduction function is realized through communication between different vehicle-mounted audio devices in the vehicle-mounted audio system, and the design flexibility of the vehicle-mounted audio system is improved.
Illustratively, in-vehicle audio device 1 (including the active noise reduction controller) and in-vehicle audio device 2 (including the microphone) are both located in the main driving area. In-vehicle audio device 2 can send the audio data picked up by its microphone to in-vehicle audio device 1, so that in-vehicle audio device 1 can perform active noise reduction on that audio data through its active noise reduction controller. In this way, a closed-loop active noise reduction system can be realized by in-vehicle audio devices 1 and 2 in the main driving area.
With reference to the first aspect, in certain implementations of the first aspect, the first vehicle audio device includes the microphone.
Based on the technical scheme, the active noise reduction controller and the microphone can be located in the same vehicle-mounted audio device, so that the active noise reduction function can be realized through one vehicle-mounted audio device.
With reference to the first aspect, in some implementations of the first aspect, the N car audio devices include a third car audio device, the third car audio device includes a car light controller, and the car light controller is configured to receive second audio data from a communication module corresponding to the third car audio device, and control the car light to work according to the second audio data.
In some possible implementations, the vehicle-mounted lamp may include an exterior trim light, an interior atmosphere light, a spotlight, or a light-emitting diode (LED) light.
The atmosphere lamp, also called ambient lighting, may typically be arranged in the steering wheel, center console, footwell, cup holder, roof, welcome lamp, welcome pedal, door, trunk, and the like. The atmosphere lamp mainly takes forms such as single color, multiple colors, breathing rhythm, and music rhythm. It gives a warm and comfortable feeling, as well as a sense of technology and luxury.
Exterior lights may generally include turn signals, brake lights, fog lights, illumination lights, clearance (width) lights, dome lights, license plate lights, chassis lights, wheel lights, and the like.
For example, when the vehicle-mounted lamp is an in-vehicle atmosphere lamp, the vehicle-mounted lamp controller may be an atmosphere lamp controller.
Based on this technical solution, a vehicle-mounted lamp controller is added to the in-vehicle audio device, so that the communication module can send the received audio data to the vehicle-mounted lamp controller to control the vehicle-mounted lamp. This improves the design flexibility of the in-vehicle audio device, which can then control the vehicle-mounted lamp in addition to producing sound. Moreover, this design does not require a connection between a processor (for example, a central computing system) and the vehicle-mounted lamp controller, which helps reduce wiring in the vehicle. In addition, because the vehicle-mounted lamp controller is integrated directly in the in-vehicle audio device, data transmission latency is reduced, which helps the vehicle-mounted lamp follow the music rhythm more promptly.
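The lamp-follows-music behavior can be sketched as follows (a minimal illustration with hypothetical names; a real controller would also smooth and rate-limit the output): the lamp controller maps the amplitude of each received audio frame to a brightness level.

```python
def brightness_from_audio(frame):
    """Map one frame of audio samples (assumed normalized to [-1, 1])
    to an LED brightness level in 0..255 using the frame's peak amplitude."""
    peak = max(abs(s) for s in frame)
    return round(min(peak, 1.0) * 255)
```

A loud frame drives the lamp to full brightness, a silent frame turns it off, so the lamp pulses with the music rhythm.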
With reference to the first aspect, in some implementations of the first aspect, the sound generating system is located in a vehicle, and the vehicle-mounted light controller is configured to control a vehicle-mounted light of a first area in the vehicle according to the second audio data, where the first area is an area corresponding to the third vehicle-mounted audio device.
Based on the technical scheme, the communication module in the third vehicle-mounted audio device can send the received audio data to the vehicle-mounted lamp controller, so that the vehicle-mounted lamp in the area corresponding to the third vehicle-mounted audio device is controlled.
The first area may be the area in which the third in-vehicle audio device is located, or it may be a different area. Illustratively, the first area is the second-row area while the third in-vehicle audio device is located in the main driving area.
The vehicle may be, for example, an automobile.
With reference to the first aspect, in certain implementation manners of the first aspect, the N in-vehicle audio devices include a fourth in-vehicle audio device, and the fourth in-vehicle audio device is further configured to send, to the processor, audio sink capability and/or location information of the fourth in-vehicle audio device through a communication module corresponding to the fourth in-vehicle audio device.
Based on this technical solution, the in-vehicle audio device can send its audio sink capability and/or position information to the processor, so that the processor can perform tuning according to the audio sink capability and/or position information of each in-vehicle audio device. The user does not need to tune manually, which improves the intelligence of the vehicle and the user's listening experience.
In some possible implementations, the audio sink capability includes information on the maximum volume of the in-vehicle audio device, frequency information of the emitted sound (e.g., high, medium, or low frequency), and information indicating the bandwidth supported by the speaker, controller, and communication module in the in-vehicle audio device.
With reference to the first aspect, in certain implementations of the first aspect, the N car audio devices include a fifth car audio device, and the fifth car audio device includes a power amplifier module.
Based on this technical solution, the in-vehicle audio device may further include a power amplifier module. Sound source data obtained from the Internet or from a terminal device such as a mobile phone can be sent directly to the communication module of the in-vehicle audio device, which forwards it to the power amplifier module for processing; the processed audio data is then played through the in-vehicle audio device. This helps improve the design flexibility of the in-vehicle audio device.
With reference to the first aspect, in certain implementations of the first aspect, the N car audio devices include a sixth car audio device including a digital signal processing (DSP) module.
Based on this technical solution, the in-vehicle audio device may further include a DSP module. Sound source data obtained from the Internet or from a terminal device such as a mobile phone can be sent directly to the communication module of the in-vehicle audio device, which forwards it to the DSP module for processing; the processed audio data is then played through the in-vehicle audio device. This helps improve the design flexibility of the in-vehicle audio device.
In a second aspect, the application provides a control method, comprising: obtaining the audio sink capability and position information of one or more in-vehicle audio devices; determining a first tuning parameter according to the audio sink capability and position information of the one or more in-vehicle audio devices; and controlling the one or more in-vehicle audio devices to operate according to the first tuning parameter.
Based on this technical solution, a tuning parameter can be determined by obtaining the audio sink capability and position information of one or more in-vehicle audio devices, and the devices are then controlled to operate with that tuning parameter. The whole tuning process requires no user participation, which improves the intelligence of the vehicle and the user's listening experience.
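The three steps of the control method can be sketched as follows (a deliberately minimal Python illustration; the dictionary fields and the injected `determine_tuning`/`apply_tuning` callables are hypothetical stand-ins for the processor's actual logic):

```python
def control(devices, determine_tuning, apply_tuning):
    """Skeleton of the control method:
    1) obtain each device's audio sink capability and position,
    2) determine a first tuning parameter from them,
    3) control each device to operate with that parameter."""
    capabilities = [(d["capability"], d["position"]) for d in devices]
    tuning = determine_tuning(capabilities)
    for d in devices:
        apply_tuning(d, tuning)
    return tuning
```

The later implementations plug different strategies into the middle step, such as a preset mapping relationship or a prediction model.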
With reference to the second aspect, in some implementations of the second aspect, the determining the first tuning parameter according to the audio sink capability and position information of the one or more in-vehicle audio devices includes: determining the first tuning parameter according to position information of a user, a height of the user's ear, and the audio sink capability and position information of the one or more in-vehicle audio devices.
Based on this technical solution, the three-dimensional coordinates of the user can be taken into account when determining the tuning parameter, so that the user perceives a better audio effect, improving the listening experience.
In some possible implementations, the method further includes obtaining location information of a user and a height of a human ear of the user.
By way of example, the location of the user and the height of the user's ear may be determined from data collected by an in-cabin sensor (e.g., a camera, lidar or millimeter wave radar, etc.).
By way of example, the location of the user may be determined from data acquired by a microphone and the height of the user's ear may be determined from data acquired by an in-cabin sensor (e.g., a camera, lidar or millimeter wave radar, etc.).
By way of example, the position of the user can be determined from data acquired by pressure sensors on the seat and the height of the user's ear can be determined from data acquired by in-cabin sensors (e.g., cameras, lidar or millimeter wave radar, etc.).
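One concrete way the user's position and ear height can enter the tuning is time alignment. The following hypothetical sketch computes per-speaker delays so that sound from all speakers arrives at the listener's ear position simultaneously (positions are 3D coordinates in meters; the constant and function names are illustrative):

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, approximate value in air at 20 degrees C

def alignment_delays(speaker_positions, ear_position):
    """Per-speaker delays (in seconds) so that wavefronts from all speakers
    arrive at the ear simultaneously: the farthest speaker gets zero delay,
    nearer speakers are delayed by their path-length advantage."""
    dists = [math.dist(p, ear_position) for p in speaker_positions]
    farthest = max(dists)
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]
```

A speaker 3.43 m closer than the farthest one would be delayed by about 10 ms, which is one ingredient of a tuning parameter alongside gain and equalization.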
With reference to the second aspect, in some implementations of the second aspect, the position information indicates that the user is located at a first position, and before the one or more in-vehicle audio devices are controlled to operate according to the first tuning parameter, the method further includes: controlling the one or more in-vehicle audio devices to emit audio data according to the first tuning parameter, and determining that the deviation between the frequency response curve of the sound at the first position and a preset curve is within a preset range.
Based on this technical solution, after a tuning parameter is determined, audio data can be emitted according to that tuning parameter, and when the deviation between the frequency response curve of the sound at the user's position and the preset curve is within the preset range, the tuning parameter can be finalized. In this way, the vehicle can implement an automatic tuning process without user participation, sparing the user tedious tuning operations, improving the intelligence of the vehicle, and improving the user experience.
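The emit-and-verify step can be sketched as follows (hypothetical names; `play_and_measure` stands in for emitting audio with a candidate parameter and measuring the frequency response at the first position):

```python
def tune_until_fit(play_and_measure, candidates, target_curve, tolerance):
    """Try candidate tuning parameters in order; for each, play audio and
    measure the frequency response at the user's position, and accept the
    first candidate whose worst-case deviation from the preset target
    curve is within the tolerance. Returns None if none fits."""
    for params in candidates:
        response = play_and_measure(params)
        deviation = max(abs(r - t) for r, t in zip(response, target_curve))
        if deviation <= tolerance:
            return params
    return None
```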
With reference to the second aspect, in some implementations of the second aspect, the determining the first tuning parameter according to the audio sink capabilities and position information of the one or more in-vehicle audio devices includes: obtaining a second tuning parameter, and inputting the second tuning parameter and the audio sink capabilities and position information of the one or more in-vehicle audio devices into a prediction model to obtain the first tuning parameter, where the prediction model includes a cabin three-dimensional model and/or a sound wave propagation model.
Based on this technical solution, the second tuning parameter and the audio sink capabilities and position information of the in-vehicle audio devices can be input into the prediction model to obtain the final tuning parameter. This improves both the generalization of the tuning process and the accuracy of the tuning.
The cabin three-dimensional model can be determined from the positional relationship of the components in the cabin and the interior trim parameters.
The sound wave propagation model can be determined from the interior trim parameters and the frequency characteristics of the sound wave (e.g., reflection, diffraction, and refraction characteristics).
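The prediction step can be sketched as follows. This is a deliberately simplified stand-in: here the "prediction model" is reduced to a function returning a per-position attenuation, whereas the specification's model includes a cabin three-dimensional model and/or a sound wave propagation model; all names are illustrative.

```python
def predict_tuning(second_params, devices, propagation_model):
    """Refine the second (initial) tuning parameter into a per-device
    first tuning parameter by compensating the attenuation the
    propagation model predicts for each device's position."""
    first_params = {}
    for dev in devices:
        attenuation_db = propagation_model(dev["position"])
        first_params[dev["id"]] = second_params["gain_db"] + attenuation_db
    return first_params
```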
With reference to the second aspect, in some implementations of the second aspect, the determining the first tuning parameter according to the audio sink capability and position information of the one or more in-vehicle audio devices includes: determining the first tuning parameter according to the audio sink capability and position information of the one or more in-vehicle audio devices and a mapping relationship, where the mapping relationship is a mapping relationship among the audio sink capability of the in-vehicle audio device, the position of the in-vehicle audio device, and the tuning parameter.
Based on this technical solution, a mapping relationship among the audio sink capability of the in-vehicle audio device, the position of the in-vehicle audio device, and tuning parameters may be preset in the vehicle. After the in-vehicle audio device sends its audio sink capability and position information to the processor, the processor can determine appropriate tuning parameters through this mapping relationship. In this way, the vehicle can implement an automatic tuning process without user participation, sparing the user tedious tuning operations, improving the intelligence of the vehicle, and improving the user experience.
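Such a mapping relationship can be sketched as a preset lookup table (the capability classes, position labels, and parameter values below are purely illustrative, not from the specification):

```python
# Hypothetical preset mapping from (capability class, position)
# to tuning parameters.
TUNING_MAP = {
    ("full_range", "front_left"):  {"gain_db": 0.0, "delay_ms": 1.2},
    ("full_range", "front_right"): {"gain_db": 0.0, "delay_ms": 1.2},
    ("subwoofer",  "trunk"):       {"gain_db": 3.0, "delay_ms": 0.0},
}

def lookup_tuning(capability, position):
    """Return the preset tuning parameters for a reported
    capability/position pair, or None if no entry exists."""
    return TUNING_MAP.get((capability, position))
```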
With reference to the second aspect, in some implementations of the second aspect, the determining the first tuning parameter according to the audio sink capability and position information of the one or more in-vehicle audio devices and the mapping relationship includes: determining a plurality of tuning parameters according to the audio sink capability and position information of the one or more in-vehicle audio devices and the mapping relationship, and determining the first tuning parameter from the plurality of tuning parameters according to physiological characteristic information of the user.
With reference to the second aspect, in some implementations of the second aspect, the obtaining the audio sink capability and position information of the one or more in-vehicle audio devices includes: when a change in the position and/or number of the one or more in-vehicle audio devices is detected, receiving the audio sink capability and position information of the one or more in-vehicle audio devices after the change.
Based on this technical solution, when the position and/or number of the in-vehicle audio devices changes, the processor can receive the audio sink capability and position information of the one or more in-vehicle audio devices after the change. The processor can then re-determine the tuning parameters based on the updated audio sink capabilities and position information, improving the intelligence of the vehicle and the user's listening experience.
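The trigger condition can be sketched as follows (hypothetical structure: each device reports a position label; a real system would also compare the reported capabilities):

```python
def devices_changed(previous, current):
    """Detect a change in the number or positions of the reported
    in-vehicle audio devices; a change triggers re-tuning."""
    return (len(previous) != len(current)
            or {d["position"] for d in previous}
               != {d["position"] for d in current})
```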
With reference to the second aspect, in certain implementations of the second aspect, the one or more in-vehicle audio devices include a first in-vehicle audio device and a second in-vehicle audio device, the method further comprising establishing a communication connection between the first in-vehicle audio device including an active noise reduction controller and the second in-vehicle audio device including a microphone.
With reference to the second aspect, in certain implementations of the second aspect, the one or more in-vehicle audio devices include a third in-vehicle audio device, and the method further includes establishing a communication connection between the third in-vehicle audio device including the in-vehicle light controller and the in-vehicle light.
In a third aspect, the application provides a control device comprising an obtaining unit, a determining unit, and a control unit, where the obtaining unit is configured to obtain the audio sink capability and position information of one or more in-vehicle audio devices, the determining unit is configured to determine a first tuning parameter according to the audio sink capability and position information of the one or more in-vehicle audio devices, and the control unit is configured to control the one or more in-vehicle audio devices to operate according to the first tuning parameter.
With reference to the third aspect, in some implementations of the third aspect, the determining unit is configured to determine the first tuning parameter according to position information of a user, a height of the user's ear, and the audio sink capability and position information of the one or more in-vehicle audio devices.
With reference to the third aspect, in some implementations of the third aspect, the position information indicates that the user is located at a first position; the control unit is configured to control the one or more in-vehicle audio devices to emit audio data according to the first tuning parameter before controlling them to operate according to the first tuning parameter, and the determining unit is configured to determine that the deviation between the frequency response curve of the sound at the first position and a preset curve is within a preset range.
With reference to the third aspect, in some implementations of the third aspect, the obtaining unit is further configured to obtain a second tuning parameter, and the determining unit is configured to input the second tuning parameter and the audio sink capabilities and position information of the one or more in-vehicle audio devices into a prediction model to obtain the first tuning parameter, where the prediction model includes a cabin three-dimensional model and/or a sound wave propagation model.
With reference to the third aspect, in some implementations of the third aspect, the determining unit is configured to determine the first tuning parameter according to the audio sink capability and position information of the one or more in-vehicle audio devices and a mapping relationship, where the mapping relationship is a mapping relationship among the audio sink capability of the in-vehicle audio device, the position of the in-vehicle audio device, and the tuning parameter.
With reference to the third aspect, in some implementations of the third aspect, the determining unit is configured to determine a plurality of tuning parameters according to the audio sink capability and position information of the one or more in-vehicle audio devices and the mapping relationship, and determine the first tuning parameter from the plurality of tuning parameters according to physiological characteristic information of the user.
With reference to the third aspect, in some implementations of the third aspect, the obtaining unit is configured to: when a change in the position and/or number of the one or more in-vehicle audio devices is detected, receive the audio sink capability and position information of the one or more in-vehicle audio devices after the change.
With reference to the third aspect, in certain implementations of the third aspect, the one or more in-vehicle audio devices include a first in-vehicle audio device and a second in-vehicle audio device, the control unit is configured to control a communication connection to be established between the first in-vehicle audio device including the active noise reduction controller and the second in-vehicle audio device including the microphone.
With reference to the third aspect, in certain implementations of the third aspect, the one or more in-vehicle audio devices include a third in-vehicle audio device, and the control unit is configured to control the third in-vehicle audio device including the in-vehicle light controller to establish a communication connection with the in-vehicle light.
In a fourth aspect, there is provided a control device comprising a processing unit and a storage unit, wherein the storage unit is adapted to store instructions, the processing unit executing the instructions stored by the storage unit to cause the control device to perform any one of the possible methods of the second aspect.
In a fifth aspect, a control system is provided, the system comprising a car audio device and a computing platform, wherein the computing platform comprises the control device of any one of the third or fourth aspects.
In some possible implementations, the in-vehicle audio device may be an in-vehicle audio device in the first aspect.
In a sixth aspect, there is provided a terminal device comprising the control device of any one of the possible implementations of the third aspect, or the control device of the fourth aspect, or the control system of the fifth aspect.
In some possible implementations, the terminal device is a vehicle (e.g., an automobile).
In some possible implementations, the terminal device is a device in a smart home.
In a seventh aspect, there is provided a computer program product comprising computer program code which, when run on a computer, causes the computer to perform any one of the possible methods of the second aspect described above.
It should be noted that the above computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or separately from the processor; embodiments of the present application are not limited in this regard.
In an eighth aspect, there is provided a computer readable medium having stored thereon a program code which, when run on a computer, causes the computer to perform any one of the possible methods of the second aspect described above.
In a ninth aspect, embodiments of the present application provide a chip system comprising a processor for invoking a computer program or computer instructions stored in a memory to cause the processor to perform any of the possible methods of the second aspect above.
With reference to the ninth aspect, in a possible implementation manner, the processor is coupled to the memory through an interface.
With reference to the ninth aspect, in a possible implementation manner, the chip system further includes a memory, where a computer program or computer instructions are stored.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may represent the three cases of A alone, both A and B, and B alone. "At least one item" means one or more items. For example, "at least one of A and B", similar to "A and/or B", describes an association relationship between associated objects and may likewise represent the three cases of A alone, both A and B, and B alone.
In the embodiments of the present application, prefix words such as "first" and "second" are used only to distinguish different described objects, and impose no limitation on the position, sequence, priority, quantity, or content of the described objects. The use of such ordinal prefix words does not limit the described objects; statements about the described objects are to be read in the claims or in the context of the embodiments, and such prefix words should not constitute unnecessary limitations. In addition, in the description of the embodiments, unless otherwise specified, "a plurality of" means two or more.
As described above, when designing a vehicle model, a vehicle manufacturer currently determines the configuration of the in-vehicle audio system in advance, for example, the model of the power amplifier and the number and positions of the speakers, so that production and manufacturing follow a fixed configuration and wiring. As a result, the design of the in-vehicle audio system is fixed when the vehicle leaves the factory, and it is difficult for a user to modify it.
The sounding system, the control method, and the vehicle provided in the embodiments of the present application help improve the flexibility of sounding-system design, make it easier for a user to modify the sounding system, and improve the user experience.
Fig. 1 is a schematic block diagram of a sound production system 100 provided by an embodiment of the present application. The sounding system 100 includes N terminal units (e.g., terminal units 1-N) and M preset interfaces (e.g., preset interfaces 1-M), where N and M are positive integers and M is greater than or equal to N, each of the N terminal units includes a communication module and a sounding device, and each of the M preset interfaces includes a power interface and/or a communication interface, where the power interface is used to power the terminal units, and the communication interface is used to send signaling and/or data to the terminal units through the communication module.
In one embodiment, the preset interface may include a power supply interface and no communication interface; the power supply interface may supply power to the communication module in the terminal unit. Taking the sounding device being a speaker as an example, if the speaker is an active speaker, the power supply interface may further supply power to the active speaker. The communication module of the terminal unit can receive audio data wirelessly, so that the audio data can be played through the sounding device.
In one embodiment, the preset interface may include a communication interface and no power supply interface. Taking the case where the communication module of the terminal unit is a wired communication interface as an example, the communication module is connected to the communication interface of the preset interface in a wired manner. Taking the sounding device being a passive speaker as an example, the communication module of the terminal unit can receive the audio data sent over the communication interface, so that the audio data can be played through the passive speaker.
In one embodiment, the preset interface may include both a communication interface and a power supply interface. Taking the case where the communication module of the terminal unit is a wired communication interface as an example, the communication module is connected to the communication interface of the preset interface in a wired manner. Taking the sounding device being an active speaker as an example, the power supply interface can supply power to the active speaker, and the communication module of the terminal unit can receive the audio data sent over the communication interface, so that the audio data can be played through the active speaker.
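The three preset-interface configurations above can be summarized in a small sketch (the field names and return strings are illustrative only, not terminology from the specification):

```python
def playback_path(interface):
    """Describe how a terminal unit receives audio for each of the three
    preset-interface configurations: power only, communication only,
    or both."""
    if interface["has_power"] and interface["has_comm"]:
        return "wired audio via communication interface; speaker powered via power interface"
    if interface["has_power"]:
        return "audio received wirelessly by the communication module"
    if interface["has_comm"]:
        return "wired audio via communication interface (e.g., passive speaker)"
    return "not a valid preset interface"
```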
When the above sounding system 100 is used in a home scenario, the terminal units may be located at different locations in the home.
Fig. 2 is a schematic diagram illustrating an application of the sounding system in a smart home scenario according to an embodiment of the present application. Taking the terminal unit as an intelligent sound box (the intelligent sound box includes a communication module) as an example, a living room may include a preset interface 1, a preset interface 2, a preset interface 3 and a preset interface 4, where the intelligent sound box 1 can be connected with the preset interface 1, the intelligent sound box 2 with the preset interface 2, the intelligent sound box 3 with the preset interface 3, and the intelligent sound box 4 with the preset interface 4. A smart screen may transmit audio data to the communication modules of the intelligent sound boxes 1 to 4, respectively, so that the intelligent sound boxes 1 to 4 can create surround sound in the living room.
When the above sounding system 100 is located in a vehicle, the terminal unit may be a car audio device (e.g., a car speaker). The sounding system 100 may be a car audio system.
The vehicle in the embodiments of the present application is a vehicle in a broad sense, and may be a transportation vehicle (such as a commercial vehicle, a passenger vehicle, a motorcycle, a flying car, a train, etc.), an industrial vehicle (such as a forklift, a trailer, a tractor, etc.), an engineering vehicle (such as an excavator, an earth mover, a crane, etc.), an agricultural device (such as a mower, a harvester, etc.), an amusement device, a toy vehicle, etc.; the embodiments of the present application do not specifically limit the type of the vehicle.
Fig. 3 illustrates a schematic block diagram of a car audio system 300 provided by an embodiment of the present application. The car audio system 300 may include N car audio devices (e.g., car audio devices 1-N), each of the N car audio devices including a communication module, N being a positive integer, and M car body interfaces (e.g., car body interfaces 1-M), each of the M car body interfaces including a power interface for powering the car audio device and/or a communication interface for transmitting signaling and/or data to the car audio device through the communication module, M being a positive integer and M being greater than or equal to N.
Fig. 4 shows a schematic diagram of a car audio system according to an embodiment of the present application. As shown in fig. 4, the vehicle includes vehicle body interfaces 1-12 and vehicle-mounted speakers 1-8. The vehicle body interfaces 1-3, the vehicle-mounted speaker 1 and the vehicle-mounted speaker 2 may be located in the main driving area; the vehicle body interfaces 4-6, the vehicle-mounted speaker 3 and the vehicle-mounted speaker 4 may be located in the secondary driving area; the vehicle body interfaces 7-9, the vehicle-mounted speaker 5 and the vehicle-mounted speaker 6 may be located in the second-row left area; and the vehicle body interfaces 10-12, the vehicle-mounted speaker 7 and the vehicle-mounted speaker 8 may be located in the second-row right area. The vehicle-mounted speaker 1 is connected with the vehicle body interface 1, the vehicle-mounted speaker 2 with the vehicle body interface 2, the vehicle-mounted speaker 3 with the vehicle body interface 4, the vehicle-mounted speaker 4 with the vehicle body interface 5, the vehicle-mounted speaker 5 with the vehicle body interface 7, the vehicle-mounted speaker 6 with the vehicle body interface 8, the vehicle-mounted speaker 7 with the vehicle body interface 10, and the vehicle-mounted speaker 8 with the vehicle body interface 11.
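The zone layout of fig. 4 can be sketched as a simple mapping. This is only an illustrative sketch: the zone names, data structure, and helper function are assumptions made for this example, not part of the embodiment.

```python
# Illustrative sketch of the Fig. 4 layout: which body interfaces and
# speakers belong to each cabin zone, and which speaker occupies which
# interface. Names are hypothetical, chosen only for this example.
ZONES = {
    "driver":           {"interfaces": [1, 2, 3],    "speakers": [1, 2]},
    "front_passenger":  {"interfaces": [4, 5, 6],    "speakers": [3, 4]},
    "second_row_left":  {"interfaces": [7, 8, 9],    "speakers": [5, 6]},
    "second_row_right": {"interfaces": [10, 11, 12], "speakers": [7, 8]},
}

# Speaker -> body-interface assignment from Fig. 4. One interface per
# zone is left free, so M > N and speakers can be added later.
SPEAKER_TO_INTERFACE = {1: 1, 2: 2, 3: 4, 4: 5, 5: 7, 6: 8, 7: 10, 8: 11}

def free_interfaces():
    """Return body interfaces with no speaker attached."""
    used = set(SPEAKER_TO_INTERFACE.values())
    return [i for zone in ZONES.values()
            for i in zone["interfaces"] if i not in used]

print(free_interfaces())  # interfaces 3, 6, 9 and 12 remain free
```

The free interfaces are exactly what makes the later examples possible, such as attaching an intelligent sound box to vehicle body interface 3 or 6.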
Fig. 5 shows another schematic diagram of the car audio system provided by an embodiment of the application. The networking mode of the vehicle-mounted audio system is a star networking mode: the central system and each vehicle-mounted speaker may be connected by audio lines.
Fig. 6 shows another schematic diagram of the car audio system provided by an embodiment of the application. The networking mode of the vehicle-mounted audio system is a daisy chain networking mode. The central system and the vehicle-mounted speaker 1 may be connected through an A2B audio bus, as may each adjacent pair in the chain: speakers 1 and 3, 3 and 4, 4 and 7, 7 and 8, 8 and 6, 6 and 5, and 5 and 2.
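The daisy-chain order above can be expressed as an ordered node list from which the point-to-point A2B links follow directly. The list and helper below are an illustrative sketch, not part of the embodiment.

```python
# Hypothetical sketch of the Fig. 6 daisy chain: each node is wired to
# the next over the A2B audio bus, so the hop order matters. The order
# below follows the chain described in the text.
CHAIN = ["hub", 1, 3, 4, 7, 8, 6, 5, 2]  # central system, then speakers

def links(chain):
    """Expand a chain into its point-to-point A2B links."""
    return list(zip(chain, chain[1:]))

print(links(CHAIN))  # 8 links connect the hub and 8 speakers
```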
Fig. 7 shows another schematic diagram of the car audio system provided by an embodiment of the application. The networking mode of the vehicle-mounted audio system is a wireless networking mode. The wireless networking mode may be, for example, networking through Wi-Fi, Bluetooth, or SparkLink (star flash) wireless communication technologies.
Fig. 8 shows another schematic diagram of the car audio system provided by an embodiment of the application. The networking mode of the vehicle-mounted audio system is a tree networking mode. Illustratively, the central system may connect a central subsystem 1 and a central subsystem 2, where the central subsystem 1 may be the central system of the main driving area and the central subsystem 2 may be the central system of the secondary driving area. The central subsystem 1 may connect the vehicle-mounted speaker 1 and the vehicle-mounted speaker 2. The central subsystem 2 may connect the vehicle-mounted speaker 3 and the vehicle-mounted speaker 4.
Fig. 9 shows another schematic diagram of the car audio system provided by the embodiment of the application. The networking mode of the vehicle-mounted audio system is a mesh networking mode.
The vehicle-mounted speaker and the vehicle body interface may be detachable from each other. For example, the vehicle-mounted speaker 1 may be detached from the vehicle body interface 1 for use in an outdoor scenario (e.g., camping, outing, hiking) or a home scenario (e.g., as a smart speaker in fig. 2). For another example, after the vehicle-mounted speaker 1 is detached, it may be powered by a charger, a battery module, or a power supply base, so that a wireless connection (e.g., a Bluetooth connection) can be established between the communication module in the vehicle-mounted speaker 1 and a mobile phone, or, when the vehicle-mounted speaker 1 is an active speaker, so that it can be powered for playback. In this way, the car audio device is not fixed inside the cabin; by allowing the car audio device to be detached for outdoor scenarios, the flexibility with which the user can use the car audio device is improved, the user does not need to carry an extra audio device in outdoor scenarios, and the user experience is improved.
For another example, a smart speaker in a home may be mounted in a vehicle. The intelligent sound box 1 shown in fig. 2 can be connected with the vehicle body interface 3, and the intelligent sound box 2 with the vehicle body interface 6. In this way, the number of car audio devices in the vehicle can be flexibly increased or reduced, and the position and/or number of car audio devices in the cabin can be conveniently adjusted by the user, so that different sound effect environments can be created for the user and the flexibility of the car audio system design can be improved.
In one embodiment, the terminal unit (e.g., car audio device) may be of modular design. For example, a communication module and other functional modules may be included in the car audio device. The other functional modules include, but are not limited to, one or more of an active noise reduction controller, a vehicle-mounted lamp controller, a power amplifier module, and a DSP module.
The power amplifier module can supplement the transmission power of the communication line, increasing the sound output power of the vehicle-mounted speaker and improving the effect of the vehicle-mounted audio system.
The DSP module can process the audio data and improve the effect of the vehicle-mounted audio system.
The active noise reduction controller can provide active noise reduction capability: using the audio data obtained from the communication module, it filters out, from the signal picked up by the microphone, the sound emitted by the vehicle-mounted speaker, so that only the noise is processed.
The vehicle-mounted lamp controller can provide the capability of controlling a vehicle-mounted lamp that pulses in rhythm with the audio data, acquiring the audio data from the communication module so as to control the vehicle-mounted lamp.
The vehicle-mounted lamp controller can be an atmosphere lamp controller and can control atmosphere lamps in a vehicle.
Optionally, the N vehicle audio devices include a first vehicle audio device including an active noise reduction controller for receiving first audio data from a microphone and performing noise reduction processing on the first audio data.
Illustratively, taking the first vehicle audio device as the vehicle speaker 1 as an example, the vehicle speaker 1 may include an active noise reduction controller therein. The active noise reduction controller of the vehicle-mounted speaker 1 may receive audio data from the microphone, thereby performing noise reduction processing on the audio data. The vehicle-mounted speaker 1 can play the audio data after noise reduction.
In one embodiment, the N in-vehicle audio devices include a second in-vehicle audio device, where the second in-vehicle audio device includes the microphone, and the second in-vehicle audio device is configured to send the first audio data to a communication module corresponding to the first in-vehicle audio device through the communication module corresponding to the second in-vehicle audio device.
Illustratively, taking the example that the second in-vehicle audio device is the in-vehicle speaker 2, the audio data collected by the microphone in the in-vehicle speaker 2 may be transmitted to the communication module in the in-vehicle speaker 1 through the communication module in the in-vehicle speaker 2. The communication module in the vehicle-mounted speaker 1 may send the audio data to the active noise reduction controller of the vehicle-mounted speaker 1, thereby completing active noise reduction of the audio data. The audio data processed by the active noise reduction controller can be played by the vehicle-mounted loudspeaker 1. In this way, a closed loop active noise reduction system may be formed in the primary drive region.
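The closed-loop routing described above can be sketched as follows. The signal processing is a deliberately simplified stand-in, not a real active noise reduction algorithm; the function name and sample values are assumptions for this example only.

```python
# Minimal sketch of the closed loop above: speaker 2's microphone
# samples are forwarded over its communication module to speaker 1,
# whose active noise reduction (ANC) controller subtracts speaker 1's
# own playback before inverting the residual noise. Toy arithmetic only.
def anc_process(mic_samples, playback_samples):
    """Remove self-playback from the mic signal, then invert the noise."""
    residual = [m - p for m, p in zip(mic_samples, playback_samples)]
    return [-r for r in residual]  # anti-noise to be played by speaker 1

mic = [0.5, 0.7, 0.2]       # picked up by speaker 2's microphone
playback = [0.4, 0.4, 0.1]  # what speaker 1 is currently emitting
print(anc_process(mic, playback))
```

The subtraction step reflects the statement above that the controller filters out the speaker's own sound so that only the noise is processed.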
In one embodiment, the first vehicle audio device includes the microphone.
The vehicle speaker 1 may include a microphone and an active noise reduction controller, for example. Thus, the audio data collected by the microphone can be transmitted to the active noise reduction controller. Active noise reduction can be achieved by a single vehicle speaker.
In one embodiment, the N car audio devices include a third car audio device, the third car audio device includes a vehicle-mounted lamp controller, and the vehicle-mounted lamp controller is configured to receive second audio data from the communication module corresponding to the third car audio device and control a vehicle-mounted lamp to operate according to the second audio data.
Illustratively, the third car audio device may be the vehicle-mounted speaker 5. The communication module of the vehicle-mounted speaker 5 may receive audio data and transmit the audio data to the vehicle-mounted lamp controller. The vehicle-mounted lamp controller can control the vehicle-mounted lamps in the main driving area, the secondary driving area, the second-row left area and the second-row right area to operate according to the audio data.
Optionally, the sound generating system is located in a vehicle, and the vehicle-mounted lamp controller is configured to control a vehicle-mounted lamp in a first area in the vehicle according to the second audio data, where the first area is an area corresponding to the third vehicle-mounted audio device.
In one embodiment, the first area may be an area where the third car audio device is located.
The third car audio device may be, for example, the vehicle-mounted speaker 5. The communication module of the vehicle-mounted speaker 5 may receive audio data and transmit the audio data to the vehicle-mounted lamp controller. The vehicle-mounted lamp controller can control the vehicle-mounted lamps in the second-row left area to operate according to the audio data.
In one embodiment, the first area may include other areas than the area where the third car audio device is located.
The third car audio device may be, for example, the vehicle-mounted speaker 5. The communication module of the vehicle-mounted speaker 5 may receive audio data and transmit the audio data to the vehicle-mounted lamp controller. According to the audio data, the vehicle-mounted lamp controller can control the vehicle-mounted lamps in the second-row right area to operate, or control the vehicle-mounted lamps in both the second-row left area and the second-row right area to operate.
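The lamp-control behaviour above can be sketched as a mapping from an audio level to per-zone brightness. The zone names and the brightness rule are illustrative assumptions only; the embodiment does not specify how the lamps follow the audio data.

```python
# Hedged sketch: a lamp controller in one speaker may drive lamps in its
# own zone, another zone, or several zones. The linear level->brightness
# rule below is invented purely for illustration.
def lamp_brightness(audio_level, zones):
    """Map a normalized audio level (0..1) to a brightness per controlled zone."""
    return {zone: round(audio_level * 100) for zone in zones}

# Controller in the second-row-left speaker driving both second-row zones:
print(lamp_brightness(0.8, ["second_row_left", "second_row_right"]))
```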
In one embodiment, the N car audio devices include a fourth car audio device, and the fourth car audio device is further configured to send, to a processor, the sound sink capability and/or position information of the fourth car audio device through the communication module corresponding to the fourth car audio device.
For example, the car audio devices 1-8 may each send their sound sink capability and/or position information to the computing platform of the vehicle. Upon receiving the sound sink capability and/or position information of each car audio device, the computing platform may determine tuning parameters and control the operation of the car audio devices 1-8 through the tuning parameters.
In the above, each car audio device in the vehicle sends its corresponding sound sink capability and/or position information to the computing platform as an example, and the embodiment of the application is not limited thereto. For example, the car audio device 1 and the car audio device 2 may send corresponding sound sink capability and/or position information to the computing platform, and the computing platform may determine tuning parameters of the main driving area according to the information, so as to improve hearing experience of users in the main driving area.
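The reporting step above can be sketched as a registration message per device. The field names and the message shape are assumptions for this sketch; the embodiment does not define a wire format.

```python
# Illustrative sketch: each car audio device reports its sound sink
# capability and position to the computing platform, which collects the
# reports before deriving tuning parameters. Field names are invented.
def report(device_id, sink_capability, position):
    return {"device": device_id, "capability": sink_capability, "pos": position}

# e.g. only the main-driving-area devices report, as in the example above:
reports = [
    report(1, "AA", (0.4, 1.2, 0.9)),
    report(2, "BB", (0.4, 1.2, 0.3)),
]

def positions(reports):
    """What the computing platform extracts before deriving tuning parameters."""
    return [r["pos"] for r in reports]

print(positions(reports))
```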
In one embodiment, the N car audio devices include a fifth car audio device, and the fifth car audio device includes a power amplifier module.
In one embodiment, the N car audio devices include a sixth car audio device including a digital signal processing DSP module.
Fig. 10 shows a schematic diagram of a car audio device and a car body interface provided by an embodiment of the present application. If the communication module of the vehicle-mounted audio device is a communication interface, the communication interface of the vehicle-mounted audio device and the communication interface in the vehicle body interface can be positioned on the same side, so that the communication interfaces can be conveniently in butt joint.
In one embodiment, if the vehicle audio system uses the star networking manner shown in fig. 5, a line-in interface may be reserved in the vehicle body interface.
In one embodiment, if the vehicle audio system uses the daisy chain networking manner shown in fig. 6 and a certain vehicle body interface is not connected with a car audio device, a line module may be reserved in that vehicle body interface. Fig. 11 shows a schematic diagram of a line module provided by an embodiment of the present application. The line-in interface and the line-out interface are bridged through the line module.
The terminal unit (for example, a car audio device) provided in the embodiment of the present application is described above with reference to fig. 1 to 11, and the process of determining tuning parameters in the embodiment of the present application is described below with reference to the embodiment.
Fig. 12 shows a schematic block diagram of a vehicle provided by an embodiment of the present application. The vehicle may include N in-vehicle audio devices and a computing platform 120. The N car audio devices may be connected to the computing platform 120 by wireless or wired means.
The N in-vehicle audio devices may be, for example, the in-vehicle speakers 1 to 8 described above.
By way of example, the computing platform 120 may include a central system, which may have communication and data processing capabilities. The central system may receive the sound sink capability and position information sent by each of the N car audio devices, so that the central system may determine tuning parameters according to the sound sink capability and position information of each of the N car audio devices.
The computing platform 120 may include one or more processors, such as processors 121 through 12n (n is a positive integer). A processor is a circuit with signal processing capability. In one implementation, it may be a circuit with instruction reading and execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU) (which may be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, the processor may implement a function through the logical relationship of a hardware circuit, which may be fixed or reconfigurable, such as a hardware circuit implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), for example a field programmable gate array (FPGA). In a reconfigurable hardware circuit, the process in which the processor loads a configuration document to configure the hardware circuit may be understood as a process in which the processor loads instructions to implement the functions of some or all of the above units. Furthermore, the processor may also be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU). In addition, the computing platform 120 may also include a memory for storing instructions, and some or all of the processors 121 through 12n may call and execute the instructions in the memory to implement the corresponding functions.
Fig. 13 shows a schematic flow chart of a control method 1300 provided by an embodiment of the present application. The method 1300 may be performed by the vehicle 1200, or the method 1300 may be performed by the computing platform 120 (e.g., a central system) described above, or the method 1300 may be performed by a system-on-a-chip (SoC) in the computing platform described above, or the method 1300 may be performed by a processor or chip in the computing platform. The following description will take an example in which the execution subject is a central system. The method 1300 includes S1310, S1320, and S1330.
S1310, the central system obtains the sound sink capability and position information of one or more car audio devices.
In one embodiment, the central system obtaining the sound sink capability and position information of one or more terminal units includes: the central system obtaining the sound sink capability and position information of the one or more car audio devices upon detection of a user input.
For example, upon detecting the user uttering the voice command "start tuning", the central system may obtain the sound sink capability and position information of the vehicle-mounted speakers 1-8.
In one embodiment, the central system obtaining the sound sink capability and position information of the one or more terminal units includes: upon detecting a change in the number and/or positions of the one or more car audio devices, receiving the changed sound sink capability and position information of the one or more car audio devices.
Taking the car audio system shown in fig. 4 as an example, after the vehicle leaves the factory, the user may cause the vehicle to start tuning by sending an instruction to the vehicle. For example, upon detecting the user's voice command "start tuning", the vehicle may send control signaling to the vehicle-mounted speakers 1-8, which may be used to instruct the vehicle-mounted speakers 1-8 to upload their sound sink capability and position information to the vehicle.
For example, after the central system determines the tuning parameters for the vehicle-mounted speakers 1-8, the vehicle-mounted speakers 1-8 may periodically send their corresponding sound sink capability and position information to the central system. When the central system determines that the sound sink capability and position information of the vehicle-mounted speaker 7 and the vehicle-mounted speaker 8 have not been received within a preset time period, the central system can retune according to the sound sink capability and position information of the vehicle-mounted speakers 1-6.
For example, after the central system determines the tuning parameters for the vehicle-mounted speakers 1-8, the central system may monitor the vehicle body interfaces corresponding to the vehicle-mounted speakers. When the central system determines that the vehicle body interface 10 and the vehicle body interface 11 have not supplied power to a vehicle-mounted speaker for a preset time period, the central system can retune according to the sound sink capability and position information of the vehicle-mounted speakers 1-6.
For example, after the central system determines the tuning parameters for the vehicle-mounted speakers 1-8, the central system may monitor the vehicle body interfaces corresponding to the vehicle-mounted speakers. When the central system determines that a terminal unit (e.g., the intelligent sound box 1) is connected to the vehicle body interface 3, the central system may send a control instruction to the intelligent sound box 1 instructing it to send its sound sink capability and position information to the central system. In this way, the central system can determine the tuning parameters based on the sound sink capabilities and position information of the vehicle-mounted speakers 1-8 and the intelligent sound box 1.
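The periodic-report monitoring described in these examples can be sketched as a heartbeat check: speakers that stop reporting within a timeout are dropped from the set used for retuning. Timestamps, IDs, and the timeout are all invented for this sketch.

```python
# Sketch of the heartbeat-based retuning trigger: speakers periodically
# report; any speaker whose last report is older than the timeout is
# treated as detached, and the central system retunes with the rest.
def surviving_speakers(last_seen, now, timeout):
    """Speakers whose most recent report is within the timeout window."""
    return sorted(s for s, t in last_seen.items() if now - t <= timeout)

# Hypothetical last-report timestamps; speakers 7 and 8 went silent,
# e.g. because they were detached from their body interfaces:
last_seen = {1: 100, 2: 101, 3: 99, 4: 100, 5: 98, 6: 97, 7: 40, 8: 42}
print(surviving_speakers(last_seen, now=105, timeout=10))
```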
In S1310, the case where the car audio device sends its sound sink capability and position information to the central system has been described as an example, but the embodiment of the present application is not limited thereto. After a car audio device is connected with a vehicle body interface, the sound sink capability of the car audio device may be sent to the central system through the communication interface in the vehicle body interface. The central system may then determine the tuning parameters based on the sound sink capability of the car audio device and the position of the vehicle body interface (which may be taken as the position of the car audio device).
S1320, the central system determines a first tuning parameter based on the sound sink capabilities and the location information of the one or more car audio devices.
In one embodiment, the central system determining the first tuning parameter based on the sound sink capabilities and position information of the one or more car audio devices includes the central system determining the first tuning parameter based on the position information of the user, the ear height of the user, and the sound sink capabilities and position information of the one or more car audio devices.
In one embodiment, the method 1300 further includes the central system obtaining the position information of the user and the ear height of the user before determining the first tuning parameter based on the position information of the user, the ear height of the user, and the sound sink capabilities and position information of the one or more car audio devices.
The central system obtaining the position information of the user and the ear height of the user can also be understood as the central system obtaining the three-dimensional coordinates of the user.
Illustratively, the ways in which the central system obtains the position information of the user include, but are not limited to, the following:
(1) The central system may determine the position of the user in the cabin in combination with data collected by pressure sensors on the seats. For example, when the central system determines that the pressure value collected by the pressure sensor in the seat in the second-row right area is greater than a preset pressure value, it determines that the user is located in the second-row right area, and may thus determine that the center point (or another preset point) of the second-row right area is the position of the user.
(2) The central system may obtain an instruction from the user indicating the position of the user. For example, when the user, through a touch operation on the central control screen, moves the desired tuning position (e.g., the "emperor's seat", i.e., the listening sweet spot) to the second-row right area, the central system may determine that the center point (or another preset point) of the second-row right area is the position of the user.
(3) The central system may determine the position of the user in combination with data collected by sensors (e.g., cameras, lidar, etc.) within the cabin.
The central system may acquire the position information of the user in any one of the above manners, or in a combination of several of them.
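Manner (1) above can be sketched as a threshold check over per-zone seat pressure that returns the zone's preset centre point. The thresholds, coordinates, and zone names are invented for this sketch; the embodiment does not fix any of them.

```python
# Illustrative sketch of pressure-sensor-based user localisation:
# infer the occupied zone from seat pressure and return that zone's
# preset centre point as the user position. All values are hypothetical.
ZONE_CENTRE = {
    "driver": (0.4, 1.2), "front_passenger": (-0.4, 1.2),
    "second_row_left": (0.4, 0.1), "second_row_right": (-0.4, 0.1),
}

def user_position(pressure_by_zone, threshold=200.0):
    """Centre point of the first zone whose seat pressure exceeds the threshold."""
    for zone, p in pressure_by_zone.items():
        if p > threshold:
            return ZONE_CENTRE[zone]
    return None  # no occupied seat detected

print(user_position({"driver": 30.0, "second_row_right": 540.0}))
```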
Illustratively, the ways in which the central system obtains the ear height of the user include, but are not limited to, the following:
(1) The central system stores height information of user 1. The central system can acquire images collected by cameras in the cabin and, when it determines from the images that user 1 is located in the second-row right area, can estimate the ear height of user 1 according to user 1's height information and the seat design parameters.
(2) The central system can acquire images collected by the cameras in the cockpit and estimate the ear height of user 1 according to the images and the height parameters of the cockpit.
(3) The central system can acquire the pose parameters of the user sent by the seat in the second-row right area, so that the ear height of user 1 can be estimated according to the pose parameters.
The central system may acquire the ear height of the user in any one of the above manners, or in a combination of several of them.
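Estimate (1) above can be sketched as combining a stored body height with seat design parameters. The formula and constants are purely illustrative assumptions, not the embodiment's actual estimator.

```python
# Hedged sketch of ear-height estimation from stored user height plus
# seat design parameters: seated ear height is approximated as the seat
# base height plus a fixed fraction of body height. Constants invented.
def ear_height(user_height_cm, seat_base_cm, sitting_ratio=0.52):
    """Estimate seated ear height: seat base plus a fraction of body height."""
    return seat_base_cm + user_height_cm * sitting_ratio

print(ear_height(170.0, 30.0))  # rough seated ear height in cm
```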
In one embodiment, the position information indicates that the user is located at a first position, and the method 1300 further includes: before controlling the one or more car audio devices to operate according to the first tuning parameter, the central system controls the one or more car audio devices to emit audio data according to the first tuning parameter, and the central system determines that the deviation between the frequency response curve of the sound at the first position and a preset curve is within a preset range.
The above preset curve may be a human-ear hearing curve. The curve may reflect the user's preferred auditory curve for certain music types.
The above description takes as an example automatically stopping the tuning process when the first tuning parameter satisfies a preset condition, but the embodiment of the application is not limited thereto. For example, the tuning process may be stopped when the sound effect perceived by the user in the second-row right area meets the user's expectation: upon detecting a user command (e.g., the user utters the voice command "stop tuning"), the vehicle may stop the tuning process and output the tuning parameters.
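The convergence check described above can be sketched as comparing the measured frequency response at the listening position against the preset target curve and stopping once the mismatch is within a tolerance. The per-band levels and the mean-absolute-deviation metric are illustrative assumptions.

```python
# Sketch of the stop condition: compare a measured per-band response
# (dB) at the first position with a preset curve; tuning is considered
# converged when the mean absolute deviation is within a tolerance.
def curve_mismatch(measured, target):
    """Mean absolute deviation between two per-band response curves."""
    return sum(abs(m - t) for m, t in zip(measured, target)) / len(target)

def tuning_converged(measured, target, tolerance=1.0):
    return curve_mismatch(measured, target) <= tolerance

target = [80.0, 82.0, 84.0, 83.0]    # preset (e.g. preferred hearing) curve
measured = [79.0, 82.5, 83.5, 83.0]  # response measured at the first position
print(tuning_converged(measured, target))
```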
In one embodiment, the central system determining the first tuning parameter according to the sound sink capability and position information of the one or more car audio devices includes: the central system acquiring a second tuning parameter, and inputting the second tuning parameter and the sound sink capability and position information of the one or more car audio devices into a prediction model to obtain the first tuning parameter, where the prediction model includes a cabin three-dimensional model and/or a sound wave propagation model.
The above second tuning parameter may be a random tuning parameter or a tuning parameter obtained during the previous tuning by the central system.
After acquiring the second tuning parameters, the central system may input the second tuning parameters, the sound sink capabilities of the one or more vehicle-mounted audio devices, and the location information into a predictive model, thereby obtaining the first tuning parameters. The predictive model may include a cabin three-dimensional model and/or an acoustic propagation model. The prediction model can simulate the propagation condition of sound waves in the cabin, and further can verify the influence of tuning parameters on sound effects.
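The iterative use of the prediction model can be sketched as a small feedback loop: start from the second (seed) tuning parameter, predict the resulting response, and adjust until the predicted error is small. The linear "model" below is a trivial stand-in for the cabin/sound-propagation model, and all constants are invented.

```python
# Hedged sketch: refine a seed tuning parameter against a prediction
# model. The model here is an assumed linear cabin response, standing in
# for the three-dimensional cabin / sound-wave propagation model.
def predict_level(gain):
    """Toy prediction model: sound level at the listening position vs. gain."""
    return 60.0 + 2.0 * gain

def tune(seed_gain, target_level, steps=50, lr=0.1):
    gain = seed_gain  # the "second tuning parameter" used as the seed
    for _ in range(steps):
        error = predict_level(gain) - target_level
        gain -= lr * error  # move the gain against the predicted error
    return gain  # the refined "first tuning parameter"

print(round(tune(seed_gain=0.0, target_level=80.0), 2))
```

This mirrors the text: the model simulates sound propagation in the cabin, letting the effect of a candidate tuning parameter be verified before it is applied.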
In one embodiment, the central system determining the first tuning parameter according to the sound sink capability and position information of the one or more car audio devices includes: the central system determining the first tuning parameter according to the sound sink capability and position information of the one or more car audio devices and a mapping relation, where the mapping relation is a mapping relation among the sound sink capability of the car audio devices, the positions of the car audio devices, and the tuning parameters.
For example, table 1 shows the mapping relationship of the hardware system of the car audio system under configuration one.
TABLE 1
For example, if the sound sink capabilities and positions sent by the car audio devices match (AA, BB)/((x1, y1, z1), (x2, y2, z2)) in table 1, the central system may determine that a hardware system of configuration one is identified, and may then output tuning parameters through the mapping relationship shown in table 1.
Illustratively, configuration one may be a low configuration.
For example, table 2 shows the mapping relationship of the hardware system of the car audio system under configuration two.
TABLE 2
For example, configuration two may be a medium configuration or a high configuration.
Tables 1 and 2 above are merely illustrative, and embodiments of the present application are not specifically limited thereto. Illustratively, the mapping relationship may also include the capabilities of a DSP module, including but not limited to processing two channels into multiple channels, and high-, medium-, and low-frequency tuning capabilities. The above DSP module may be a centralized DSP module or a distributed DSP module.
Where the DSP module is a centralized DSP module, it may be located in the central system.
When the DSP module is a distributed DSP module, it may be located in one or more car audio devices, so that a car audio device may carry the capabilities of its DSP module in its sound sink capability. The central system may determine tuning parameters in combination with the capabilities of the DSP modules in different car audio devices.
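The mapping-relation lookup of tables 1 and 2 can be sketched as a dictionary keyed by the reported capabilities and positions. All keys and values below are placeholders standing in for the tables' real contents, which are not reproduced here.

```python
# Illustrative sketch of the table lookup: the reported (capabilities,
# positions) tuple identifies a hardware configuration, which maps to
# stored tuning parameters. Every entry below is hypothetical.
CONFIG_MAP = {
    (("AA", "BB"), ((1, 1, 1), (2, 2, 2))): "configuration_one",  # e.g. low
    (("AA", "BB", "CC"), ((1, 1, 1), (2, 2, 2), (3, 3, 3))): "configuration_two",
}
TUNING = {
    "configuration_one": {"eq": "flat", "channels": 2},
    "configuration_two": {"eq": "surround", "channels": 6},
}

def lookup_tuning(capabilities, positions):
    config = CONFIG_MAP.get((capabilities, positions))
    return TUNING.get(config)  # None if no known configuration matches

print(lookup_tuning(("AA", "BB"), ((1, 1, 1), (2, 2, 2))))
```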
In one embodiment, the central system determining the first tuning parameter according to the sound sink capability and position information of the one or more car audio devices and the mapping relation includes: the central system determining a plurality of tuning parameters according to the sound sink capability and position information of the one or more car audio devices and the mapping relation, and determining the first tuning parameter from the plurality of tuning parameters according to physiological characteristic information of the user.
By way of example, a plurality of tuning parameters under different tuning styles can be determined through the above mapping relationship. The hub system may then determine the first tuning parameter from the plurality of tuning parameters in combination with the physiological characteristic information of the user.
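The selection step above can be sketched as follows. This is a hypothetical example: the candidate styles, the parameter fields, and the age-based selection rule are all invented for illustration; the patent does not specify which physiological characteristics are used or how.

```python
# Hypothetical candidate tuning parameter sets, one per tuning style,
# as produced by the mapping relationship (values are illustrative).
CANDIDATES = [
    {"style": "vocal",  "treble_db": 2,  "bass_db": 0},
    {"style": "pop",    "treble_db": 0,  "bass_db": 3},
    {"style": "gentle", "treble_db": -2, "bass_db": -1},
]

def pick_tuning(candidates, user_age):
    """Pick the first tuning parameter from the candidates using a
    physiological characteristic (here: an assumed age-based rule)."""
    if user_age >= 60:
        # Illustrative assumption: older users get the gentlest high frequencies.
        return min(candidates, key=lambda c: c["treble_db"])
    return max(candidates, key=lambda c: c["bass_db"])
```

Any other physiological characteristic (e.g., hearing test results) could replace age in the selection rule without changing the overall flow.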
S1330, the hub system controls the one or more car audio devices to operate according to the first tuning parameter.
After determining the first tuning parameter, the hub system may control the operation of the one or more car audio devices based on the first tuning parameter.
In one embodiment, the one or more car audio devices include a first car audio device and a second car audio device, and the method 1300 further includes the hub system establishing a communication connection between the first car audio device, which includes an active noise reduction controller, and the second car audio device, which includes a microphone.
In addition to its sound-sink capability and position information, a car audio device may also transmit other capability information to the hub system. For example, as shown in fig. 4, vehicle-mounted speaker 1 includes an active noise reduction controller, and vehicle-mounted speaker 2 includes a microphone. Vehicle-mounted speaker 1 may inform the hub system that it has an active noise reduction function, and vehicle-mounted speaker 2 may inform the hub system that it includes a microphone. The hub system can then establish a connection with vehicle-mounted speaker 1 and a connection with vehicle-mounted speaker 2, and can also establish a connection between vehicle-mounted speaker 1 and vehicle-mounted speaker 2. In this way, a closed-loop active noise reduction system can be implemented in the driver's area.
The connection between vehicle-mounted speaker 1 and vehicle-mounted speaker 2 may be established automatically by the hub system, or may be established by the hub system according to the user's input. For example, after acquiring the function information of vehicle-mounted speakers 1-8, the hub system may display the functions of each vehicle-mounted speaker on the central control screen and prompt the user to establish a closed-loop system for active noise reduction. Upon detecting that the user selects vehicle-mounted speakers 1, 2, 3, and 4, the hub system may establish connections among vehicle-mounted speakers 1, 2, 3, and 4, thereby implementing a closed-loop active noise reduction system in the front-row area.
In one embodiment, the one or more car audio devices include a third car audio device, and the method 1300 further includes the hub system establishing a communication connection between the third car audio device, which includes a car light controller, and a car light.
For example, as shown in fig. 4, vehicle-mounted speaker 5 includes a car light controller. The hub system can determine from the position information sent by vehicle-mounted speaker 5 that it is in the second-row left area, so the hub system can establish a communication connection between vehicle-mounted speaker 5 and the vehicle-mounted lights in the second-row left area. In this way, after receiving audio data, the communication module of vehicle-mounted speaker 5 may send the audio data to the car light controller, so that the car light controller controls the vehicle-mounted lights in the second-row left area according to the audio data.
The communication connection between vehicle-mounted speaker 5 and the vehicle-mounted lights in the second-row left area may be established automatically by the hub system, or may be established by the hub system after receiving the user's input. For example, after acquiring the function information of vehicle-mounted speakers 1-8, the hub system may display the functions of each vehicle-mounted speaker on the central control screen and prompt the user to select the vehicle-mounted lights that vehicle-mounted speaker 5 can control. Upon detecting that the user selects the vehicle-mounted lights in the second-row left area and the second-row right area, the hub system may establish connections between vehicle-mounted speaker 5 and the vehicle-mounted lights in those two areas.
Fig. 14 shows a schematic flowchart of a control method 1400 provided by an embodiment of the present application. The method 1400 may be implemented by a car audio device, which includes a speaker and a first communication module, and a hub system, which includes a tuning system and a second communication module. The method 1400 includes:
S1401, the first communication module transmits the sound-sink capability and position information of the car audio device to the second communication module.
Illustratively, the sound-sink capability includes information on the maximum volume of the car audio device, frequency information of the emitted sound (e.g., high, mid, or low frequency), and information indicating the bandwidth supported by the speaker, controller, communication module, etc. in the car audio device.
S1402, the second communication module transmits the sound-sink capability and position information of the car audio device to the tuning system.
S1403, the tuning system starts the tuning process.
For example, upon detecting a user input (e.g., the user uttering the voice command "start tuning"), the tuning system may issue a broadcast message instructing the vehicle-mounted speakers in the cabin to send it their sound-sink capability and position information. The tuning system may receive, within a preset time period, the sound-sink capability and position information sent by each of vehicle-mounted speakers 1-8, and thereby start the tuning process.
Illustratively, the tuning system may also monitor each car body interface. The tuning process may be restarted upon determining that a car audio device connected to a certain car body interface has been detached, or that a new car audio device has been connected to a certain car body interface.
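The interface monitoring described above can be sketched as comparing snapshots of which device occupies each car body interface; any attach or detach between snapshots triggers a re-tune. The interface and device identifiers below are hypothetical.

```python
def interfaces_changed(previous, current):
    """previous/current: dict mapping car body interface id -> connected
    device id, or None if the interface is empty. Any difference (attach,
    detach, or swap) means the tuning process should be restarted."""
    return previous != current

# Illustrative snapshots: speaker2 has been detached from iface2.
prev = {"iface1": "speaker1", "iface2": "speaker2", "iface3": None}
curr = {"iface1": "speaker1", "iface2": None, "iface3": None}

restart = interfaces_changed(prev, curr)  # the tuning system would re-tune here
```

A real implementation would poll or receive interrupts from the body interfaces rather than compare full snapshots, but the decision rule is the same.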
S1404, tuning parameter 1 is acquired.
Illustratively, tuning parameter 1 may be a random tuning parameter, or may be the tuning parameter output by the hub system during the previous tuning.
S1405, the tuning system generates an audio stream according to tuning parameter 1 and sends the audio stream to the second communication module.
S1406, the second communication module sends the audio stream to the first communication module.
S1407, the first communication module sends the audio stream to the speaker so that the audio stream is played by the speaker.
S1408, the tuning system determines whether the audio stream played by the speaker satisfies a condition for terminating tuning.
Illustratively, the condition for terminating tuning may be that the average deviation between the frequency response curve of the sound at a specified position (typically the designated optimal listening position, often called the "emperor's seat") and the auditory curve of the human ear (or an auditory curve preferred for a certain type of music) is within x% (x may be set by default by the system or set by the user).
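The termination check above can be sketched as follows. This is a hypothetical sketch: it assumes the curves are sampled at the same frequency points and uses a mean absolute relative deviation, which is one plausible reading of "average deviation"; the patent does not fix the exact formula.

```python
def average_deviation_pct(measured, target):
    """Mean absolute relative deviation, in percent, between two curves
    sampled at the same frequency points (nonzero target values assumed)."""
    assert len(measured) == len(target)
    return 100.0 * sum(abs(m - t) / abs(t) for m, t in zip(measured, target)) / len(target)

def should_stop_tuning(measured, target, x_pct=5.0):
    """True when the measured frequency response at the specified seat is
    within x% of the target (human-ear preference) curve."""
    return average_deviation_pct(measured, target) <= x_pct
```

Here `measured` would come from a microphone at the specified seat and `target` from the stored auditory curve; `x_pct` corresponds to the system-default or user-set x.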
When the tuning system determines that the condition for terminating tuning is not satisfied, S1409 is executed, otherwise S1411 is executed.
S1409, when the tuning system determines that the condition for terminating tuning is not satisfied, the tuning system inputs tuning parameter 1 and the sound-sink capability and position information of the car audio device into the prediction model to obtain tuning parameter 2.
S1410, repeatedly executing S1404-S1408 until the audio stream played by the speaker satisfies the condition of terminating tuning.
S1411, outputting the final tuning parameters and stopping tuning.
Optionally, the tuning system may also determine the tuning parameters in combination with the three-dimensional coordinate information of the user during tuning. Illustratively, the tuning system may input the user's three-dimensional coordinates, tuning parameter 1, and the sound-sink capability and position information of the car audio device into the prediction model to obtain tuning parameter 2.
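The loop S1404-S1411 can be summarized as the sketch below. The prediction model, playback path, and termination check are passed in as stand-in callables: they are assumptions for illustration, not the application's actual prediction model, and the safety bound `max_rounds` is an added assumption not stated in the method.

```python
def tune(initial_params, predict, play_and_measure, meets_condition, max_rounds=20):
    """Iterative tuning: play a test stream, check the termination condition,
    and ask the prediction model for better parameters until it is met."""
    params = initial_params                     # S1404: acquire tuning parameter 1
    for _ in range(max_rounds):
        measurement = play_and_measure(params)  # S1405-S1407: generate and play the stream
        if meets_condition(measurement):        # S1408: condition for terminating tuning
            return params                       # S1411: output the final tuning parameters
        params = predict(params, measurement)   # S1409: prediction model -> tuning parameter 2
    return params                               # safety bound (an assumption, not in the method)
```

A toy run with a numeric "parameter" converging toward a target of 10 illustrates the control flow; in the real system, `play_and_measure` would drive the speaker and measure the in-cabin frequency response, and `predict` could additionally take the user's three-dimensional coordinates as input.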
Fig. 14 takes determining the tuning parameters through a prediction model as an example; the tuning parameters may alternatively be determined through a mapping relationship. Fig. 15 shows a schematic flowchart of a control method 1500 provided by an embodiment of the present application. The method 1500 may be implemented by a car audio device, which includes a speaker and a first communication module, and a hub system, which includes a tuning system, a centralized DSP module, and a second communication module. The method 1500 includes:
S1501, the first communication module transmits the sound-sink capability and position information of the car audio device to the second communication module.
S1502, the second communication module transmits the sound-sink capability and position information of the car audio device to the tuning system.
S1503, the DSP module sends DSP capability information to the tuning system.
There is no fixed execution order between S1502 and S1503 above.
S1504, the tuning system determines the tuning parameters according to the sound-sink capability and position information of the car audio device, the DSP capability information, and the mapping relationship.
The DSP module is an optional module in the hub system. The hub system may also not include a DSP module, in which case the mapping relationship may include the correspondence among the sound-sink capability, the position information, and the tuning parameters. Alternatively, at least some of the one or more car audio devices may include a DSP module, and those car audio devices may send DSP capability information to the tuning system, so that the tuning system determines the tuning parameters according to the sound-sink capability and position information of the car audio devices, the DSP capability information, and the mapping relationship.
The mapping relationship may be as shown in table 1 or table 2 above, for example.
The above transmission of signaling or data between the hub system and each car audio device may use a time-division communication connection networking configuration. For example, in the (L-1)-th time granularity, the hub system may transmit data to car audio device 1; in the L-th time granularity, the hub system may transmit data to car audio device 2; and in the (L+1)-th time granularity, the hub system may receive data from car audio device 3. Considering the limited resources of the communication frequency band, this communication connection networking mode is more flexible and can support more and richer functions.
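The time-division schedule above can be sketched as a simple slot-to-device assignment. The round-robin policy below is an assumption chosen for illustration; the patent only requires that each time granularity is dedicated to one device, not any particular assignment rule.

```python
def device_for_slot(slot_index, devices):
    """Round-robin time-division schedule: time granularity l is assigned
    to devices[l mod n], so the hub system exchanges signaling/data with
    exactly one car audio device per slot."""
    return devices[slot_index % len(devices)]

# Illustrative device list (identifiers are hypothetical).
devices = ["car_audio_device_1", "car_audio_device_2", "car_audio_device_3"]
```

In a real schedule each slot would also record the transfer direction (hub-to-device or device-to-hub), as in the (L+1)-th granularity example where the hub system receives rather than transmits.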
Fig. 16 shows a schematic block diagram of a control device 1600 provided by an embodiment of the present application. As shown in fig. 16, the apparatus 1600 includes an acquisition unit 1610 configured to acquire the sound-sink capability and the position information of one or more car-audio devices, a determination unit 1620 configured to determine a first tuning parameter according to the sound-sink capability and the position information of the one or more car-audio devices, and a control unit 1630 configured to control the one or more car-audio devices to operate according to the first tuning parameter.
Optionally, the determining unit 1620 is configured to determine the first tuning parameter according to the location information of the user, the height of the ear of the user, and the sound-sink capability and the location information of the one or more car-audio devices.
Optionally, the position information indicates that the user is located at a first position; the control unit 1630 is configured to, before controlling the one or more car audio devices to operate according to the first tuning parameter, control the one or more car audio devices to emit audio data according to the first tuning parameter; and the determining unit 1620 is configured to determine that the deviation between a preset curve and the frequency response curve of the sound at the first position is within a preset range.
Optionally, the obtaining unit 1610 is further configured to obtain a second tuning parameter, and the determining unit 1620 is configured to input the second tuning parameter, the sound-sink capability and the position information of the one or more car-audio devices into a prediction model, to obtain the first tuning parameter, where the prediction model includes a cabin three-dimensional model and/or a sound wave propagation model.
Optionally, the determining unit 1620 is configured to determine the first tuning parameter according to the sound-sink capability and the position information of the one or more vehicle-mounted audio devices and a mapping relationship between the sound-sink capability of the vehicle-mounted audio device, the position of the vehicle-mounted audio device and the tuning parameter.
Optionally, the determining unit 1620 is configured to determine a plurality of tuning parameters according to the sound-sink capability and the position information of the one or more car-audio devices and the mapping relation, and determine the first tuning parameter from the plurality of tuning parameters according to the physiological characteristic information of the user.
Optionally, the obtaining unit 1610 is configured to, upon detecting that the number and/or positions of the one or more car audio devices change, receive the changed sound-sink capability and position information of the one or more car audio devices.
Optionally, the one or more in-vehicle audio devices include a first in-vehicle audio device and a second in-vehicle audio device, the control unit 1630 is configured to control a communication connection between the first in-vehicle audio device including an active noise reduction controller and the second in-vehicle audio device including a microphone.
Optionally, the one or more car audio devices include a third car audio device, and the control unit 1630 is configured to control the third car audio device including the car light controller to establish a communication connection with the car light.
For example, the acquisition unit 1610 may be a computing platform or a processing circuit, processor, or controller in a computing platform in fig. 12. Taking the acquisition unit 1610 as an example of the processor 121 in the computing platform, the processor 121 may acquire the sound sink capability and/or location information of one or more car audio devices.
As another example, the determining unit 1620 may be a computing platform or a processing circuit, processor, or controller in a computing platform in fig. 12. Taking the determining unit 1620 as an example of the processor 122 in the computing platform, the processor 122 may determine tuning parameters of the one or more car audio devices according to the sound-hosting capabilities and/or the location information of the one or more car audio devices acquired by the processor 121.
As another example, the control unit 1630 may be the computing platform in fig. 12, or a processing circuit, processor, or controller in that computing platform. Taking the control unit 1630 as the processor 123 in the computing platform as an example, the processor 123 may control the operation of the one or more car audio devices according to the tuning parameters of the one or more car audio devices determined by the processor 122.
The functions performed by the acquisition unit 1610, the determination unit 1620, and the control unit 1630 may be implemented by different processors or by the same processor, which is not limited in the embodiments of the present application.
In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register. The storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, a detailed description is not provided herein.
It should be understood that the division of units in the above apparatus is only a division of logical functions; in actual implementation, the units may be fully or partially integrated into one physical entity, or may be physically separated. Furthermore, the units in the apparatus may be implemented in the form of software invoked by a processor: for example, the apparatus includes a processor connected to a memory in which instructions are stored, and the processor invokes the instructions stored in the memory to implement any of the above methods or the functions of the units of the apparatus, where the processor is, for example, a general-purpose processor such as a CPU or a microprocessor, and the memory is inside or outside the apparatus. Alternatively, the units in the apparatus may be implemented in the form of hardware circuits, and the functions of some or all of the units may be realized through the design of the hardware circuits, which may be understood as one or more processors. For example, in one implementation, the hardware circuit is an ASIC, and the functions of some or all of the units are realized through the design of the logical relationships of the elements in the circuit; in another implementation, the hardware circuit may be implemented by a PLD; taking an FPGA as an example, it may include a large number of logic gate circuits, and the connection relationships among the logic gates are configured through a configuration file so as to realize the functions of some or all of the units. All units of the above apparatus may be implemented in the form of software invoked by a processor, or entirely in the form of hardware circuits, or partly in the form of software invoked by a processor and the rest in the form of hardware circuits.
In an embodiment of the present application, the processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with the capability of reading and running instructions, such as a CPU, a microprocessor, a GPU, or a DSP; in another implementation, the processor may implement a certain function through the logical relationships of hardware circuits, where the logical relationships are fixed or reconfigurable, for example, the processor is a hardware circuit implemented by an ASIC or a PLD, such as an FPGA. In a reconfigurable hardware circuit, the process by which the processor loads a configuration document to configure the hardware circuit may be understood as the process by which the processor loads instructions to implement the functions of some or all of the above units. Furthermore, the processor may be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as an NPU, a TPU, or a DPU.
It will be seen that each of the units in the above apparatus may be one or more processors (or processing circuits) configured to implement the above methods, e.g., CPU, GPU, NPU, TPU, DPU, a microprocessor, DSP, ASIC, FPGA, or a combination of at least two of these processor forms.
Furthermore, the units in the above apparatus may be integrated together in whole or in part, or may be implemented independently. In one implementation, these units are integrated together and implemented in the form of a SoC. The SoC may include at least one processor for implementing any of the methods above or for implementing the functions of the units of the apparatus, where the at least one processor may be of different types, including, for example, a CPU and an FPGA, a CPU and an artificial intelligence processor, a CPU and a GPU, and the like.
The embodiment of the application also provides a control device, which comprises a processing unit and a storage unit, wherein the storage unit is used for storing instructions, and the processing unit executes the instructions stored by the storage unit so as to enable the device to execute the control method executed by the embodiment.
Alternatively, if the apparatus is located in the terminal device, the processing unit may be one of the processors 121-12n shown in fig. 12.
The embodiment of the application also provides a control system, which can comprise the terminal unit and a computing platform, and the computing platform can comprise the control device 1600.
Optionally, the sound generating system may be included in the control system.
The embodiment of the application also provides a terminal device, which can comprise the control device 1600 or the control system.
Alternatively, the terminal device may be a vehicle.
An embodiment of the present application also provides a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the control method of the above embodiment.
The embodiment of the present application also provides a computer-readable medium storing a program code that, when run on a computer, causes the computer to execute the control method in the above embodiment.
The embodiment of the application also provides a chip, which comprises a circuit for executing the control method in the embodiment.
It should be appreciated that in embodiments of the present application, the memory may include read only memory and random access memory, and provide instructions and data to the processor.
It should also be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a U disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed by the present application, and such variations or substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.