
CN114650496B - Audio playing method and electronic equipment - Google Patents


Info

Publication number
CN114650496B
CN114650496B (application CN202210225832.8A)
Authority
CN
China
Prior art keywords
electronic device
state information
target
audio data
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210225832.8A
Other languages
Chinese (zh)
Other versions
CN114650496A (en
Inventor
文梁宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210225832.8A priority Critical patent/CN114650496B/en
Publication of CN114650496A publication Critical patent/CN114650496A/en
Priority to PCT/CN2023/079874 priority patent/WO2023169367A1/en
Application granted granted Critical
Publication of CN114650496B publication Critical patent/CN114650496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H04S7/305: Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/162: Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Stereophonic System (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

The application discloses an audio playing method and an electronic device, belonging to the field of electronic technology. The audio playing method comprises the following steps: acquiring a control instruction for spatial state information; determining spatial state information of the first electronic device and of each second electronic device according to the control instruction; performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information, to obtain first target audio data; and sending the first target audio data to the audio playing device for playing. Because the spatial mixing processing is performed according to the spatial state information respectively corresponding to the first electronic device and the at least one second electronic device, the sense of directional confusion a user experiences when listening to multiple electronic devices at different positions playing different audio data simultaneously is avoided.

Description

Audio playing method and electronic equipment
Technical Field
The application belongs to the technical field of electronics, and particularly relates to an audio playing method and electronic equipment.
Background
With the development of electronic technology, electronic devices are used more and more widely. In practice, a user may use several electronic devices to play different audio data at the same time, for example, playing a game on a computer while keeping a live-stream room open on a mobile phone.
If the audio is played out loud, other people may be disturbed and privacy is poor. If the multiple audio streams are all played through an earphone, the user perceives every sound as coming from directly in front, regardless of where each electronic device actually is relative to the user, and the different audio streams reach the two ears with no time difference. It can therefore be difficult for the user to tell which electronic device a given sound comes from, creating a sense of directional confusion.
Disclosure of Invention
The embodiments of the application aim to provide an audio playing method and an electronic device, which can solve the problem of how to avoid a user's sense of directional confusion when listening to multiple different audio data.
In a first aspect, an embodiment of the present application provides an audio playing method applied to a first electronic device, where the first electronic device is connected to at least one second electronic device and to an audio playing device. The audio playing method includes:
acquiring a control instruction for spatial state information;
determining spatial state information of the first electronic device and of each second electronic device according to the control instruction;
performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information, to obtain first target audio data; and
sending the first target audio data to the audio playing device for playing.
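These four steps can be condensed into a minimal Python sketch. The function name `spatial_mix`, the constant-power stereo panning, and the stream/state layout are illustrative assumptions, not interfaces from the application; the spatial mixing processing of the embodiments is not limited to this particular rendering:

```python
import math

def spatial_mix(streams, spatial_states):
    """Mix several mono sample lists into one stereo list.

    streams: dict device_id -> list of float samples
    spatial_states: dict device_id -> (azimuth_rad, gain), where
      azimuth 0 is directly ahead, negative is left, positive right.
    Constant-power panning stands in for a full HRTF-based renderer.
    """
    length = max(len(s) for s in streams.values())
    out = [[0.0, 0.0] for _ in range(length)]
    for dev, samples in streams.items():
        az, gain = spatial_states[dev]
        # Map azimuth in [-pi/2, pi/2] to a pan position in [0, 1].
        az = max(-math.pi / 2, min(math.pi / 2, az))
        pan = (az + math.pi / 2) / math.pi
        left = math.cos(pan * math.pi / 2) * gain
        right = math.sin(pan * math.pi / 2) * gain
        for i, x in enumerate(samples):
            out[i][0] += x * left
            out[i][1] += x * right
    return out
```

A device with azimuth 0 contributes equally to both channels, while a device panned fully to one side feeds only that ear, which is what lets the listener separate the sources.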
In a second aspect, an embodiment of the present application provides an audio playing apparatus applied to a first electronic device, where the first electronic device is connected to at least one second electronic device and to an audio playing device. The audio playing apparatus includes:
an acquisition module, configured to acquire a control instruction for spatial state information;
a determining module, configured to determine spatial state information of the first electronic device and of each second electronic device according to the control instruction;
a processing module, configured to perform spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information, to obtain first target audio data; and
a sending module, configured to send the first target audio data to the audio playing device for playing.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the audio playing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the audio playing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the audio playing method according to the first aspect.
In the embodiments of the application, a control instruction for spatial state information is acquired; spatial state information of the first electronic device and of each second electronic device is determined according to the control instruction; spatial mixing processing is performed on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information, to obtain first target audio data; and the first target audio data is sent to the audio playing device for playing. With the technical solution provided by the embodiments of the application, spatial mixing processing can be performed according to the spatial state information respectively corresponding to the first electronic device and the at least one second electronic device, avoiding the sense of directional confusion that arises when a user listens to multiple electronic devices at different positions playing different audio data simultaneously.
Drawings
Fig. 1 is a schematic flow chart of a first audio playing method according to an embodiment of the application;
fig. 2 is a schematic diagram of connection relationships between a first electronic device, a second electronic device, and an audio playing device according to an embodiment of the present application;
FIG. 3 is a diagram illustrating a setting interface of a spatial audio status in an audio playing method according to an embodiment of the present application;
fig. 4A is a schematic diagram of a first scenario of an audio playing method according to an embodiment of the present application;
fig. 4B is a schematic diagram of a second scenario of an audio playing method according to an embodiment of the present application;
fig. 5 is a schematic diagram of a third scenario of an audio playing method according to an embodiment of the present application;
Fig. 6 is a second flowchart of an audio playing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an audio playing device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 9 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and the claims are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and the claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The audio playing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a first audio playing method according to an embodiment of the application.
The audio playing method is applied to first electronic equipment, the first electronic equipment is connected with at least one second electronic equipment, and the first electronic equipment is connected with the audio playing equipment.
The first electronic device may be an electronic device having an audio data processing function and an audio playing function, such as a computer, a mobile phone, a tablet, etc. The second electronic device may be an electronic device having an audio data playing function, such as a computer, a mobile phone, a tablet, etc.
The at least one second electronic device may be one second electronic device or may be a plurality of second electronic devices.
The audio playing device may be a headset or other audio electronic device.
Fig. 2 is a schematic diagram of the connection relationships between a first electronic device, a second electronic device, and an audio playing device according to an embodiment of the present application. As shown in fig. 2, a first electronic device 201 is connected to a second electronic device 202, and the first electronic device 201 is connected to an earphone 203. The user wears the earphone 203 while listening to the audio data played by the first electronic device 201 and the audio data played by the second electronic device 202. In fig. 2 the user faces the second electronic device 202 and may also watch the video it plays.
Step 102: a control instruction for spatial state information is acquired.
The spatial state information may be spatial position information, spatial position information together with audio parameters, or a spatial audio state. The audio parameters may be any parameters that affect the auditory effect, such as volume information and tone information.
The spatial position information may be the position, within a preset sphere, of the virtual sound source corresponding to the first electronic device, or of the virtual sound source corresponding to any one of the second electronic devices. The volume information may be the volume of the first electronic device or of any one of the second electronic devices.
The control instruction may be a setting instruction, a modification instruction, a detection instruction, or the like.
It should be noted that the volume information may be adjusted independently, adjusted together with the spatial position information, or adjusted, after the spatial position information is adjusted, to the volume corresponding to the new spatial position.
Optionally, the spatial state information includes spatial position information and an audio parameter, and acquiring the control instruction for spatial state information includes: receiving a first control instruction for the spatial position information; and/or receiving a second control instruction for the audio parameter.
The first control instruction may be a setting or modification instruction for the spatial position information of the first electronic device, or for the spatial position information of the at least one second electronic device.
Optionally, receiving a first control instruction for spatial position information includes: for any one of the first electronic device and the at least one second electronic device, determining, on a user interaction interface, the position information of the virtual sound source corresponding to that electronic device in the preset sphere as the spatial position information of the electronic device; and receiving a position adjustment instruction for the virtual sound source, where the position adjustment instruction is used to adjust the position information of the virtual sound source in the preset sphere.
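The virtual sound source on the preset sphere can be modelled as a small class. The class name, the (azimuth, elevation) parametrisation, and the `move`/`cartesian` methods are illustrative assumptions, not interfaces from the application:

```python
import math

class VirtualSource:
    """A virtual sound source on a preset unit sphere centred on the listener.

    Position is stored as (azimuth, elevation) in radians; azimuth 0
    means directly in front of the user.
    """

    def __init__(self, device_id, azimuth=0.0, elevation=0.0):
        self.device_id = device_id
        self.azimuth = azimuth
        self.elevation = elevation

    def move(self, azimuth, elevation):
        """Apply a position-adjustment instruction (e.g. a drag on the UI)."""
        # Wrap azimuth into (-pi, pi] and clamp elevation to the sphere.
        self.azimuth = math.atan2(math.sin(azimuth), math.cos(azimuth))
        self.elevation = max(-math.pi / 2, min(math.pi / 2, elevation))

    def cartesian(self):
        """Unit-sphere coordinates for a renderer: x right, y up, z ahead."""
        x = math.cos(self.elevation) * math.sin(self.azimuth)
        y = math.sin(self.elevation)
        z = math.cos(self.elevation) * math.cos(self.azimuth)
        return (x, y, z)
```

Dragging a source on the setting interface of fig. 3 would then amount to a `move` call, and the mixer reads the resulting position when spatialising that device's audio.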
The user interaction interface may be as shown in fig. 3, which is a setting interface diagram of spatial audio states in the audio playing method according to an embodiment of the present application. The setting interface shows a spatial audio state 1 and a spatial audio state 2.
Fig. 3 shows the spatial position information of the first electronic device and of the second electronic devices 1, 2, …, n corresponding to spatial audio state 1, and the spatial position information of the same devices corresponding to spatial audio state 2.
The setting interface also shows, for each of the two spatial audio states, parameter setting interfaces for the volume of the first electronic device and the volumes of the second electronic devices 1, 2, …, n.
The circle in fig. 3 represents the sphere. In spatial audio state 1, the position indicated by the arrow corresponding to the first electronic device is the spatial position of the first electronic device on the sphere, and likewise the position indicated by the arrow corresponding to each second electronic device is that device's spatial position on the sphere. Spatial audio state 2 is similar and is not described again here.
The position adjustment instruction may add a new virtual sound source to the preset sphere and set its position information; may move a virtual sound source in the preset sphere from preset initial position information to target position information according to the user's needs; or may move a virtual sound source from the position matching the user's previous listening needs to the position matching the user's new listening needs.
The second control instruction may be a setting instruction or a modification instruction for the audio parameters of the first electronic device, or may be a setting instruction or a modification instruction for the audio parameters of the at least one second electronic device.
The audio parameters may be any parameters that affect the auditory effect, such as volume information and tone information. The following description takes volume information as the example audio parameter.
The user may preset volume information for different scenes; specifically, multiple volume states may be configured, each specifying the volume of the first electronic device and of each second electronic device. The second control instruction may also be a switching instruction between volume states.
For example, the audio data played by the first electronic device directly in front of the user may be given a larger volume, and the audio data played by the second electronic devices on either side of the user a smaller volume.
Optionally, the audio parameter includes volume information, and receiving a second control instruction for the audio parameter includes: receiving, on a user interaction interface, a volume adjustment instruction for any one of the first electronic device and the at least one second electronic device.
The user interaction interface may be as shown in fig. 3, where the volume of the first electronic device and of each second electronic device can be set separately, and each volume can be increased or decreased.
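The volume states and per-device volume adjustment described above might be sketched as follows; the `VolumeState` name, the 0-100 scale, and the `adjust` method are assumptions for illustration:

```python
class VolumeState:
    """One named volume preset: per-device volume levels on a 0-100 scale.

    The user may configure several such states (e.g. "focus", "relax")
    and switch between them with a second control instruction, or nudge
    a single device's volume within the current state.
    """

    def __init__(self, name, volumes):
        self.name = name
        self.volumes = dict(volumes)  # device_id -> volume

    def adjust(self, device_id, delta):
        """Volume-adjustment instruction for one device, clamped to 0-100."""
        new = self.volumes[device_id] + delta
        self.volumes[device_id] = max(0, min(100, new))
```

A state such as `VolumeState("focus", {"first": 80, "second_1": 30})` captures the example in the text: louder audio for the device in front, quieter audio for the devices to the sides.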
Optionally, the first electronic device is the main sound device located directly in front of the user, and each second electronic device is an auxiliary sound device located not directly in front of the user; acquiring a control instruction for spatial state information then includes: acquiring a first main-auxiliary switching instruction, where the first main-auxiliary switching instruction is used to change the spatial state information of the first electronic device and of each second electronic device, so that the spatial state information of the selected second electronic device corresponds to directly in front of the user and the spatial state information of the first electronic device corresponds to not directly in front of the user. Alternatively, the at least one second electronic device includes a target second electronic device, the target second electronic device is the main sound device, and the first electronic device is an auxiliary sound device; acquiring a control instruction for spatial state information then includes: acquiring a second main-auxiliary switching instruction, where the second main-auxiliary switching instruction is used to change the spatial state information of the first electronic device and of the target second electronic device, so that the spatial state information of the first electronic device corresponds to directly in front of the user and the spatial state information of the target second electronic device corresponds to not directly in front of the user.
The main sound device is located directly in front of the user, and the user's attention is focused mainly on the audio data it plays; an auxiliary sound device is located elsewhere, and the user's attention to the audio data it plays ranks after the audio data played by the main sound device.
For example, a computer directly in front of a user is playing an online class and is the main sound device, while a mobile phone on the user's left is playing a shopping live-stream and is an auxiliary sound device: the user's attention is focused on the class, and the live-stream audio receives less of it.
In the case where the at least one second electronic device includes a target second electronic device, switching between the main sound device and the auxiliary sound device may proceed as follows:
(a1) The first electronic device is the main sound device and the target second electronic device is an auxiliary sound device; through a first main-auxiliary switching instruction, the target second electronic device is switched to be the main sound device and the first electronic device is switched to be an auxiliary sound device.
(a2) The first electronic device is an auxiliary sound device and the target second electronic device is the main sound device; through a second main-auxiliary switching instruction, the first electronic device is switched to be the main sound device and the target second electronic device is switched to be an auxiliary sound device.
For example, the first electronic device is the main sound device, located directly in front of the user, and the target second electronic device is an auxiliary sound device, located on the user's left. When the user turns to face the target second electronic device, the spatial state information of both devices is changed through the first main-auxiliary switching instruction: the target second electronic device, now directly in front of the user, becomes the main sound device, and the first electronic device, now on the user's right, becomes an auxiliary sound device.
In the case where the at least one second electronic device includes at least two second electronic devices, the switching is described taking the second electronic device 1 and the second electronic device 2 as an example:
(b1) The first electronic device is the main sound device and the second electronic devices 1 and 2 are auxiliary sound devices; through a first main-auxiliary switching instruction, the selected second electronic device 1 is switched to be the main sound device and the first electronic device is switched to be an auxiliary sound device.
(b2) The first electronic device is the main sound device and the second electronic devices 1 and 2 are auxiliary sound devices; through a first main-auxiliary switching instruction, the selected second electronic device 2 is switched to be the main sound device and the first electronic device is switched to be an auxiliary sound device.
(b3) The second electronic device 2 is the main sound device and the first electronic device and the second electronic device 1 are auxiliary sound devices; through a third main-auxiliary switching instruction, the selected first electronic device is switched to be the main sound device and the second electronic device 2 is switched to be an auxiliary sound device.
(b4) The second electronic device 2 is the main sound device and the first electronic device and the second electronic device 1 are auxiliary sound devices; through a third main-auxiliary switching instruction, the selected second electronic device 1 is switched to be the main sound device and the second electronic device 2 is switched to be an auxiliary sound device.
(b5) The second electronic device 1 is the main sound device and the first electronic device and the second electronic device 2 are auxiliary sound devices; through a third main-auxiliary switching instruction, the selected first electronic device is switched to be the main sound device and the second electronic device 1 is switched to be an auxiliary sound device.
(b6) The second electronic device 1 is the main sound device and the first electronic device and the second electronic device 2 are auxiliary sound devices; through a third main-auxiliary switching instruction, the selected second electronic device 2 is switched to be the main sound device and the second electronic device 1 is switched to be an auxiliary sound device.
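Cases (b1) through (b6) all reduce to the same operation: the selected device's spatial state is exchanged with that of the current main sound device. A hedged sketch, with the `FRONT` preset position and the function signature assumed for illustration:

```python
# (azimuth, elevation) preset position of the main sound device:
# directly in front of the user.
FRONT = (0.0, 0.0)

def switch_main_device(spatial_states, current_main, selected):
    """Main-auxiliary switching: move the selected device's virtual
    source to the front preset, and the old main device to the selected
    device's former position. This covers cases (a1)-(a2) and
    (b1)-(b6) above uniformly.

    spatial_states: dict device_id -> (azimuth, elevation)
    Returns the id of the new main sound device.
    """
    if selected == current_main:
        return current_main
    # The old main device takes over the selected device's old spot...
    spatial_states[current_main] = spatial_states[selected]
    # ...and the selected device moves to the front preset position.
    spatial_states[selected] = FRONT
    return selected
```

Which numbered instruction (first, second, or third) triggers the call depends only on which device is currently the main sound device; the state update itself is identical.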
The first main-auxiliary switching instruction may be implemented in at least the following manners:
(c1) A control operation for the spatial state information is performed on a preset user interaction interface.
For example, the preset sphere is rotated so that, after the rotation, the position of the virtual sound source corresponding to the electronic device selected as the main sound device reaches the preset position corresponding to the main sound device; in this case, the virtual sound source originally at that preset position moves, with the rotation, to another position on the sphere.
For another example, each of the first electronic device and the at least one second electronic device corresponds to one virtual sound source, and dragging a virtual sound source within the preset sphere changes its position information there. The user may drag the virtual sound source corresponding to the first electronic device away from the preset position corresponding to the main sound device, and drag the virtual sound source corresponding to a second electronic device to that preset position.
For yet another example, a plurality of preset spatial audio states are displayed on the user interaction interface: in spatial audio state 1, the first electronic device is the main sound device and the target second electronic device is an auxiliary sound device; in spatial audio state 2, the target second electronic device is the main sound device and the first electronic device is an auxiliary sound device. The device is currently in spatial audio state 1 and switches to spatial audio state 2 according to a user operation.
(c2) The pupil gaze point is moved.
For example, the user initially looks at the computer directly in front, and then turns to look at the mobile phone on the left.
The second and third main-auxiliary switching instructions are implemented similarly to the first and are not described again here.
The spatial state information of the second electronic device corresponding to directly in front of the user may mean that the position of the virtual sound source corresponding to the second electronic device in the preset sphere is the preset main-sound-device position, or that the pupil gaze point falls on the second electronic device.
In this case, the audio data played by the second electronic device sounds as if it comes from directly in front.
The spatial state information of the first electronic device corresponding to not directly in front of the user may mean that the position of its virtual sound source in the preset sphere is not the preset main-sound-device position, or that the pupil gaze point falls outside the first electronic device.
In this case, the audio data played by the first electronic device sounds as if it comes from a direction other than directly in front.
The cases where the spatial state information of the first electronic device corresponds to directly in front of the user and that of the target second electronic device corresponds to not directly in front are similar to the above and are not repeated here.
It should be noted that, when the actual positions of the first and second electronic devices are unchanged, changing the spatial position information of the first electronic device in the preset sphere changes the spatial position information of each second electronic device accordingly; see fig. 4A and fig. 4B.
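The behaviour just noted, where the real devices stay put while every virtual source shifts together, can be sketched as a single rotation of all azimuths; the function name and the azimuth-only representation are illustrative assumptions:

```python
import math

def refocus(spatial_states, new_front_device):
    """When the user turns to face another device (fig. 4A -> fig. 4B),
    the devices' real positions are unchanged, so every virtual source
    is rotated by the same angle: the one that brings the newly faced
    device to azimuth 0 (directly in front).

    spatial_states: dict device_id -> azimuth in radians
    (elevation omitted for brevity).
    """
    delta = spatial_states[new_front_device]
    for dev in spatial_states:
        az = spatial_states[dev] - delta
        # Wrap back into (-pi, pi].
        spatial_states[dev] = math.atan2(math.sin(az), math.cos(az))
```

After `refocus`, the newly faced device sounds straight ahead and the former front device sounds off to the side, matching the two scenarios of fig. 4A and fig. 4B.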
Fig. 4A is a schematic diagram of a first scenario of an audio playing method according to an embodiment of the present application; fig. 4B is a schematic diagram of a second scenario of an audio playing method according to an embodiment of the present application.
As shown in fig. 4A, in a first scenario, a first electronic device 401 is located directly in front of a user and a second electronic device 402 is located on the right hand side of the user. In a second scenario, as shown in fig. 4B, a first electronic device 401 is located on the left side of the user and a second electronic device 402 is located directly in front of the user.
For example, in the first scenario the user looks at the first electronic device 401 directly in front while listening to the audio data played by the first electronic device 401 and the audio data played by the second electronic device 402 on the right. In the second scenario the user has turned towards the second electronic device 402, still listening to both audio streams. The actual positions of the two devices have not changed; only the user's pupil gaze point has moved, so that with the second electronic device 402 now directly in front, the first electronic device 401 is on the user's left.
Optionally, acquiring the control instruction of the spatial state information includes: acquiring a first detection result of the first electronic device for the pupil gaze point and a second detection result of each second electronic device for the pupil gaze point.
The first electronic device and each second electronic device may be provided with a sensor capable of detecting the pupil gaze point. After performing pupil gaze point detection through its sensor, each second electronic device may transmit the obtained second detection result to the first electronic device.
At any given point in time, the pupil gaze point may be located on the first electronic device or on one of the second electronic devices.
The first detection result may include whether the pupil gaze point is detected or not, and may also include position information of the pupil gaze point on the first electronic device.
The second detection result may include whether the pupil gaze point is detected or not, and may also include position information of the pupil gaze point on the second electronic device.
Step 104, determining the spatial state information of the first electronic device and each second electronic device according to the control instruction.
Optionally, determining the spatial state information of the first electronic device and each second electronic device according to the control instruction includes: determining a target position gazed at by the pupil according to the first detection result and the at least one second detection result; and determining the spatial state information of the first electronic device and each second electronic device according to the target position.
If the first detection result indicates that the pupil gaze point is detected on the first electronic device and includes the position information of the pupil gaze point, while the second detection result indicates that the pupil gaze point is not detected on the second electronic device, it may be determined that the target position gazed at by the pupil is on the first electronic device, and the position information of the target position is the position information of the pupil gaze point.
If the second detection result indicates that the pupil gaze point is detected on the second electronic device and includes the position information of the pupil gaze point, while the first detection result indicates that the pupil gaze point is not detected on the first electronic device, it may be determined that the target position gazed at by the pupil is on the second electronic device, and the position information of the target position is the position information of the pupil gaze point.
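As an illustration only (not part of the claimed method; the data structure and function names below are assumptions), the selection of the target position from the detection results may be sketched as follows:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DetectionResult:
    device_id: str                                  # which electronic device produced this result
    gaze_detected: bool                             # whether the pupil gaze point was detected
    position: Optional[Tuple[float, float]] = None  # gaze point coordinates on that device

def find_target_position(first: DetectionResult,
                         seconds: List[DetectionResult]):
    """Return (device_id, position) for the device the pupil gazes at, if any."""
    for result in (first, *seconds):
        if result.gaze_detected:
            return result.device_id, result.position
    return None  # the gaze point is on none of the connected devices

# Example: the gaze point is detected on a second electronic device.
first = DetectionResult("first", False)
second = DetectionResult("second-1", True, (120.0, 340.0))
print(find_target_position(first, [second]))  # ('second-1', (120.0, 340.0))
```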
Based on the position information of the pupil gaze point, the spatial state information of the first electronic device and each second electronic device can be determined.
Optionally, determining the spatial state information of the first electronic device and each second electronic device according to the target position includes: determining, according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, from among the first electronic device and the at least one second electronic device; determining the spatial state information of the target electronic device according to the target position; and determining the spatial state information of each non-target electronic device according to a preset spatial state information set and the spatial state information of the target electronic device.
For example, the first electronic device is connected to two second electronic devices, namely the second electronic device 1 and the second electronic device 2. If the target position is on the first electronic device, the first electronic device is determined as the target electronic device gazed at by the pupil, and the second electronic device 1 and the second electronic device 2 are determined as the non-target electronic devices not gazed at. If the target position is on the second electronic device 2, the second electronic device 2 is determined as the target electronic device gazed at by the pupil, and the second electronic device 1 and the first electronic device are determined as the non-target electronic devices not gazed at.
Determining the spatial state information of the target electronic device according to the target position can be understood as follows: the position information of the pupil gaze point corresponds to the position directly in front of the user in the preset sphere.
The preset spatial state information set may correspond to a preset spatial audio state. Determining the spatial state information of each non-target electronic device according to the preset spatial state information set and the spatial state information of the target electronic device can be understood as follows: since the relative positions of the first electronic device and each second electronic device in the preset sphere are fixed, the spatial state information of each non-target electronic device can be determined once the position information of the pupil gaze point corresponds to the position directly in front of the user in the preset sphere.
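Because the relative positions of the devices in the preset sphere are fixed, recentering on the gazed-at device determines the positions of all other devices. A minimal azimuth-only sketch (real spatial state information would also cover elevation, distance and volume; the function name and angle convention are assumptions):

```python
def recenter_on_target(azimuths: dict, target_id: str) -> dict:
    """Rotate all device azimuths (in degrees, 0 = directly in front of the user)
    so that the gazed-at target device sits at 0, while preserving the fixed
    relative positions of the devices in the preset sphere."""
    offset = azimuths[target_id]
    # Normalize each shifted azimuth into the range (-180, 180].
    return {dev: (az - offset + 180) % 360 - 180 for dev, az in azimuths.items()}

# Example: first device in front (0 deg), second device 90 deg to the right;
# the user's gaze moves to the second device.
positions = {"first": 0.0, "second": 90.0}
print(recenter_on_target(positions, "second"))  # {'first': -90.0, 'second': 0.0}
```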
Fig. 5 is a schematic diagram of a third scenario of an audio playing method according to an embodiment of the present application.
As shown in fig. 5, first, the sensor detects that the pupil gaze point 503 of the user is located on the first electronic device 501, that is, the user focuses on the first electronic device 501. The first electronic device 501 is located directly in front of the user, the spatial position information of the first electronic device 501 in the preset sphere corresponds to a position directly in front of the user, and the volume of the first electronic device 501 is a preset value. The second electronic device 502 is located on the right-hand side of the user, the spatial position information of the second electronic device 502 in the preset sphere corresponds to the right side of the user, and the volume of the second electronic device 502 is slightly smaller than that of the first electronic device 501. At this time, the audio data played by the first electronic device 501 is primary and the audio data played by the second electronic device 502 is secondary.
Second, the user's pupil gaze point 503 moves out past the screen edge of the first electronic device 501. The volume of the first electronic device 501 gradually decreases as the pupil gaze point 503 moves out of the screen. The spatial position information of the first electronic device 501 in the preset sphere gradually moves leftwards as the pupil gaze point 503 moves, tending towards the user's left-hand side.
Next, the pupil gaze point 503 of the user starts to enter the screen edge of the second electronic device 502. The volume of the second electronic device 502 gradually increases as the pupil gaze point 503 enters the screen. The spatial position information of the second electronic device 502 in the preset sphere gradually moves leftwards as the pupil gaze point 503 moves, tending towards a position directly in front of the user.
Finally, the pupil gaze point 503 is located on the second electronic device 502. The volume of the second electronic device 502 gradually increases until it reaches the preset value, and the spatial position information of the second electronic device 502 in the preset sphere corresponds to a position directly in front of the user. The volume of the first electronic device 501 gradually decreases, and the spatial position information of the first electronic device 501 in the preset sphere corresponds to the user's left-hand side. At this time, the audio data played by the second electronic device 502 is primary and the audio data played by the first electronic device 501 is secondary.
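The gradual volume changes described for fig. 5 amount to a crossfade driven by how far the gaze point has travelled from the old device to the new one. A simplified sketch (assuming equal preset volumes for both devices; the function name and the linear curve are assumptions, not details of the application):

```python
def crossfade_volumes(progress: float, preset_volume: float = 1.0):
    """Volumes of the previously gazed device and the newly gazed device
    as the gaze point travels between them (progress 0.0 -> 1.0)."""
    progress = max(0.0, min(1.0, progress))       # clamp to the valid range
    old_volume = preset_volume * (1.0 - progress)  # fades out
    new_volume = preset_volume * progress          # fades in
    return old_volume, new_volume

print(crossfade_volumes(0.0))   # (1.0, 0.0)  gaze still on the old device
print(crossfade_volumes(0.75))  # (0.25, 0.75) gaze mostly on the new device
```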
Optionally, determining the spatial state information of the first electronic device and each second electronic device according to the target position includes: determining, according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, from among the first electronic device and the at least one second electronic device; and determining a corresponding spatial state information combination from a plurality of preset candidate spatial state information combinations according to the target electronic device, where the spatial state information combination corresponding to the target electronic device includes the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
The plurality of preset candidate spatial state information combinations may be spatial state information combinations corresponding to a plurality of preset spatial audio states. For example, the candidate spatial state information combinations may include a spatial state information combination 1 corresponding to the first electronic device being directly in front of the user and a spatial state information combination 2 corresponding to the second electronic device being directly in front of the user. If the target electronic device is the first electronic device, the spatial state information combination 1 may be determined according to the first electronic device, and each piece of spatial state information in the spatial state information combination 1 may be determined as the spatial state information of the first electronic device and each second electronic device.
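Selecting a preset candidate combination is then a simple lookup keyed by the gazed-at device. The table values below are illustrative placeholders only, not values from the application:

```python
# Preset candidate spatial state information combinations, one per possible
# target device; each maps every device to (azimuth in degrees, volume).
CANDIDATE_COMBINATIONS = {
    "first":  {"first": (0.0, 1.0),   "second": (90.0, 0.6)},   # first device gazed at
    "second": {"first": (-90.0, 0.6), "second": (0.0, 1.0)},    # second device gazed at
}

def select_combination(target_device: str) -> dict:
    """Return the spatial state information of every device for a given target."""
    return CANDIDATE_COMBINATIONS[target_device]

print(select_combination("second"))  # the second device moves directly in front (0 degrees)
```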
Step 106, performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data.
Spatial mixing processing may refer to the following: in the case where there are at least two sound sources at different positions, the audio data from the respective sound sources are mixed so that, when the mixed audio data is played by the audio playing device, each audio data sounds as if it comes from its corresponding sound source rather than from the same direction. Although a mixing operation is performed on the plurality of audio data, the mixing operation may merely combine them into one audio file; when the audio file is played, each audio data still sounds independent and appears to come from its corresponding sound source, and the sound sources are located in different directions relative to the user. For example, suppose the spatial position information of the first electronic device corresponds to the front of the user with volume information of value x, and the spatial position information of the second electronic device corresponds to the left of the user with volume information of value y, where x > y. After spatial mixing processing is performed on the first audio data and the second audio data, the first audio data sounds as if it comes from the front of the user at volume x, and the second audio data sounds as if it comes from the left side of the user at volume y.
Through spatial mixing processing, an audio experience with a clear sense of direction can be obtained: each audio data sounds as if it comes from its corresponding sound source, each sound source can be located in a different direction, and various audio parameters such as the volume of each audio data can be set flexibly and independently. In this way, the effect of multiple audio playing devices located in different directions relative to the user simultaneously playing their corresponding audio data can be achieved by playing the spatially mixed audio data through only one audio playing device.
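As an illustration of the idea (not the application's implementation, which would typically use HRTF-based rendering rather than the simple constant-power stereo panning shown here; all names are assumptions):

```python
import math

def pan_gains(azimuth_deg: float):
    """Constant-power left/right gains for a source azimuth:
    -90 = hard left, 0 = directly in front, +90 = hard right."""
    angle = (max(-90.0, min(90.0, azimuth_deg)) / 90.0 + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)

def spatial_mix(sources):
    """Mix mono sample blocks into one stereo block.
    sources: list of (samples, azimuth_deg, volume), one entry per device."""
    length = max(len(samples) for samples, _, _ in sources)
    left, right = [0.0] * length, [0.0] * length
    for samples, azimuth, volume in sources:
        gl, gr = pan_gains(azimuth)
        for i, x in enumerate(samples):
            left[i] += x * volume * gl
            right[i] += x * volume * gr
    return left, right

# First device front/center at volume x = 1.0, second device hard left at y = 0.5.
left, right = spatial_mix([([1.0, 1.0], 0.0, 1.0), ([1.0, 1.0], -90.0, 0.5)])
```

Each source keeps its own direction and volume inside a single mixed stream, which is the effect described above: one playing device, several apparent sound sources.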
Step 108, sending the first target audio data to the audio playing device for playing.
The first electronic device may send the first target audio data to the headphones, thereby playing the first target audio data through the headphones. For example, the spatial state information of the first electronic device corresponds to the front of the user, and the spatial state information of the second electronic device corresponds to the left of the user, and in the case where the user listens to the first target audio data through the headphones, the user can hear the first audio data from the front and with a larger volume, while the user can hear the second audio data from the left and with a smaller volume.
Optionally, the audio playing method further includes: in the case where the audio data played by the first electronic device is replaced, performing spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain second target audio data, and sending the second target audio data to the audio playing device for playing; or, in the case where the audio data played by any second electronic device is replaced, performing spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device according to each piece of spatial state information to obtain third target audio data, and sending the third target audio data to the audio playing device for playing.
In the embodiment of the audio playing method shown in fig. 1, a control instruction of the spatial state information is acquired; the spatial state information of the first electronic device and each second electronic device is determined according to the control instruction; spatial mixing processing is performed on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data; and the first target audio data is sent to the audio playing device for playing. According to the technical solution provided by the embodiments of the present application, spatial mixing processing can be performed according to the spatial state information corresponding to the first electronic device and the at least one second electronic device, thereby avoiding the sense of directional confusion generated when a user simultaneously listens to multiple electronic devices at different positions playing different audio data.
Based on the same technical concept, the present application further provides an embodiment of an audio playing method, as shown in fig. 6. Fig. 6 is a second flowchart of an audio playing method according to an embodiment of the present application. The audio playing device in this embodiment may be a headphone.
Referring to fig. 6, in step 602, a first electronic device is connected to a second electronic device, and an earphone is connected to the first electronic device.
Step 604, the second electronic device transmits audio data to the first electronic device.
In step 606, the user sets spatial location information according to the actual location status or preference of the device.
In step 608, the user sets the volume level of the different scenes, and the user can configure various volume states.
Step 610, performing spatial mixing processing on audio data played by the first electronic device and the second electronic device, and outputting the audio data to the earphone.
After step 610, at least one of step 612, step 618, and step 620 may be performed.
Step 612, pupil gaze point is detected.
Step 614, determine whether the target electronic device has switched.
If yes, go back to execute step 608; if not, go to step 616.
Step 616, maintain the original output state.
Step 618, determine if the audio data is replaced.
If yes, go back to execute step 606; if not, go to step 616.
Step 620, determine whether the volume status is switched.
If yes, go back to execute step 608; if not, go to step 616.
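The branching of steps 612-620 can be summarized as an event handler (a schematic only; the event names are assumptions, and every "no change" branch is taken here to converge on maintaining the original output state of step 616):

```python
def next_step(event: str, changed: bool) -> str:
    """Map a monitored event and whether it signals a change to the next step of fig. 6."""
    transitions = {
        "gaze_detected": "step_608_set_volumes",     # target electronic device switched
        "audio_replaced": "step_606_set_positions",  # played audio data was replaced
        "volume_switched": "step_608_set_volumes",   # volume state switched
    }
    if changed:
        return transitions[event]
    return "step_616_keep_original_output"

print(next_step("audio_replaced", True))   # step_606_set_positions
print(next_step("gaze_detected", False))   # step_616_keep_original_output
```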
The audio playing method provided in the embodiment shown in fig. 6 can implement each process implemented by the foregoing embodiment of the audio playing method, and in order to avoid repetition, a description is omitted here.
It should be noted that, in the audio playing method provided by the embodiments of the present application, the execution body may be an audio playing apparatus, or a control module in the audio playing apparatus for executing the audio playing method. In the embodiments of the present application, an audio playing apparatus executing the audio playing method is taken as an example to describe the audio playing apparatus provided by the embodiments of the present application.
Fig. 7 is a schematic structural diagram of an audio playing device according to an embodiment of the present application.
Referring to fig. 7, an audio playing apparatus is applied to a first electronic device, the first electronic device is connected to at least one second electronic device, and the first electronic device is connected to an audio playing device. The audio playing apparatus includes:
An acquisition module 701, configured to acquire a control instruction of the spatial state information;
A determining module 702, configured to determine spatial state information of the first electronic device and each second electronic device according to the control instruction;
A processing module 703, configured to perform spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information, so as to obtain first target audio data;
and the sending module 704 is configured to send the first target audio data to the audio playing device for playing.
Optionally, the first electronic device is a main sound device and each second electronic device is an auxiliary sound device; the acquisition module is specifically configured to:
acquire a first main-auxiliary switching instruction; the first main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and each second electronic device, so that the spatial state information of the selected second electronic device corresponds to the front of the user, and the spatial state information of the first electronic device corresponds to the non-front of the user;
or, the at least one second electronic device includes a target second electronic device, the target second electronic device is a main sound device, and the first electronic device is an auxiliary sound device; the acquisition module is specifically configured to:
acquire a second main-auxiliary switching instruction; the second main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and the target second electronic device, so that the spatial state information of the first electronic device corresponds to the front of the user, and the spatial state information of the target second electronic device corresponds to the non-front of the user.
Optionally, the acquiring module 701 is specifically configured to:
acquire a first detection result of the first electronic device for the pupil gaze point and a second detection result of each second electronic device for the pupil gaze point.
Optionally, the determining module 702 includes:
A first determining unit, configured to determine a target position gazed at by the pupil according to the first detection result and the at least one second detection result;
A second determining unit, configured to determine the spatial state information of the first electronic device and each second electronic device according to the target position.
Optionally, the second determining unit is specifically configured to:
determine, according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, from among the first electronic device and the at least one second electronic device;
determine the spatial state information of the target electronic device according to the target position;
and determine the spatial state information of each non-target electronic device according to a preset spatial state information set and the spatial state information of the target electronic device.
Optionally, the second determining unit is specifically configured to:
determine, according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, from among the first electronic device and the at least one second electronic device;
and determine a corresponding spatial state information combination from a plurality of preset candidate spatial state information combinations according to the target electronic device; the spatial state information combination corresponding to the target electronic device includes the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
Optionally, the audio playing device further comprises:
An audio mixing module, configured to: in the case where the audio data played by the first electronic device is replaced, perform spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain second target audio data, and send the second target audio data to the audio playing device for playing; or, in the case where the audio data played by any second electronic device is replaced, perform spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device according to each piece of spatial state information to obtain third target audio data, and send the third target audio data to the audio playing device for playing.
Optionally, the spatial state information includes spatial location information and audio parameters, and the acquiring module 701 includes:
a first receiving unit configured to receive a first control instruction for spatial position information;
and/or,
And the second receiving unit is used for receiving a second control instruction aiming at the audio parameter.
Optionally, the first receiving unit is specifically configured to:
for any one of the first electronic device and the at least one second electronic device, determine, on a user interaction interface, the position information of a virtual sound source corresponding to the electronic device in a preset sphere as the spatial position information of the electronic device;
and receive a position adjustment instruction for the virtual sound source; the position adjustment instruction is used for adjusting the position information of the virtual sound source in the preset sphere.
Optionally, the audio parameter comprises volume information; the second receiving unit is specifically configured to:
receive, on a user interaction interface, a volume adjustment instruction for any one of the first electronic device and the at least one second electronic device.
The audio playing apparatus provided by the embodiments of the present application acquires a control instruction of the spatial state information; determines the spatial state information of the first electronic device and each second electronic device according to the control instruction; performs spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data; and sends the first target audio data to the audio playing device for playing. According to the technical solution provided by the embodiments of the present application, spatial mixing processing can be performed according to the spatial state information corresponding to the first electronic device and the at least one second electronic device, thereby avoiding the sense of directional confusion generated when a user simultaneously listens to multiple electronic devices at different positions playing different audio data.
The audio playing device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in the terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc., and the non-mobile electronic device may be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, etc., and the embodiments of the present application are not limited in particular.
The audio playing apparatus in the embodiments of the present application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The audio playing device provided by the embodiment of the application can realize each process realized by the embodiment of the audio playing method, and in order to avoid repetition, the description is omitted here.
Optionally, as shown in fig. 8, an electronic device 800 is further provided in the embodiment of the present application, which includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and capable of being executed on the processor 801, where the program or the instruction implements each process of the embodiment of the audio playing method when executed by the processor 801, and the same technical effects are achieved, and for avoiding repetition, a detailed description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: radio frequency unit 901, network module 902, audio output unit 903, input unit 904, sensor 905, display unit 906, user input unit 907, interface unit 908, memory 909, and processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, so as to implement functions such as charge management, discharge management, and power consumption management. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail herein.
Wherein, the processor 910 is configured to obtain a control instruction of the spatial state information;
determine the spatial state information of the first electronic device and each second electronic device according to the control instruction;
perform spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data;
and sending the first target audio data to the audio playing device for playing.
In the embodiments of the present application, a control instruction of the spatial state information is acquired; the spatial state information of the first electronic device and each second electronic device is determined according to the control instruction; spatial mixing processing is performed on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data; and the first target audio data is sent to the audio playing device for playing. According to the technical solution provided by the embodiments of the present application, spatial mixing processing can be performed according to the spatial state information corresponding to the first electronic device and the at least one second electronic device, thereby avoiding the sense of directional confusion generated when a user simultaneously listens to multiple electronic devices at different positions playing different audio data.
Optionally, the first electronic device is a main sound device located directly in front of the user, and each second electronic device is an auxiliary sound device not located directly in front of the user. The processor 910 is configured to:
acquire the control instruction of the spatial state information, which includes:
acquiring a first main-auxiliary switching instruction; the first main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and each second electronic device, so that the spatial state information of the selected second electronic device corresponds to the front of the user, and the spatial state information of the first electronic device corresponds to the non-front of the user;
or, the at least one second electronic device includes a target second electronic device, the target second electronic device is a main sound device, and the first electronic device is an auxiliary sound device; acquiring the control instruction of the spatial state information includes:
acquiring a second main-auxiliary switching instruction; the second main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and the target second electronic device, so that the spatial state information of the first electronic device corresponds to the front of the user, and the spatial state information of the target second electronic device corresponds to the non-front of the user.
Optionally, the processor 910 is further configured to:
acquiring the control instruction of the spatial state information includes:
acquiring a first detection result of the first electronic device for the pupil gaze point and a second detection result of each second electronic device for the pupil gaze point.
Optionally, the processor 910 is further configured to:
determining the spatial state information of the first electronic device and each second electronic device according to the control instruction includes:
determining a target position gazed at by the pupil according to the first detection result and the at least one second detection result;
and determining the spatial state information of the first electronic device and each second electronic device according to the target position.
Optionally, the processor 910 is further configured to:
determining the spatial state information of the first electronic device and each second electronic device according to the target position includes:
determining, according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, from among the first electronic device and the at least one second electronic device;
determining the spatial state information of the target electronic device according to the target position;
and determining the spatial state information of each non-target electronic device according to a preset spatial state information set and the spatial state information of the target electronic device.
Optionally, the processor 910 is further configured to:
the determining the spatial state information of the first electronic device and each second electronic device according to the target position comprises:
determining, among the first electronic device and the at least one second electronic device, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, according to the target position; and
determining a corresponding spatial state information combination from a plurality of preset candidate spatial state information combinations according to the target electronic device, wherein the spatial state information combination corresponding to the target electronic device comprises the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
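Selecting a preset candidate combination keyed by the gazed-at device amounts to a simple lookup. The sketch below is a minimal illustration; the device names, azimuth/gain fields, and the specific preset values are all assumptions for the example:

```python
# Hypothetical sketch: each candidate combination assigns every device a
# spatial state (the gazed-at device front and loud, the others to the side
# and quieter). The field names and values are illustrative only.

FRONT = {"azimuth_deg": 0.0, "gain": 1.0}
LEFT = {"azimuth_deg": -60.0, "gain": 0.6}

# One preset combination per possible target device.
CANDIDATE_COMBINATIONS = {
    "first_device": {"first_device": FRONT, "second_device_1": LEFT},
    "second_device_1": {"first_device": LEFT, "second_device_1": FRONT},
}

def select_combination(target_device):
    """Return the spatial-state combination for the gazed-at device."""
    return CANDIDATE_COMBINATIONS[target_device]
```

When the user's gaze shifts to a different device, a single lookup swaps the whole set of spatial states at once, which is why this variant needs no per-device computation at gaze time.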
Optionally, the processor 910 is further configured to:
in a case where the audio data played by the first electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device to obtain second target audio data, and sending the second target audio data to the audio playing device for playing;
or,
in a case where the audio data played by each second electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device to obtain third target audio data, and sending the third target audio data to the audio playing device for playing.
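The spatial mixing step itself — combining one stream per device so that each appears to come from its assigned direction — can be sketched as a per-device pan-and-sum. The patent does not specify a rendering method; constant-power stereo panning below is an assumed stand-in for whatever spatialization the device actually uses, and all names are illustrative:

```python
import math

# Hypothetical sketch of spatial mixing: each device's (possibly replaced)
# mono stream is panned by its azimuth from the spatial state information
# and summed into a stereo pair. Constant-power panning is an assumption.

def spatial_mix(streams):
    """streams: list of (samples, azimuth_deg) pairs, one per device,
    azimuth in [-90, 90] (negative = left). Returns (left, right) lists."""
    n = max(len(s) for s, _ in streams)
    left = [0.0] * n
    right = [0.0] * n
    for samples, az in streams:
        # Map azimuth [-90, 90] onto pan angle [0, pi/2].
        theta = (az / 90.0 + 1.0) * math.pi / 4.0
        gl, gr = math.cos(theta), math.sin(theta)  # gl^2 + gr^2 == 1
        for i, x in enumerate(samples):
            left[i] += gl * x
            right[i] += gr * x
    return left, right
```

Because the gains satisfy gl² + gr² = 1, a stream keeps the same perceived loudness as its azimuth changes; when a track is replaced, only its `samples` entry changes and the same mix routine produces the second or third target audio data.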
Optionally, the processor 910 is further configured to:
the spatial state information comprises spatial position information and audio parameters, and the acquiring a control instruction of spatial state information comprises:
receiving a first control instruction for the spatial position information;
and/or,
receiving a second control instruction for the audio parameters.
Optionally, the processor 910 is further configured to:
the receiving a first control instruction for the spatial position information comprises:
for any one of the first electronic device and the at least one second electronic device, determining, on a user interaction interface, position information of a virtual sound source corresponding to the electronic device in a preset sphere as the spatial position information of the electronic device; and
receiving a position adjustment instruction for the virtual sound source, wherein the position adjustment instruction is used for adjusting the position information of the virtual sound source within the preset sphere.
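A virtual sound source constrained to a preset sphere is naturally expressed in spherical coordinates: a position adjustment instruction changes the angles while the radius stays fixed, so the source can never leave the sphere. The class and method names below are illustrative assumptions:

```python
import math

# Hypothetical sketch: a virtual sound source kept on a preset sphere.
# adjust() models a position-adjustment instruction from the user
# interaction interface; cartesian() yields the 3-D position used for
# spatial rendering. All names are assumptions, not the patent's API.

class VirtualSource:
    def __init__(self, radius=1.0, azimuth_deg=0.0, elevation_deg=0.0):
        self.radius = radius          # fixed: defines the preset sphere
        self.azimuth_deg = azimuth_deg
        self.elevation_deg = elevation_deg

    def adjust(self, azimuth_deg, elevation_deg):
        """Apply a position adjustment instruction (angles on the sphere)."""
        self.azimuth_deg = azimuth_deg
        self.elevation_deg = max(-90.0, min(90.0, elevation_deg))

    def cartesian(self):
        """(x, y, z) with x = right, y = front, z = up (listener at origin)."""
        az = math.radians(self.azimuth_deg)
        el = math.radians(self.elevation_deg)
        return (self.radius * math.cos(el) * math.sin(az),
                self.radius * math.cos(el) * math.cos(az),
                self.radius * math.sin(el))
```

Storing angles rather than raw (x, y, z) makes the "stay within the preset sphere" constraint trivial: only the clamp on elevation is needed, and the radius is never touched by an adjustment.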
Optionally, the audio parameters comprise volume information, and the processor 910 is further configured to:
the receiving a second control instruction for the audio parameters comprises:
receiving, on a user interaction interface, a volume adjustment instruction for any one of the first electronic device and the at least one second electronic device.
In the embodiments of the present application, the main sound device and the auxiliary sound device can be switched flexibly among the first electronic device and the at least one second electronic device through the first main-auxiliary switching instruction and the second main-auxiliary switching instruction. Based on pupil gaze point detection, the spatial position information and volume information can be controlled freely and flexibly, providing the user with the audio playing effect best suited to the user as the user's line of sight shifts naturally. By receiving, on the user interaction interface, a position adjustment instruction for the virtual sound source and a volume adjustment instruction for an electronic device, the spatial state information of each electronic device can be set flexibly, enriching the listening experience.
It should be appreciated that in embodiments of the present application, the input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042, with the graphics processor 9041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. Touch panel 9071, also referred to as a touch screen. The touch panel 9071 may include two parts, a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 909 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 910 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 910.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above-mentioned audio playing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, the chip comprising a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is configured to run programs or instructions to implement each process of the above audio playing method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, they may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (12)

1. An audio playing method, applied to a first electronic device, wherein the first electronic device is connected with at least one second electronic device, and the first electronic device is connected with an audio playing device, the method comprising:
acquiring a control instruction of spatial state information, the spatial state information comprising at least one of spatial position information and audio parameters;
determining spatial state information of the first electronic device and each second electronic device according to the control instruction;
performing, according to each piece of spatial state information, spatial mixing processing on audio data played by the first electronic device and audio data played by each second electronic device to obtain first target audio data, wherein the positions of the first electronic device and each second electronic device are different, and the spatial mixing processing comprises mixing the audio data played by the first electronic device and the audio data played by each second electronic device, so that the first target audio data, when played, corresponds to sound sources in different directions; and
sending the first target audio data to the audio playing device for playing.
2. The method of claim 1, wherein the first electronic device is a main sound device located directly in front of a user; each second electronic device is an auxiliary sound device located not directly in front of the user; and the acquiring a control instruction of spatial state information comprises:
acquiring a first main-auxiliary switching instruction, wherein the first main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and each second electronic device, so that the spatial state information of a selected second electronic device corresponds to a position directly in front of the user, and the spatial state information of the first electronic device corresponds to a position not directly in front of the user;
or, the at least one second electronic device comprises a target second electronic device; the target second electronic device is the main sound device, and the first electronic device is the auxiliary sound device; and the acquiring a control instruction of spatial state information comprises:
acquiring a second main-auxiliary switching instruction, wherein the second main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and the target second electronic device, so that the spatial state information of the first electronic device corresponds to a position directly in front of the user, and the spatial state information of the target second electronic device corresponds to a position not directly in front of the user.
3. The method of claim 1, wherein the acquiring a control instruction of spatial state information comprises:
acquiring a first detection result of the first electronic device for a pupil gaze point and a second detection result of each second electronic device for the pupil gaze point.
4. The method of claim 3, wherein the determining spatial state information of the first electronic device and each second electronic device according to the control instruction comprises:
determining a target position gazed at by the pupil according to the first detection result and at least one second detection result; and
determining the spatial state information of the first electronic device and each second electronic device according to the target position.
5. The method of claim 4, wherein the determining the spatial state information of the first electronic device and each second electronic device according to the target position comprises:
determining, among the first electronic device and the at least one second electronic device, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, according to the target position;
determining spatial state information of the target electronic device according to the target position; and
determining spatial state information of each non-target electronic device according to a preset spatial state information set and the spatial state information of the target electronic device.
6. The method of claim 4, wherein the determining the spatial state information of the first electronic device and each second electronic device according to the target position comprises:
determining, among the first electronic device and the at least one second electronic device, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at, according to the target position; and
determining a corresponding spatial state information combination from a plurality of preset candidate spatial state information combinations according to the target electronic device, wherein the spatial state information combination corresponding to the target electronic device comprises the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
7. The method of claim 1, further comprising:
in a case where the audio data played by the first electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device to obtain second target audio data, and sending the second target audio data to the audio playing device for playing;
or,
in a case where the audio data played by each second electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device to obtain third target audio data, and sending the third target audio data to the audio playing device for playing.
8. The method of claim 1, wherein the acquiring a control instruction of spatial state information comprises:
receiving a first control instruction for the spatial position information;
and/or,
receiving a second control instruction for the audio parameters.
9. The method of claim 8, wherein the receiving a first control instruction for the spatial position information comprises:
for any one of the first electronic device and the at least one second electronic device, determining, on a user interaction interface, position information of a virtual sound source corresponding to the electronic device in a preset sphere as the spatial position information of the electronic device; and
receiving a position adjustment instruction for the virtual sound source, wherein the position adjustment instruction is used for adjusting the position information of the virtual sound source within the preset sphere.
10. The method of claim 8, wherein the audio parameters comprise volume information, and the receiving a second control instruction for the audio parameters comprises:
receiving, on a user interaction interface, a volume adjustment instruction for any one of the first electronic device and the at least one second electronic device.
11. An audio playing apparatus, applied to a first electronic device, wherein the first electronic device is connected with at least one second electronic device, and the first electronic device is connected with an audio playing device, the apparatus comprising:
an acquisition module, configured to acquire a control instruction of spatial state information, the spatial state information comprising at least one of spatial position information and audio parameters;
a determining module, configured to determine spatial state information of the first electronic device and each second electronic device according to the control instruction;
a processing module, configured to perform, according to each piece of spatial state information, spatial mixing processing on audio data played by the first electronic device and audio data played by each second electronic device to obtain first target audio data, wherein the positions of the first electronic device and each second electronic device are different, and the spatial mixing processing comprises mixing the audio data played by the first electronic device and the audio data played by each second electronic device, so that the first target audio data, when played, corresponds to sound sources in different directions; and
a sending module, configured to send the first target audio data to the audio playing device for playing.
12. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the audio playing method according to any one of claims 1 to 10.
CN202210225832.8A 2022-03-07 2022-03-07 Audio playing method and electronic equipment Active CN114650496B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210225832.8A CN114650496B (en) 2022-03-07 2022-03-07 Audio playing method and electronic equipment
PCT/CN2023/079874 WO2023169367A1 (en) 2022-03-07 2023-03-06 Audio playing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210225832.8A CN114650496B (en) 2022-03-07 2022-03-07 Audio playing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114650496A CN114650496A (en) 2022-06-21
CN114650496B true CN114650496B (en) 2024-09-27

Family

ID=81993315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210225832.8A Active CN114650496B (en) 2022-03-07 2022-03-07 Audio playing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN114650496B (en)
WO (1) WO2023169367A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114650496B (en) * 2022-03-07 2024-09-27 维沃移动通信有限公司 Audio playing method and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586526A (en) * 2020-05-26 2020-08-25 维沃移动通信有限公司 Audio output method, audio output device and electronic equipment

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7876903B2 (en) * 2006-07-07 2011-01-25 Harris Corporation Method and apparatus for creating a multi-dimensional communication space for use in a binaural audio system
US10038957B2 (en) * 2013-03-19 2018-07-31 Nokia Technologies Oy Audio mixing based upon playing device location
US10079941B2 (en) * 2014-07-07 2018-09-18 Dolby Laboratories Licensing Corporation Audio capture and render device having a visual display and user interface for use for audio conferencing
CA3029123A1 (en) * 2016-06-23 2017-12-28 Josh KILLHAM Positional audio assignment system
US9955281B1 (en) * 2017-12-02 2018-04-24 Philip Scott Lyren Headphones with a digital signal processor (DSP) and error correction
CN110677377B (en) * 2018-07-03 2022-03-04 中兴通讯股份有限公司 Recording processing and playing method and device, server, terminal and storage medium
CN109286862B (en) * 2018-07-31 2022-02-18 咪咕音乐有限公司 Information processing method and device, electronic device and storage medium
CN109379490B (en) * 2018-09-30 2021-02-05 Oppo广东移动通信有限公司 Audio playing method and device, electronic equipment and computer readable medium
US11206504B2 (en) * 2019-04-02 2021-12-21 Syng, Inc. Systems and methods for spatial audio rendering
CN112533041A (en) * 2019-09-19 2021-03-19 百度在线网络技术(北京)有限公司 Video playing method and device, electronic equipment and readable storage medium
JP7492330B2 (en) * 2019-12-04 2024-05-29 ローランド株式会社 headphone
CN113890932A (en) * 2020-07-02 2022-01-04 华为技术有限公司 An audio control method, system and electronic device
CN112286481A (en) * 2020-10-28 2021-01-29 维沃移动通信(杭州)有限公司 Audio output method and electronic equipment
CN112581932A (en) * 2020-11-26 2021-03-30 交通运输部南海航海保障中心广州通信中心 Wired and wireless sound mixing system based on DSP
CN112542183B (en) * 2020-12-09 2022-03-18 阿波罗智联(北京)科技有限公司 Audio data processing method, device, equipment and storage medium
CN113793625B (en) * 2021-08-04 2024-06-25 维沃移动通信有限公司 Audio playing method and device
CN113840032A (en) * 2021-09-23 2021-12-24 Oppo广东移动通信有限公司 Audio control method, audio control device and electronic equipment
CN113823250B (en) * 2021-11-25 2022-02-22 广州酷狗计算机科技有限公司 Audio playing method, device, terminal and storage medium
CN113821190B (en) * 2021-11-25 2022-03-15 广州酷狗计算机科技有限公司 Audio playing method, device, equipment and storage medium
CN114650496B (en) * 2022-03-07 2024-09-27 维沃移动通信有限公司 Audio playing method and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586526A (en) * 2020-05-26 2020-08-25 维沃移动通信有限公司 Audio output method, audio output device and electronic equipment

Also Published As

Publication number Publication date
CN114650496A (en) 2022-06-21
WO2023169367A1 (en) 2023-09-14

Similar Documents

Publication Publication Date Title
US11375560B2 (en) Point-to-point ad hoc voice communication
CN112237012B (en) Apparatus and method for controlling audio in multi-view omni-directional contents
CN109660817B (en) Video live broadcast method, device and system
CN110719529B (en) Multi-channel video synchronization method, device, storage medium and terminal
US20230024761A1 (en) Method for playing videos and electronic device
US20170192741A1 (en) Method, System, and Computer Storage Medium for Voice Control of a Split-Screen Terminal
CN112764710A (en) Audio playing mode switching method and device, electronic equipment and storage medium
CN112673651A (en) Multi-view multi-user audio user experience
CN107896337A (en) Information popularization method, apparatus and storage medium
CN114650496B (en) Audio playing method and electronic equipment
CN112291672A (en) Speaker control method, control device and electronic equipment
Marentakis et al. A comparison of feedback cues for enhancing pointing efficiency in interaction with spatial audio displays
CN113672191B (en) Audio playback method and device
CN111176605A (en) Audio output method and electronic equipment
JP2018522294A (en) Method, apparatus, program, and recording medium for controlling operating state
CN113038333B (en) Bluetooth headset control method and device, electronic equipment and readable storage medium
WO2023246166A1 (en) Method and apparatus for adjusting video progress, and computer device and storage medium
CN106293596A (en) A kind of control method and electronic equipment
CN113115179B (en) Working state adjusting method and device
CN113840033B (en) Audio data playing method and device
CN114035765A (en) Audio playing method and device
CN113709652B (en) Audio play control method and electronic equipment
CN104423871A (en) Information processing method and electronic device
CN115348240B (en) Voice call method, device, electronic equipment and storage medium for sharing document
CN115955624A (en) Earphone control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant