Detailed Description
To facilitate a clear description of the technical solutions in the embodiments of the present application, the following briefly describes some terms and techniques involved in the embodiments of the present application:
1. 3A algorithm
The 3A algorithm may include acoustic echo cancellation (acoustic echo cancellation, AEC), background noise suppression (automatic noise suppression, ANS), and automatic gain control (automatic gain control, AGC).
2. mSBC codec
Modified sub-band coding (mSBC) is a codec technique that can encode or decode data transmitted between an electronic device and a Bluetooth headset.
3. Terminology
In embodiments of the present application, the words "first," "second," and the like are used to distinguish between identical or similar items that have substantially the same function and effect. For example, the first chip and the second chip are merely for distinguishing different chips, and the order of the different chips is not limited. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the number or the order of execution, and that items qualified by "first," "second," and the like are not necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or" describes an association of associated objects, meaning that there may be three relationships, e.g., A and/or B, and that there may be A alone, while A and B are present, and B alone, where A, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (a, b, or c) of a, b, c, a-b, a-c, b-c, or a-b-c may be represented, wherein a, b, c may be single or plural.
4. Electronic equipment
The electronic device according to the embodiments of the present application may also be any form of terminal device. For example, the electronic device may include a mobile phone, a tablet computer, a palm computer, a notebook computer, a mobile internet device (mobile internet device, MID), a wearable device, a virtual reality (virtual reality, VR) device, an augmented reality (augmented reality, AR) device, a wireless terminal in industrial control (industrial control), a wireless terminal in self driving (self driving), a wireless terminal in remote medical surgery (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), a cellular phone, a cordless phone, a session initiation protocol (session initiation protocol, SIP) phone, a wireless local loop (wireless local loop, WLL) station, a personal digital assistant (personal digital assistant, PDA), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a wireless communication device in a future communication system, or the like. The form of the electronic device is not limited in the embodiments of the present application.
By way of example, and not limitation, in embodiments of the application, the electronic device may also be a wearable device. The wearable device may also be called a wearable intelligent device, and is a general term for devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the clothing or accessories of the user. The wearable device is not only a hardware device, but can also realize powerful functions through software support, data interaction, and cloud interaction. Generalized wearable intelligent devices include devices that are full-featured, large-sized, and able to realize complete or partial functions without relying on a smartphone, such as smart watches and smart glasses, as well as devices that focus only on a certain type of application function and need to be used together with another device such as a smartphone, for example, various smart bracelets and smart jewelry for physical sign monitoring.
In addition, in the embodiments of the application, the electronic device may also be an electronic device in an internet of things (internet of things, IoT) system. IoT is an important component of the development of future information technology, and its main technical characteristic is that things are connected to a network through communication technologies, so as to realize an intelligent network of man-machine interconnection and interconnection of things.
The electronic device in the embodiments of the present application may also be referred to as a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user equipment, etc.
In an embodiment of the present application, the electronic device or each network device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system layer. The hardware layer includes hardware such as a central processing unit (central processing unit, CPU), a memory management unit (memory management unit, MMU), and a memory (also referred to as a main memory). The operating system may be any one or more computer operating systems that implement service processing through processes (processes), for example, a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software.
By way of example, fig. 1 shows a schematic diagram of an electronic device.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the application, the electronic device may include more or fewer components than illustrated, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a SIM card interface, and/or a USB interface, among others.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like. The storage data area may store data created during use of the electronic device (e.g., audio data, a phonebook, etc.), and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS). The processor 110 performs various function applications and data processing of the electronic device by executing the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor. For example, the method of the embodiments of the present application may be performed.
The electronic device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as audio playback or recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also referred to as a "loudspeaker," is used to convert an audio electrical signal into a sound signal; the electronic device may include 1 or N speakers 170A, where N is a positive integer greater than 1. The electronic device may listen to music, play video sound, or conduct a hands-free call through the speaker 170A. The receiver 170B, also referred to as an "earpiece," is used to convert an audio electrical signal into a sound signal. When the electronic device answers a call or plays a voice message, the voice can be heard by placing the receiver 170B close to the human ear. The microphone 170C, also referred to as a "mike" or "mic," is used to convert a sound signal into an electrical signal. The earphone interface 170D is used to connect a wired earphone.
Fig. 2 is a software configuration block diagram of an electronic device according to an embodiment of the present application. The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers: from top to bottom, an application layer, an application framework layer, Android runtime (Android runtime) and system libraries, a hardware abstraction layer, and a kernel layer.
The application layer may also be referred to as an application program layer, and may include a series of application packages. As shown in FIG. 2, the application packages may include applications such as audio recording, telephone, music, calendar, camera, games, memo, and video. The applications may include system applications and third-party applications.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a resource manager, a content provider, a view system, and the like. The application framework layer may also include a recording path control module and the like.
The window manager is used for managing window programs. The window manager may obtain the size of the display screen, determine whether there is a status bar, lock the screen, respond to touching or dragging the screen, capture the screen, and the like.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The content provider is used for realizing the function of data sharing among different application programs, allowing one program to access the data in the other program, and simultaneously ensuring the safety of the accessed data.
The view system may be responsible for interface rendering and event handling for the application.
The recording path control module may be used to transfer recording data to the recording application, and may also be used to determine whether the recording application is an application in the application whitelist.
Android runtime (Android runtime) includes a core library and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
The core library comprises two parts: one part is the functions that need to be called by the Java language, and the other part is the Android core library.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may also be referred to as a Native layer, and may include a plurality of functional modules, for example, a media library, a function library, and a graphics processing library (e.g., OpenGL ES).
The media library supports playback and recording of a variety of commonly used audio and video formats, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The function library provides multiple service API interfaces for the developer, and is convenient for the developer to integrate and realize various functions quickly.
The graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The hardware abstraction layer (hardware abstraction layer, HAL) is a layer of structure abstracted between the kernel layer and Android runtime. The hardware abstraction layer may be an encapsulation of hardware drivers that provides a unified interface for invocation by upper-layer applications. In the embodiment of the application, the hardware abstraction layer may include a recording algorithm module, a sound mixing (audio mixing, mix) module, and the like.
The recording algorithm module can be used for transferring the recording data to the recording path control module, and can also be used for performing processing such as amplification, noise reduction, and filtering on the recording data.
The sound mixing module can be used for mixing sound recording data picked up by the Bluetooth headset mic with downlink channel data of the SCO channel. The mixing module may also be referred to as a mixing mix module.
The kernel layer is a layer between hardware and software. The kernel layer may include a display driver, a camera driver, an audio driver, a battery driver, a bluetooth driver, a central processor driver, a USB driver, etc.
It should be noted that the embodiments of the present application are only illustrated by using the Android system; in other operating systems (such as a Windows system or an iOS system), the scheme of the present application can also be implemented, as long as the functions implemented by each functional module are similar to those implemented by the embodiments of the present application.
When making a call or listening to music, the user can connect a Bluetooth headset to the electronic device through Bluetooth, and use the Bluetooth headset for the call or to listen to music. In addition, the user can also use the Bluetooth headset to record audio or video. The Bluetooth headset may include a true wireless stereo (true wireless stereo, TWS) Bluetooth headset. For convenience of description, a mobile phone is used below as an example of the electronic device.
However, when the user uses the Bluetooth headset to record, the Bluetooth headset cannot play the sound of the surrounding environment; meanwhile, wearing the Bluetooth headset blocks the user from hearing the sound of the external environment, so the user may fail to hear nearby people speaking or the sound of surrounding vehicles. Therefore, when the user uses the Bluetooth headset to record, the pass-through capability for surrounding ambient sound is poor, which degrades user experience.
In view of this, according to the audio control method provided by the embodiments of the application, while the mobile phone and the Bluetooth headset transmit the recording data, the Bluetooth headset can be used to play the surrounding ambient sound, so that transparent transmission of the ambient sound is realized while the recording is saved, improving user experience.
For convenience of description, in the embodiments of the present application, the data link over which the mobile phone and the Bluetooth headset transmit the recording may be referred to as a recording path, and the data link over which the Bluetooth headset plays the ambient sound may be referred to as a transparent transmission path. The recording path and the transparent transmission path are described below with reference to fig. 3.
As shown in fig. 3, the mobile phone 300 may include a top microphone 301 and a bottom microphone 302, where the microphones may also be referred to as mic; the specific positions of the top mic301 and the bottom mic302 in the mobile phone 300 are not limited in the embodiments of the present application. The mobile phone 300 may be provided with a recording application 303, where the recording application 303 may be a preset application of the mobile phone 300 or may be a third-party application, which is not limited in the embodiments of the present application.
The bluetooth headset 304 may include a headset mic305 and a headset speaker 306.
A recording channel 307 and a transparent channel 308 can be set up between the mobile phone 300 and the Bluetooth headset 304.
Wherein the recording path 307 may be used to communicate recording data. Illustratively, at the time of recording, the user may open the recording function of the recording application 303 of the mobile phone 300. Based on the established recording path 307, the earphone mic305 of the bluetooth headset 304 may pick up a recording signal, which may be transferred to the recording application 303 via the recording path 307, and the recording application 303 may save the recording signal as a recording file.
The transparent transmission path 308 may be used to transfer ambient sound pass-through data. In one possible implementation, the headset mic305 of the Bluetooth headset 304 may pick up the ambient sound and return the ambient sound to the headset speaker 306 based on the established transparent transmission path 308, and the human ear can then hear the ambient sound. In another possible implementation, the top mic301 and/or the bottom mic302 of the mobile phone 300 may pick up the ambient sound and return the ambient sound to the headset speaker 306 based on the established transparent transmission path 308, and the human ear can then hear the ambient sound.
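In the embodiments of the present application this pass-through is implemented at the DSP/HAL level; purely as an illustration of the underlying data path (pick up ambient sound from a microphone and immediately write it to a speaker), the following Kotlin sketch shows an application-level analogue on Android. The sample rate, audio source, and stream type are illustrative assumptions, not part of the embodiments.

```kotlin
import android.annotation.SuppressLint
import android.media.AudioFormat
import android.media.AudioManager
import android.media.AudioRecord
import android.media.AudioTrack
import android.media.MediaRecorder

// Illustrative app-level loopback: read ambient sound from the mic and play it back.
// The embodiments do this in the headset DSP / phone HAL; this only shows the data path.
@SuppressLint("MissingPermission") // assumes RECORD_AUDIO has been granted
fun passThroughLoop(running: () -> Boolean) {
    val sampleRate = 16_000
    val bufSize = AudioRecord.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT)
    val recorder = AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize)
    val player = AudioTrack(AudioManager.STREAM_VOICE_CALL, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize,
        AudioTrack.MODE_STREAM)

    val buffer = ShortArray(bufSize / 2)
    recorder.startRecording()
    player.play()
    while (running()) {
        val read = recorder.read(buffer, 0, buffer.size)  // pick up ambient sound
        if (read > 0) player.write(buffer, 0, read)       // return it to the speaker
    }
    recorder.stop(); recorder.release()
    player.stop(); player.release()
}
```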
It can be understood that, after the recording path 307 and the transparent transmission path 308 are set up, a recording process in the application framework layer of the mobile phone can be started; when the connection state of the Bluetooth headset is detected, the recording process can trigger the recording path to transmit the recording data and the transparent transmission path to transmit the pass-through data.
In some implementations, the working paths between the Bluetooth headset and the mobile phone may include an advanced audio distribution profile (advanced audio distribution profile, A2DP) path and a synchronous connection oriented (synchronous connection oriented, SCO) path, which are based on protocols supporting high-fidelity stereo output and low-fidelity bidirectional transmission, respectively. The A2DP path can be used when the Bluetooth headset and the mobile phone transmit audio data such as music and prompt tones, and the SCO path can be used when the Bluetooth headset and the mobile phone transmit uplink and downlink audio data such as a call.
The audio control method provided by the embodiments of the application can multiplex the SCO path to set up a recording path between the mobile phone and the Bluetooth headset.
Fig. 4 below shows the setup flow of the recording path.
S401, starting a recording application.
The method for starting the recording application can comprise clicking an icon of the recording application, or can comprise voice input to start the recording application, and the method for starting the recording application is not limited in the embodiment of the application.
S402, monitoring connection of a Bluetooth headset.
In one possible implementation, the recording application may detect the discovery and connection of the Bluetooth headset through a registered broadcast.
In another possible implementation, after detecting the bluetooth headset connection, the mobile phone may report a message of the bluetooth headset connection to the recording application. It will be appreciated that this implementation may enable the initiation of the recording application and the connection detection of the bluetooth headset to be synchronized.
The connection monitoring method of the Bluetooth headset is not limited in the embodiment of the application.
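As an illustration of the registered-broadcast approach in the first implementation above, the following Kotlin sketch (class and callback names are hypothetical) registers a receiver for Bluetooth link-level connection events on Android:

```kotlin
import android.bluetooth.BluetoothDevice
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

// Hypothetical helper: listens for Bluetooth device connect/disconnect broadcasts.
class HeadsetConnectionMonitor(private val onConnected: (BluetoothDevice) -> Unit) {

    private val receiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            val device: BluetoothDevice? =
                intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE)
            if (intent.action == BluetoothDevice.ACTION_ACL_CONNECTED && device != null) {
                // A Bluetooth device (for example, a headset) has connected at the link level.
                onConnected(device)
            }
        }
    }

    fun register(context: Context) {
        val filter = IntentFilter().apply {
            addAction(BluetoothDevice.ACTION_ACL_CONNECTED)
            addAction(BluetoothDevice.ACTION_ACL_DISCONNECTED)
        }
        context.registerReceiver(receiver, filter)
    }

    fun unregister(context: Context) = context.unregisterReceiver(receiver)
}
```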
S403, opening an SCO path from the Bluetooth headset to the mobile phone.
When the recording application is started and the Bluetooth headset is connected to the mobile phone, the recording application can call related interfaces, pass in parameters such as the starting of the recording application and the connection of the Bluetooth headset, and then open an SCO path from the Bluetooth headset to the mobile phone. The SCO path may include an uplink path, which may be understood as the path from the Bluetooth headset to the mobile phone, and a downlink path, which may be understood as the path from the mobile phone to the Bluetooth headset.
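By way of illustration only, on Android an application can ask the system to open the SCO audio path through AudioManager. The following minimal Kotlin sketch assumes the MODIFY_AUDIO_SETTINGS permission and a connected Bluetooth headset, and is not the interface used inside the embodiments:

```kotlin
import android.content.Context
import android.media.AudioManager

// Minimal sketch: request and release the Bluetooth SCO audio path via AudioManager.
fun startScoPath(context: Context) {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    audioManager.mode = AudioManager.MODE_IN_COMMUNICATION  // voice-call-like routing
    audioManager.startBluetoothSco()                        // ask the system to open SCO
    audioManager.isBluetoothScoOn = true                    // route audio over SCO once it is up
}

fun stopScoPath(context: Context) {
    val audioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    audioManager.isBluetoothScoOn = false
    audioManager.stopBluetoothSco()
    audioManager.mode = AudioManager.MODE_NORMAL
}
```

In practice an application would also listen for AudioManager.ACTION_SCO_AUDIO_STATE_UPDATED to learn when the SCO link is actually established before routing audio.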
S404, the Bluetooth headset transmits the sound recording to the mobile phone side DSP through the SCO channel.
After the SCO path from the Bluetooth headset to the mobile phone is opened, the Bluetooth headset mic can pick up the recording data and transfer the recording data through the uplink path of the SCO path to the digital signal processor (DSP) on the mobile phone side. The recording data may also be referred to as sound data or voice data.
S405, the DSP transmits the recording data to a recording module.
After the DSP on the mobile phone side acquires the recording data, the control recording module can be called to transfer the recording data to the chip platform recording algorithm module of the hardware abstraction layer. The chip platform recording algorithm module can perform processing such as amplification, noise reduction, and filtering on the recording data. The control recording module may also be referred to as a control recording interface or a recording module, and the chip platform recording algorithm module may also be referred to as a recording algorithm module. It will be appreciated that the process of the DSP invoking the control recording module to transfer the recording data to the recording algorithm module may be referred to as forcibly carrying data or carrying data.
S406, Bluetooth recording storage.
After the recording algorithm module acquires the recording data, the recording data can be transferred to the recording application of the application layer through the recording path control module of the application framework layer. The recording application can save the recording data as a Bluetooth recording file, which may also be referred to as a recording file.
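As a hedged application-level illustration of saving the recording, the following Kotlin sketch captures PCM data with AudioRecord and appends it to a raw file; with the SCO path active, many devices route this input from the headset microphone. The file, sample rate, and audio source are assumptions for illustration:

```kotlin
import android.annotation.SuppressLint
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import java.io.File
import java.io.FileOutputStream

// Illustrative sketch: capture PCM audio and append it to a raw recording file.
// With the SCO path active, many devices route this input from the headset mic.
@SuppressLint("MissingPermission") // assumes RECORD_AUDIO has been granted
fun recordToFile(outFile: File, running: () -> Boolean) {
    val sampleRate = 16_000 // SCO voice links typically carry 8/16 kHz mono audio
    val bufSize = AudioRecord.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT)
    val recorder = AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize)
    val bytes = ByteArray(bufSize)

    recorder.startRecording()
    FileOutputStream(outFile, /* append = */ true).use { out ->
        while (running()) {
            val read = recorder.read(bytes, 0, bytes.size)
            if (read > 0) out.write(bytes, 0, read) // raw PCM; wrap in a WAV header if needed
        }
    }
    recorder.stop()
    recorder.release()
}
```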
After the recording path is set up, the audio control method of the embodiments of the application can also set up a transparent transmission path between the mobile phone and the Bluetooth headset, so that pass-through of the ambient sound can be realized while the Bluetooth headset is used for recording. The setup of the transparent transmission path may include: (1) setting up the transparent transmission path based on the Bluetooth headset mic picking up the ambient sound; and (2) setting up the transparent transmission path based on the mobile phone mic picking up the ambient sound.
Fig. 5 below shows (1) the flow of setting up the transparent transmission path based on the Bluetooth headset mic picking up the ambient sound.
S501, starting a recording application.
The method for starting the recording application may refer to the related description in step S401 in the embodiment corresponding to fig. 4, which is not repeated.
S502, monitoring connection of a Bluetooth headset.
The implementation manner of the connection monitoring of the bluetooth headset may refer to the related description in step S402 of the corresponding embodiment of fig. 4, which is not repeated.
S503, opening an SCO path from the Bluetooth headset to the mobile phone.
The implementation manner of opening the SCO path from the bluetooth headset to the mobile phone may refer to the related description in step S403 in the embodiment corresponding to fig. 4, which is not repeated.
S504, loading an uplink and downlink mixing mix module in the DSP of the Bluetooth earphone side.
When the SCO path from the Bluetooth headset to the mobile phone is opened, an uplink-downlink sound mixing (mix) module can be loaded in the DSP on the Bluetooth headset side, to facilitate the subsequent mixing and transfer of the ambient sound data. The uplink-downlink mixing mix module may also be referred to as a mix module.
S505, the DSP at the Bluetooth earphone side synchronously carries the environmental sound data to the mixing mix module.
After the SCO path from the Bluetooth headset to the mobile phone is opened, the Bluetooth headset mic can pick up the ambient sound data. It can be understood that the ambient sound data picked up by the Bluetooth headset mic may also be understood as recording data.
On the one hand, the ambient sound data is transmitted to the recording application through the recording path, namely the uplink path of the SCO path, so that the recording file is saved. For the specific process of the Bluetooth headset transmitting the recording data to the recording application of the mobile phone, refer to the related description in the embodiment corresponding to fig. 4; details are not repeated.
On the other hand, the Bluetooth headset mic may transfer the ambient sound data to a CODEC on the Bluetooth headset side, and the CODEC may transfer the ambient sound data, after converting it from an analog signal to a digital signal, to the DSP on the Bluetooth headset side. After acquiring the ambient sound data, the DSP on the Bluetooth headset side may call the relevant interface to transfer the ambient sound data to the mixing mix module already loaded in step S504.
S506, mixing the ambient sound data picked up by the Bluetooth headset mic into the SCO downlink path.
The DSP on the Bluetooth earphone side can mix the environmental sound data picked up by the Bluetooth earphone mic with the downlink channel data of the SCO channel through the mixing mix module.
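A mix operation of this kind can be as simple as a per-sample sum with saturation. The following Kotlin sketch is an illustration of that idea only, not the headset-side DSP implementation:

```kotlin
// Illustrative per-sample mix of 16-bit PCM frames: ambient sound + SCO downlink data.
// Saturating the sum avoids wrap-around distortion when both signals are loud.
fun mixFrames(ambient: ShortArray, scoDownlink: ShortArray): ShortArray {
    require(ambient.size == scoDownlink.size) { "frames must be the same length" }
    return ShortArray(ambient.size) { i ->
        val sum = ambient[i].toInt() + scoDownlink[i].toInt()
        sum.coerceIn(Short.MIN_VALUE.toInt(), Short.MAX_VALUE.toInt()).toShort()
    }
}
```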
S507, the ambient sound data is transmitted to the headset speaker, so that pass-through of the ambient sound is realized.
After the mixing mix module mixes the ambient sound data, the DSP on the Bluetooth headset side can transmit the ambient sound data to the headset speaker, so that transparent transmission of the ambient sound is realized.
Fig. 6 shows a block interaction diagram of a sound recording channel and a transparent transmission channel based on the bluetooth headset mic picking up environmental sounds.
6.1, Recording path.
After the Bluetooth headset mic picks up the recording data, it can transfer the recording data to a CODEC on the Bluetooth headset side; after the CODEC converts the recording data from an analog signal to a digital signal, it can transfer the recording data of the digital signal to the DSP on the Bluetooth headset side. The DSP on the Bluetooth headset side can perform 3A algorithm processing on the recording data, and after the 3A algorithm processing, transfer the recording data to the Bluetooth chip on the Bluetooth headset side. The Bluetooth chip on the Bluetooth headset side can perform mSBC encoding on the recording data; after the mSBC encoding is completed, the Bluetooth chip on the Bluetooth headset side can transmit the recording data to the Bluetooth chip on the mobile phone side through the uplink path of the SCO path, and the Bluetooth chip on the mobile phone side can perform mSBC decoding on the recording data.
For the SCO channel in the call scene, the mobile phone can transmit the decoded audio data to a high-fidelity (HIFI) DSP on the mobile phone side, and the HIFI DSP can send the audio data out through a Modem after performing 3A algorithm processing on the audio data.
It can be understood that the execution logic of the SCO path in the recording scenario is different from the execution logic of the SCO path in the call scenario, and in the recording scenario, the execution logic of the SCO path in the call scenario may be suspended, and the audio data may not be sent out through the Modem.
Specifically, for the SCO path in the recording scenario, after the mobile phone transfers the decoded recording data to the HIFI DSP on the mobile phone side, the HIFI DSP can call the control recording module to transfer the recording data to the recording algorithm module of the hardware abstraction layer. After the recording algorithm module acquires the recording data, the recording data can be transferred to the recording application of the application layer through the recording path control module of the application framework layer, and the recording application can save the recording data as a Bluetooth recording file.
It can be understood that, in order to ensure stable operation of the mobile phone, the mobile phone can test some recording applications, and the recording path can be used for the tested recording applications. In one possible implementation, the mobile phone may add the tested recording applications to an application whitelist; when the recording path control module determines that the current recording application is in the application whitelist, the recording path control module may transmit the recording data to the recording application. In this way, the recording application can run normally on the recording path, and the probability of an abnormality occurring while the application runs is reduced.
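The whitelist check itself can be a simple membership test on the requesting application's identifier. The following Kotlin sketch illustrates the idea; the package names are hypothetical examples rather than an actual tested-application list:

```kotlin
// Illustrative whitelist check for the recording path control module.
object RecordingPathControl {
    private val appWhitelist = setOf(
        "com.example.soundrecorder",
        "com.example.voicememo"
    )

    /** Recording data is forwarded only to applications that passed testing. */
    fun shouldDeliverRecording(packageName: String): Boolean =
        packageName in appWhitelist
}
```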
6.2, Transparent transmission path based on the Bluetooth headset mic picking up the ambient sound.
When the SCO path from the Bluetooth headset to the mobile phone is opened, an uplink-downlink mixing mix module can be loaded in the DSP on the Bluetooth headset side. After the Bluetooth headset mic picks up the ambient sound data, the DSP on the Bluetooth headset side can perform 3A algorithm processing on the ambient sound data and transfer the ambient sound data to the mixing mix module loaded in the DSP on the Bluetooth headset side, and the mixing mix module can mix the ambient sound data with the downlink channel data of the SCO path. After the mixing, the DSP on the Bluetooth headset side can transmit the ambient sound data to the headset speaker, thereby realizing transparent transmission of the ambient sound.
Fig. 7 shows a timing diagram of a sound recording path and a transparent transmission path for picking up ambient sound based on a bluetooth headset mic.
After a user opens the recording application, the recording application can start the SCO path between the Bluetooth headset and the mobile phone based on the information that the recording application has been started and the Bluetooth headset is connected.
7.1, Recording path.
The Bluetooth headset mic can pick up the recording data and transfer the recording data to the DSP on the Bluetooth headset side; the DSP on the Bluetooth headset side can transfer the recording data to the Bluetooth chip on the Bluetooth headset side after performing 3A algorithm processing on the recording data, and the Bluetooth chip on the Bluetooth headset side can transmit the recording data to the Bluetooth chip on the mobile phone side after performing mSBC encoding on the recording data.
After performing mSBC decoding on the recording data, the Bluetooth chip on the mobile phone side can transfer the recording data to the DSP on the mobile phone side, and the DSP on the mobile phone side can call the control recording module to transfer the recording data to the recording algorithm module. After the recording algorithm module acquires the recording data, the recording data can be transferred to the recording application through the recording path control module, and the recording application can then save the recording data as a Bluetooth recording file. For the specific process of the Bluetooth headset transmitting the recording data to the recording application of the mobile phone, refer to the related descriptions in the embodiments corresponding to fig. 4 and fig. 6; details are not repeated.
7.2, Transparent transmission path based on the Bluetooth headset mic picking up the ambient sound.
It will be appreciated that the recorded data picked up by the bluetooth headset mic may also be referred to as ambient sound data.
While the SCO path between the Bluetooth headset and the mobile phone is started, a mixing mix module can be loaded in the DSP on the Bluetooth headset side. After the mixing mix module obtains the ambient sound data, it can mix the ambient sound data with the SCO downlink channel data. After the mixing, the DSP on the Bluetooth headset side can transmit the ambient sound data to the headset speaker, so that transparent transmission of the ambient sound is realized. For the transparent transmission path based on the Bluetooth headset mic picking up the ambient sound, refer to the related descriptions in the embodiments corresponding to fig. 5 and fig. 6; details are not repeated.
Fig. 8 below shows (2) the flow of setting up the transparent transmission path based on the mobile phone mic picking up the ambient sound.
S801, starting a recording application.
The method for starting the recording application may refer to the related description in step S401 in the embodiment corresponding to fig. 4, which is not repeated.
S802, Bluetooth headset connection monitoring.
The implementation manner of the connection monitoring of the bluetooth headset may refer to the related description in step S402 of the corresponding embodiment of fig. 4, which is not repeated.
S803, opening an SCO path from the Bluetooth headset to the mobile phone.
The implementation manner of opening the SCO path from the bluetooth headset to the mobile phone may refer to the related description in step S403 in the embodiment corresponding to fig. 4, which is not repeated.
S804, loading a mixing mix module on a hardware abstraction layer of the mobile phone side.
The hardware abstraction layer at the mobile phone side can load the mixing mix module when the SCO path from the Bluetooth headset to the mobile phone is opened, so that the subsequent mixing and data transfer of the environmental sound data can be performed.
S805, the environmental sound data picked up by the mobile phone mic is transferred to a mixing mix module of the hardware abstraction layer.
After the mobile phone mic picks up the environmental sound data, the environmental sound data can be transferred to a CODEC, and after the CODEC converts the environmental sound data from an analog signal to a digital signal, the environmental sound data of the digital signal can be transferred to a mixing mix module of a hardware abstraction layer.
S806, mixing the environmental sound data picked up by the mobile phone mic into an SCO downlink channel.
The mixing mix module can mix the environmental sound data picked up by the mobile phone mic with the downlink channel data of the SCO channel.
S807, the ambient sound data is transmitted to the headset, so that pass-through of the ambient sound is realized.
After the sound mixing, the mobile phone can transmit the environmental sound data to the Bluetooth headset through the downlink channel of the SCO channel, so that the transparent transmission of the environmental sound is realized.
Fig. 9 shows a sound recording path and a transparent transmission path based on picking up environmental sounds by a mobile phone mic.
In the embodiments of the present application, for the specific process of the Bluetooth headset transmitting the recording data to the recording application of the mobile phone, refer to the related descriptions in the embodiments corresponding to fig. 4 and fig. 6; details are not repeated.
After the mobile phone mic picks up the environmental sound data, the environmental sound data can be transferred to a CODEC, which converts the environmental sound data from an analog signal to a digital signal, and then the environmental sound data of the digital signal can be transferred to a mixing mix module of the hardware abstraction layer. The sound mixing mix module can mix the environment sound data with the downlink channel data of the SCO channel, and after the sound mixing, the mobile phone can transmit the environment sound data to the Bluetooth headset through the downlink channel of the SCO channel, thereby realizing the transparent transmission of the environment sound.
Fig. 10 shows a timing diagram of a recording path and a transparent transmission path based on picking up ambient sounds by a handset mic.
After a user opens the recording application, the recording application can start the SCO path between the Bluetooth headset and the mobile phone based on the information that the recording application has been started and the Bluetooth headset is connected.
For the specific process of the Bluetooth headset transmitting the recording data to the recording application of the mobile phone, refer to the related descriptions in the embodiments corresponding to fig. 4 and fig. 6; details are not repeated.
The mobile phone side can load the mixing mix module at the hardware abstraction layer while starting the SCO path between the Bluetooth headset and the mobile phone. The mobile phone mic can pick up the environmental sound data and transmit the environmental sound data to a mixing mix module of the hardware abstraction layer through the CODEC, and the mixing mix module can mix the environmental sound data with downlink channel data of the SCO channel. After the sound mixing, the mobile phone can transmit the environmental sound data to the Bluetooth headset through the downlink channel of the SCO channel, thereby realizing the transparent transmission of the environmental sound.
The method according to the embodiment of the present application will be described in detail by way of specific examples. The following embodiments may be combined with each other or implemented independently, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 11 shows an audio control method of an embodiment of the present application. The method comprises the following steps:
S1101, the first electronic device and the second electronic device establish communication connection.
In the embodiment of the present application, the first electronic device may include the mobile phone 300 in the embodiment corresponding to fig. 3, and the second electronic device may include the Bluetooth headset 304 in the embodiment corresponding to fig. 3; the first electronic device and the second electronic device are not limited thereto.
The communication connection between the first electronic device and the second electronic device may refer to the description related to step S402 in the embodiment corresponding to fig. 4, which is not repeated.
S1102, the first electronic device starts a recording application.
In the embodiment of the present application, the method for starting the recording application by the first electronic device may refer to the description related to step S401 in the embodiment corresponding to fig. 4, which is not repeated.
And S1103, the second electronic device acquires the first audio signal.
In the embodiment of the present application, the first audio signal may be understood as an audio signal acquired by the second electronic device based on the microphone device. The audio signal may be understood as recording data or ambient sound data in the embodiment corresponding to fig. 5 described above, for example.
S1104, the second electronic device sends the first audio signal to the first electronic device and the second electronic device plays the second audio signal, wherein the second audio signal is identical to the first audio signal or is received by the second electronic device from the first electronic device.
In the embodiment of the present application, the process of sending the first audio signal to the first electronic device by the second electronic device may be understood as a process of sending the recording data to the mobile phone by the bluetooth headset, and specifically, reference may be made to the related description in the embodiment corresponding to fig. 4, which is not repeated.
The second audio signal may include an audio signal acquired by the first electronic device based on the microphone device, and may further include an audio signal acquired by the second electronic device based on the microphone device, where the audio signal may be understood as the environmental sound data in the embodiment corresponding to fig. 5 or fig. 8.
The process of the second electronic device playing the second audio signal may include the process of setting up the transparent transmission path based on the Bluetooth headset mic picking up the ambient sound in the embodiment corresponding to fig. 5, or may include the process of setting up the transparent transmission path based on the mobile phone mic picking up the ambient sound in the embodiment corresponding to fig. 8.
S1105, the recording application of the first electronic device generates a recording based on the first audio signal.
In the embodiment of the application, the first electronic equipment and the second electronic equipment can be used for playing the surrounding environmental sound while transmitting the recording data, so that the transparent transmission of the environmental sound is realized while recording and storing, and the user experience can be improved.
Optionally, on the basis of the embodiment corresponding to fig. 11, the second audio signal is the same as the first audio signal, and the second electronic device includes a first microphone device. That the second electronic device acquires the first audio signal may include: the second electronic device acquires a first source audio signal through the first microphone device; a CODEC of the second electronic device converts the first source audio signal from an analog signal to a digital signal; and a digital signal processor DSP of the second electronic device mixes the first source audio signal converted to the digital signal to obtain the first audio signal.
In the embodiment of the present application, the process of obtaining the first audio signal by the second electronic device may refer to the related description in the embodiment corresponding to fig. 5, fig. 6 or fig. 7, which is not repeated.
Recording and ambient sound pass-through are realized based on the audio signal collected by the second electronic device, and the first electronic device does not need to collect the audio signal, so that the power consumption of the first electronic device can be reduced and its battery life improved.
Optionally, on the basis of the embodiment corresponding to fig. 11, that the second audio signal is received by the second electronic device from the first electronic device may include: the second audio signal is received by the second electronic device from the first electronic device based on a synchronous connection oriented (SCO) path.
In the embodiment of the application, the second electronic device receives the second audio signal from the first electronic device based on the synchronous connection oriented SCO path, so that the SCO path can be multiplexed; both a voice call and a recording function can be realized based on the second electronic device, which improves the compatibility of data transmission between the first electronic device and the second electronic device.
Optionally, on the basis of the embodiment corresponding to fig. 11, before the second electronic device plays the second audio signal, the method may include: the first electronic device acquires a second source audio signal through a second microphone device; a CODEC of the first electronic device converts the second source audio signal from an analog signal to a digital signal; and a mixing module of the first electronic device mixes the second source audio signal converted to the digital signal to obtain the second audio signal.
In the embodiment of the present application, the process of playing the second audio signal by the second electronic device may refer to the related description in the embodiment corresponding to fig. 8, 9 or 10, which is not repeated.
Pass-through of the ambient sound is realized based on the audio signal collected by the first electronic device, so that the power consumption of the second electronic device can be reduced and the battery life of the second electronic device improved.
Optionally, on the basis of the embodiment corresponding to fig. 11, that the second electronic device sends the first audio signal to the first electronic device may include: the second electronic device sends the first audio signal to the first electronic device based on a synchronous connection oriented (SCO) path.
In the embodiment of the application, the second electronic device sends the first audio signal to the first electronic device based on the synchronous connection oriented SCO path, so that multiplexing of the SCO path can be realized; a recording function can be realized based on the SCO path, improving the compatibility of the SCO path.
Optionally, on the basis of the embodiment corresponding to fig. 11, the first electronic device includes a recording algorithm module and a recording path control module. Before the second electronic device sends the first audio signal to the first electronic device based on the synchronous connection oriented SCO path, the method may include: a Bluetooth chip of the second electronic device encodes the first source audio signal by using the modified sub-band coding mSBC technique, where the first source audio signal is collected by the first microphone device. After the second electronic device sends the first audio signal to the first electronic device based on the synchronous connection oriented SCO path, the method may include: a Bluetooth chip of the first electronic device decodes the first source audio signal by using the modified sub-band coding mSBC technique to obtain the first audio signal; a digital signal processor DSP of the first electronic device calls the control recording interface to transfer the first audio signal to the recording algorithm module of the hardware abstraction layer; the recording algorithm module transfers the first audio signal to the recording path control module of the application framework layer; and the recording path control module transfers the first audio signal to the recording application of the application layer.
In the embodiment of the present application, for the process of the second electronic device sending the first audio signal to the first electronic device based on the synchronous connection oriented SCO path, refer to the related description in the embodiment corresponding to fig. 5; details are not repeated. In this way, the SCO path can be multiplexed to set up a recording path between the first electronic device and the second electronic device, so that a recording function is realized based on the second electronic device.
Optionally, on the basis of the embodiment corresponding to fig. 11, the first electronic device is configured with a target list, and that the recording path control module transfers the first audio signal to the recording application of the application layer may include: if the target list includes the recording application, the recording path control module transfers the first audio signal to the recording application of the application layer.
In the embodiment of the present application, the target list may be understood as the application whitelist in the embodiment corresponding to fig. 6. For the specific process of the recording path control module transferring the first audio signal to the recording application of the application layer, refer to the related description in the embodiment corresponding to fig. 6; details are not described herein. In this way, the recording application can run normally on the recording path, and the probability of an abnormality occurring while the application runs is reduced.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of a method. To achieve the above functions, it includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the present application may be implemented in hardware or a combination of hardware and computer software, as the method steps of the examples described in connection with the embodiments disclosed herein. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the device for realizing the method according to the method example, for example, each functional module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Fig. 12 is a schematic structural diagram of a chip according to an embodiment of the present application. Chip 1200 includes one or more (including two) processors 1201, communication lines 1202, communication interface 1203, and memory 1204.
In some implementations, the memory 1204 stores elements of executable modules or data structures, or a subset thereof, or an extended set thereof.
The method described in the above embodiments of the present application may be applied to the processor 1201 or implemented by the processor 1201. The processor 1201 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 1201 or by instructions in the form of software. The processor 1201 may be a general purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate, transistor logic, or a discrete hardware component, and the processor 1201 may implement or perform the methods, steps, and logic diagrams related to the processes disclosed in the embodiments of the present application.
The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium mature in the art, such as a random access memory (RAM), a read-only memory (ROM), or an electrically erasable programmable read-only memory (electrically erasable programmable read only memory, EEPROM). The storage medium is located in the memory 1204, and the processor 1201 reads information in the memory 1204 and performs the steps of the method described above in combination with its hardware.
The processor 1201, the memory 1204, and the communication interface 1203 may communicate with each other via the communication line 1202.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
Embodiments of the present application also provide a computer program product comprising one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or digital subscriber line (digital subscriber line, DSL)) or a wireless manner (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that can be accessed by the computer, or a data storage device, such as a server or a data center, integrating one or more available media; the available medium may include, for example, a semiconductor medium (e.g., a solid state disk (solid state disk, SSD)).
The embodiment of the application also provides a computer readable storage medium. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
As one possible design, the computer-readable medium may include a compact disc read-only memory (compact disc read-only memory, CD-ROM), RAM, ROM, EEPROM, or other optical disk storage, and the computer-readable medium may include a magnetic disk storage or other magnetic storage device. Moreover, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (digital versatile disc, DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.