CN112394771A - Communication method, communication device, wearable device and readable storage medium - Google Patents
Communication method, communication device, wearable device and readable storage medium
- Publication number
- CN112394771A (application number CN202011331303.3A)
- Authority
- CN
- China
- Prior art keywords
- voice
- server
- translation
- loudspeaker
- wearable device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1688—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being integrated loudspeakers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Machine Translation (AREA)
Abstract
The application discloses a communication method, a communication device, a wearable device and a readable storage medium, and belongs to the technical field of communication. The communication method is applied to a wearable device that includes a first speaker and a second speaker, and includes the following steps: receiving a first input; in response to the first input, acquiring first voice of a first sound-emitting object and sending the first voice to a server so that the server translates the first voice into first translated voice; acquiring the first translated voice sent by the server, and playing the first translated voice through the first speaker; receiving a second input; in response to the second input, acquiring second voice of a second sound-emitting object and sending the second voice to the server so that the server translates the second voice into second translated voice; and acquiring the second translated voice sent by the server, and playing the second translated voice through the second speaker. The communication method, the communication device, the wearable device and the readable storage medium can improve the efficiency of speech translation.
Description
Technical Field
The application belongs to the technical field of communication, and particularly relates to a communication method, a communication device, a wearable device and a readable storage medium.
Background
With the acceleration of the globalization process, speech translation is often required.
In the process of implementing the present application, the inventors found that at least the following problems exist in the related art:
At present, speech translation usually requires the user to carry a dedicated translation device, which is inconvenient. Moreover, for some translation devices, when several users share a single device at the same time, the device has to be passed back and forth between them, which results in low translation efficiency.
Therefore, how to improve the efficiency of speech translation is a technical problem that needs to be solved urgently by those skilled in the art.
Disclosure of Invention
An object of the embodiments of the present application is to provide a communication method, a communication apparatus, a wearable device, and a readable storage medium that can improve the efficiency of speech translation.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a communication method applied to a wearable device, where the wearable device includes a first speaker and a second speaker, and the method includes:
receiving a first input;
in response to the first input, acquiring first voice of a first sound-emitting object and sending the first voice to a server so that the server translates the first voice into first translated voice, wherein a distance between the first sound-emitting object and the wearable device is smaller than a first preset value;
acquiring the first translated voice sent by the server, and playing the first translated voice through the first speaker;
receiving a second input;
in response to the second input, acquiring second voice of a second sound-emitting object and sending the second voice to the server so that the server translates the second voice into second translated voice, wherein a distance between the second sound-emitting object and the wearable device is greater than a second preset value;
and acquiring the second translated voice sent by the server, and playing the second translated voice through the second speaker.
Optionally, after acquiring the first voice of the first sound-emitting object, the method further includes:
performing noise reduction processing on the first voice, and sending the noise-reduced first voice to the server.
Optionally, performing noise reduction processing on the first voice and sending the noise-reduced first voice to the server includes:
performing noise reduction processing on the first voice, and sending the noise-reduced first voice to the server through Bluetooth and/or a wireless network.
Optionally, the first speaker is a speaker disposed on an outer surface of the wearable device; and/or,
the second speaker is a speaker disposed inside the wearable device.
In a second aspect, an embodiment of the present application provides a communication apparatus applied to a wearable device, where the wearable device includes a first speaker and a second speaker, and the apparatus includes:
a first receiving module, configured to receive a first input;
a first sending module, configured to, in response to the first input, acquire first voice of a first sound-emitting object and send the first voice to a server so that the server translates the first voice into first translated voice, wherein a distance between the first sound-emitting object and the wearable device is smaller than a first preset value;
a first obtaining module, configured to obtain the first translated voice sent by the server and play the first translated voice through the first speaker;
a second receiving module, configured to receive a second input;
a second sending module, configured to, in response to the second input, acquire second voice of a second sound-emitting object and send the second voice to the server so that the server translates the second voice into second translated voice, wherein a distance between the second sound-emitting object and the wearable device is greater than a second preset value;
and a second obtaining module, configured to obtain the second translated voice sent by the server and play the second translated voice through the second speaker.
Optionally, the apparatus further includes:
a noise reduction processing module, configured to perform noise reduction processing on the first voice and send the noise-reduced first voice to the server.
Optionally, the noise reduction processing module includes:
a noise reduction processing unit, configured to perform noise reduction processing on the first voice and send the noise-reduced first voice to the server through Bluetooth and/or a wireless network.
Optionally, the first speaker is a speaker disposed on an outer surface of the wearable device; and/or,
the second speaker is a speaker disposed inside the wearable device.
In a third aspect, an embodiment of the present application provides a wearable device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the communication method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the communication method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the communication method according to the first aspect.
The communication method provided by the embodiments of the present application is applied to a wearable device that includes a first speaker and a second speaker. Translated voice can be played separately for sound-emitting objects at different distances from the wearable device, so speech translation is achieved without passing a device back and forth between different sound-emitting objects, which improves speech translation efficiency.
Drawings
Fig. 1 is a schematic flowchart of a communication method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a speech translation system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a headset according to an embodiment of the present application;
fig. 4 is a schematic diagram of a far-field foreign language speech pick-up and translation process provided by an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a near-field native language speech pickup and translation process according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a communication device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a wearable device provided in an embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of a wearable device provided in an embodiment of the present application;
Description of reference numerals: 101-earbud; 102-near-field microphone; 103-far-field microphone; 104-speaker; 105-mode selection switch; 106-sub-mode selection button.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. The objects distinguished by "first", "second" and the like are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
As described in the background, speech translation currently usually requires the user to carry a dedicated translation device, which is inconvenient. Moreover, for some translation devices, when several users share a single device at the same time, the device has to be passed back and forth between them, which results in low translation efficiency.
The applicant has found that the communication method can be applied to a wearable device that includes a first speaker and a second speaker, so that translated voice can be played separately for sound-emitting objects far from and near to the wearable device. Speech translation is thus achieved without passing a device back and forth between different sound-emitting objects, which improves speech translation efficiency.
The communication method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of a communication method provided in an embodiment of the present application. The communication method is applied to a wearable device that includes a first speaker and a second speaker, and includes:
s101, receiving a first input.
S102, in response to the first input, acquiring first voice of a first sound-emitting object and sending the first voice to a server so that the server translates the first voice into first translated voice, wherein a distance between the first sound-emitting object and the wearable device is smaller than a first preset value.
To make the speech translation more accurate, in one embodiment, after acquiring the first voice of the first sound-emitting object, the method further includes: performing noise reduction processing on the first voice, and sending the noise-reduced first voice to the server.
Since the first voice is denoised and the noise-reduced first voice is sent to the server, the server can translate the first voice into the first translated voice more accurately on the basis of the noise-reduced first voice.
To make voice transmission more convenient and quick, in one embodiment, performing noise reduction processing on the first voice and sending the noise-reduced first voice to the server includes: performing noise reduction processing on the first voice, and sending the noise-reduced first voice to the server through Bluetooth and/or a wireless network.
Sending the noise-reduced first voice to the server through a wireless transmission mode such as Bluetooth and/or a wireless network avoids laying or connecting cables, so voice transmission can be performed more conveniently and quickly.
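For illustration only, the following sketch shows one way the denoise-and-send step could be organized on the wearable side. The snippet is not part of the application: the WirelessLink protocol, the JSON packet format and the trivial moving-average stand-in for noise reduction are assumptions introduced here, since the embodiment does not specify a noise-reduction algorithm or packet format.

```python
# Illustrative sketch only: denoise captured speech, then send it over whichever
# wireless link (Bluetooth and/or Wi-Fi) is currently available. Names, the
# noise-reduction method and the packet format are assumptions, not the patent's.
import json
from typing import Protocol


class WirelessLink(Protocol):
    """Stand-in for a Bluetooth or Wi-Fi transport toward the server."""
    def is_connected(self) -> bool: ...
    def send(self, payload: bytes) -> None: ...


def reduce_noise(samples: list[int]) -> list[int]:
    # Placeholder: a 3-tap moving average standing in for real noise suppression.
    return [sum(samples[max(0, i - 1):i + 2]) // len(samples[max(0, i - 1):i + 2])
            for i in range(len(samples))]


def send_denoised_voice(samples: list[int], links: list[WirelessLink]) -> None:
    """Denoise the first voice and forward it to the server over an available link."""
    packet = json.dumps({"type": "voice", "samples": reduce_noise(samples)}).encode()
    for link in links:  # Bluetooth and/or a wireless network, per the embodiment
        if link.is_connected():
            link.send(packet)
            return
    raise RuntimeError("no wireless link available to reach the server")
```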
S103, acquiring the first translated voice sent by the server, and playing the first translated voice through the first speaker.
S104, receiving a second input.
S105, in response to the second input, acquiring second voice of a second sound-emitting object and sending the second voice to the server so that the server translates the second voice into second translated voice, wherein a distance between the second sound-emitting object and the wearable device is greater than a second preset value.
S106, acquiring the second translated voice sent by the server, and playing the second translated voice through the second speaker.
To improve speech translation efficiency, in one embodiment, the first speaker is a speaker disposed on the outer surface of the wearable device; and/or the second speaker is a speaker disposed inside the wearable device.
Because the speaker disposed on the outer surface of the wearable device can play voice outward to the other party, while the speaker disposed inside the wearable device lets only the user hear the voice and therefore offers better privacy, speech translation is achieved without passing a translation device back and forth between different sound-emitting objects, which improves speech translation efficiency.
In summary, the communication method provided by the embodiments of the present application is applied to a wearable device that includes a first speaker and a second speaker. Translated voice can be played separately for sound-emitting objects at different distances from the wearable device, so speech translation is achieved without passing a device back and forth between different sound-emitting objects, which improves speech translation efficiency.
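To make the two-channel structure of steps S101 to S106 concrete, the following sketch shows one possible arrangement on the wearable side. It is an illustration under assumptions rather than the claimed implementation: the Channel dataclass and the capture/translate/play callables are hypothetical stand-ins for the microphones, the server round trip and the two speakers.

```python
# Illustrative sketch of the claimed flow: a first input drives translation of the near
# sound-emitting object's voice and playback on the first speaker; a second input drives
# translation of the far sound-emitting object's voice and playback on the second speaker.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Channel:
    capture: Callable[[], bytes]   # microphone aimed at this sound-emitting object
    play: Callable[[bytes], None]  # speaker that should render the translated voice


def on_input(channel: Channel, translate_on_server: Callable[[bytes], bytes]) -> None:
    voice = channel.capture()                # S102 / S105: acquire the voice
    translated = translate_on_server(voice)  # send to the server, receive the translation
    channel.play(translated)                 # S103 / S106: play on the matching speaker


# A first input selects the channel for the near object (distance below the first preset
# value; result played on the first, outward-facing speaker), and a second input selects
# the channel for the far object (distance above the second preset value; result played
# on the second, in-ear speaker).
```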
An embodiment of the present application further provides a speech translation system, which includes the wearable device, a relay electronic device and a server, where the wearable device and the server exchange information through the relay electronic device. The wearable device may be a headset, the relay electronic device may be a mobile phone, and the server may be a cloud server; correspondingly, the speech translation system is then composed of a headset, a mobile phone and a cloud server. The following description takes the headset, and the speech translation system composed of the headset, the mobile phone and the cloud server, as an example.
As shown in fig. 2, the headset is connected to the mobile phone via Bluetooth or Wi-Fi, and the mobile phone is connected to the cloud server via a cellular network.
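As a rough picture of the relay role the mobile phone plays in fig. 2, the sketch below simply forwards packets in both directions. The class and method names are assumptions introduced here for illustration; the application does not describe the phone-side software.

```python
# Illustrative sketch only: the phone bridges the headset's Bluetooth/Wi-Fi link and the
# cellular connection to the cloud server by forwarding packets unchanged in both directions.
class PhoneRelay:
    def __init__(self, headset_link, cloud_session):
        self.headset_link = headset_link    # Bluetooth or Wi-Fi link to the headset
        self.cloud_session = cloud_session  # cellular connection to the cloud server

    def forward_up(self) -> None:
        """Headset -> cloud: pass packaged voice to the translation server."""
        self.cloud_session.send(self.headset_link.receive())

    def forward_down(self) -> None:
        """Cloud -> headset: pass translated voice back for playback."""
        self.headset_link.send(self.cloud_session.receive())
```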
Fig. 3 is a schematic structural diagram of a headset according to an embodiment of the present application; the headset is described below with reference to fig. 3.
As shown in fig. 3, the headset includes: an earbud 101, intended to be worn in the user's ear; a near-field microphone 102 for picking up the user's native-language speech; a far-field microphone 103 for picking up the foreign-language speech of the opposite interlocutor within 3 meters; a speaker 104 for playing foreign-language speech to the opposite interlocutor; a mode selection switch 105 for switching between a translation mode and a non-translation mode; and a sub-mode selection button 106 for switching between the far-field foreign-language pickup sub-mode and the near-field native-language pickup sub-mode within the translation mode.
When the mode selection switch 105 is toggled to the non-translation mode, the headset behaves as an ordinary headset; once connected to the mobile phone, it supports normal operations such as answering and making calls and listening to music.
When the mode selection switch 105 is toggled to the translation mode, the headset enters the translation mode. By default, the sub-mode selection button 106 is in the far-field foreign-language pickup sub-mode: the far-field microphone 103 picks up speech (a foreign language) within 3 meters of the opposite side; after pickup, the speech is denoised, then packaged and transmitted over the network to the cloud server for translation, and the generated native-language speech is sent back to the headset over the network and played from the earbud 101 into the user's ear.
If the user presses the sub-mode selection button 106 to switch to the near-field native-language pickup sub-mode, the near-field microphone 102 picks up the user's native-language speech; after pickup, the speech is denoised, then packaged and transmitted over the network to the cloud server for translation, and the generated foreign-language speech is sent back to the headset over the network and played from the speaker 104 so that the opposite interlocutor can hear it.
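The behavior of the mode selection switch 105 and the sub-mode selection button 106 can be summarized as a small state table, sketched below for illustration. The enum names and string identifiers are assumptions made here; only the routing itself (far-field microphone 103 to earbud 101, near-field microphone 102 to speaker 104) is taken from the embodiment.

```python
# Illustrative sketch of the mode selection switch 105 / sub-mode selection button 106 logic.
from enum import Enum, auto
from typing import Optional, Tuple


class Mode(Enum):
    NON_TRANSLATION = auto()  # ordinary headset: calls and music via the phone
    TRANSLATION = auto()


class SubMode(Enum):
    FAR_FIELD_FOREIGN = auto()  # default: far-field mic 103 -> translate -> earbud 101
    NEAR_FIELD_NATIVE = auto()  #          near-field mic 102 -> translate -> speaker 104


def route(mode: Mode, sub_mode: SubMode) -> Optional[Tuple[str, str]]:
    """Return (pickup source, playback target) for the current state, or None when not translating."""
    if mode is Mode.NON_TRANSLATION:
        return None
    if sub_mode is SubMode.FAR_FIELD_FOREIGN:
        return ("far_field_microphone_103", "earbud_101")
    return ("near_field_microphone_102", "speaker_104")
```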
In one embodiment, the far-field foreign-language speech pickup and translation process is as shown in fig. 4. The far-field foreign-language speech of the opposite interlocutor is picked up by the far-field microphone 103, denoised, and then packaged by the processor. The Bluetooth or Wi-Fi module relays the foreign-language speech to the cloud server through the mobile phone. The cloud server translates the foreign-language speech to generate native-language speech, which is relayed back to the Bluetooth or Wi-Fi module through the mobile phone. The Bluetooth or Wi-Fi module passes the native-language speech to the processor, and the processor plays it in the earbud 101 for the user.
In one embodiment, the near-field native-language speech pickup and translation process is as shown in fig. 5. The near-field native-language speech of the user is picked up by the near-field microphone 102, denoised, and then sorted and packaged by the processor. The Bluetooth or Wi-Fi module relays the native-language speech to the cloud server through the mobile phone. The cloud server translates the native-language speech to generate foreign-language speech, which is relayed back to the Bluetooth or Wi-Fi module through the mobile phone. The Bluetooth or Wi-Fi module passes the foreign-language speech to the processor, and the processor plays it from the speaker 104 for the opposite interlocutor to hear.
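Both processes follow the same round trip, which can be abstracted as in the sketch below. This is an illustration only: the callable parameters are hypothetical stand-ins for the microphone being read, the processor's denoise-and-package step, the Bluetooth/Wi-Fi module (with the mobile phone and the cloud server behind it), and the playback transducer.

```python
# Illustrative sketch only: one pickup-translate-playback cycle shared by figs. 4 and 5.
# The direction (far-field foreign language vs. near-field native language) is set by
# which microphone reader and which playback function are passed in.
from typing import Callable


def translate_round_trip(read_mic: Callable[[], bytes],
                         denoise_and_package: Callable[[bytes], bytes],
                         send_wireless: Callable[[bytes], None],
                         receive_wireless: Callable[[], bytes],
                         play: Callable[[bytes], None]) -> None:
    raw = read_mic()                   # far-field mic 103 or near-field mic 102
    packet = denoise_and_package(raw)  # processor denoises and packages the speech
    send_wireless(packet)              # Bluetooth/Wi-Fi module -> phone -> cloud server
    translated = receive_wireless()    # translated speech relayed back the same way
    play(translated)                   # earbud 101 (for the user) or speaker 104 (for the interlocutor)
```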
Through this headset design, the embodiment provides a translation function. The far-field microphone 103 on the user's headset picks up the foreign-language speech of the opposite person (within 3 meters); the collected speech is uploaded to the cloud server, which translates it, generates native-language speech, and sends it down to the earbud 101 of the user's headset for playback. In addition, when the user speaks into the near-field microphone 102, the native-language speech is uploaded to the cloud, foreign-language speech is generated, and it is sent down to the speaker 104 of the user's headset for playback.
It should be noted that, in the communication method provided in the embodiments of the present application, the execution subject may be a communication apparatus, or a control module in the communication apparatus for executing the communication method. In the embodiments of the present application, a communication apparatus executing the communication method is taken as an example to describe the communication apparatus provided in the embodiments of the present application.
As shown in fig. 6, an embodiment of the present application further provides a communication apparatus, which is applied to a wearable device, where the wearable device includes a first speaker and a second speaker, and the apparatus includes:
a first receiving module 601, configured to receive a first input;
a first sending module 602, configured to, in response to the first input, acquire first voice of a first sound-emitting object and send the first voice to a server so that the server translates the first voice into first translated voice, wherein a distance between the first sound-emitting object and the wearable device is smaller than a first preset value;
a first obtaining module 603, configured to obtain the first translated voice sent by the server and play the first translated voice through the first speaker;
a second receiving module 604, configured to receive a second input;
a second sending module 605, configured to, in response to the second input, acquire second voice of a second sound-emitting object and send the second voice to the server so that the server translates the second voice into second translated voice, wherein a distance between the second sound-emitting object and the wearable device is greater than a second preset value;
and a second obtaining module 606, configured to obtain the second translated voice sent by the server and play the second translated voice through the second speaker.
To make the speech translation more accurate, in one embodiment, the apparatus further includes: a noise reduction processing module, configured to perform noise reduction processing on the first voice and send the noise-reduced first voice to the server.
Since the first voice is denoised and the noise-reduced first voice is sent to the server, the server can translate the first voice into the first translated voice more accurately on the basis of the noise-reduced first voice.
To make voice transmission more convenient and quick, in one embodiment, the noise reduction processing module includes: a noise reduction processing unit, configured to perform noise reduction processing on the first voice and send the noise-reduced first voice to the server through Bluetooth and/or a wireless network.
Sending the noise-reduced first voice to the server through a wireless transmission mode such as Bluetooth and/or a wireless network avoids laying or connecting cables, so voice transmission can be performed more conveniently and quickly.
To improve speech translation efficiency, in one embodiment, the first speaker is a speaker disposed on the outer surface of the wearable device; and/or the second speaker is a speaker disposed inside the wearable device.
Because the speaker disposed on the outer surface of the wearable device can play voice outward to the other party, while the speaker disposed inside the wearable device lets only the user hear the voice and therefore offers better privacy, speech translation is achieved without passing a translation device back and forth between different sound-emitting objects, which improves speech translation efficiency.
The communication apparatus provided by the embodiments of the present application is applied to a wearable device that includes a first speaker and a second speaker. Translated voice can be played separately for sound-emitting objects at different distances from the wearable device, so speech translation is achieved without passing a device back and forth between different sound-emitting objects, which improves speech translation efficiency.
The communication device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
The communication device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The communication device provided in the embodiment of the present application can implement each process of the method embodiment shown in fig. 1, and is not described here again to avoid repetition.
As shown in fig. 7, an embodiment of the present application further provides a wearable device 700, which includes a processor 701, a memory 702, and a program or instructions stored in the memory 702 and executable on the processor 701. When the program or instructions are executed by the processor 701, the processes of the communication method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
Fig. 8 is a schematic diagram of a hardware structure of a wearable device for implementing an embodiment of the present application.
The wearable device 800 includes but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the wearable device 800 may further include a power source (e.g., a battery) for supplying power to the various components. The power source may be logically connected to the processor 810 through a power management system, so that charging, discharging, and power-consumption management are implemented through the power management system. The wearable device structure shown in fig. 8 does not constitute a limitation of the wearable device; the wearable device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described in detail here.
The processor 810 is configured to: receive a first input; in response to the first input, acquire first voice of a first sound-emitting object and send the first voice to a server so that the server translates the first voice into first translated voice, wherein a distance between the first sound-emitting object and the wearable device is smaller than a first preset value; acquire the first translated voice sent by the server, and play the first translated voice through the first speaker; receive a second input; in response to the second input, acquire second voice of a second sound-emitting object and send the second voice to the server so that the server translates the second voice into second translated voice, wherein a distance between the second sound-emitting object and the wearable device is greater than a second preset value; and acquire the second translated voice sent by the server, and play the second translated voice through the second speaker.
Through the first speaker and the second speaker, this wearable device can play translated voice separately for sound-emitting objects at different distances from it, so speech translation is achieved without passing a device back and forth between different sound-emitting objects, which improves speech translation efficiency.
To make the speech translation more accurate, in one embodiment, the processor 810 is configured to, after acquiring the first voice of the first sound-emitting object, perform noise reduction processing on the first voice and send the noise-reduced first voice to the server.
Since the first voice is denoised and the noise-reduced first voice is sent to the server, the server can translate the first voice into the first translated voice more accurately on the basis of the noise-reduced first voice.
For more convenient and quick voice transmission, in one embodiment, the processor 810 is configured to perform noise reduction processing on the first voice and send the noise-reduced first voice to the server via Bluetooth and/or a wireless network.
Sending the noise-reduced first voice to the server through a wireless transmission mode such as Bluetooth and/or a wireless network avoids laying or connecting cables, so voice transmission can be performed more conveniently and quickly.
It should be understood that, in the embodiments of the present application, the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042; the graphics processing unit 8041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071, also referred to as a touch screen, and other input devices 8072. The touch panel 8071 may include two parts: a touch detection device and a touch controller. The other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 809 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may not be integrated into the processor 810.
The embodiments of the present application further provide a readable storage medium, on which a program or instructions are stored. When the program or instructions are executed by a processor, the processes of the above communication method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the wearable device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiments of the present application further provide a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the processes of the above communication method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order illustrated or discussed, and may include performing functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A communication method, applied to a wearable device, wherein the wearable device comprises a first speaker and a second speaker, and the method comprises:
receiving a first input;
in response to the first input, acquiring first voice of a first sound-emitting object and sending the first voice to a server so that the server translates the first voice into first translated voice, wherein a distance between the first sound-emitting object and the wearable device is smaller than a first preset value;
acquiring the first translated voice sent by the server, and playing the first translated voice through the first speaker;
receiving a second input;
in response to the second input, acquiring second voice of a second sound-emitting object and sending the second voice to the server so that the server translates the second voice into second translated voice, wherein a distance between the second sound-emitting object and the wearable device is greater than a second preset value;
and acquiring the second translated voice sent by the server, and playing the second translated voice through the second speaker.
2. The communication method according to claim 1, wherein after acquiring the first voice of the first sound-emitting object, the method further comprises:
performing noise reduction processing on the first voice, and sending the noise-reduced first voice to the server.
3. The communication method according to claim 2, wherein performing noise reduction processing on the first voice and sending the noise-reduced first voice to the server comprises:
performing noise reduction processing on the first voice, and sending the noise-reduced first voice to the server through Bluetooth and/or a wireless network.
4. The communication method according to claim 1, wherein the first speaker is a speaker disposed on an outer surface of the wearable device; and/or,
the second speaker is a speaker disposed inside the wearable device.
5. A communication apparatus, applied to a wearable device, wherein the wearable device comprises a first speaker and a second speaker, and the apparatus comprises:
a first receiving module, configured to receive a first input;
a first sending module, configured to, in response to the first input, acquire first voice of a first sound-emitting object and send the first voice to a server so that the server translates the first voice into first translated voice, wherein a distance between the first sound-emitting object and the wearable device is smaller than a first preset value;
a first obtaining module, configured to obtain the first translated voice sent by the server and play the first translated voice through the first speaker;
a second receiving module, configured to receive a second input;
a second sending module, configured to, in response to the second input, acquire second voice of a second sound-emitting object and send the second voice to the server so that the server translates the second voice into second translated voice, wherein a distance between the second sound-emitting object and the wearable device is greater than a second preset value;
and a second obtaining module, configured to obtain the second translated voice sent by the server and play the second translated voice through the second speaker.
6. The communication apparatus according to claim 5, further comprising:
a noise reduction processing module, configured to perform noise reduction processing on the first voice and send the noise-reduced first voice to the server.
7. The communication apparatus according to claim 6, wherein the noise reduction processing module comprises:
a noise reduction processing unit, configured to perform noise reduction processing on the first voice and send the noise-reduced first voice to the server through Bluetooth and/or a wireless network.
8. The communication apparatus according to claim 5, wherein the first speaker is a speaker disposed on an outer surface of the wearable device; and/or,
the second speaker is a speaker disposed inside the wearable device.
9. A wearable device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the communication method according to any one of claims 1 to 4.
10. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the communication method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011331303.3A CN112394771A (en) | 2020-11-24 | 2020-11-24 | Communication method, communication device, wearable device and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011331303.3A CN112394771A (en) | 2020-11-24 | 2020-11-24 | Communication method, communication device, wearable device and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112394771A (en) | 2021-02-23 |
Family
ID=74606621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011331303.3A Pending CN112394771A (en) | 2020-11-24 | 2020-11-24 | Communication method, communication device, wearable device and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112394771A (en) |
- 2020-11-24: CN application CN202011331303.3A filed; published as CN112394771A; status: Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108710615A (en) * | 2018-05-03 | 2018-10-26 | Oppo广东移动通信有限公司 | Interpretation method and relevant device |
CN109117484A (en) * | 2018-08-13 | 2019-01-01 | 北京帝派智能科技有限公司 | A kind of voice translation method and speech translation apparatus |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114267358A (en) * | 2021-12-17 | 2022-04-01 | 北京百度网讯科技有限公司 | Audio processing method, device, apparatus, storage medium, and program |
CN114267358B (en) * | 2021-12-17 | 2023-12-12 | 北京百度网讯科技有限公司 | Audio processing method, device, equipment and storage medium |
WO2024140397A1 (en) * | 2022-12-26 | 2024-07-04 | 维沃移动通信有限公司 | Wearable device and control method therefor |
Similar Documents
Publication | Title |
---|---|
CN109166589B (en) | Application sound suppression method, device, medium and equipment | |
CN108550367A (en) | A kind of portable intelligent interactive voice control device, method and system | |
CN108966067B (en) | Play control method and related product | |
WO2020010579A1 (en) | Smart watch having voice interaction function-enabled earphones | |
CN109104684B (en) | Microphone plugging detection method and related products | |
EP3660660A1 (en) | Processing method for sound effect of recording and mobile terminal | |
CN109150221B (en) | Master-slave switching method for wearable equipment and related product | |
CN108683980B (en) | Audio signal transmission method and mobile terminal | |
CN106126165B (en) | A kind of audio stream processing method and mobile terminal | |
US20210090548A1 (en) | Translation system | |
CN108777827A (en) | Wireless headset, method for regulation of sound volume and Related product | |
CN113490089B (en) | Noise reduction control method, electronic device and computer readable storage device | |
CN108897516A (en) | A kind of wearable device method for regulation of sound volume and Related product | |
CN108834013B (en) | Wearable equipment electric quantity balancing method and related product | |
CN112394771A (en) | Communication method, communication device, wearable device and readable storage medium | |
CN106506834A (en) | Method, terminal and system for adding background sound during call | |
CN114466283B (en) | Audio acquisition method, device, electronic device and peripheral component method | |
CN110058837B (en) | Audio output method and terminal | |
CN109873894B (en) | Volume adjusting method and mobile terminal | |
CN110012164B (en) | Sound playing method of equipment, wearable equipment and computer readable storage medium | |
CN108668018A (en) | Mobile terminal, volume control method and related product | |
WO2018149073A1 (en) | Noise-cancelling headphone and electronic device | |
CN115037831B (en) | Mode control method and device, electronic equipment and earphone | |
CN106131747A (en) | A sound effect adding method and user terminal | |
CN110839108A (en) | Noise reduction method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |