WO2025046549A1 - A portable and instantaneous translation system - Google Patents
- Publication number
- WO2025046549A1 (PCT/IB2024/058520)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- base unit
- unit
- headset
- translation
- translation system
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/10—Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups
- H04R2201/107—Monophonic and stereophonic headphones with microphone for two-way hands free communication
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Signal Processing (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Machine Translation (AREA)
- Headphones And Earphones (AREA)
Abstract
A translation system comprises a headset unit; a portable base unit comprising a visual display panel configured to display text, and at least one loudspeaker; the base unit further comprising a control processor located within the base unit, the control processor configured to receive and process audio data signals received from the headset unit, to send audio data signals to the headset unit and visual data signals to the visual display panel, and to transmit broadcast audio signals to the at least one loudspeaker; the control processor further configured to translate audio data signals received from the headset unit in a first language to a second language, and to broadcast the translation via the at least one loudspeaker and display the translation on the visual display panel; the base unit and headset unit configured to connect to one another automatically when an activation action occurs.
Description
A Portable and Instantaneous Translation System
FIELD
The present invention relates to a translation system. More particularly, the present invention relates to a translation system that allows near-instantaneous two-way verbal communication between two users. Even more particularly, the present invention relates to a translation system that allows near-instantaneous two-way verbal and written communication between two users.
BACKGROUND
Communication between people who speak different languages can be extremely difficult and time-consuming. Unless the people have some knowledge of each other's language, real-time or near real-time communication is impossible. Although it is possible for each party to use online translation tools such as Google Translate to write out what they wish to say, this is slow, cumbersome and frustrating for the users.
US2020/0226327 describes and shows a system for providing direct speech to target language translation. The target text is converted to speech in the target language through a TTS system. The system simplifies the speech recognition and translation process by providing direct translation, and includes mechanisms that facilitate mixed language source speech translation, and punctuating output text streams in the target language. It also allows translation of speech into the target language to reflect the voice of the speaker of the source speech based on characteristics of the source language speech and speaker's voice, and to produce subtitled data in the target language corresponding to the source speech. The system uses models that have been trained using (i) encoder-decoder architectures with attention mechanisms and training data using TTS and (ii) parallel text training data in more than two different languages.
In this specification where reference has been made to patent specifications, other external documents, or other sources of information, this is generally for the purpose of providing a context for discussing the features of the invention. Unless specifically stated otherwise, reference to such external documents is not to be construed as an admission that such documents, or such sources of information, in any jurisdiction, are prior art, or form part of the common general knowledge in the art.
SUMMARY
It is an object of the present invention to provide a translation system which goes some way to overcoming the abovementioned disadvantages or which at least provides the public or industry with a useful choice.
The term “comprising” as used in this specification and indicative independent claims means “consisting at least in part of”. When interpreting each statement in this specification and indicative independent claims that includes the term “comprising”, features other than that or those prefaced by the term may also be present. Related terms such as “comprise” and “comprises” are to be interpreted in the same manner.
As used herein the term “and/or” means “and” or “or”, or both.
As used herein “(s)” following a noun means the plural and/or singular forms of the noun.
Accordingly, in a first aspect the present invention may broadly be said to consist in a translation system, comprising: a headset unit wearable by a user; a portable base unit comprising a visual display panel configured to display text to the first user, and at least one loudspeaker; the base unit further comprising a control processor located within the base unit, the control processor configured to receive and process audio data signals received from the headset unit, to send audio data signals to the headset unit and visual data signals to the visual display panel, and to transmit broadcast audio signals to the at least one loudspeaker; the control processor further configured to translate audio data signals received from the headset unit in a first language to a second language, and to broadcast the translation via the at least one loudspeaker and display the translation on the visual display panel; the base unit and headset unit configured to connect to one another automatically when an activation action occurs.
In an embodiment, the base unit further comprises a base unit microphone configured to receive audio from at least one other user located proximal to the first user, the control processor further configured to receive the audio, and, if this is in a second language, to translate this to the first language, and to transmit the translation to the headset unit.
In an embodiment, the base unit and headset unit are mutually configured so that the base unit charges the headset unit when not in use.
In an embodiment, the headset unit comprises a pair of earbuds.
In an embodiment, the headset unit and the base unit are configured to co-locate when not in use, and removal of the headset unit from the base unit comprises an activation action.
In an embodiment, the headset unit is configured to actively search for the base unit on removal from the base unit, and to transmit unique identification data so as to pair with the base unit once located.
In an embodiment, connection between the headset unit and the base unit is via wireless transmission.
In an embodiment, the connection is an LTE connection.
In an embodiment, the wireless transmission is substantially within the 2.4 GHz frequency range.
In an embodiment, the base unit is further configured for internet connection.
In an embodiment, the control processor is configured so that the text displayed on the visual display panel comprises a transcription of the words of the audio signals received, and translation of these into the first or second language respectively.
With respect to the above description then, it is to be realised that the optimum dimensional relationships for the parts of the invention, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present invention.
This invention may also be said broadly to consist in the parts, elements and features referred to or indicated in the specification of the application, individually or collectively, and any or all combinations of any two or more said parts, elements or features, and where specific integers are mentioned herein which have known equivalents in the art to which this invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
Therefore, the foregoing is considered as illustrative only of the principles of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation shown and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
BRIEF DESCRIPTION OF THE FIGURES
Further aspects of the invention will become apparent from the following description which is given by way of example only and with reference to the accompanying drawings which show an embodiment of the device by way of example, and in which:
Figure 1A shows an embodiment of headset unit that forms part of a translation system in accordance with an embodiment of the invention.
Figures 1B and 1C show an embodiment of a portable base unit that forms part of a translation system in accordance with an embodiment of the invention.
Figure 2 shows a block diagram that illustrates the base unit of figures 1B and 1C connecting to a cloud-based translation service.
Figure 3 shows an interaction diagram of a method of translation in accordance with an embodiment of the invention.
Figure 4 illustrates a use-case of the first user speaking German and communicating in real-time with another user who is speaking in English, the users using a translation system in accordance with an embodiment of the invention.
Figure 5 shows a flow diagram that illustrates a method of translation in accordance with an embodiment of the invention.
DETAILED DESCRIPTION
Embodiments of the invention, and variations thereof, will now be described in detail with reference to the figures.
In this specification, if terms such as "a first", "a second", "a third", and "a fourth" are used, these are used in order to distinguish between similar objects and are not necessarily used to describe a specific sequence or order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the implementations of the disclosure described herein are, for example, capable of being implemented in sequences other than the sequences illustrated or described herein. Furthermore, the terms "include" and "have" and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, a method, a system, a product, or a device that includes a series of steps or units, is not necessarily limited to expressly listed steps or units but may include other steps or units that are not expressly listed or that are inherent to such process, method, product, or device.
Where ‘a pair’ of earbuds are referred to, this should be taken to indicate two individual units configured to be used together, typically to provide stereo audio or facilitate communication.
‘Portable base unit’ should be taken to mean a compact, movable device that provides processing, translation, and communication based on audio data signals in the translation system.
An earbud that forms part or the whole of a headset unit 102 according to an embodiment of the invention is shown in figure 1A. The earbud can be worn in the normal manner. The headset unit 102 includes a headset unit microphone 104, configured to capture audio data signals from the surrounding environment, and in particular, the voice of the wearer/user.
A portable base unit 106 according to an embodiment of the invention is shown in figures 1B and 1C. The base unit 106 includes a visual display panel 108, one or more loudspeakers 110, a control processor 112 and a base unit microphone 114. The visual display panel 108 is configured to display text to the first user.
The base unit and headset unit are mutually configured for wireless communication, with these units containing appropriate transmission and receiving apparatus.
In use, the control processor 112 receives and processes the audio data signals received from the headset unit 102. The control processor 112 sends the audio data signals to the headset unit 102 and visual data signals to the visual display panel 108. The control processor 112 transmits broadcast audio data signals to the one or more loudspeakers 110.
The control processor 112 is configured to translate the audio data signals received in a first language from the headset unit 102 to a second language, and to broadcast the translation using the one or more loudspeakers 110. The control processor 112 also displays the translation on the visual display panel 108. The portable base unit 106 and the headset unit 102 are configured to connect to one another automatically when an activation action occurs.
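The control processor's three output paths described above (translate the captured speech, broadcast it via the loudspeaker, and show it on the display panel) can be sketched as follows. This is a minimal illustrative sketch only, not the device firmware: the `ControlProcessor` class, its callables and the one-entry toy translator are all assumptions introduced for demonstration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ControlProcessor:
    """Illustrative routing duties of the control processor (112).
    `translate` stands in for the translation engine; the two lists
    stand in for the loudspeaker and display-panel drivers."""
    translate: Callable[[str, str, str], str]
    broadcast_log: List[str] = field(default_factory=list)
    display_log: List[str] = field(default_factory=list)

    def handle_headset_audio(self, text: str, src: str, dst: str) -> str:
        # Translate speech captured by the headset microphone...
        translated = self.translate(text, src, dst)
        # ...broadcast it through the loudspeaker(s)...
        self.broadcast_log.append(translated)
        # ...and simultaneously show it on the visual display panel.
        self.display_log.append(translated)
        return translated

# Toy "translator" backed by a one-entry dictionary, for demonstration only.
toy = {("de", "en", "GUTEN TAG"): "GOOD DAY"}
cp = ControlProcessor(translate=lambda t, s, d: toy.get((s, d, t), t))
out = cp.handle_headset_audio("GUTEN TAG", "de", "en")
```

In a real device the two log lists would be replaced by hardware drivers, but the fan-out (one translation, two simultaneous outputs) is the point being illustrated.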
As shown in figure 2, translation takes place by the portable base unit 206 transmitting the audio signal to a cloud-based translation service 204. The portable base unit 206 and the cloud-based translation service 204 are communicatively connected using a network 202. It is preferred that the network 202 is a Long-Term Evolution (LTE) connection. The portable base unit 206 is configured with an audio codec, an embedded translation unit and an operating system to automatically detect (i) a first language from the audio data signal of a first user and (ii) a second language from the audio data signal of the one or more other users. The operating system communicatively connects to the cloud-based translation service 204 to access real-time language data and to translate audio data signals received in the first language to the second language.
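The base-unit-to-cloud exchange described above can be illustrated with a hypothetical request payload. The field names and wire format below are assumptions for the sketch; the source does not specify the cloud service's actual API or endpoint.

```python
def build_translation_request(audio_bytes: bytes,
                              detected_src: str,
                              detected_dst: str) -> dict:
    """Package one utterance for the base unit -> cloud service call.
    All keys are hypothetical; the languages are the ones auto-detected
    by the base unit's embedded translation unit."""
    return {
        "audio": audio_bytes.hex(),   # audio payload, hex-encoded for transport
        "source_lang": detected_src,  # e.g. "de", detected from the first user
        "target_lang": detected_dst,  # e.g. "en", detected from the other user
        "transport": "LTE",           # preferred network per the description
    }

req = build_translation_request(b"\x01\x02", "de", "en")
```

The operating system on the base unit would serialize such a payload over the LTE network 202 and play back the service's response.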
An interaction diagram of a method of translation in accordance with an implementation of the disclosure is shown in figure 3. The interaction diagram includes a first user 302, a headset unit 304, a portable base unit 306, a cloud-based translation service 308, and other users 310. At a step 312, the headset unit 304 and the portable base unit 306 establish a secure Bluetooth connection with a predefined passkey. The secure Bluetooth connection enables the headset unit 304 and the portable base unit 306 to communicate and to activate a preinstalled translation application on the portable base unit 306 when within range of one another. At step 314, a press-and-hold action of a button on the headset unit 304 activates speech recording on the headset unit 304, which transmits this data to the base unit. Upon release of the button, the data relating to the speech recording is translated into the second language, as outlined above. The headset unit 304 and the portable base unit 306 use the Bluetooth Audio/Video Remote Control Profile (AVRCP) to enable remote control of speech recording and playback. At step 316, the translation is broadcast to other users 310 using one or more loudspeakers on the portable base unit 306. Simultaneously, the translation is displayed as text on the visual display panel.
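The press-and-hold capture flow at step 314 can be sketched as a small state machine: pressing the button starts buffering, releasing it hands the buffered speech off for translation. All names below are illustrative assumptions; the actual headset firmware and its AVRCP event handling are not specified in the source.

```python
class PushToTalk:
    """Sketch of the press-and-hold capture flow (step 314).
    `on_release` stands in for the base unit's translation path,
    which runs only once the button is released."""

    def __init__(self, on_release):
        self.on_release = on_release  # called with the buffered speech
        self.recording = False
        self.buffer = []

    def press(self):
        # Button pressed: start a fresh recording.
        self.recording = True
        self.buffer = []

    def feed(self, chunk: str):
        # Audio chunks are buffered only while the button is held.
        if self.recording:
            self.buffer.append(chunk)

    def release(self):
        # Button released: stop recording and trigger translation.
        self.recording = False
        return self.on_release("".join(self.buffer))

# Hypothetical translation callback, for demonstration only.
ptt = PushToTalk(on_release=lambda speech: f"translated({speech})")
ptt.press()
ptt.feed("guten ")
ptt.feed("tag")
result = ptt.release()
```

Chunks fed while the button is not held are simply dropped, mirroring the behaviour that recording is active only during the press-and-hold.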
When the other or second user speaks in turn, the portable base unit 306 detects the speech of the other user, and translates this from the second language to the first language as outlined above. The base unit then relays audio data signals to the first user 302 using the headset unit 304, so that the first user hears the translation via the loudspeakers in the headset. The earbuds transmit control events, formatted according to the Bluetooth AVRCP protocol, that are received by an operating system at the portable base unit 306. The operating system includes a translation application that performs speech and text translation, by using online or cloud translation services, as outlined above.
A use-case is illustrated in figure 4. In this example, a first user is speaking German, and is communicating in real-time with another user speaking in English.
The first user is equipped with a translation system according to an embodiment of the invention, the translation system having a headset unit 402 and a portable base unit 404. The first user, who is a native German speaker, wishes to communicate with the other user who speaks English.
As the first user speaks in German - “GUTEN TAG, WIE GEHT ES IHNEN HEUTE?” - they press the button to activate the microphone, so that these words are picked up by the microphone in the headset unit. The data relating to this is then transmitted to the control processor of the portable base unit 404, which translates the spoken words from German (the first language) to English (the second language), in real-time, as “GOOD DAY, HOW ARE YOU TODAY?”. The translated audio is then broadcast through the one or more loudspeakers of the portable base unit 404, ensuring that the other user can hear the conversation clearly. Simultaneously, the visual display panel of the portable base unit 404 displays the English translation in written form, enabling the other user to read the conversation. This visual feedback enhances comprehension and facilitates smooth communication.
As the other user responds in English by speaking “YES, WE'RE HAVING A WONDERFUL DAY, THANK YOU FOR ASKING”, the base unit microphone of the portable base unit 404 captures the audio data signal. The control processor within the portable base unit 404 translates the English response back into German and relays an audio data signal comprising “JA, WIR HABEN EINEN WUNDERBAREN TAG, DANKE FÜR DIE NACHFRAGE” to the first user using the headset unit 402, thereby enabling seamless exchange of multilingual communication.
A flow diagram that illustrates this method of translation is shown in figure 5. At step 502 of the method, audio data signals are received from a headset unit in a portable base unit and processed. At step 504, the audio data signals received from the headset unit are translated from a first language to a second language. At step 506, the translation is broadcast using the one or more loudspeakers, and displayed on a visual display panel.
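The three steps of figure 5 can be condensed into a single function: receive and process (502), translate (504), then broadcast and display (506). The collaborators passed in (`translate`, `loudspeaker`, `display`) are hypothetical stand-ins supplied by the caller; this is a sketch of the method's shape, not the device implementation.

```python
def translate_and_present(audio_text, translate, loudspeaker, display):
    """Steps 502-506 of figure 5 as one pipeline."""
    processed = audio_text.strip()     # step 502: receive and process
    translation = translate(processed) # step 504: first -> second language
    loudspeaker(translation)           # step 506: broadcast the translation...
    display(translation)               # ...and display it simultaneously
    return translation

# Demonstration with list-backed stand-ins for the two output devices.
spoken, shown = [], []
out = translate_and_present(
    "  GUTEN TAG  ",
    translate=lambda t: {"GUTEN TAG": "GOOD DAY"}[t],  # toy translator
    loudspeaker=spoken.append,
    display=shown.append,
)
```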
Claims
1. A translation system, comprising: a headset unit (102, 304, 402) wearable by a user; a portable base unit (106, 206, 306, 404) comprising a visual display panel (108) configured to display text to the first user (302), and at least one loudspeaker (110); the base unit (106, 206, 306, 404) further comprising a control processor (112) located within the base unit (106, 206, 306, 404), the control processor (112) configured to receive and process audio data signals received from the headset unit (102, 304, 402), to send audio data signals to the headset unit and visual data signals to the visual display panel (108), and to transmit broadcast audio signals to the at least one loudspeaker (110); characterised in that the control processor (112) is further configured to translate audio data signals received from the headset unit (102, 304, 402) in a first language to a second language, and to broadcast the translation via the at least one loudspeaker (110) and display the translation on the visual display panel (108); the base unit (106, 206, 306, 404) and headset unit (102, 304, 402) configured to connect to one another automatically when an activation action occurs.
2. A translation system as claimed in claim 1 wherein the base unit further comprises a base unit microphone configured to receive audio from at least one other user located proximal to the first user, the control processor further configured to receive the audio, and, if this is in a second language, to translate this to the first language, and to transmit the translation to the headset unit.
3. A translation system as claimed in claim 1 or claim 2 wherein the base unit and headset unit are mutually configured so that the base unit charges the headset unit when not in use.
4. A translation system as claimed in any one of claims 1 to 3 wherein the headset unit comprises a pair of earbuds.
5. A translation system as claimed in any one of claims 1 to 4 wherein the headset unit and the base unit are configured to co-locate when not in use, and removal of the headset unit from the base unit comprises an activation action.
6. A translation system as claimed in claim 5 wherein the headset unit is configured to actively search for the base unit on removal from the base unit, and to transmit unique identification data so as to pair with the base unit once located.
7. A translation system as claimed in any one of claims 1 to 6 wherein connection between the headset unit and the base unit is via wireless transmission.
8. A translation system as claimed in claim 7 wherein the connection is an LTE connection.
9. A translation system as claimed in claim 7 or claim 8 wherein the wireless transmission is substantially within the 2.4 GHz frequency range.
10. A translation system as claimed in any one of claims 1 to 9 wherein the base unit is further configured for internet connection.
11. A translation system as claimed in any one of claims 1 to 10 wherein the control processor is configured so that the text displayed on the visual display panel comprises a transcription of the words of the audio signals received, and translation of these into the first or second language respectively.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2313393.7 | 2023-09-01 | ||
GBGB2313393.7A GB202313393D0 (en) | 2023-09-01 | 2023-09-01 | A portable and instantaneous translation system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2025046549A1 (en) | 2025-03-06 |
Family
ID=88296778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2024/058520 WO2025046549A1 (en) | 2023-09-01 | 2024-09-02 | A portable and instantaneous translation system |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB202313393D0 (en) |
WO (1) | WO2025046549A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWM590893U (en) * | 2019-10-29 | 2020-02-21 | 鋒霖科技股份有限公司 | Earphone storage box device |
US20200226327A1 (en) | 2019-01-11 | 2020-07-16 | Applications Technology (Apptek), Llc | System and method for direct speech translation system |
US20210296915A1 (en) * | 2020-03-20 | 2021-09-23 | Dongguan Xuntao Electronic Co., Ltd. | Wireless earphone device and method for using the same |
-
2023
- 2023-09-01 GB GBGB2313393.7A patent/GB202313393D0/en active Pending
-
2024
- 2024-09-02 WO PCT/IB2024/058520 patent/WO2025046549A1/en unknown
Non-Patent Citations (3)
Title |
---|
ANONYMOUS: "Fairphone True Wireless Stereo (TWS) Earbuds - Support", 29 May 2023 (2023-05-29), XP093233574, Retrieved from the Internet <URL:https://web.archive.org/web/20230529023937/https://support.fairphone.com/hc/en-us/articles/4407930364177-Fairphone-True-Wireless-Stereo-TWS-Earbuds> * |
ANONYMOUS: "True Wireless Earphones and Charging Case, Sound Shells, Item ref: 100.571UK, User Manual", 22 November 2021 (2021-11-22), XP093233560, Retrieved from the Internet <URL:https://web.archive.org/web/20211122030922if_/https://www.avsl.com/assets/manuals/1/0/100571UK.pdf> * |
MADE BY GOOGLE: "How to Translate Using Your Google Pixel Buds Pro", 28 July 2022 (2022-07-28), XP093233078, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=7ist0_mPSYM&t=1s> * |
Also Published As
Publication number | Publication date |
---|---|
GB202313393D0 (en) | 2023-10-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24786144 Country of ref document: EP Kind code of ref document: A1 |