WO2015036458A1 - Wireless headset - Google Patents
- Publication number
- WO2015036458A1 (application PCT/EP2014/069333, EP2014069333W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- wireless headset
- controller
- wireless
- remote server
- headset
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/04—Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/60—Substation equipment, e.g. for use by subscribers including speech amplifiers
- H04M1/6033—Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
- H04M1/6041—Portable telephones adapted for handsfree use
- H04M1/6058—Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
- H04M1/6066—Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1041—Mechanical or electronic switches, or control elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/22—Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2420/00—Details of connection covered by H04R, not provided for in its groups
- H04R2420/07—Applications of wireless loudspeakers or wireless microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/033—Headphones for stereophonic communication
Abstract
A wireless headset (2) is disclosed. The headset comprises a speaker (33), a touch pad (32) for tracking motion of a finger sliding thereon and a controller (31) configured to acquire motion information from the touch pad, to identify a gesture from the motion information and to provide audio feedback via the speaker which depends on the gesture.
Description
Wireless headset
Field of the invention
The present invention relates to a wireless headset.
Background
Headphones (which may also be referred to as "headsets") are well known.
Headsets can be used to play or stream audio content from a playback device, such as a portable media player, or a mobile communication device, such as a smart phone.
Headsets generally can be divided into two groups, namely wired and wireless headsets.
EP 2 362 619 A1 describes a mobile communication device connected by wire to a headset. The headset is provided with an in-line button controller which can be used to control playback of audio.
WO 2009/126614 A1 describes a headset which can be tethered to a device wirelessly. The device can connect wirelessly to a website or, through a wireless network adapter, an MP3 player or laptop. In this type of device, control is performed using the website or the tethered device.
Summary
According to a first aspect of the present invention there is provided a wireless headset (or "headphones") comprising a speaker, a touch pad for tracking motion of a finger sliding thereon and a controller. The controller is configured to acquire motion information from the touch pad, to identify a gesture from the motion information and to provide audio feedback via the speaker which depends on the gesture.
Thus, the user can play audio content (which may be locally stored or being streamed) using the headset and control the headset without the need for a tethered device, such as a smart phone or other form of mobile terminal.
The wireless headset may comprise ear-fitting headphones (or "earbuds"), on-ear headphones (which may be referred to as "supra-aural headphones") or over-the-ear headphones (which may be referred to as "circumaural headphones").
The wireless headset may be placed over the crown of the head. The wireless headset may be placed around the neck (sometimes referred to as a "neckband wearing style" headphone).
The wireless headset may be display-less.
The controller may be configured, in response to receiving a predetermined gesture, to allow further operation of the headset (i.e. "unlock" the headset).
The audio feedback may comprise a voice signal. The voice signal may confirm the
command or action, such as, for example, "Switching to Radio Station ABC", "Skipping to next track" and so on.
The wireless headset may further comprise another speaker, i.e. for stereo playback. The controller may be configured to provide the audio feedback via one speaker or both of the speakers.
The wireless headset may comprise a multi-part body which houses the speaker, the touch pad and the controller. The speaker may be housed in a part which is releasably attached to the rest of the body.
The wireless headset may comprise first and second main body portions, a connecting portion arranged to connect the first and second body portions, and first and second detachable headphone portions, releasably attached to the first and second main body portions. The controller may be configured to stream audio content from a remote server.
When the wireless headset receives audio content, it can receive the audio content without the need for a mobile communication device or portable media player. Thus, the wireless headset can receive audio content "directly" from the remote server, via network infrastructure. However, the wireless headset and a mobile communication device or portable media player can be paired and exchange audio content.
The wireless headset may comprise further storage, for example, in the form of flash memory. The controller may be configured to download and store the audio content in the storage.
The wireless headset may comprise a wireless communications network interface, such as a wireless local area network (WLAN) module. The controller may be configured to communicate, via a wireless communications network, with a remote server. The controller may be configured to cooperate with the remote server to authenticate the headset. The controller may be configured to retrieve content from the remote server or another remote server only when the headset is authenticated. The controller may perform authentication in response to the headset being unlocked and/or in response to user instruction.
The wireless headset may further comprise a positioning device, such as a Global Positioning System receiver, for determining the position of the headset. The wireless headset may comprise a wireless communications network interface, such as a wireless local area network (WLAN) module. The controller may be configured to transmit, via the wireless communications network, the position of the device to the remote server. The wireless headset may further comprise a short-range wireless communications network interface, such as a Near Field Communication (NFC) module or a Bluetooth module. The controller may be configured to communicate, via the short-range wireless communications network, with a mobile terminal, such as a smart phone.
The controller may be configured to transmit data relating to audio content to the mobile terminal. The controller may be configured to transmit and/or receive audio content to and/or from the mobile terminal. The touch pad may be a two-dimensional sensor. The touch pad may be a one-dimensional sensor (or "touch slider"). The touch pad may be a touch switch.
The touch pad may be a capacitive touch pad. The touch pad may be a resistive touch pad. According to a second aspect of the present invention there is provided a system comprising a wireless headset and a remote server configured to provide content to the wireless headset.
The remote server may be configured to authenticate the wireless headset and to provide the content to the wireless headset in response to successful authentication.
The remote server may be a first remote server and the system may further comprise a second, different remote server which serves the content. The second remote server may transmit the content to the first server and the first server may transmit the content to the wireless headset. The second server may encrypt the content using a first encryption process. The first server may encrypt the content using a second encryption process.
According to a third aspect of the present invention there is provided a method comprising acquiring motion information from a touch pad, identifying a gesture from the motion information and providing audio feedback via a speaker which depends on the gesture.
According to a fourth aspect of the present invention there is provided a computer program comprising instructions which, when executed by at least one processor, causes the at least one processor to perform the method.
According to a fifth aspect of the present invention there is provided a computer readable medium storing thereon the computer program.
Brief Description of the Drawings
Certain embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a schematic block diagram of an audio system which includes a wireless headset and a server providing audio content;
Figure 2 is a perspective view of a wireless headset;
Figure 3 is a schematic block diagram of a wireless headset including a controller;
Figure 4 is a schematic block diagram of the software components implemented by the controller shown in Figure 3;
Figure 5 is a schematic block diagram of a headset retrieving audio content from a database server;
Figure 6 is a perspective view of a user controlling a headset via a touch pad; and
Figure 7 is a process flow diagram of a method of controlling the headset in response to a user gesture.
Detailed Description of Certain Embodiments
Referring to Figure 1, an audio system 1 is shown which comprises a wireless headset 2 (which may also be referred to as "wireless headphones") that can establish a wireless connection 3, for example using IEEE 802.11 protocols, with a wireless router 4 so as to be connected to a network 5, such as the Internet. The wireless headset 2 can download or stream audio content 6, 7, such as music, from servers 8, 9 serving content from respective databases 10, 11 via the network 5. A first server 8 is a data management server that processes and hosts user data 6. The second server 9 is a third-party content server. The data management server 8 acts as a hub for third-party audio content 7. The wireless headset 2 can be used with a peer-to-peer music streaming service, such as Spotify. For example, the second server 9 may be a central server, such as that described in US 2009/0019174 A1 (which is incorporated herein by reference).
The wireless headset 2 can also establish a short-range wireless connection 12, for example using a Bluetooth protocol, with a mobile device 13, such as a portable media player or a mobile communications device, such as a smart phone. As will be explained in more detail later, although the wireless headset 2 can be paired with a mobile device 13 and can exchange audio content, the mobile device 13 is not necessarily required to obtain the audio content 6, 7 from the servers 8, 9.
Referring to Figure 2, the wireless headset 2 is shown in more detail.
The headset 2 comprises first and second main body portions 21, 22 intended to sit on either side of a user's head (not shown), a connecting portion 23 which is arranged to connect (structurally and electronically) the first and second body portions 21, 22, and first and second detachable headphone portions 24, 25. The headphone portions 24, 25 are provided with inwardly-directed earpieces 26, 27 for sitting in the user's ears. A blood pressure sensor 29 is located on at least one of the portions 24, 25.
One or both of the body portions 21, 22 can be detached from the connecting portion 23, for example, to allow a body portion 21, 22 to be connected to a power supply (not shown) for recharging.
The body portions 21, 22 may include respective permanent magnets (not shown) which allow the body portions 21, 22 to be releasably fastened together. This enables the headset 2 to form a loop which can be attached around the user's neck or wrist when the headset 2 is not in use.
As shown in Figure 2, the main body portions 21, 22 are generally cylindrical in shape. The main body portions 21, 22 house most of the headset's circuitry 30 (Figure 3), such as the controller 31 (Figure 3). An outwardly facing surface 28 of the body portion 21 provides a surface for a touch pad 32 (Figure 3).
The headphone portions 24, 25 house speaker drivers (or "speakers") 33, 34 (Figure 3).
The headphone portions 24, 25 are releasably connected to the first and second main body portions 21, 22 using respective pairs of jack plug and jack socket (not shown). This can allow different earphones 26, 27 to be used in different environments.
Referring to Figure 3, headset circuitry 30 includes a controller 31, a touch pad 32 for capturing user gestures which can be used to control the headset, speakers 33, 34, a microphone 35 for voice input, (volatile) memory 36 and storage 37, for example, in the form of (non-volatile) flash memory.
The touch pad 32 takes the form of a two-dimensional sensor (sometimes referred to as an "x-y touch pad"). The touch pad 32 is able to sense proximity of a user's finger, for example, capacitively. The touch pad 32 is not provided as part of a touch panel display, i.e. the touch pad is a display-less touch pad 32. The headset 2 need not be provided with any display or screen. The headset 2 may include simple indicators, such as, for example, light-emitting diode(s) for indicating headset operation, content access, device pairing and/or other similar functions.
The headset circuitry 30 includes a set of wireless interfaces 38. For example, the wireless interfaces 38 can include a Near Field Communication (NFC) module 39, a WiFi (i.e. IEEE 802.11) module 40 and a Bluetooth (e.g. Bluetooth V4) module 41. Optionally, the wireless interfaces 38 can include a mobile network module 42. Each module 39, 40, 41, 42 is provided with a respective antenna 43, 44, 45, 46.
The wireless interfaces 38 can be used for one or more purposes.
An interface 38 can be used to establish a connection 12 (Figure 1) with a mobile device 13 (Figure 1) and allow transmission of audio and/or data signals between the headset 2 and the device 13 (Figure 1).
An interface 38 can be used to establish a connection with the data management server 8, via a wireless router 4, to allow transmission of audio content and/or data signals between the headset 2 and the data management server 8.
The controller 31 and other circuits are powered by a rechargeable battery 47 via a power manager 48.
Optionally, a Global Positioning System (GPS) receiver 49 and antenna 50 may be provided.
Optionally, a motion sensor 52 may be provided, capable of measuring gyroscopic forces, acceleration and direction of motion. Referring to Figure 4, the headset system architecture 61 is shown. The system includes application software 62, a kernel 63 and hardware 64. The application software 62 includes a plurality of executable modules 65, 67, 68, 69, 70, 71, 72, 73.
The application software 62 includes a security module 65 which implements security policies both for user access to the headset and for headset access to the data management server 8 (Figure 1). The application software 62 includes a user interface module 66 which includes a local user interface module 67, which handles local user interactions and generates feedback tones, audio prompts and haptic feedback for the local user, a remote user interface module 68, which handles remote user interactions, and a voice recognition module 73, which handles local user voice inputs. The application software 62 also includes an audio output module 69 which provides playback of audio content and prompts, a data management server access module 70 which handles interactions with the data management server 8 (Figure 1), a status and statistics module 71 which collects, stores and prepares upload, status and statistical information and generates user alerts, a database management module 72 which manages data, such as audio content data and other non-audio content data (for example, playlists, preferences etc.) held in storage 37 (Figure 3), and an analysis module 74 which analyses information collected from the blood pressure sensor 29 about the wearer, for example heart rate, for distribution to the database management module 72.
The headset 2 can be used as a portable media player to playback audio content 7 (Figure 1), e.g. music, downloaded from the content server 9 (Figure 1), and/or to stream live audio content 7 (Figure 1) from the content server 9 (Figure 1).
The audio content 7 (Figure 1) need not be downloaded first to a mobile device 13 (Figure 1) and then transferred to the headset 2. Thus, the headset 2 can be used independently.
Figure 5 illustrates communication between the content server 9 and the headset 2 which employs encryption.
Referring to Figures 1, 2,3, 4 and 5, when a user starts to use the headset 2, for example, for the first time after the headset 2 is switched on or after a period of inactivity, the user is required to unlock the headset 2 to permit further use. As will be explained in more detail later, unlocking the headset 2 involves using a predetermined action, such as a gesture, voice command or code.
A security module 81 in the database management server 8 sends an authentication request (not shown) to the headset 2. The headset 2 returns an authentication token 82.
If the token 82 is valid, then the headset 2 is authenticated and the user is free to download and/or stream content 7 via the database management server 8. If the token 82 is not validated by the security module 81 in the database management server 8, then an error message (not shown) or error message identifier (not shown) is transmitted to the headset 2. The audio output module 69 outputs an audible error message. The user may be asked to input a predetermined security gesture/action. This can be input using the touch pad 32 or microphone 35 to re-authenticate the headset 2.
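By way of illustration only, the following Python sketch shows one way the token exchange above could work, assuming a shared secret provisioned on the headset and an HMAC-based token; the secret, the HMAC construction and the function names are assumptions and do not appear in the description.

```python
import hashlib
import hmac
import os

# Hypothetical shared secret provisioned on the headset; the description does
# not say how the authentication token 82 is derived, so an HMAC over a
# server-issued challenge is assumed purely for illustration.
HEADSET_SECRET = b"example-provisioned-secret"

def make_auth_token(challenge: bytes, secret: bytes = HEADSET_SECRET) -> bytes:
    """Headset 2: answer the server's authentication request with a token."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def validate_token(challenge: bytes, token: bytes, secret: bytes = HEADSET_SECRET) -> bool:
    """Security module 81 on the server: check the returned token."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

# Example exchange
challenge = os.urandom(16)                # server sends an authentication request
token = make_auth_token(challenge)        # headset returns authentication token 82
assert validate_token(challenge, token)   # headset is authenticated
```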
The user may identify content 7 held by the content database 11 which he or she wishes to consume. The user instructs the headset 2 to send a request 83 to the database management server 8 to download or stream the content 7. The request 83 includes a content identifier to identify specific content, such as a music track, or a source of content, such as a radio station.
The database management server 8 passes a request 84 (which may be a copy of the request 83) to the content server 9.
The content server 9 retrieves the content 7. A security module 85 in the content server 9 encrypts the content 7 with a first key and transmits the encrypted content 7 to the database management server 8.
The security module 81 in the database management server 8 encrypts the content with a second key and transmits the doubly-encrypted content to the headset 2 via the network 5.
In the headset 2, the security module 65 decrypts the content 7 and passes the decrypted content to the audio output module 69 for playback by the speakers 33, 34.
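The two-stage encryption described above can be sketched as follows (Python, using the third-party cryptography package); the choice of Fernet symmetric keys and the key-distribution arrangement are assumptions for illustration, as the description names no particular cipher.

```python
from cryptography.fernet import Fernet

# Illustrative keys only: the description names no cipher, so Fernet symmetric
# keys stand in for the "first key" (content server 9) and the "second key"
# (data management server 8). How the headset obtains both keys is assumed.
first_key = Fernet.generate_key()
second_key = Fernet.generate_key()

audio_content = b"...audio frames..."

# Content server 9: encrypt the content 7 with the first key
once_encrypted = Fernet(first_key).encrypt(audio_content)

# Data management server 8: encrypt again with the second key
twice_encrypted = Fernet(second_key).encrypt(once_encrypted)

# Headset 2: security module 65 removes the two layers in reverse order
recovered = Fernet(first_key).decrypt(Fernet(second_key).decrypt(twice_encrypted))
assert recovered == audio_content
```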
Alternatively, or additionally, the data management server 8 may recommend or provide content to the user profile associated with the headset 2. The content may be targeted to a subset of users or to a specific user. The targeting could be based upon a number of generic factors related to the user profile, for example their gender or age, or could be based on a factor specific to the user at that moment in time, for example their location, mood, or action. The targeting could be based on a combination of the aforementioned factors. Content may also be targeted to the user based upon the output of the blood pressure sensor 29. For example, content may be targeted to a user based on their inferred action: if the blood pressure sensor 29 and analysis module 74 determine that the wearer is running or in the gym, a more "upbeat" playlist can be provided to the user; if the user's location has been static for a period and the user's heart rate has dropped, a more "relaxing" playlist can be provided to the user.
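A minimal sketch of this activity-based playlist selection might look like the following; the heart-rate thresholds and the is_moving flag are illustrative assumptions, since the description gives only qualitative examples (running or in the gym versus stationary with a lowered heart rate).

```python
def choose_playlist(heart_rate_bpm: float, is_moving: bool) -> str:
    """Pick a playlist style from the wearer's inferred activity.

    The thresholds are illustrative assumptions; the description only says
    that exercise suggests an "upbeat" playlist and that a stationary user
    with a lowered heart rate suggests a "relaxing" one.
    """
    if is_moving and heart_rate_bpm > 120:
        return "upbeat"
    if not is_moving and heart_rate_bpm < 70:
        return "relaxing"
    return "default"

print(choose_playlist(heart_rate_bpm=140, is_moving=True))   # upbeat
print(choose_playlist(heart_rate_bpm=62, is_moving=False))   # relaxing
```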
Figure 6 illustrates user control of the headset 2 (Figure 2) using gestures, such as tap, double tap, single forward swipe, and so on.
Referring to Figures 1, 3, 4, 6 and 7, a method of controlling the headset 2 using gestures will now be described in more detail.
The controller 31 repeatedly polls the touch pad 32 to identify any user input (steps S1 and S2). If the user has touched the touch pad 32, the controller 31 identifies the gesture (step S3). The controller 31 determines whether the gesture corresponds to one of a fixed number of gestures (step S4). If the controller 31 identifies the gesture, then it retrieves message data, for example data for speech synthesis, from storage 37 (step S5). The controller 31 outputs a message, for example to a voice synthesiser, which is output as an audible message via one or both speakers 33, 34 (step S6). The controller 31 executes the command (step S7). The controller 31 may request that the user confirm the action, wait for a gesture to confirm the action and, in response to receiving confirmation within a given period of time, execute the command.
If the controller 31 does not identify the gesture, or it identifies the gesture but the command is invalid, then it retrieves error message data from storage 37 (step S8) and outputs the message via speaker(s) 33, 34 (step S9).
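The gesture-handling loop of Figure 7 (steps S1 to S10) could be sketched as follows; the touch_pad, speaker and execute interfaces and the gesture-to-command table are assumed for illustration and are not defined in the description.

```python
import time

# Hypothetical gesture-to-command table; the actual mapping is not given.
COMMANDS = {
    "single_tap": ("Playing", "play"),
    "double_tap": ("Pausing", "pause"),
    "swipe_forward": ("Skipping to next track", "next_track"),
}

def control_loop(touch_pad, speaker, execute, powered_on):
    """Sketch of the loop in Figure 7.

    touch_pad.read_gesture(), speaker.say() and execute() are assumed
    interfaces, not part of the description.
    """
    while powered_on():                            # loop until switched off (step S10)
        gesture = touch_pad.read_gesture()         # poll for user input (steps S1, S2)
        if gesture is None:
            time.sleep(0.05)
            continue
        entry = COMMANDS.get(gesture)              # identify the gesture (steps S3, S4)
        if entry is None:
            speaker.say("Gesture not recognised")  # error feedback (steps S8, S9)
            continue
        message, command = entry                   # retrieve message data (step S5)
        speaker.say(message)                       # audible confirmation (step S6)
        execute(command)                           # execute the command (step S7)
```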
The process is ongoing until, for example, the headset 2 is switched off (step S10).
As explained earlier, when the user first uses the headset 2, following a period of inactivity, on request from the security module 81, or when they wish to reset the security settings, a user can set a security gesture which can be used to permit further operation of the headset 2.
The gesture is input using touch pad 32 and is encoded into a gesture descriptor (not shown). The gesture descriptor is transmitted to the security module 81 and is stored for subsequent verification of user identity.
In some embodiments, the gesture may not be input via the touch pad and instead be a voice activated action input via the microphone 35. The voice activated action may comprise, for example, a voice command or key phrase. The voice action can be predetermined and set during manufacture. Alternatively, the voice input can be set by the user from a pre-set selection of voice actions, or, the voice input can be set as an independently defined user voice action.
Once the user has input the voice action, the voice input is encoded and sent to the voice recognition module 73. The voice recognition module compares the voice input with the voice action set as the security gesture and stored in the security module 65. The comparison may comprise matching key content of the voice action. Alternatively, the comparison may require a complete match of the content of the required voice action. Once the result of the comparison has been determined, the voice recognition module sends the result of the comparison to the security module 65. If the security module determines that the voice input matches the voice action for a user profile associated with the headset 2, then the user is allowed to further interact with the headset 2.
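A minimal sketch of the two comparison policies mentioned above (key-content match versus complete match) is given below; the word-level tokenisation is a simplification and is not specified in the description.

```python
def matches_voice_action(voice_input: str, stored_action: str,
                         require_full_match: bool = False) -> bool:
    """Compare a transcribed voice input with the stored security voice action.

    Word-level tokenisation is a simplification assumed for illustration.
    """
    input_words = voice_input.lower().split()
    action_words = stored_action.lower().split()
    if require_full_match:
        # complete match of the content of the required voice action
        return input_words == action_words
    # key-content match: every word of the stored action appears in the input
    return all(word in input_words for word in action_words)

print(matches_voice_action("please unlock my headset now", "unlock headset"))        # True
print(matches_voice_action("please unlock my headset now", "unlock headset", True))  # False
```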
In some embodiments, the security gesture/action further comprises a secondary step of identification and authentication of the user through monitoring their blood pressure. A sensor capable of measuring blood pressure to obtain an ECG pattern is located on the headset. ECG patterns based on these properties are unique to each user. In a preferred embodiment, a photoplethysmogram (PPG) sensor 29 is located on the headphone portion 24, 25 of the headset 2. The PPG sensor 29 shines an infra-red beam or green LED light beam onto the user's ear. The transmission or reflection of this light beam is measured by a photodiode in the PPG sensor to infer properties of the wearer's blood pressure (the person skilled in the art will be aware that measurement of other properties is also possible) to develop an ECG pattern. The vessels of the ear are a preferential location for the sensor to obtain the measurements. The PPG sensor 29 then transmits the results of the PPG detection to the security module 65. The security module 65 compares the PPG sensor 29 results with known user profiles associated with the headset 2.
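The comparison of the PPG-derived pattern against stored user profiles could be sketched as follows; the normalised-correlation measure, the threshold value and the assumption of equal-length patterns are illustrative only, as the description does not specify how patterns are matched.

```python
from typing import Dict, Optional
import numpy as np

def identify_wearer(pattern: np.ndarray, profiles: Dict[str, np.ndarray],
                    threshold: float = 0.9) -> Optional[str]:
    """Match a PPG-derived pattern against stored user profiles.

    Normalised correlation is a stand-in similarity measure and the threshold
    is arbitrary; patterns are assumed to be resampled to a common length.
    """
    best_user, best_score = None, threshold
    x = (pattern - pattern.mean()) / (pattern.std() + 1e-9)
    for user, template in profiles.items():
        t = (template - template.mean()) / (template.std() + 1e-9)
        score = float(np.dot(x, t)) / len(x)
        if score > best_score:
            best_user, best_score = user, score
    return best_user   # None if no stored profile is similar enough
```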
Alternatively, a sensor configured to measure blood pressure and obtain an ECG pattern could be a separate device. This device may be located on another part of the user, for example the user's finger. The sensor can be configured to wirelessly communicate with said headset 2 to transmit data relating to the user's ECG pattern.
If the security module determines that the sensor results match a user profile associated with the headset 2, then the user is allowed to further interact with the headset 2.
It should be noted that, for all embodiments, it is possible for different user profiles to be contained on the headset. Each user profile may have the same security gesture to identify the user. Alternatively, the user gesture/action may be different for each user. In addition, for all embodiments, once the user has set the security gesture, the security gesture is required to permit further interaction with the headset 2. If the user enters an incorrect gesture, they may be prompted to re-try. Up to 5 incorrect entries may be permitted before the headset 2 is temporarily blocked and requires a PIN or a command issued from the data management server 8 to unlock the device. The number of incorrect entries permitted may be more than 5. The number of incorrect entries permitted may be less than 5.
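A sketch of this retry limit follows; the class and method names are assumptions, and the server-issued unlock command is reduced to a simple reset for illustration.

```python
class GestureLock:
    """Sketch of the retry limit: block after too many incorrect entries."""

    def __init__(self, max_attempts: int = 5):
        self.max_attempts = max_attempts
        self.failed = 0
        self.blocked = False

    def try_unlock(self, gesture: str, stored_gesture: str) -> bool:
        if self.blocked:
            return False                 # a PIN or server command is now required
        if gesture == stored_gesture:
            self.failed = 0
            return True                  # further interaction is permitted
        self.failed += 1
        if self.failed >= self.max_attempts:
            self.blocked = True          # temporarily block the headset
        return False

    def reset_from_server(self) -> None:
        """Stand-in for an unlock command issued by the data management server 8."""
        self.blocked = False
        self.failed = 0
```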
In some embodiments, the gesture/action can also be input on a paired mobile device 13 and transmitted, via the headset 2, to the security module 81.
It will be appreciated that many modifications may be made to the embodiments hereinbefore described. For example, the touch pad 32 may be disposed in one of the headphone portions 24, 25. In particular, the touch pad may be located at a distal end of the headphone portions 24, 25 which is adjacent to the user's ear. This can make the touch pad 32 easier to locate.
More than one touch pad 32 may be provided, for example, one touch pad on each body portion 21, 22.
The touch pad 32 need not necessarily detect motion in two dimensions, i.e. it need not necessarily be an x-y touch pad. The touch pad 32 may take the form of a touch slider (for example, a linear touch slider or an arcuate track slider) for detecting motion along a line. Thus, the slider can be used to detect, for example, the direction and speed of a continuous stroke along the slider or a pattern of taps. The touch pad 32 may take the form of a touch switch, for example, which can detect simple gestures such as a single tap and/or multiple taps. It can detect taps of different durations (e.g. short or long) and multiple taps of different patterns (e.g. double short tap and double long tap).
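For the touch-slider variant, direction and speed of a continuous stroke might be inferred from timestamped position samples as sketched below; the sample format and the jitter threshold are assumptions, since the description only states that direction and speed can be detected.

```python
def classify_stroke(samples):
    """Infer direction and speed of a stroke on a one-dimensional touch slider.

    `samples` is a list of (timestamp_seconds, position) pairs, an assumed
    driver output format; positions run from 0.0 to 1.0 along the slider.
    """
    if len(samples) < 2:
        return None
    (t0, p0), (t1, p1) = samples[0], samples[-1]
    distance = p1 - p0
    duration = t1 - t0
    if duration <= 0 or abs(distance) < 0.05:      # treat as a tap or jitter
        return None
    direction = "forward" if distance > 0 else "backward"
    speed = abs(distance) / duration               # slider lengths per second
    return direction, speed

# A quick forward stroke across most of the slider in a tenth of a second
print(classify_stroke([(0.00, 0.1), (0.05, 0.4), (0.10, 0.8)]))
```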
The headset can take the form of over-the-ear headphones.
The main body portions 21, 22 need not be cylindrical. Instead, the main body portions 21, 22 can be rectangular.
The touch pad 32 need not be a capacitive touch pad. The touch pad 32 may be a resistive touch pad.
Claims
1. A wireless headset comprising:
a speaker;
a microphone;
a touch pad for tracking motion of a finger sliding thereon;
a sensor configured to obtain a blood pressure measurement of a wearer and derive an ECG pattern therefrom; and
a controller configured to use said ECG pattern to identify a wearer of the headset.
2. A wireless headset according to claim 1, wherein said identification of the user further comprises comparing measurements of a blood pressure sensor to previously stored blood pressure measurements of user profiles.
3. A wireless headset according to claim 1 or 2, wherein the controller is configured to stream audio content from a remote server in response to said identification of said user.
4. A wireless headset according to any preceding claim, wherein the sensor is a PPG sensor.
5. A wireless headset according to any preceding claim, wherein the headset further comprises a motion sensor.
6. A wireless headset comprising:
a speaker;
a touch pad for tracking motion of a finger sliding thereon;
a controller configured to acquire motion information from the touch pad, to identify a gesture from the motion information and to provide audio feedback via the speaker which depends on the gesture.
7. A wireless headset according to claim 6, wherein the controller is configured, in response to receiving a predetermined gesture, to allow further operation of the headset.
8. A wireless headset according to claim 6 or 7, wherein the audio feedback comprises a voice signal.
9. A wireless headset according to any preceding claim, further comprising: another speaker;
wherein the controller is configured to provide the audio feedback via one or both of the speakers.
10. A wireless headset according to any preceding claim, comprising a multi-part body which houses the speaker, the touch pad and the controller.
11. A wireless headset according to claim 10, wherein the speaker is housed in a part which is releasably attached to the rest of the body.
12. A wireless headset according to any preceding claim, wherein the controller is configured to stream audio content from a remote server.
13. A wireless headset according to any preceding claim, further comprising: storage;
wherein the controller is configured to download and store the audio content in the storage.
14. A wireless headset according to any preceding claim, further comprising:
a wireless communications network interface; wherein the controller is configured to communicate, via a wireless interface, with a remote server.
15. A wireless headset according to claim 14, wherein the controller is configured to cooperate with the remote server to authenticate the headset.
16. A wireless headset according to claim 15, wherein the controller is configured to retrieve content from the remote server or another remote server only when the headset is authenticated.
17. A wireless headset according to any preceding claim, further comprising: a positioning device for determining position of the headset.
18. A wireless headset according to claim 17, further comprising: a wireless communications network interface;
wherein the controller is configured to transmit, via a wireless interface, the position of the headset to the remote server.
19. A wireless headset according to any preceding claim, further comprising: a short-range wireless communications network interface;
wherein the controller is configured to communicate, via a short-range wireless communications network, with a mobile terminal.
20. A wireless headset according to claim 19, wherein the controller is configured to transmit data relating to audio content to the mobile terminal.
21. A wireless headset according to claim 19 or 20, wherein the controller is configured to transmit and/or receive audio content to and/or from the mobile terminal.
22. A system comprising:
a wireless headset according to any preceding claim; and
a remote server configured to provide content to the wireless headset.
23. A system according to claim 22, wherein the remote server is configured to authenticate the wireless headset and to provide the content to the wireless headset in response to successful authentication.
24. A system according to claim 23, when claim 22 is dependent on any of claims 1 to 4, wherein the remote server is configured to provide content to the wireless headset on the basis of said identification of said user through said PPG sensor.
25. A system according to claim 24, when claim 22 is dependent on claim 5, wherein the remote server is configured to provide content to the wireless headset on the basis of said identification of said user through said PPG sensor and motion data from the motion sensor.
26. A system according to any of claims 22 to 25, wherein the remote server is a first remote server and the system further comprises:
a second, different remote server which serves the content;
wherein the second remote server transmits the content to the first remote server and wherein the first remote server transmits the content to the wireless headset.
27. A method, comprising:
acquiring motion information from a touch pad;
identifying a gesture from the motion information;
providing audio feedback via a speaker which depends on the gesture.
28. A computer program comprising instructions which, when executed by at least one processor, cause the at least one processor to perform the method of claim 27.
29. A computer readable medium storing thereon a computer program according to claim 28.
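By way of a non-limiting illustration of the method of claim 27, the following sketch acquires motion information from a touch pad, identifies a gesture and provides gesture-dependent audio feedback; the gesture rules, the spoken phrases and the interfaces are assumptions for illustration only.

```python
# Illustrative sketch of the method of claim 27: acquire motion information from a
# touch pad, identify a gesture, and provide gesture-dependent audio feedback.
# The gesture rules, phrases and interfaces below are assumptions, not the claimed implementation.
def identify_gesture(motion):
    """motion: list of (x, y) samples from the touch pad; returns a gesture label."""
    if len(motion) < 2:
        return "tap"
    dx = motion[-1][0] - motion[0][0]
    return "swipe_forward" if dx > 0 else "swipe_backward"


FEEDBACK = {
    "tap": "Play or pause",
    "swipe_forward": "Next track",
    "swipe_backward": "Previous track",
}


def on_touch(motion, play_voice):
    """play_voice: callable that renders a voice prompt through the speaker (cf. claim 8)."""
    play_voice(FEEDBACK[identify_gesture(motion)])


# Example: on_touch([(0, 0), (5, 0)], print) prints "Next track"
```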
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1316109.6A GB2518008B (en) | 2013-09-10 | 2013-09-10 | Wireless Headset |
GB1316109.6 | 2013-09-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015036458A1 true WO2015036458A1 (en) | 2015-03-19 |
Family
ID=49487005
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2014/069333 WO2015036458A1 (en) | 2013-09-10 | 2014-09-10 | Wireless headset |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2518008B (en) |
WO (1) | WO2015036458A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104768089A (en) * | 2015-03-30 | 2015-07-08 | 深圳市莱瑞尔科技有限公司 | Bluetooth earphone |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001283577A (en) * | 2000-03-31 | 2001-10-12 | Matsushita Electric Ind Co Ltd | Recorded information reproducing device and its remote control device |
US7107010B2 (en) * | 2003-04-16 | 2006-09-12 | Nokia Corporation | Short-range radio terminal adapted for data streaming and real time services |
EP1905221A1 (en) * | 2005-07-21 | 2008-04-02 | Southwing S.L. | Hands-free device producing a spoken prompt with spatial effect |
US8335312B2 (en) * | 2006-10-02 | 2012-12-18 | Plantronics, Inc. | Donned and doffed headset state detection |
US20080130910A1 (en) * | 2006-11-30 | 2008-06-05 | Motorola, Inc. | Gestural user interface devices and methods for an accessory to a wireless communication device |
US20080146290A1 (en) * | 2006-12-18 | 2008-06-19 | Motorola, Inc. | Changing a mute state of a voice call from a bluetooth headset |
GB2445803A (en) * | 2007-01-19 | 2008-07-23 | Southwing S L | Personal Communications Systems |
US8885851B2 (en) * | 2008-02-05 | 2014-11-11 | Sony Corporation | Portable device that performs an action in response to magnitude of force, method of operating the portable device, and computer program |
JP2012079082A (en) * | 2010-10-01 | 2012-04-19 | Sony Corp | Input device |
US20120196540A1 (en) * | 2011-02-02 | 2012-08-02 | Cisco Technology, Inc. | Method and apparatus for a bluetooth-enabled headset with a multitouch interface |
2013
- 2013-09-10: GB GB1316109.6A patent GB2518008B (en), active
2014
- 2014-09-10: WO PCT/EP2014/069333 patent WO2015036458A1 (en), application filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1475035A1 (en) * | 2003-05-09 | 2004-11-10 | Samsung Electronics Co., Ltd. | Ear type apparatus for measuring a bio signal and measuring method therefor |
EP2293589A1 (en) * | 2009-08-28 | 2011-03-09 | Nxp B.V. | Electronic circuit for a headset and method thereof |
EP2540221A1 (en) * | 2011-06-29 | 2013-01-02 | LG Electronics Inc. | Mobile terminal with at least a body sensor |
Non-Patent Citations (1)
Title |
---|
P SASIKALA ET AL: "Identification of Individuals using Electrocardiogram", IJCSNS INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY VOL.10 NO. 12, 20 December 2010 (2010-12-20), pages 147 - 153, XP055157278, Retrieved from the Internet <URL:http://paper.ijcsns.org/07_book/201012/20101220.pdf> [retrieved on 20141208] * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016182120A1 (en) * | 2015-05-11 | 2016-11-17 | 송창호 | Neckband-type bluetooth earphone for mobile phone |
CN109511045A (en) * | 2015-12-07 | 2019-03-22 | 京东方科技集团股份有限公司 | Earphone control device, earphone, wearable device and headset control method |
KR20170082022A (en) * | 2016-01-05 | 2017-07-13 | 삼성전자주식회사 | Audio output apparatus and method for operating audio output apparatus |
KR102501759B1 (en) * | 2016-01-05 | 2023-02-20 | 삼성전자주식회사 | Audio output apparatus and method for operating audio output apparatus |
CN108429958A (en) * | 2018-05-02 | 2018-08-21 | 歌尔股份有限公司 | Line control earphone |
US11761830B2 (en) | 2019-01-15 | 2023-09-19 | Pixart Imaging Inc. | Earphone with force sensor |
CN113364914A (en) * | 2020-03-05 | 2021-09-07 | 阿里巴巴集团控股有限公司 | Control device, mobile terminal, electronic apparatus, and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
GB2518008A (en) | 2015-03-11 |
GB201316109D0 (en) | 2013-10-23 |
GB2518008B (en) | 2018-03-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015036458A1 (en) | Wireless headset | |
CN105794244B (en) | Trust group extending user certification across smart machine | |
US11082504B2 (en) | Networked device authentication, pairing and resource sharing | |
US10575086B2 (en) | System and method for sharing wireless earpieces | |
US9048923B2 (en) | System and method for wireless device pairing | |
CN109564777B (en) | Wearable computer with fitness machine connectivity for improved activity monitoring | |
EP2998822B1 (en) | Mobile communication device using a plurality of wearable devices in parallel | |
US9215304B2 (en) | Data store and enhanced features for headset of portable media device | |
US20140310764A1 (en) | Method and apparatus for providing user authentication and identification based on gestures | |
WO2017186100A1 (en) | Identity authentication method, system and device | |
US10869194B2 (en) | Devices, systems, and processes for authenticating devices | |
US9444565B1 (en) | Wireless audio communications device, system and method | |
TW200803253A (en) | Distance-based association | |
US9219807B1 (en) | Wireless audio communications device, system and method | |
US10276127B2 (en) | Identifying users from screen touch events | |
US10511390B2 (en) | Data sharing using body coupled communication | |
US20180356881A1 (en) | Pairing of wireless earpiece to phone or other device | |
WO2023196854A1 (en) | Integrated headset controller storage and connection | |
CN108882112B (en) | Audio playback control method, device, storage medium and terminal device | |
CN112464196B (en) | Bluetooth headset connection method, device and storage medium | |
US20250148464A1 (en) | System, method, and apparatus for downloading content directly into a wearable device | |
US20240330425A1 (en) | Information processing system, information processing apparatus and method, storage case and information processing method, and program | |
JP2015195425A (en) | Transmitter and communication control method | |
TW201716123A (en) | Intelligent starting system of interactive game and method thereof |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14766448; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 14766448; Country of ref document: EP; Kind code of ref document: A1 |