Detailed description of embodiments
The present invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described here are intended to explain the present invention rather than to limit it. It should also be noted that, for ease of description, the accompanying drawings show only the parts related to the present invention rather than the entire structure.
Fig. 1 is a flowchart of a speech playing method provided by an embodiment of the present application. The method is applicable to automatic voice playing scenarios and may be executed by the wearable device provided by the embodiments of the present application; the voice playing apparatus of the wearable device may be implemented in software and/or hardware. As shown in Fig. 1, the technical solution provided by this embodiment may include the following steps.
Step S101: when an automatic voice playing instruction is detected, the automatic voice playing mode is enabled.
Here, the voice play instruction instructs the wearable device to enable the automatic voice playing mode. The automatic voice playing instruction may be triggered by user speech or by a non-voice action; non-voice triggers include control operations such as the user shaking, touching, or pressing the wearable device. After the wearable device receives the triggering speech or triggering action, it converts the corresponding speech or operation into an automatic voice playing instruction. Meanwhile, a detection program of the wearable device monitors for this instruction in real time and, upon detecting it, enables the automatic voice playing mode, which provides high ease of use.
Example one: before the automatic voice playing instruction is detected, the method further includes generating the automatic voice playing instruction according to received acoustic information. By performing semantic recognition on the user's speech, the wearable device converts the user's acoustic information into an automatic voice playing instruction, so that the voice playing mode can be enabled automatically. The wearable device has an automatic speech noise-reduction function, so that even where ambient noise is high, such as in public places like scenic spots, exhibition centers, museums, and concert halls, the user's speech can still be recognized accurately.
Example two: before the automatic voice playing instruction is detected, the method further includes obtaining sensing data collected by sensors, where the sensing data includes the angle and direction of a shaking operation performed by the user, the touch position, force, and count of a touching operation, or the force and duration of a pressing operation; and detecting whether the sensing data corresponds to any system operation of the wearable device other than enabling the automatic voice playing mode. If there is no such correspondence, i.e., the user's current operation does not conflict with other system operations of the wearable device, the sensing data generated by the user's current operation is converted into an automatic voice playing instruction so as to automatically enable the voice playing mode.
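Example two can be pictured as a small conflict check before conversion. The sketch below is illustrative only: the gesture names and the table of reserved system operations are invented stand-ins for whatever operations an actual device defines, not details taken from this application.

```python
# Illustrative sketch of Example two: a non-voice trigger is converted into an
# automatic voice playing instruction only when the sensed gesture does not
# already correspond to another system operation. All names are assumptions.

RESERVED_OPERATIONS = {
    ("press", "long"): "power_menu",       # hypothetical existing operation
    ("touch", "double"): "volume_toggle",  # hypothetical existing operation
}

def to_play_instruction(gesture, detail):
    """Return an instruction record if the gesture is unassigned, else None."""
    if (gesture, detail) in RESERVED_OPERATIONS:
        return None  # conflicts with an existing system operation
    return {"instruction": "OPEN_AUTO_VOICE_PLAY", "trigger": (gesture, detail)}
```

Under these assumptions, a shake that is not bound to any other operation would convert, while a long press reserved by the system would not.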
Step S102: trigger data associated with voice playing is obtained, and it is judged whether the trigger data meets a voice playing condition.
The trigger data is judgment data used to determine whether to obtain a voice broadcast source and play it, and includes sensing data collected by sensors integrated in the wearable device. Specifically, based on the trigger data it can be determined whether the wearable device is in use and what the user's body state is, or whether a playable voice broadcast source exists in the current space. Correspondingly, when it can be determined that the wearable device is in use and the user's body state indicates that he or she is viewing something, or that a playable voice broadcast source exists in the current space, the trigger data meets the voice playing condition.
Step S103: if the trigger data meets the voice playing condition, a voice broadcast source is obtained over the network and voice playing is performed.
When the trigger data meets the voice playing condition, the wearable device automatically obtains the voice broadcast source through network communication, stores it in the system cache, and then plays it to the user. Playing from the system cache amounts to a form of offline playback: it frees the playing process from the constraints of the network state and guarantees smooth playback. The voice playing modes include an earphone playing mode and a bone-conduction playing mode, and the device can switch automatically between the two according to the ambient noise level it detects.
Optionally, before the voice broadcast source is obtained over the network and voice playing is performed, the method may further include: receiving an acquisition permission authentication result for the voice broadcast source, and obtaining and playing the voice broadcast source if the authentication passes. Specifically, after the collected trigger data meets the voice playing condition, the wearable device sends a voice data acquisition request to the server or storage device that stores the voice broadcast source, the request containing the ID information of the wearable device. In response to the request, the server or storage device authenticates the legitimacy of the wearable device according to the ID information and feeds the authentication result back to the wearable device. If the wearable device is legitimate, the server may feed back a result indicating that authentication has passed and at the same time open the voice broadcast source database to the wearable device so that it can obtain the required data; alternatively, if the wearable device is legitimate, the server may open the voice broadcast source database directly without feeding back the authentication result.
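The optional permission check above can be sketched as a single request handler. The ID whitelist and the field names below are assumptions introduced for illustration, standing in for whatever records and protocol a real server would use.

```python
# Minimal sketch of the permission check: the server authenticates the device
# by its ID and, if legitimate, opens the voice broadcast source database.
# LEGITIMATE_IDS and the field names are invented for illustration.

LEGITIMATE_IDS = {"WD-001", "WD-002"}  # stands in for the server's records

def handle_acquisition_request(request):
    """Authenticate a voice data acquisition request carrying a device ID."""
    legitimate = request.get("device_id") in LEGITIMATE_IDS
    return {
        "authenticated": legitimate,
        "source_db_open": legitimate,  # DB opened only for legitimate devices
    }
```

This mirrors the variant where the database is opened together with the positive authentication result.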
As can be seen from the above, in this solution the automatic voice playing mode is enabled when an automatic voice playing instruction is detected; trigger data associated with voice playing is obtained and judged against the voice playing condition; and if the trigger data meets the condition, a voice broadcast source is obtained over the network and played. Once the wearable device has enabled the automatic voice playing mode, the whole process from obtaining the trigger data to playing the voice proceeds without user involvement and requires no manual selection or confirmation, thereby reducing the number of voice playing steps and improving voice playing efficiency.
Fig. 2 is a flowchart of another speech playing method provided by an embodiment of the present application. Optionally, the trigger data includes sensing data collected by an acceleration sensor and a gyroscope sensor, both integrated in the wearable device; correspondingly, the trigger data meeting the voice playing condition means that the sensing data falls within a preset threshold range. Optionally, obtaining the voice broadcast source and performing voice playing includes obtaining exhibit voice explanation information provided by the service provider of the current location and playing it, where the exhibit voice explanation information corresponds to the exhibit at the user's current position. As shown in Fig. 2, the technical solution provided by this embodiment may include the following steps.
Step S201: when an automatic voice playing instruction is detected, the automatic voice playing mode is enabled.
Step S202: sensing data associated with voice playing is obtained, and it is judged whether the sensing data falls within the preset threshold range.
Here, the sensing data includes acceleration values collected by the acceleration sensor and angular acceleration values collected by the gyroscope sensor, both of which are integrated in the wearable device; in this example the wearable device is a head-mounted device. From the non-zero sensing data collected by the acceleration sensor and the gyroscope sensor, it can be determined that the wearable device is in use, and the user's current body state, such as walking speed and degree of head inclination, can then be inferred. When a user wearing the wearable device, for example smart glasses, views an exhibit in a public place such as an exhibition center, museum, or concert hall, his or her body state differs from that during normal walking, and so does the sensing data collected by the sensors. Judging whether the sensing data falls within the preset threshold range therefore amounts to determining, from the user's body state, whether the user is viewing an exhibit of interest. The acceleration threshold and angular acceleration threshold in the preset threshold range may be determined from behavioral statistics of a person gradually slowing down from a normal walking speed. Specifically, when the collected acceleration value meets the threshold range, i.e., the acceleration value is less than or equal to the acceleration threshold, the user is currently slow-moving or stationary; when the collected angular acceleration value meets the threshold range, i.e., the angular acceleration value is greater than or equal to the angular acceleration threshold, the user is currently looking up or looking down.
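The threshold test of step S202 can be written out as a two-part predicate. The comparison directions (acceleration at or below its threshold, angular acceleration at or above its threshold) follow the paragraph above; the numeric threshold values are invented placeholders, since the application leaves them to behavioral statistics.

```python
# Sketch of the step S202 threshold test: low acceleration suggests a slow or
# stationary user; high angular acceleration suggests the head is tilted up
# or down. The numeric thresholds are assumed placeholders.

ACC_THRESHOLD = 0.3      # m/s^2, assumed
ANG_ACC_THRESHOLD = 0.5  # rad/s^2, assumed

def meets_preset_threshold_range(acc, ang_acc):
    slow_or_stationary = acc <= ACC_THRESHOLD
    looking_up_or_down = ang_acc >= ANG_ACC_THRESHOLD
    return slow_or_stationary and looking_up_or_down
```

Both conditions must hold for the sensing data to count as meeting the voice playing condition.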
Optionally, the sensing data further includes image data collected by a camera. Correspondingly, judging whether the sensing data falls within the preset threshold range further includes judging whether the image data remains unchanged within a preset time, where the preset time may be set adaptively, for example to 30 seconds. While viewing an exhibit of interest, the user is considered to stay in place, so the image data collected by the camera within the preset time shows the same exhibit. Therefore, by judging whether the image data collected within the preset time changes, it can be confirmed whether the user is in a viewing state. Adding this image-data judgment on top of the acceleration and angular acceleration judgments increases the accuracy with which the wearable device judges the user's state, and further avoids playing exhibit voice explanation information that does not correspond to the current exhibit.
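The camera-based confirmation can be sketched as a check over a sliding window of frames. Representing frames by comparable fingerprints and sampling every 5 seconds are assumptions; only the 30-second preset window comes from the text above.

```python
# Hedged sketch of the camera check: the user is treated as viewing one
# exhibit when sampled frames stay identical across the whole 30-second
# preset window. Frame fingerprints and the sampling interval are assumed.

PRESET_SECONDS = 30

def image_unchanged(frame_fingerprints, interval_s=5):
    """frame_fingerprints: fingerprints sampled every interval_s seconds."""
    window = PRESET_SECONDS // interval_s + 1  # samples spanning 30 seconds
    if len(frame_fingerprints) < window:
        return False  # not enough observation time yet
    recent = frame_fingerprints[-window:]
    return all(f == recent[0] for f in recent)
```

A real device would compare frames with some tolerance rather than exact equality; exact matching keeps the sketch minimal.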
Step S203: if the sensing data falls within the preset threshold range, the exhibit voice explanation information provided by the service provider of the current location is obtained over the network and voice playing is performed.
When the sensing data falls within the preset threshold range, it is determined that the user is currently viewing an exhibit of interest; the exhibit voice explanation information provided by the service provider of the current location is then obtained automatically over the network and played to the user.
Specifically, obtaining the exhibit voice explanation information provided by the service provider of the current location over the network may include: the wearable device invokes the camera to scan and identify the number of the current exhibit, and a matching search is performed in a cloud server storing exhibit information according to the recognized number to obtain the voice explanation information of the current exhibit. Alternatively, the position data of the current exhibit is determined from the current position data of the wearable device, and a matching search based on the position-matching relationship is performed in the cloud server storing exhibit information to obtain the voice explanation information of the current exhibit.
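The two matching-search paths just described can be sketched side by side. The in-memory dictionaries below stand in for the cloud server's exhibit database; every entry, the exhibit number, and the position tolerance are invented for illustration.

```python
# Illustrative sketch of the two lookup paths: by a scanned exhibit number,
# or by matching the device position against stored exhibit positions.
# The dictionaries stand in for the cloud server's database; all invented.

EXPLANATIONS_BY_NUMBER = {"A-17": "voice explanation for exhibit A-17"}
EXPLANATIONS_BY_POSITION = {(12.0, 4.0): "voice explanation for exhibit A-17"}

def lookup_by_number(exhibit_number):
    return EXPLANATIONS_BY_NUMBER.get(exhibit_number)

def lookup_by_position(device_pos, tolerance=1.0):
    for (x, y), info in EXPLANATIONS_BY_POSITION.items():
        if abs(device_pos[0] - x) <= tolerance and abs(device_pos[1] - y) <= tolerance:
            return info
    return None
```

Either path returns the same explanation record; a device might try the number-based lookup first and fall back to position matching.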
In addition, before the matching search is performed in the cloud server storing exhibit information, the method may further include: obtaining a permission authentication result for accessing the server, and searching for the voice explanation information of the current exhibit if the authentication passes. Specifically, the server may authenticate the wearable device according to its ID; for example, if the ID of the wearable device is determined to be a legitimate ID, the permission authentication passes. If the authentication fails, the automatic voice playing mode is exited and the user is prompted with the reason for exiting the mode.
Optionally, the exhibit voice explanation information provided by the service provider includes exhibit voice explanation information stored in voice devices provided by the service provider, where a voice device is arranged at the position of one or more exhibits. Correspondingly, obtaining the voice broadcast source and performing voice playing includes obtaining the voice broadcast source stored in a voice device within a preset range of the current position. In this solution, exhibit voice explanation information can be obtained not only from the service provider's cloud server but also from voice devices installed at venues such as exhibition halls. Depending on the management strategy of the exhibit service provider, each exhibit may have its own voice device, or several exhibits may share one voice device that stores the voice explanation information of each of those exhibits.
Specifically, when the sensing data falls within the preset threshold range, obtaining the voice broadcast source stored in a voice device within the preset range of the current position may include: obtaining the current position data of the wearable device and determining a target voice device within a preset area range according to that position data, where the preset area range is a geometric area centered on the current position data and the distance between the wearable device and the target voice device is less than or equal to a preset distance threshold; and obtaining the voice explanation information in the target voice device. The distance threshold can be set adaptively as needed, for example to 50 centimeters or 1 meter. In other words, the wearable device recognizes only the voice devices that meet the distance threshold, and then plays their voice explanation information.
If the wearable device recognizes multiple voice devices at the same time, the exhibit voice explanation information may be played in sequence. Alternatively, the voice devices may be ranked by their distance from the wearable device and the voice explanation information played in ranking order, a smaller distance corresponding to a higher rank. Alternatively, the wearable device may prompt the user that multiple voice devices have been detected and display the exhibit thumbnails corresponding to the voice devices on its display unit, so that the user can select the desired voice device or define a custom playing order for the voice explanation information of the multiple voice devices.
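The distance-threshold filter and the nearest-first ranking above combine naturally into one selection step. This is a sketch under assumed 2-D coordinates; the 1-meter default is the document's example value.

```python
import math

# Sketch combining the selection rules above: keep only voice devices within
# the preset distance threshold, then order them nearest-first for sequential
# playback. Coordinates are illustrative.

def order_voice_devices(device_positions, wearer_pos, max_dist=1.0):
    def dist(p):
        return math.hypot(p[0] - wearer_pos[0], p[1] - wearer_pos[1])
    in_range = [p for p in device_positions if dist(p) <= max_dist]
    return sorted(in_range, key=dist)  # smaller distance ranks first
```

A device outside the threshold is simply never recognized, matching the behavior described above.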
As can be seen from the above, in this solution, after the wearable device enables the automatic voice playing mode, sensing data associated with voice playing is obtained, and when the sensing data falls within the preset threshold range, the exhibit voice explanation information provided by the service provider of the current location is obtained over the network and played. This improves the voice playing efficiency of the wearable device, reduces the number of playing steps, and extends the application functions of the wearable device. In addition, confirming whether to obtain exhibit voice explanation information based simultaneously on the sensing data of the acceleration sensor, the gyroscope sensor, and the camera increases the accuracy with which the wearable device judges the user's state, and further avoids playing exhibit voice explanation information that does not correspond to the current exhibit.
Fig. 3 is a flowchart of another speech playing method provided by an embodiment of the present application. Optionally, the trigger data includes the current position data of the wearable device; correspondingly, the trigger data meeting the voice playing condition means that scenic-spot voice explanation information associated with the current position data exists. Optionally, obtaining the voice broadcast source and performing voice playing includes obtaining the scenic-spot voice explanation information associated with the current position data and playing it. As shown in Fig. 3, the technical solution provided by this embodiment may include the following steps.
Step S301: when an automatic voice playing instruction is detected, the automatic voice playing mode is enabled.
Step S302: the current position data of the wearable device associated with voice playing is obtained, and it is judged whether scenic-spot voice explanation information associated with the current position data exists.
This solution is applicable to the case where the user's wearable device automatically gives an introduction to a scenic spot inside a scenic area. In a concrete implementation, the current position data of the wearable device is the user's current position data; from the user's position data, the position data of the scenic spot the user is interested in can be determined, and it can then be determined, by searching network data or detecting an identification code, whether corresponding voice explanation information exists for that scenic spot.
Optionally, after the current position data of the wearable device is obtained, a target area range may be determined according to the current position data, and scenic spots within the target area range are determined as target scenic spots. The wearable device connects to the scenic-area server and scans its database according to the position-matching relationship to determine whether voice explanation information matching a target scenic spot exists. The position of a target scenic spot may be obtained by the wearable device through infrared ranging, position analysis, and calculation based on its current position data and the determined target area range; the target area range may be a circular area centered on the current position with a preset distance value as its radius, where the preset distance value can be set as needed, for example to 1 meter.
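The target-area determination above reduces to a point-in-circle test over known scenic-spot positions. The spot names and coordinates below are invented; the 1-meter radius is the document's example value.

```python
import math

# Sketch of the target-area step: a scenic spot becomes a target scenic spot
# when it lies inside a circle centred on the device's current position, with
# the preset distance value as the radius. Spot data is invented.

def target_scenic_spots(spots, device_pos, radius_m=1.0):
    """spots: mapping of spot name -> (x, y) position."""
    def inside(pos):
        return math.hypot(pos[0] - device_pos[0], pos[1] - device_pos[1]) <= radius_m
    return sorted(name for name, pos in spots.items() if inside(pos))
```

The resulting names would then be looked up in the scenic-area server's database for matching voice explanation information.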
Before the voice explanation information matching the target scenic spot is searched for in the scenic-area server, the method may further include: obtaining a permission authentication result for accessing the server, and continuing the search if the authentication passes. Specifically, the server may authenticate the wearable device according to its ID; for example, if the ID of the wearable device is determined to be a legitimate ID, the permission authentication passes. If the authentication fails, the automatic voice playing mode may be exited directly and the user prompted with the reason for exiting the mode.
Alternatively, after the target scenic spot is determined, the wearable device invokes the camera to detect whether a two-dimensional code containing the voice explanation information of the target scenic spot exists at the target scenic spot, thereby achieving the effect of judging whether scenic-spot voice explanation information associated with the current position data exists. In this case, the scenic-area management service provider needs to configure in advance, for each scenic spot, a two-dimensional code containing the voice explanation information. If the wearable device can scan such a two-dimensional code, it is determined that voice explanation information exists for the target scenic spot.
Step S303: if it exists, the scenic-spot voice explanation information associated with the current position data is obtained over the network and voice playing is performed.
Illustratively, after the target scenic spot is determined from the current position data of the wearable device, if a network data search determines that corresponding voice explanation information exists for the target scenic spot, the voice explanation information of the target scenic spot is obtained from the scenic-area server and played. If detection based on an identification code determines that corresponding voice explanation information exists, the voice explanation information of the target scenic spot is obtained by recognizing the identification code and then played.
As can be seen from the above, in this solution, after the wearable device enables the automatic voice playing mode, the current position data of the wearable device associated with voice playing is obtained and it is judged whether scenic-spot voice explanation information associated with that position data exists; if it exists, the information is obtained over the network and played. This improves the voice playing efficiency of the wearable device and reduces the number of voice playing steps.
Fig. 4 is a structural block diagram of a voice playing apparatus provided by an embodiment of the present application. The apparatus is configured to execute the speech playing method provided by the above embodiments, and has the functional modules and beneficial effects corresponding to that method. As shown in Fig. 4, the apparatus specifically includes a play mode enabling module 101, a trigger data judgment module 102, and a voice playing module 103, where:
the play mode enabling module 101 is configured to enable the automatic voice playing mode when an automatic voice playing instruction is detected;

the trigger data judgment module 102 is configured to obtain trigger data associated with voice playing and judge whether the trigger data meets the voice playing condition; and

the voice playing module 103 is configured to obtain a voice broadcast source over the network and perform voice playing if the trigger data meets the voice playing condition.
In a possible embodiment, the trigger data in the trigger data judgment module 102 includes sensing data collected by an acceleration sensor and a gyroscope sensor integrated in the wearable device. Correspondingly, the voice playing module 103 is specifically configured to obtain a voice broadcast source over the network and perform voice playing if the sensing data falls within the preset threshold range.
In a possible embodiment, the voice playing module 103 is specifically configured to obtain the exhibit voice explanation information provided by the service provider of the current location and perform voice playing, where the exhibit voice explanation information corresponds to the exhibit at the user's current position.
In a possible embodiment, the exhibit voice explanation information provided by the service provider in the voice playing module 103 includes exhibit voice explanation information stored in voice devices provided by the service provider, where a voice device is arranged at the position of one or more exhibits. Correspondingly, the voice playing module 103 is specifically configured to obtain the voice broadcast source stored in a voice device within the preset range of the current position.
In a possible embodiment, the trigger data in the trigger data judgment module 102 includes the current position data of the wearable device. Correspondingly, the voice playing module 103 is specifically configured to obtain a voice broadcast source over the network and perform voice playing if scenic-spot voice explanation information associated with the current position data exists.
In a possible embodiment, the voice playing module 103 is specifically configured to obtain the scenic-spot voice explanation information associated with the current position data and perform voice playing.
In a possible embodiment, the apparatus further includes a voice instruction generation module 104 configured to generate the automatic voice playing instruction according to received acoustic information.
On the basis of the above embodiments, this embodiment provides a wearable device. Fig. 5 is a structural schematic diagram of a wearable device provided by an embodiment of the present application, and Fig. 6 is a schematic pictorial diagram of a wearable device provided by an embodiment of the present application. As shown in Fig. 5 and Fig. 6, the wearable device includes a memory 201, a processor (Central Processing Unit, CPU) 202, a display unit 203, a touch panel 204, a heart rate detection module 205, a distance sensor 206, a camera 207, a bone-conduction speaker 208, a microphone 209, and a breathing light 210, and these components communicate through one or more communication buses or signal lines 211.
It should be understood that the illustrated wearable device is merely one example of a wearable device, and a wearable device may have more or fewer components than shown in the drawings, may combine two or more components, or may be configured with different components. The various components shown in the drawings may be implemented in hardware, software, or a combination of hardware and software including one or more signal-processing and/or application-specific integrated circuits.
The wearable device for voice playing provided by this embodiment is described in detail below, taking smart glasses as an example.
The memory 201 can be accessed by the CPU 202 and may include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The display unit 203 may be used to display image data and the operating interface of the operating system. The display unit 203 is embedded in the frame of the smart glasses; internal transmission lines 211 are provided inside the frame and connected to the display unit 203.
The touch panel 204 is arranged on the outer side of at least one temple of the smart glasses and is used to obtain touch data; the touch panel 204 is connected to the CPU 202 through the internal transmission lines 211. The touch panel 204 can detect the user's finger-sliding and clicking operations and transmit the detected data to the processor 202 for processing so as to generate corresponding control instructions, which may illustratively be a move-left instruction, a move-right instruction, a move-up instruction, or a move-down instruction. Illustratively, the display unit 203 may display virtual image data transmitted by the processor 202, and this virtual image may change correspondingly according to the user operations detected by the touch panel 204.

Specifically, the instructions may perform screen switching, with the previous or next virtual image picture switched to when a move-left or move-right instruction is detected. When the display unit 203 shows video playing information, the move-left instruction may rewind the played content and the move-right instruction may fast-forward it. When the display unit 203 shows editable text content, the move-left, move-right, move-up, and move-down instructions may be displacement operations on the cursor, i.e., the position of the cursor moves according to the user's touch operations on the touch panel. When the content shown by the display unit 203 is a game animation picture, the move-left, move-right, move-up, and move-down instructions may control an object in the game; in a flying game, for example, these instructions may respectively control the heading of the aircraft. When the display unit 203 can show the video pictures of different channels, the move-left, move-right, move-up, and move-down instructions may switch between channels, where the move-up and move-down instructions may switch to preset channels (such as the channels the user commonly uses). When the display unit 203 shows a static picture, the move-left, move-right, move-up, and move-down instructions may switch between pictures, where the move-left instruction may switch to the previous picture, the move-right instruction to the next picture, the move-up instruction to the previous atlas, and the move-down instruction to the next atlas.

The touch panel 204 may also be used to control the display switch of the display unit 203. Illustratively, when the touch area of the touch panel 204 is long-pressed, the display unit 203 is powered on and displays the graphic interface; when the touch area is long-pressed again, the display unit 203 is powered off. After the display unit 203 is powered on, sliding up or down on the touch panel 204 can adjust the brightness or resolution of the image shown in the display unit 203.
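The context-dependent behavior of the four instructions described above can be pictured as a dispatch table keyed on what the display unit is currently showing. The context names and action strings below are illustrative only, not part of the application.

```python
# Hedged sketch of context-dependent dispatch for the touch panel's four
# instructions. Context names and action strings are invented placeholders.

DISPATCH = {
    ("video", "left"): "rewind",
    ("video", "right"): "fast_forward",
    ("text", "left"): "cursor_left",
    ("text", "up"): "cursor_up",
    ("channels", "up"): "preset_channel",
    ("pictures", "left"): "previous_picture",
    ("pictures", "right"): "next_picture",
}

def handle_instruction(display_context, instruction):
    """Map a (context, instruction) pair to an action, or ignore it."""
    return DISPATCH.get((display_context, instruction), "ignored")
```

A table like this keeps the gesture detector independent of what each gesture means in a given display context.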
The heart rate detection module 205 is used to measure the user's heart rate data, heart rate being the number of beats per minute; the heart rate detection module 205 is arranged on the inner side of a temple. Specifically, the heart rate detection module 205 may obtain human electrocardiographic data using dry electrodes in an electric-pulse measurement manner and determine the heart rate according to the peak amplitude in the electrocardiographic data; the heart rate detection module 205 may also consist of a light transmitter and a light receiver that measure heart rate photoelectrically, in which case the module is arranged at the bottom of a temple, against the earlobe of the auricle. After collecting heart rate data, the heart rate detection module 205 sends it to the processor 202 for data processing to obtain the wearer's current heart rate value. In one embodiment, after determining the user's heart rate value, the processor 202 may display it in real time in the display unit 203; optionally, the processor 202 may trigger a corresponding alarm when the heart rate value is determined to be low (for example, below 50) or high (for example, above 100), and at the same time send the heart rate value and/or the generated alarm information to a server through a communication module.
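The alarm rule stated above reduces to a simple classification of the heart rate value. The 50 and 100 beats-per-minute bounds come from the document's examples; the status labels are invented.

```python
# Sketch of the heart-rate alarm rule: values below 50 or above 100 beats per
# minute trigger an alarm. Status labels are invented for illustration.

LOW_BPM, HIGH_BPM = 50, 100

def heart_rate_status(bpm):
    if bpm < LOW_BPM:
        return "alarm_low"
    if bpm > HIGH_BPM:
        return "alarm_high"
    return "normal"
```

On an alarm status, the processor would also forward the value and alarm information to the server as described.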
Distance sensor 206 may be provided on the frame and is used to sense the distance from the face to the frame; the distance sensor 206 may be implemented using an infrared sensing principle. Specifically, the distance sensor 206 sends the acquired distance data to the processor 202, and the processor 202 controls the on/off state of the display unit 203 according to this distance. Illustratively, when it is determined that the distance acquired by the distance sensor 206 is less than 5 centimetres, the processor 202 correspondingly controls the display unit 203 to be in a lit state; when it is determined that the distance sensor detects no approaching object, the processor 202 correspondingly controls the display unit 203 to be in an off state.
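The proximity-based display control above amounts to a simple threshold mapping. A minimal sketch, assuming the 5 cm figure from the example (the function name is illustrative):

```python
# Sketch of the distance-to-display-state mapping used by processor 202.
WEAR_THRESHOLD_CM = 5  # example threshold from the embodiment

def display_state(distance_cm):
    """Map a face-to-frame distance to the state of display unit 203."""
    return "on" if distance_cm < WEAR_THRESHOLD_CM else "off"
```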
Breathing light 210 may be provided at an edge of the frame; when the display unit 203 stops displaying a picture, the breathing light 210 may, under the control of the processor 202, light up with a gradually brightening and dimming effect.
Camera 207 may be a front photographing module arranged on the upper frame to acquire image data in front of the user, a rear photographing module that acquires information on the user's eyeballs, or a combination of the two. Specifically, when the camera 207 acquires a forward image, it sends the acquired image to the processor 202 for recognition and processing, and a corresponding trigger event is triggered according to the recognition result. Illustratively, when the user wears the wearable device at home, the acquired forward image is recognized; if an article of furniture is recognized, it is correspondingly queried whether a corresponding control event exists, and if so, the control interface corresponding to the control event is displayed on the display unit 203, so that the user can control the corresponding article of furniture through the touch panel 204, where the article of furniture and the smart glasses are networked through Bluetooth or an ad hoc wireless network. When the user wears the wearable device outdoors, a target recognition mode may be opened correspondingly. This mode may be used to recognize a specific person: the camera 207 sends the acquired image to the processor 202 for face recognition processing, and if a preset face is recognized, a sound announcement may be made correspondingly through a loudspeaker integrated in the smart glasses. The target recognition mode may also be used to recognize different plants; for example, the processor 202, according to a touch operation on the touch panel 204, records the current image acquired by the camera 207 and sends it through the communication module to a server for recognition; the server recognizes the plant in the acquired image and feeds back the relevant plant name and introduction to the smart glasses, and the feedback data is displayed on the display unit 203.
The camera 207 may also acquire images of the user's eye, such as the eyeball, and generate different control instructions by recognizing the rotation of the eyeball. Illustratively, an upward rotation of the eyeball generates a move-up control instruction, a downward rotation generates a move-down control instruction, a leftward rotation generates a move-left control instruction, and a rightward rotation generates a move-right control instruction. Optionally, the display unit 203 can display virtual image data transmitted by the processor 202, and the virtual image can change correspondingly according to the control instructions generated from the eyeball movements detected by the camera 207; specifically, this may be screen switching, where upon detecting a move-left or move-right control instruction the previous or next virtual image picture is correspondingly switched to. When the display unit 203 displays video playing information, the move-left control instruction may rewind the played content and the move-right control instruction may fast-forward the played content. When the display unit 203 displays editable text content, the move-left, move-right, move-up, and move-down control instructions may be displacement operations on the cursor, i.e. the position of the cursor can be moved according to the user's touch operation on the touch panel. When the content displayed by the display unit 203 is a game animation picture, the move-left, move-right, move-up, and move-down control instructions may control an object in the game; for example, in an aircraft game, these instructions may respectively control the flight direction of the aircraft. When the display unit 203 displays video pictures of different channels, the move-left, move-right, move-up, and move-down control instructions may switch between different channels, where the move-up and move-down control instructions may switch to preset channels (such as channels the user commonly uses). When the display unit 203 displays static pictures, the move-left, move-right, move-up, and move-down control instructions may switch between different pictures, where the move-left control instruction may switch to the previous picture, the move-right control instruction may switch to the next picture, the move-up control instruction may switch to the previous atlas, and the move-down control instruction may switch to the next atlas.
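The context-dependent meaning of the four eyeball-derived control instructions enumerated above can be sketched as a dispatch table. The context labels and function name below are assumptions introduced for illustration; the action mappings follow the examples in the text.

```python
# Illustrative dispatch of move-left/right/up/down control instructions,
# resolved against what display unit 203 is currently showing.
ACTIONS = {
    "video":   {"left": "rewind",       "right": "fast_forward"},
    "channel": {"left": "prev_channel", "right": "next_channel",
                "up": "preset_channel", "down": "preset_channel"},
    "picture": {"left": "prev_picture", "right": "next_picture",
                "up": "prev_atlas",     "down": "next_atlas"},
    "virtual": {"left": "prev_screen",  "right": "next_screen"},
}

def dispatch(context, instruction):
    """Resolve one control instruction in the current display context."""
    return ACTIONS.get(context, {}).get(instruction, "ignored")
```

The same instruction thus maps to different actions depending on the displayed content, which is the behaviour the embodiment describes.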
Bone-conduction speaker 208 is arranged on the inner wall side of at least one temple and is used to convert the received audio signal sent by the processor 202 into a vibration signal. The bone-conduction speaker 208 transmits sound to the inner ear through the skull: the electrical audio signal is converted into a vibration signal, which is transmitted through the skull to the cochlea and then perceived by the auditory nerve. Using the bone-conduction speaker 208 as the sounding device reduces the thickness and weight of the hardware structure; at the same time it produces no electromagnetic radiation and is not affected by electromagnetic radiation, and it has the advantages of noise resistance, water resistance, and leaving the ears free.
Microphone 209 may be provided on the lower frame of the frame and is used to acquire external sound (from the user or the environment) and transmit it to the processor 202 for processing. Illustratively, the microphone 209 acquires the sound made by the user, and the processor 202 performs voiceprint recognition on it; if the voiceprint is identified as that of an authenticated user, subsequent voice control can be accepted correspondingly. Specifically, the user may emit a voice, the microphone 209 sends the collected voice to the processor 202 for recognition, and a corresponding control instruction, such as "power on", "power off", "increase display brightness", or "decrease display brightness", is generated according to the recognition result; the processor 202 subsequently executes the corresponding control processing according to the generated control instruction.
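The voiceprint-gated command handling above can be outlined as follows. This is a sketch under stated assumptions: `verify_voiceprint` is a placeholder for the actual voiceprint-recognition step, and the command strings follow the examples in the text.

```python
# Sketch of microphone 209 / processor 202 voice control with a voiceprint gate.
COMMANDS = {
    "power on": "POWER_ON",
    "power off": "POWER_OFF",
    "increase display brightness": "BRIGHTNESS_UP",
    "decrease display brightness": "BRIGHTNESS_DOWN",
}

def verify_voiceprint(voice, enrolled_speaker):
    """Placeholder for voiceprint recognition against the authenticated user."""
    return voice.get("speaker") == enrolled_speaker

def handle_voice(voice, enrolled_speaker):
    """Generate a control instruction only when the voiceprint matches."""
    if not verify_voiceprint(voice, enrolled_speaker):
        return "REJECTED"                     # not an authenticated user
    return COMMANDS.get(voice.get("text", ""), "UNKNOWN")
```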
The voice playing device of the wearable device and the wearable device provided in the above embodiments can execute the speech playing method for a wearable device provided by any embodiment of the present invention, and have the functional modules and beneficial effects corresponding to executing this method. For technical details not described in detail in the above embodiments, reference may be made to the speech playing method for a wearable device provided by any embodiment of the present invention.
The embodiment of the present application also provides a storage medium containing wearable-device executable instructions, where the wearable-device executable instructions, when executed by a processor of the wearable device, are used to execute a speech playing method, the method comprising:
when an automatic voice playing instruction is detected, opening an automatic voice playing mode;
obtaining trigger data associated with voice playing, and judging whether the trigger data meets a voice playing condition;
if the trigger data meets the voice playing condition, obtaining a voice broadcast source through a network and performing voice playing.
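The three steps above can be sketched as a minimal control flow. All names are illustrative placeholders, and the sensing-data threshold range follows the "preset threshold range" condition of the embodiment below (the concrete limits are invented for illustration):

```python
# Minimal sketch of the claimed method: detect instruction -> check trigger
# data against a preset range -> fetch the broadcast source and play it.
def meets_playing_condition(trigger_data, low, high):
    """Trigger data satisfies the condition when it falls in the preset range."""
    return low <= trigger_data <= high

def speech_playing_method(instruction_detected, trigger_data, fetch_source, play):
    """Open the automatic playing mode, check the trigger data, fetch and play."""
    if not instruction_detected:
        return "idle"
    # Step S101: the automatic voice playing mode is opened here.
    if meets_playing_condition(trigger_data, low=0.5, high=2.0):  # example range
        audio = fetch_source()  # obtain the voice broadcast source via network
        play(audio)
        return "played"
    return "waiting"
```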
In a possible embodiment, the trigger data includes sensing data acquired by an acceleration sensor and a gyroscope sensor, the acceleration sensor and the gyroscope sensor being integrated in the wearable device; correspondingly, the trigger data meeting the voice playing condition includes:
the sensing data falling within a preset threshold range.
In a possible embodiment, obtaining the voice broadcast source and performing voice playing includes:
obtaining showpiece voice explanation information provided by the service provider of the place where the user is currently located and performing voice playing, where the showpiece voice explanation information corresponds to the showpiece at the user's current position.
In a possible embodiment, the showpiece voice explanation information provided by the service provider includes:
showpiece voice explanation information stored in voice equipment provided by the service provider, where the voice equipment is arranged at the position of one or more showpieces; correspondingly, obtaining the voice broadcast source and performing voice playing includes:
obtaining the voice broadcast source stored in voice equipment within a preset range of the current location.
In a possible embodiment, the trigger data includes current location data of the wearable device; correspondingly, the trigger data meeting the voice playing condition includes:
the existence of scenic-spot voice explanation information associated with the current location data.
In a possible embodiment, obtaining the voice broadcast source and performing voice playing includes:
obtaining the scenic-spot voice explanation information associated with the current location data and performing voice playing.
In a possible embodiment, before the automatic voice playing instruction is detected, the method further includes:
generating the automatic voice playing instruction according to received sound information.
Storage medium — any of various types of memory devices or storage equipment. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (e.g. hard disks or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet); the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions executable by one or more processors (for example, implemented as computer programs).
Certainly, for the storage medium containing computer-executable instructions provided by the embodiment of the present application, the computer-executable instructions are not limited to the speech playing method operations described above, and can also perform relevant operations in the speech playing method provided by any embodiment of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments; without departing from the inventive concept, it may also include more other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.