CN109144465A - Speech playing method, device, wearable device and storage medium - Google Patents

Speech playing method, device, wearable device and storage medium

Info

Publication number
CN109144465A
CN109144465A (application CN201811001035.1A)
Authority
CN
China
Prior art keywords
voice
wearable device
speech
playing
trigger data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811001035.1A
Other languages
Chinese (zh)
Inventor
魏苏龙
林肇堃
麦绮兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811001035.1A priority Critical patent/CN109144465A/en
Publication of CN109144465A publication Critical patent/CN109144465A/en
Legal status: Pending (current)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827Portable transceivers
    • H04B1/385Transceivers carried on the body, e.g. in helmets
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a speech playing method, device, wearable device and storage medium. The method includes: when an automatic voice playing instruction is detected, enabling an automatic voice playing mode; acquiring trigger data associated with voice playing, and determining whether the trigger data satisfies a voice playing condition; and if the trigger data satisfies the voice playing condition, obtaining a voice playback source over a network and performing voice playing. The solution improves the voice playing efficiency of the wearable device and reduces the number of playback steps.

Description

Speech playing method, device, wearable device and storage medium
Technical field
Embodiments of the present invention relate to computer technology, and in particular to a speech playing method, device, wearable device and storage medium.
Background art
With the development of computing devices and Internet technology, interactions between users and smart devices have become increasingly frequent, for example watching films and television series on a smartphone, watching television programmes on a smart TV, and checking text messages, physiological parameters and the like on a smartwatch.
Among smart devices, the wearable device is increasingly popular with users; its functions grow ever stronger and bring convenience to users' daily lives. However, existing voice playback schemes based on wearable devices have defects and need improvement.
Summary of the invention
Embodiments of the present invention provide a speech playing method, device, wearable device and storage medium, which can improve the voice playing efficiency of a wearable device and reduce the number of steps needed to play voice.
In a first aspect, an embodiment of the present application provides a speech playing method, comprising:
when an automatic voice playing instruction is detected, enabling an automatic voice playing mode;
acquiring trigger data associated with voice playing, and determining whether the trigger data satisfies a voice playing condition; and
if the trigger data satisfies the voice playing condition, obtaining a voice playback source over a network and performing voice playing.
In a second aspect, an embodiment of the present application further provides a voice playing device, comprising:
a play mode enabling module, configured to enable an automatic voice playing mode when an automatic voice playing instruction is detected;
a trigger data judging module, configured to acquire trigger data associated with voice playing and determine whether the trigger data satisfies a voice playing condition; and
a voice playing module, configured to obtain a voice playback source over a network and perform voice playing if the trigger data satisfies the voice playing condition.
In a third aspect, an embodiment of the present application further provides a wearable device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the speech playing method described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides a storage medium containing wearable-device-executable instructions which, when executed by a processor of a wearable device, perform the speech playing method described in the embodiments of the present application.
In the present solution, when an automatic voice playing instruction is detected, the automatic voice playing mode is enabled; trigger data associated with voice playing is acquired and checked against the voice playing condition; and if the trigger data satisfies the condition, a voice playback source is obtained over the network and played. The solution thus improves the voice playing efficiency of the wearable device and reduces the number of playback steps.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-restrictive embodiments, read in conjunction with the accompanying drawings:
Fig. 1 is a flowchart of a speech playing method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another speech playing method provided by an embodiment of the present application;
Fig. 3 is a flowchart of another speech playing method provided by an embodiment of the present application;
Fig. 4 is a structural block diagram of a voice playing device provided by an embodiment of the present application;
Fig. 5 is a structural schematic diagram of a wearable device provided by an embodiment of the present application;
Fig. 6 is a physical illustration of a wearable device provided by an embodiment of the present application.
Detailed description
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve to explain the present invention rather than to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Fig. 1 is a flowchart of a speech playing method provided by an embodiment of the present application, applicable to the case of automatic voice playing. The method can be executed by the wearable device provided by the embodiments of the present application, and the voice playing device of the wearable device can be implemented in software and/or hardware. As shown in Fig. 1, the technical solution provided by this embodiment may include the following steps.
Step S101: when an automatic voice playing instruction is detected, enable the automatic voice playing mode.
Here, the voice playing instruction instructs the wearable device to enable the automatic voice playing mode. The instruction can be triggered either by the user's voice or by a non-voice action, where a non-voice action includes control operations such as scrolling, touching or pressing the wearable device. After the wearable device receives the triggering voice or action, it converts the corresponding voice or operation into an automatic voice playing instruction; meanwhile, a detection program on the wearable device monitors for the instruction in real time and, once it is detected, enables the automatic voice playing mode, which offers good ease of use.
Example one: before the automatic voice playing instruction is detected, the method further includes generating the automatic voice playing instruction according to received sound information. By performing semantic recognition on the user's speech, the wearable device converts the user's sound information into an automatic voice playing instruction, so that the voice playing mode can be enabled automatically. The wearable device has an automatic speech noise-reduction function, so that even in noisy public places such as scenic areas, exhibition centres, museums and concert halls, the user's speech can still be recognised accurately.
Example two: before the automatic voice playing instruction is detected, the method further includes acquiring sensing data collected by sensors, where the sensing data includes the angle and direction of a shaking operation, the touch position, force and count of a touch operation, or the force and duration of a pressing operation. The wearable device then checks whether the sensing data corresponds to any system operation other than enabling the automatic voice playing mode. If there is no such correspondence, i.e. the user's current operation does not conflict with any other system operation of the wearable device, the sensing data generated by the operation is converted into an automatic voice playing instruction, thereby enabling the voice playing mode automatically.
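The conflict check in example two can be sketched as follows. This is a minimal illustration, not the application's implementation: the gesture names and the reserved-gesture table are assumptions invented for the example.

```python
from typing import Optional

# Hypothetical table of gestures already bound to other system operations.
RESERVED_GESTURES = {"double_press": "volume_up", "long_press": "power_off"}

def to_play_instruction(gesture: str) -> Optional[str]:
    """Return the automatic-play instruction, or None if the gesture is reserved."""
    if gesture in RESERVED_GESTURES:
        return None  # conflicts with an existing system operation
    return "AUTO_VOICE_PLAY"

print(to_play_instruction("shake"))         # not reserved -> instruction
print(to_play_instruction("double_press"))  # reserved -> None
```

Only unreserved gestures are converted, so the automatic playing mode never hijacks an operation the user already relies on.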
Step S102: acquire trigger data associated with voice playing, and determine whether the trigger data satisfies the voice playing condition.
The trigger data is the decision data used to determine whether to obtain a voice playback source and play it, and includes sensing data collected by sensors integrated in the wearable device. Specifically, the trigger data can be used to determine whether the wearable device is in use and what the user's body posture is, or whether a playable voice source exists in the current space. Accordingly, the voice playing condition is satisfied when the wearable device is determined to be in use and the user is in the posture of viewing something, or when a playable voice playback source is determined to exist in the current space.
Step S103: if the trigger data satisfies the voice playing condition, obtain a voice playback source over the network and perform voice playing.
When the trigger data satisfies the voice playing condition, the wearable device automatically obtains the voice playback source via network communication, stores it in the system cache, and then plays it to the user. Playing from the system cache amounts to a form of offline playback: it frees the playback from the constraints of network conditions and guarantees smooth playing. The voice playing modes include an earphone mode and a bone-conduction mode, and the device can switch between the two automatically according to the ambient noise level it detects.
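The cache-then-play behaviour and the noise-driven mode switch described above can be sketched as follows. The 65 dB threshold and the fetch callback are assumed values for illustration only.

```python
NOISE_THRESHOLD_DB = 65.0  # assumed switch point, not from the application

def choose_output(noise_db: float) -> str:
    """Bone conduction cuts through loud environments; earphones otherwise."""
    return "bone_conduction" if noise_db >= NOISE_THRESHOLD_DB else "earphone"

cache = {}

def play(source_id: str, fetch, noise_db: float) -> str:
    if source_id not in cache:               # fetch over the network once,
        cache[source_id] = fetch(source_id)  # then play offline from cache
    return f"{choose_output(noise_db)}:{cache[source_id]}"

print(play("exhibit-42", lambda s: f"audio<{s}>", 40.0))  # quiet -> earphone
print(play("exhibit-42", lambda s: f"audio<{s}>", 80.0))  # loud, served from cache
```

The second call never touches the network, which is the "offline playback" property the passage claims.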
Optionally, before the voice playback source is obtained over the network, the method may further include receiving an access-permission authentication result for the voice playback source; if the authentication passes, the voice playback source is obtained and played. Specifically, after the acquired trigger data satisfies the voice playing condition, the wearable device sends a voice data acquisition request, containing the wearable device's ID information, to the server or storage device that stores the voice playback source. In response, the server or storage device authenticates the legitimacy of the wearable device according to the ID information and feeds the authentication result back to the device. If the wearable device is legitimate, the server may feed back a passing result and open the voice playback source database to the device so that it can obtain the required data; alternatively, if the device is legitimate, the server may simply open the database without feeding back a result.
As described above, in this solution the automatic voice playing mode is enabled when an automatic voice playing instruction is detected; trigger data associated with voice playing is acquired and checked against the voice playing condition; and if the condition is satisfied, a voice playback source is obtained over the network and played. Once the wearable device has enabled the automatic voice playing mode, the entire process from acquiring the trigger data to playing the voice proceeds without user involvement such as manual selection or confirmation, thereby reducing the number of playback steps and improving voice playing efficiency.
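The end-to-end flow of this first embodiment can be sketched as a single function; the predicates and fetcher below are placeholders standing in for the steps the text describes, not a real API.

```python
def auto_play(instruction_detected, get_trigger_data, meets_condition,
              fetch_source, play):
    """Minimal sketch of the claimed flow: detect -> check -> fetch -> play."""
    if not instruction_detected():
        return None          # stay idle until the instruction arrives
    data = get_trigger_data()
    if not meets_condition(data):
        return None          # trigger data fails the playing condition
    return play(fetch_source())

result = auto_play(
    instruction_detected=lambda: True,
    get_trigger_data=lambda: {"accel": 0.1},
    meets_condition=lambda d: d["accel"] <= 0.3,
    fetch_source=lambda: "exhibit-audio",
    play=lambda src: f"playing:{src}",
)
print(result)  # playing:exhibit-audio
```

Every branch short-circuits without user interaction, which is where the claimed reduction in playback steps comes from.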
Fig. 2 is a flowchart of another speech playing method provided by an embodiment of the present application. Optionally, the trigger data includes sensing data collected by an acceleration sensor and a gyroscope sensor, both integrated in the wearable device; correspondingly, the trigger data satisfies the voice playing condition when the sensing data falls within a preset threshold range. Optionally, obtaining the voice playback source and performing voice playing includes obtaining the exhibit voice-explanation information provided by the service provider of the current venue and playing it, where the exhibit voice-explanation information corresponds to the exhibit at the user's current position. As shown in Fig. 2, the technical solution provided by this embodiment may include the following steps.
Step S201: when an automatic voice playing instruction is detected, enable the automatic voice playing mode.
Step S202: acquire sensing data associated with voice playing, and determine whether the sensing data falls within the preset threshold range.
Here, the sensing data includes the acceleration value collected by the acceleration sensor and the angular acceleration value collected by the gyroscope sensor, both integrated in the wearable device; in this embodiment the wearable device refers to a head-mounted wearable device. From the non-zero sensing data collected by the two sensors, it can be determined that the wearable device is in use, and further the user's current body posture can be inferred, such as walking speed and head inclination.
When the user wears the wearable device, such as smart glasses, and views exhibits in public venues such as exhibition centres, museums and concert halls, the body posture differs from that of normal walking, and so does the sensing data the sensors collect. Determining whether the sensing data falls within the preset threshold range therefore amounts to determining, from the user's posture, whether the user is viewing an exhibit of interest. The acceleration threshold and angular acceleration threshold in the preset range can be determined from behavioural statistics of people gradually slowing down from a normal walking speed. Specifically, when the collected acceleration value satisfies the threshold range, i.e. is less than or equal to the acceleration threshold, the user is currently moving slowly or standing still; when the collected angular acceleration value satisfies the threshold range, i.e. is greater than or equal to the angular acceleration threshold, the user is currently looking up or down.
Optionally, the sensing data further includes image data collected by the camera; correspondingly, determining whether the sensing data meets the preset threshold range further includes determining whether the image data remains unchanged for a preset time, where the preset time can be set adaptively, for example to 30 seconds. While the user is viewing an exhibit of interest, the image data collected by the camera within the preset time keeps showing the same exhibit; therefore, whether the user is in a viewing state can be confirmed by checking whether the camera's image data changes within the preset time. Adding this image-data check on top of the acceleration and angular acceleration checks improves the wearable device's accuracy in judging the user's state, and further avoids playing exhibit voice-explanation information that does not correspond to the current exhibit.
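The combined condition above can be sketched directly: acceleration at or below its threshold, angular acceleration at or above its threshold, and the camera image stable for the whole preset window. The numeric thresholds below are assumed values for illustration; only the 30-second window comes from the text.

```python
ACCEL_MAX = 0.3        # m/s^2 — assumed: slow or stationary
ANG_ACCEL_MIN = 0.5    # rad/s^2 — assumed: looking up or down
STABLE_SECONDS = 30    # preset image-stability window from the passage

def meets_playing_condition(accel: float, ang_accel: float,
                            image_stable_for: float) -> bool:
    """All three checks must pass before an explanation is fetched."""
    return (accel <= ACCEL_MAX
            and ang_accel >= ANG_ACCEL_MIN
            and image_stable_for >= STABLE_SECONDS)

print(meets_playing_condition(0.1, 0.8, 35))  # True: viewing an exhibit
print(meets_playing_condition(1.5, 0.8, 35))  # False: walking at speed
```

The conjunction is what prevents false triggers: a stable image alone (e.g. standing in a queue facing a wall) is not enough.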
Step S203: if the sensing data falls within the preset threshold range, obtain over the network the exhibit voice-explanation information provided by the service provider of the current venue and play it.
If the sensing data falls within the preset threshold range, the user is determined to be viewing an exhibit of interest; the device then automatically obtains over the network the exhibit voice-explanation information provided by the venue's service provider and plays it to the user.
Specifically, obtaining the exhibit voice-explanation information over the network may include: the wearable device calls the camera to scan and recognise the number of the current exhibit, and performs a matching search in the cloud server that stores exhibit information according to the recognition result, obtaining the voice-explanation information of the current exhibit. Alternatively, the position of the current exhibit is determined from the wearable device's current location data, and a matching search is performed in the cloud server based on the location-matching relationship, obtaining the voice-explanation information of the current exhibit.
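The two lookup paths just described — by recognised exhibit number, or by nearest stored position — can be sketched against a toy in-memory database. The exhibit IDs, coordinates and audio labels are all invented for illustration.

```python
# Hypothetical stand-in for the cloud server's exhibit database.
EXHIBITS = {
    "E-07": {"pos": (3.0, 4.0), "audio": "explanation-of-E-07"},
    "E-12": {"pos": (9.0, 1.0), "audio": "explanation-of-E-12"},
}

def by_number(number: str):
    """Path 1: match on the number the camera scanned."""
    entry = EXHIBITS.get(number)
    return entry["audio"] if entry else None

def by_position(pos):
    """Path 2: match the nearest exhibit by squared Euclidean distance."""
    def dist2(name):
        ex, ey = EXHIBITS[name]["pos"]
        return (ex - pos[0]) ** 2 + (ey - pos[1]) ** 2
    return EXHIBITS[min(EXHIBITS, key=dist2)]["audio"]

print(by_number("E-07"))        # explanation-of-E-07
print(by_position((8.0, 2.0)))  # closest to E-12
```

The number path is exact but needs a readable label on the exhibit; the position path degrades gracefully when no label is visible.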
In addition, before the matching search is performed in the cloud server storing the exhibit information, the method may further include obtaining a permission-authentication result for accessing the server; if the authentication passes, the voice-explanation information of the current exhibit is searched for. Specifically, the server can authenticate the wearable device according to its ID; for example, if the device's ID is determined to be a legitimate ID, the authentication passes. If the authentication fails, the automatic voice playing mode is exited and the user is prompted with the reason for the exit.
Optionally, the exhibit voice-explanation information provided by the service provider includes exhibit voice-explanation information stored in speech devices provided by the service provider, where a speech device is placed at the position of one or more exhibits; correspondingly, obtaining the voice playback source and performing voice playing includes obtaining the voice playback source stored in a speech device within a preset range of the current location.
In this solution, the exhibit voice-explanation information can be obtained not only from the service provider's cloud server, but also from speech devices installed in venues such as exhibition halls. Depending on the service provider's management strategy, each exhibit may have its own speech device, or several exhibits may share one speech device that stores the voice-explanation information of each of them.
Specifically, when the sensing data falls within the preset threshold range, obtaining the voice playback source stored in a speech device within the preset range of the current location may include: obtaining the wearable device's current location data; determining a target speech device within a preset area range according to that location data, where the preset area range is a geometric region centred on the current location and the distance between the wearable device and the target speech device is less than or equal to a preset distance threshold; and obtaining the voice-explanation information in the target speech device. The distance threshold can be set adaptively as needed, for example to 50 centimetres or 1 metre; that is, the wearable device recognises only speech devices within the distance threshold, and then plays their voice-explanation information.
If the wearable device recognises several speech devices at once, it can play their exhibit voice-explanation information in sequence. Alternatively, the speech devices can be ranked by their distance from the wearable device, the smaller the distance the higher the rank, and played in that order. Alternatively, the wearable device can notify the user that several speech devices have been detected and show the exhibit thumbnail corresponding to each on its display unit, so that the user can select the desired speech device or define a custom playing order for the voice-explanation information in the several speech devices.
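The distance-ranked variant above can be sketched in a few lines: filter out devices beyond the threshold, then sort nearest-first. The 1-metre threshold matches the example in the text; the device list is invented.

```python
DISTANCE_THRESHOLD_M = 1.0  # preset threshold; 1 m is the text's example value

def play_order(devices):
    """devices: list of (device_id, distance_m); return IDs nearest-first."""
    in_range = [d for d in devices if d[1] <= DISTANCE_THRESHOLD_M]
    return [device_id for device_id, _ in sorted(in_range, key=lambda d: d[1])]

devices = [("dev-a", 0.8), ("dev-b", 0.3), ("dev-c", 2.5)]
print(play_order(devices))  # ['dev-b', 'dev-a'] — dev-c is out of range
```

Filtering before sorting keeps a distant speech device from ever entering the queue, rather than merely ranking it last.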
As can be seen from the above, in this solution, after the wearable device enables the automatic voice playing mode, sensing data associated with voice playing is acquired; when the sensing data falls within the preset threshold range, the exhibit voice-explanation information provided by the venue's service provider is obtained over the network and played. This improves the voice playing efficiency of the wearable device, reduces the number of playback steps, and extends the device's application functions. Moreover, by confirming whether to obtain the exhibit voice-explanation information from the sensing data of the acceleration sensor, the gyroscope sensor and the camera together, the wearable device judges the user's state more accurately and further avoids playing exhibit voice-explanation information that does not correspond to the current exhibit.
Fig. 3 is a flowchart of another speech playing method provided by an embodiment of the present application. Optionally, the trigger data includes the wearable device's current location data; correspondingly, the trigger data satisfies the voice playing condition when scenic-spot voice-explanation information associated with the current location data exists. Optionally, obtaining the voice playback source and performing voice playing includes obtaining the scenic-spot voice-explanation information associated with the current location data and playing it. As shown in Fig. 3, the technical solution provided by this embodiment may include the following steps.
Step S301: when an automatic voice playing instruction is detected, enable the automatic voice playing mode.
Step S302: acquire the wearable device's current location data associated with voice playing, and determine whether scenic-spot voice-explanation information associated with the current location data exists.
This solution is applicable to the case of using the user's wearable device to give an automatic briefing on a scenic spot within a scenic area. In a concrete scheme, the wearable device's current location data is the user's current location data; the scenic-spot location of interest to the user can be determined from the user's position data, and a network data search or identification-code detection then determines whether corresponding voice-explanation information exists for that spot.
Optionally, after the wearable device's current location data is obtained, a target area range can be determined from it, and the scenic spots within the target area range are taken as target spots. The wearable device connects to the scenic-area server and searches its database according to the location-matching relationship, determining whether voice-explanation information matching the target spot exists. Here, the position of a target spot can be obtained by the wearable device through infrared ranging, positioning analysis and calculation, based on its current location data and the determined target area range; the target area range can be a circular region centred on the current location with a preset distance value as its radius, where the preset distance value can be set as needed, for example to 1 metre.
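The circular target-area test just described reduces to a distance check against the preset radius. The sketch below uses plain 2-D coordinates in metres and an invented spot list; the 1-metre radius is the text's example value.

```python
import math

PRESET_RADIUS_M = 1.0  # the text's example radius

def targets_in_range(current, spots):
    """spots: dict of name -> (x, y); return the names inside the circle."""
    cx, cy = current
    return [name for name, (x, y) in spots.items()
            if math.hypot(x - cx, y - cy) <= PRESET_RADIUS_M]

spots = {"fountain": (0.5, 0.4), "pagoda": (3.0, 3.0)}
print(targets_in_range((0.0, 0.0), spots))  # ['fountain']
```

Real GPS coordinates would need a geodesic distance rather than `math.hypot`, but the centred-circle membership test is unchanged.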
Before the voice-explanation information matching the target spot is searched for in the scenic-area server, the method may further include obtaining a permission-authentication result for accessing the server; if the authentication passes, the search continues. Specifically, the server can authenticate the wearable device according to its ID; for example, if the device's ID is determined to be legitimate, the authentication passes. If the authentication fails, the automatic voice playing mode can be exited directly and the user is prompted with the reason for the exit.
Alternatively, after the target spot is determined, the wearable device calls the camera to detect whether a two-dimensional code exists at the target spot's position and whether that code contains the target spot's voice-explanation information, thereby determining whether scenic-spot voice-explanation information associated with the current location data exists. In this case, the scenic-area management service provider needs to configure in advance, for each spot, a two-dimensional code containing the voice-explanation information. If the wearable device can scan such a code, it determines that voice-explanation information exists for the target spot.
Step S303: if the information exists, obtain over the network the scenic-spot voice-explanation information associated with the current location data and play it.
Illustratively, after the target spot is determined from the wearable device's current location data, if the network data search determines that corresponding voice-explanation information exists for the target spot, the voice-explanation information of the target spot is obtained from the scenic-area server and played. If instead the identification-code detection determines that corresponding voice-explanation information exists, the target spot's voice-explanation information is obtained by recognising the identification code, and then played.
As can be seen from the above, in this solution, after the wearable device enables the automatic voice playing mode, the wearable device's current location data associated with voice playing is acquired, and it is determined whether scenic-spot voice-explanation information associated with the current location data exists; if it exists, the information is obtained over the network and played, which improves the voice playing efficiency of the wearable device and reduces the number of playback steps.
Fig. 4 is a structural block diagram of a voice playing device provided by an embodiment of the present application. The device executes the speech playing method provided by the above embodiments and has the corresponding functional modules and beneficial effects. As shown in Fig. 4, the device specifically includes a play mode enabling module 101, a trigger data judging module 102 and a voice playing module 103, wherein:
the play mode enabling module 101 is configured to enable the automatic voice playing mode when an automatic voice playing instruction is detected;
the trigger data judging module 102 is configured to acquire trigger data associated with voice playing and determine whether the trigger data satisfies the voice playing condition; and
the voice playing module 103 is configured to obtain a voice playback source over the network and perform voice playing if the trigger data satisfies the voice playing condition.
In a possible embodiment, the trigger data in trigger data judgment module 102 includes acceleration transducer With the sensing data of gyro sensor acquisition, acceleration transducer and gyro sensor are integrated in wearable device, phase It answers, voice playing module 103 is specifically used for:
If sensing data meets preset threshold range, voice broadcast source is obtained by network and carries out voice broadcasting.
In a possible embodiment, the voice playing module 103 is specifically configured to:
Obtain the exhibit speech explanation information provided by the service provider of the current location and perform voice playing, wherein the exhibit speech explanation information corresponds to the exhibit at the user's current position.
In a possible embodiment, the exhibit speech explanation information provided by the service provider in the voice playing module 103 includes:
The exhibit speech explanation information stored in a speech device provided by the service provider, wherein the speech device is arranged at the position of one or more exhibits. Correspondingly, the voice playing module 103 is specifically configured to:
Obtain the voice broadcast source stored in a speech device within a preset range of the current location.
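Selecting a speech device within the preset range of the current location could look like the sketch below; the distance formula, the range value and the sample device list are illustrative assumptions.

```python
import math

PRESET_RANGE_M = 10.0  # assumed preset range; the patent leaves it unspecified

# Hypothetical registry of speech devices placed beside exhibits.
DEVICES = [
    {"pos": (0.0, 0.0), "source": "exhibit-1 explanation audio"},
    {"pos": (50.0, 0.0), "source": "exhibit-2 explanation audio"},
]

def nearby_source(x, y, devices=DEVICES, max_dist=PRESET_RANGE_M):
    """Return the voice broadcast source stored in the nearest speech device
    within the preset range of (x, y), or None if none is in range."""
    best = None
    best_d = max_dist
    for dev in devices:
        dx, dy = dev["pos"]
        d = math.hypot(x - dx, y - dy)
        if d <= best_d:
            best, best_d = dev["source"], d
    return best
```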
In a possible embodiment, the trigger data in the trigger data judgment module 102 includes the current location data of the wearable device. Correspondingly, the voice playing module 103 is specifically configured to:
If there is scenic-spot speech explanation information associated with the current location data, obtain a voice broadcast source through the network and perform voice playing.
In a possible embodiment, the voice playing module 103 is specifically configured to:
Obtain the scenic-spot speech explanation information associated with the current location data and perform voice playing.
In a possible embodiment, the device further includes: a voice instruction generation module 104, configured to generate the automatic voice playing instruction according to received acoustic information.
On the basis of the above embodiments, this embodiment provides a wearable device. Fig. 5 is a structural schematic diagram of a wearable device provided by an embodiment of the present application, and Fig. 6 is a schematic pictorial diagram of a wearable device provided by an embodiment of the present application. As shown in Fig. 5 and Fig. 6, the wearable device includes: a memory 201, a processor (Central Processing Unit, CPU) 202, a display unit 203, a touch panel 204, a heart rate detection module 205, a distance sensor 206, a camera 207, a bone-conduction speaker 208, a microphone 209 and a breathing light 210; these components communicate through one or more communication buses or signal lines 211.
It should be understood that the illustrated device is only one example of a wearable device, and a wearable device may have more or fewer components than shown in the drawings, may combine two or more components, or may be configured with different components. The various components shown in the drawings may be realized in hardware, software, or a combination of hardware and software including one or more signal-processing and/or application-specific integrated circuits.
The wearable device for voice playing provided in this embodiment is described in detail below, taking intelligent glasses as an example of the wearable device.
The memory 201 can be accessed by the CPU 202, and may include high-speed random access memory and may also include nonvolatile memory, for example one or more disk memories, flash memory devices or other non-volatile solid-state storage components.
The display unit 203 can be used to display image data and the operation and control interface of the operating system. The display unit 203 is embedded in the frame of the intelligent glasses; the frame is internally provided with an internal transmission line 211, and the internal transmission line 211 is connected with the display unit 203.
The touch panel 204 is arranged on the outside of at least one temple of the intelligent glasses and is used to obtain touch data; the touch panel 204 is connected with the CPU 202 through the internal transmission line 211. The touch panel 204 can detect finger sliding and clicking operations of the user, and accordingly transmit the detected data to the processor 202 for processing to generate corresponding control instructions, which may illustratively be a left-shift instruction, a right-shift instruction, a move-up instruction, a move-down instruction, and so on. Illustratively, the display unit 203 can display the virtual image data transmitted by the processor 202, and the virtual image can change correspondingly according to the user operation detected by the touch panel 204. Specifically, this can be screen switching: when a left-shift or right-shift instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 203 displays video playing information, the left-shift instruction can rewind the played content and the right-shift instruction can fast-forward it. When the display unit 203 displays editable text content, the left-shift, right-shift, move-up and move-down instructions can be displacement operations on the cursor, i.e. the position of the cursor can be moved according to the user's touch operations on the touch panel. When the content displayed by the display unit 203 is a game animation picture, the left-shift, right-shift, move-up and move-down instructions can control an object in the game; for example, in an aircraft game, these instructions can respectively control the flight direction of the aircraft. When the display unit 203 displays video pictures of different channels, the left-shift, right-shift, move-up and move-down instructions can switch between channels, wherein the move-up and move-down instructions can switch to preset channels (such as channels commonly used by the user). When the display unit 203 displays static pictures, the left-shift, right-shift, move-up and move-down instructions can switch between pictures, wherein the left-shift instruction can switch to the previous picture, the right-shift instruction can switch to the next picture, the move-up instruction can switch to the previous atlas, and the move-down instruction can switch to the next atlas. The touch panel 204 can also be used to control the display switch of the display unit 203. Illustratively, when the touch area of the touch panel 204 is long-pressed, the display unit 203 is powered on and displays a graphic interface; when the touch area is long-pressed again, the display unit 203 is powered off. After the display unit 203 is powered on, sliding up and down on the touch panel 204 can adjust the brightness or resolution of the image displayed in the display unit 203.
The heart rate detection module 205 is used to measure the heart rate data of the user, heart rate referring to beats per minute; the heart rate detection module 205 is arranged on the inside of a temple. Specifically, the heart rate detection module 205 can obtain human electrocardio data using dry electrodes in an electric-pulse measurement manner, and determine the heart rate according to the peak amplitudes in the electrocardio data. The heart rate detection module 205 can also be formed of a light transmitter and a light receiver that measure the heart rate by the photoelectric method; in that case the heart rate detection module 205 is arranged at the bottom of the temple, at the earlobe of the human auricle. After collecting heart rate data, the heart rate detection module 205 correspondingly sends it to the processor 202 for data processing to obtain the current heart rate value of the wearer. In one embodiment, after determining the heart rate value of the user, the processor 202 can display the heart rate value in real time in the display unit 203; optionally, the processor 202 can correspondingly trigger an alarm when determining that the heart rate value is low (such as less than 50) or high (such as greater than 100), and at the same time send the heart rate value and/or the generated warning information to a server through a communication module.
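As a rough illustration of turning detected ECG peaks into a beats-per-minute value and applying the alarm rule mentioned above: the patent does not specify the module's internal algorithm, so the peak-interval computation below is an assumption, while the 50/100 bpm alarm bounds come from the text.

```python
def heart_rate_bpm(peak_times_s):
    """Estimate beats per minute from timestamps (in seconds) of detected
    ECG peaks; requires at least two peaks."""
    if len(peak_times_s) < 2:
        return None
    # R-R intervals between consecutive peaks, then mean interval in seconds
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    mean_rr = sum(intervals) / len(intervals)
    return 60.0 / mean_rr

def alarm_needed(bpm, low=50, high=100):
    """Mirror the alarm rule in the text: alert below 50 or above 100 bpm."""
    return bpm is not None and (bpm < low or bpm > high)
```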
The distance sensor 206 may be arranged on the frame, and is used to sense the distance from the face to the frame; the distance sensor 206 may be realized using the infrared sensing principle. Specifically, the distance sensor 206 sends the collected distance data to the processor 202, and the processor 202 controls the brightness of the display unit 203 according to this distance data. Illustratively, when it is determined that the distance collected by the distance sensor 206 is less than 5 centimeters, the processor 202 correspondingly controls the display unit 203 to be in a lit state; when it is determined that the distance sensor does not detect an approaching object, the display unit 203 is correspondingly controlled to be in an off state.
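The proximity rule in the preceding paragraph can be expressed as a small state function. The 5 cm threshold comes from the text; the function and constant names are assumptions.

```python
# Illustrative proximity-to-display rule: the frame lights the display only
# when the face is within the 5 cm threshold stated in the description.
PROXIMITY_THRESHOLD_CM = 5.0

def display_state(distance_cm):
    """Return 'on' when the sensed distance is under the threshold, else 'off'."""
    return "on" if distance_cm < PROXIMITY_THRESHOLD_CM else "off"
```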
The breathing light 210 may be arranged at the edge of the frame; when the display unit 203 stops displaying a picture, the breathing light 210 can be lit with a gradually brightening and dimming effect under the control of the processor 202.
The camera 207 can be a front photographing module arranged at the position of the upper frame, collecting image data in front of the user; it can also be a rear photographing module collecting the user's eyeball information, or a combination of the two. Specifically, when the camera 207 collects a forward image, the collected image is sent to the processor 202 for recognition and processing, and a trigger event is triggered accordingly according to the recognition result. Illustratively, when the user wears the wearable device at home, if an article of furniture is recognized in the collected forward image, it is correspondingly queried whether there is a corresponding control event; if there is, the control interface corresponding to the control event is displayed in the display unit 203, and the user can control the corresponding article of furniture through the touch panel 204, wherein the article of furniture and the intelligent glasses are connected by Bluetooth or a wireless ad hoc network. When the user wears the wearable device outdoors, a target recognition mode can be opened accordingly. This target recognition mode can be used to recognize a specific person: the camera 207 sends the collected image to the processor 202 for face recognition processing, and if a preset face is recognized, a sound broadcast can correspondingly be made through a loudspeaker integrated in the intelligent glasses. This target recognition mode can also be used to recognize different plants: for example, according to touch operations on the touch panel 204, the processor 202 controls the camera 207 to collect the current image and sends it through the communication module to a server for recognition; the server recognizes the plant in the collected image and feeds back the relevant plant name and introduction to the intelligent glasses, and the feedback data is displayed in the display unit 203.
The camera 207 can also collect an image of the user's eye, such as the eyeball, and generate different control instructions by recognizing the rotation of the eyeball. Illustratively, an upward rotation of the eyeball generates a move-up control instruction, a downward rotation generates a move-down control instruction, a leftward rotation generates a move-left control instruction, and a rightward rotation generates a move-right control instruction. Correspondingly, the display unit 203 can display the virtual image data transmitted by the processor 202, and the displayed virtual image can change according to the control instructions generated from the movement of the user's eyeball detected by the camera 207. Specifically, this can be screen switching: when a move-left or move-right control instruction is detected, the previous or next virtual image picture is switched to accordingly. When the display unit 203 displays video playing information, the move-left control instruction can rewind the played content and the move-right control instruction can fast-forward it. When the display unit 203 displays editable text content, the move-left, move-right, move-up and move-down control instructions can be displacement operations on the cursor, i.e. the position of the cursor is moved accordingly. When the content displayed by the display unit 203 is a game animation picture, the move-left, move-right, move-up and move-down control instructions can control an object in the game; for example, in an aircraft game, these control instructions can respectively control the flight direction of the aircraft. When the display unit 203 displays video pictures of different channels, the move-left, move-right, move-up and move-down control instructions can switch between channels, wherein the move-up and move-down control instructions can switch to preset channels (such as channels commonly used by the user). When the display unit 203 displays static pictures, the move-left, move-right, move-up and move-down control instructions can switch between pictures, wherein the move-left control instruction can switch to the previous picture, the move-right control instruction can switch to the next picture, the move-up control instruction can switch to the previous atlas, and the move-down control instruction can switch to the next atlas.
The bone-conduction speaker 208 is arranged on the inner wall side of at least one temple, and is used to convert the audio signal received from the processor 202 into a vibration signal. The bone-conduction speaker 208 transmits sound through the skull to the human inner ear: the electric audio signal is changed into a vibration signal, which is transmitted through the skull to the cochlea and then perceived through the auditory nerve. Using the bone-conduction speaker 208 as the sounding device reduces hardware thickness and weight, produces no electromagnetic radiation and is not affected by electromagnetic radiation, and has the advantages of noise resistance, water resistance and leaving the ears free.
The microphone 209 may be arranged on the lower frame of the glasses frame, and is used to collect external (user, environment) sound and transmit it to the processor 202 for processing. Illustratively, the microphone 209 collects the sound made by the user, and voiceprint recognition is performed through the processor 202; if the voiceprint is recognized as that of an authenticated user, subsequent voice control can correspondingly be accepted. Specifically, the user can emit voice, the microphone 209 sends the collected voice to the processor 202 for recognition, and a corresponding control instruction is generated according to the recognition result, such as "power on", "power off", "increase display brightness" or "decrease display brightness"; the processor 202 subsequently executes the corresponding control processing according to the generated control instruction.
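The voiceprint-gated voice control described above can be sketched as a two-step check: confirm the speaker is an authenticated user, then map the recognized text to a control instruction. The command strings follow the examples in the text; the identifiers and data structures are illustrative assumptions.

```python
# Hypothetical voiceprint gate plus command table, following the text's examples.
AUTHENTICATED_VOICEPRINTS = {"user-001"}

COMMANDS = {
    "power on": "POWER_ON",
    "power off": "POWER_OFF",
    "increase display brightness": "BRIGHTNESS_UP",
    "decrease display brightness": "BRIGHTNESS_DOWN",
}

def handle_voice(voiceprint_id, recognized_text):
    """Return the control instruction for an authenticated speaker's command,
    or None if the speaker is unauthenticated or the text is unrecognized."""
    if voiceprint_id not in AUTHENTICATED_VOICEPRINTS:
        return None
    return COMMANDS.get(recognized_text.strip().lower())
```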
The voice playing device of the wearable device and the wearable device provided in the above embodiments can execute the speech playing method of the wearable device provided by any embodiment of the present invention, and have the functional modules and beneficial effects corresponding to executing this method. For technical details not described in detail in the above embodiments, reference may be made to the speech playing method of the wearable device provided by any embodiment of the present invention.
An embodiment of the present application also provides a storage medium containing wearable-device-executable instructions, the wearable-device-executable instructions being used, when executed by a wearable device processor, to execute a speech playing method, the method comprising:
When an automatic voice playing instruction is detected, opening the automatic voice playing mode;
Obtaining trigger data associated with voice playing, and judging whether the trigger data meets the voice playing condition;
If the trigger data meets the voice playing condition, obtaining a voice broadcast source through the network and performing voice playing.
In a possible embodiment, the trigger data includes sensing data collected by an acceleration sensor and a gyroscope sensor, the acceleration sensor and the gyroscope sensor being integrated in the wearable device. Correspondingly, the trigger data meeting the voice playing condition includes:
The sensing data falls within a preset threshold range.
In a possible embodiment, obtaining the voice broadcast source and performing voice playing includes:
Obtaining the exhibit speech explanation information provided by the service provider of the current location and performing voice playing, the exhibit speech explanation information corresponding to the exhibit at the user's current position.
In a possible embodiment, the exhibit speech explanation information provided by the service provider includes:
The exhibit speech explanation information stored in a speech device provided by the service provider, wherein the speech device is arranged at the position of one or more exhibits. Correspondingly, obtaining the voice broadcast source and performing voice playing includes:
Obtaining the voice broadcast source stored in a speech device within a preset range of the current location.
In a possible embodiment, the trigger data includes the current location data of the wearable device. Correspondingly, the trigger data meeting the voice playing condition includes:
There is scenic-spot speech explanation information associated with the current location data.
In a possible embodiment, obtaining the voice broadcast source and performing voice playing includes:
Obtaining the scenic-spot speech explanation information associated with the current location data and performing voice playing.
In a possible embodiment, before the automatic voice playing instruction is detected, the method further includes:
Generating the automatic voice playing instruction according to received acoustic information.
The storage medium may be any of various types of memory devices or storage equipment. The term "storage medium" is intended to include: installation media, such as CD-ROM, floppy disks or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; nonvolatile memory, such as flash memory or magnetic media (for example a hard disk or optical storage); registers or other similar types of memory components, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the Internet); the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example in different computer systems connected by a network). The storage medium may store program instructions (for example embodied as computer programs) executable by one or more processors.
Certainly, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the speech playing method operations described above, and can also perform relevant operations in the speech playing method provided by any embodiment of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described herein, and that various apparent changes, readjustments and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments, and may also include more other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A speech playing method, characterized by comprising:
when an automatic voice playing instruction is detected, opening an automatic voice playing mode;
obtaining trigger data associated with voice playing, and judging whether the trigger data meets a voice playing condition;
if the trigger data meets the voice playing condition, obtaining a voice broadcast source through a network and performing voice playing.
2. The method according to claim 1, characterized in that the trigger data includes sensing data collected by an acceleration sensor and a gyroscope sensor, the acceleration sensor and the gyroscope sensor being integrated in a wearable device; correspondingly, the trigger data meeting the voice playing condition includes:
the sensing data falls within a preset threshold range.
3. The method according to claim 2, characterized in that obtaining the voice broadcast source and performing voice playing includes:
obtaining exhibit speech explanation information provided by a service provider of the current location and performing voice playing, the exhibit speech explanation information corresponding to the exhibit at the user's current position.
4. The method according to claim 3, characterized in that the exhibit speech explanation information provided by the service provider includes:
exhibit speech explanation information stored in a speech device provided by the service provider, wherein the speech device is arranged at the position of one or more exhibits; correspondingly, obtaining the voice broadcast source and performing voice playing includes:
obtaining the voice broadcast source stored in a speech device within a preset range of the current location.
5. The method according to claim 1, characterized in that the trigger data includes current location data of the wearable device; correspondingly, the trigger data meeting the voice playing condition includes:
there is scenic-spot speech explanation information associated with the current location data.
6. The method according to claim 5, characterized in that obtaining the voice broadcast source and performing voice playing includes:
obtaining the scenic-spot speech explanation information associated with the current location data and performing voice playing.
7. The method according to any one of claims 1-6, characterized in that, before the automatic voice playing instruction is detected, the method further includes:
generating the automatic voice playing instruction according to received acoustic information.
8. A voice playing device, characterized by comprising:
a play mode opening module, configured to open an automatic voice playing mode when an automatic voice playing instruction is detected;
a trigger data judgment module, configured to obtain trigger data associated with voice playing and judge whether the trigger data meets a voice playing condition;
a voice playing module, configured to, if the trigger data meets the voice playing condition, obtain a voice broadcast source through a network and perform voice playing.
9. A wearable device, comprising: a processor, a memory, and a computer program stored in the memory and runnable on the processor, characterized in that the processor implements the speech playing method according to any one of claims 1-7 when executing the computer program.
10. A storage medium containing wearable-device-executable instructions, characterized in that the wearable-device-executable instructions, when executed by a wearable device processor, are used to execute the speech playing method according to any one of claims 1-7.
CN201811001035.1A 2018-08-30 2018-08-30 Speech playing method, device, wearable device and storage medium Pending CN109144465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811001035.1A CN109144465A (en) 2018-08-30 2018-08-30 Speech playing method, device, wearable device and storage medium


Publications (1)

Publication Number Publication Date
CN109144465A true CN109144465A (en) 2019-01-04

Family

ID=64829206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811001035.1A Pending CN109144465A (en) 2018-08-30 2018-08-30 Speech playing method, device, wearable device and storage medium

Country Status (1)

Country Link
CN (1) CN109144465A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222550A (en) * 2019-07-11 2019-09-10 上海肇观电子科技有限公司 Information broadcasting method, circuit, casting equipment, storage medium, intelligent glasses
CN117492562A (en) * 2023-11-02 2024-02-02 深圳腾信百纳科技有限公司 An exhibition hall control method, system and storage medium combined with smart wear

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1405997A (en) * 2002-10-30 2003-03-26 上海奥达光电子科技有限公司 Infrared tour-guiding system
US20130228615A1 (en) * 2012-03-01 2013-09-05 Elwha Llc Systems and methods for scanning a user environment and evaluating data of interest
JP2013186476A (en) * 2012-03-09 2013-09-19 Hon Hai Precision Industry Co Ltd Voice guidance system and method for the same
CN205666036U (en) * 2016-02-22 2016-10-26 陈进民 On --spot automatic explanation system based on intelligent vision
CN106131312A (en) * 2016-06-21 2016-11-16 广东欧珀移动通信有限公司 Voice message playback method, device and mobile terminal
CN106557166A (en) * 2016-11-23 2017-04-05 上海擎感智能科技有限公司 Intelligent glasses and its control method, control device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222550A (en) * 2019-07-11 2019-09-10 上海肇观电子科技有限公司 Information broadcasting method, circuit, casting equipment, storage medium, intelligent glasses
CN117492562A (en) * 2023-11-02 2024-02-02 深圳腾信百纳科技有限公司 An exhibition hall control method, system and storage medium combined with smart wear
CN117492562B (en) * 2023-11-02 2025-04-29 深圳腾信百纳科技有限公司 A exhibition hall control method, system and storage medium combined with smart wearable


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190104

RJ01 Rejection of invention patent application after publication