CN111367407B - Intelligent glasses interaction method, intelligent glasses interaction device and intelligent glasses
- Publication number: CN111367407B
- Application number: CN202010113481.2A
- Authority
- CN
- China
- Prior art keywords
- virtual reality
- interaction interface
- intelligent glasses
- voice
- reality interaction
- Prior art date: 2020-02-24
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/225—Feedback of the input speech
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application is applicable to the technical field of intelligent glasses, and provides an intelligent glasses interaction method, an intelligent glasses interaction device, intelligent glasses and a computer-readable storage medium. The method includes: when the intelligent glasses are in a preset mode, acquiring a voice instruction through the intelligent glasses; determining a scene category corresponding to the voice instruction; if the scene category does not belong to the voice control scene category, generating first prompt information, where the first prompt information is used to ask the user whether to display a virtual reality interaction interface; and if feedback information confirming display of the virtual reality interaction interface is received, displaying display content associated with the voice instruction on the virtual reality interaction interface through the intelligent glasses. This method addresses the problem that the existing interaction mode with intelligent glasses is single and difficult to meet the needs of different users and different scenes.
Description
Technical Field
The application belongs to the technical field of intelligent glasses, and particularly relates to an intelligent glasses interaction method, an intelligent glasses interaction device, intelligent glasses and a computer-readable storage medium.
Background
With the development of technology, intelligent glasses are gradually entering into the life of people.
In daily use, a user needs to perform a large number of interaction operations with the virtual reality interaction interface, for example, controlling the intelligent glasses by clicking the virtual reality interaction interface; at this time, the user needs to perform operations such as touching and clicking by hand. The existing interaction mode with intelligent glasses is therefore single and difficult to meet the needs of different users and different scenes.
Disclosure of Invention
The embodiments of the application provide an intelligent glasses interaction method, an intelligent glasses interaction device, intelligent glasses and a computer-readable storage medium, which can solve the problem that the existing interaction mode with intelligent glasses is single and difficult to meet the needs of different users and different scenes.
In a first aspect, an embodiment of the present application provides an intelligent glasses interaction method, including:
when the intelligent glasses are in a preset mode, voice instructions are acquired through the intelligent glasses;
determining a scene category corresponding to the voice instruction;
if the scene category does not belong to the voice control scene category, generating first prompt information, wherein the first prompt information is used for prompting a user whether to display a virtual reality interaction interface or not;
and if feedback information confirming to display the virtual reality interaction interface is received, displaying display content related to the voice instruction on the virtual reality interaction interface through the intelligent glasses.
In a second aspect, an embodiment of the present application provides an intelligent glasses interaction device, including:
the acquisition module is used for acquiring voice instructions through the intelligent glasses when the intelligent glasses are in a preset mode;
the determining module is used for determining the scene category corresponding to the voice instruction;
the prompting module is used for generating first prompting information if the scene category does not belong to the voice control scene category, wherein the first prompting information is used for prompting a user whether to display a virtual reality interaction interface or not;
and the display module is used for displaying the display content related to the voice instruction on the virtual reality interaction interface through the intelligent glasses if the feedback information confirming the display of the virtual reality interaction interface is received.
In a third aspect, an embodiment of the present application provides a smart glasses, including a memory, a processor, a display, and a computer program stored in the memory and capable of running on the processor, where the processor implements the smart glasses interaction method as described in the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to implement the smart glasses interaction method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which when run on smart glasses, causes the smart glasses to perform the smart glasses interaction method described in the first aspect.
Compared with the prior art, the embodiments of the application have the following beneficial effects: when the intelligent glasses are in the preset mode, a voice instruction is acquired through the intelligent glasses and the scene category corresponding to the voice instruction is determined, so that when the scene category does not belong to the voice control scene category, first prompt information is generated to ask the user whether to display the virtual reality interaction interface. In the preset mode, the user can thus interact with the intelligent glasses by voice first, and is only asked about displaying the virtual reality interaction interface when the scene involved in the user's voice goes beyond what voice control can handle. Because the interaction mode with the intelligent glasses is determined according to the scene category corresponding to the voice instruction, the interaction mode can be adjusted to the needs of the user, a more efficient interaction mode can be selected for each scene, the interaction needs of different users and different scenes are met, and the use experience of the user is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an interaction method of smart glasses according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of step S104 according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an intelligent glasses interaction device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an intelligent glasses provided by an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Specifically, fig. 1 shows a flowchart of an intelligent glasses interaction method provided by an embodiment of the present application, where the intelligent glasses interaction method is applied to intelligent glasses.
In the embodiment of the present application, the specific structure of the smart glasses may be various, which is not limited herein. For example, in some examples, the smart glasses may include a control circuit, a frame body, a first temple and/or a second temple, and the like, one end of the frame body may be connected to the first temple, the other end of the frame body may be connected to the second temple, and the control circuit may be used for performing data processing. In addition, the intelligent glasses can further comprise other components. Further, in some embodiments, the smart glasses described above may include one or more sensors, such as may include one or more of a depth camera, an infrared sensor, an ultrasonic sensor, a hall sensor, and the like. The specific setting positions of the sensors can be determined according to actual application scenes.
The intelligent glasses interaction method can comprise the following steps:
step S101, when the intelligent glasses are in a preset mode, voice instructions are acquired through the intelligent glasses.
In the embodiment of the present application, the preset mode may be set according to an actual application scenario, for example, the preset mode may be a default mode of the smart glasses in an enabled state; alternatively, the preset mode may be a voice control mode or the like selected by the user in advance. When the intelligent glasses are in the preset mode, the basic interaction mode with the intelligent glasses can be considered to be a voice interaction mode.
The voice command may be obtained by the smart glasses through a voice recognition mode, or the smart glasses may receive the voice command from other terminals (such as headphones, mobile phones, or other wearable devices) through a preset information transmission mode.
In some embodiments, when the smart glasses are in the preset mode, the method for acquiring the voice command through the smart glasses includes:
when the intelligent glasses are in a preset mode, voice information is received through the intelligent glasses;
and carrying out voice recognition on the voice information to obtain a voice instruction.
The smart glasses may include a microphone for receiving the voice information. The specific method for performing the voice recognition on the voice information may be various, and is not limited herein.
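By way of illustration only, the acquisition flow of step S101 could be organized as in the following sketch; the glasses object and its microphone and speech_recognizer attributes are hypothetical placeholders rather than any real smart-glasses SDK.

```python
# A minimal sketch of step S101. Everything here (the glasses object, its
# microphone and speech_recognizer attributes) is a hypothetical placeholder.

PRESET_MODE = "voice_control"

def acquire_voice_command(glasses):
    """Return the recognized text, or None when not in the preset mode."""
    if glasses.current_mode != PRESET_MODE:
        return None
    audio = glasses.microphone.record()                 # receive voice information
    return glasses.speech_recognizer.transcribe(audio)  # perform voice recognition
```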
Step S102, determining the scene category corresponding to the voice command.
In the embodiment of the present application, the scene category may be determined according to keywords contained in the voice instruction. Alternatively, the scene category corresponding to the voice instruction may be determined based on machine learning. The voice instruction may correspond to one or more scene categories, and the set of scene categories may be adjusted according to the actual application scene. For example, in some examples the scene categories may include a voice control scene category and a non-voice control scene category, where, depending on the needs of the application scene, the non-voice control scene category may include one or more of a visual scene category, a category controlled by other input devices (e.g., keyboard, mouse, etc.), and so on.
For example, in some examples, if it is determined that the voice instruction relates to booking a movie ticket, or to viewing content that requires visual support such as a picture, it may be determined that the scene category corresponding to the voice instruction belongs to the visual scene category.
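By way of illustration only, a keyword-based classifier of the kind described above could look as follows; the keyword table is a hypothetical example, and an embodiment could equally rely on a trained machine-learning model.

```python
# A minimal sketch of step S102: keyword-based scene classification.
# The keyword table is a hypothetical example.

SCENE_KEYWORDS = {
    "visual": ("movie ticket", "picture", "photo", "map"),
    "voice_control": ("volume", "play", "pause", "call"),
}

def classify_scene(command_text):
    """Return the set of scene categories matched by the voice instruction."""
    text = command_text.lower()
    matched = {category
               for category, words in SCENE_KEYWORDS.items()
               if any(word in text for word in words)}
    return matched or {"voice_control"}  # default to plain voice control
```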
Step S103, if the scene category does not belong to the voice control scene category, generating first prompt information, wherein the first prompt information is used for prompting a user whether to display a virtual reality interaction interface.
In the embodiment of the present application, the scene category may include one category or multiple categories, and if the scene category includes multiple categories, the scene category not belonging to the voice control scene category may refer to a category that does not belong to the voice control scene category in the scene category.
For example, in one scenario, if the voice instruction relates to booking a movie ticket, it may be determined that the scene category corresponding to the voice instruction belongs to the visual scene category but not to the voice control scene category, and at this time the first prompt information may be generated. The first prompt information may be, for example, voice prompt information, or visual display information presented by a preset display device (such as the smart glasses or a handheld device). Through the first prompt information, the user can be asked whether to display the virtual reality interaction interface, so as to enable virtual reality interaction.
Step S104, if feedback information confirming to display the virtual reality interaction interface is received, display content related to the voice command is displayed on the virtual reality interaction interface through the intelligent glasses.
In the embodiment of the application, the specific form and acquisition mode of the feedback information can be determined according to the actual application scene. By way of example, the feedback information may be voice information or the like. If feedback information from the user confirming display of the virtual reality interaction interface is received, the display content associated with the voice instruction can be determined and displayed on the virtual reality interaction interface. In some examples, after the virtual reality interaction interface is displayed, the basic interaction mode of the smart glasses may change to an interaction mode based on the virtual reality interaction interface.
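By way of illustration only, steps S103 and S104 could be combined as in the following sketch, which reuses the classify_scene sketch above; prompt_user, execute_voice_action, lookup_content and vr_interface are hypothetical placeholders.

```python
# A minimal sketch of steps S103-S104: prompt when the scene exceeds voice
# control, and display content only after the user confirms.

def handle_command(glasses, command_text):
    categories = classify_scene(command_text)
    if categories <= {"voice_control"}:
        glasses.execute_voice_action(command_text)  # stay in voice interaction
        return
    # Step S103: first prompt information.
    confirmed = glasses.prompt_user("Display the virtual reality interaction interface?")
    if confirmed:  # step S104: feedback information confirming display
        content = glasses.lookup_content(command_text)
        glasses.vr_interface.render(content)
```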
In some embodiments, as shown in fig. 2, if feedback information confirming display of the virtual reality interaction interface is received, display content associated with the voice command is displayed on the virtual reality interaction interface through the smart glasses, including:
step S201, if feedback information confirming display of the virtual reality interaction interface is received, determining a display area of the virtual reality interaction interface;
step S202, displaying the display content on the virtual reality interaction interface through the intelligent glasses according to the display area.
In the embodiment of the application, the display area of the virtual reality interaction interface can be determined according to the voice command, the preset information, the use state of the intelligent glasses and the like.
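By way of illustration only, such a selection of the display area could be sketched as follows; the area values and the use-state check are hypothetical stand-ins for the factors named above.

```python
# A minimal sketch of steps S201-S202: choosing a display area from the
# voice instruction, preset information and use state of the glasses.

def choose_display_area(glasses, command_text):
    """Return a display area in normalized field-of-view coordinates."""
    if glasses.user_is_moving:                # use state: keep the center clear
        return {"x": 0.70, "y": 0.05, "w": 0.25, "h": 0.25}
    if "map" in command_text.lower():         # voice instruction needs space
        return {"x": 0.10, "y": 0.10, "w": 0.80, "h": 0.80}
    return glasses.preset_display_area        # preset information
```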
In some embodiments, after displaying the display content associated with the voice command through the virtual reality interaction interface of the smart glasses, the method further includes:
detecting the operation of a user on the virtual reality interaction interface;
according to the above operation, an operation instruction is determined.
In the embodiment of the application, the user's operation on the virtual reality interaction interface may be a gesture operation, a virtual reality interaction operation with the virtual reality interaction interface, or the like. The virtual reality interaction operation may be determined according to the projection position of the virtual reality interaction interface in the three-dimensional map corresponding to the smart glasses, the position of the user's operation in that three-dimensional map, and so on. The operation intention of the user can then be determined from the operation, and a corresponding operation instruction determined accordingly.
In some embodiments, the operation of the virtual reality interaction interface by the user may be detected by a device such as a depth camera, an infrared sensor, an ultrasonic sensor, or the like, for example.
In some embodiments, the detecting the operation of the user on the virtual reality interaction interface includes:
detecting the operation of a user on the virtual reality interaction interface through a depth camera on the intelligent glasses;
according to the above operation, determining the operation instruction includes:
if the operation is in the target operation area, determining an operation instruction according to the operation.
In the embodiment of the application, the spatial position information of the operation of the user on the virtual reality interaction interface can be determined through the depth camera, and whether the operation is in a target operation area or not is judged according to the spatial position information. The target operation area may be a two-dimensional space area or a three-dimensional space area. If the operation is within the target operation region, the operation instruction may be determined according to the operation.
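By way of illustration only, the target-area test could be implemented as follows, modeling the target operation area as an axis-aligned three-dimensional box in the depth camera's coordinate frame; the box bounds are hypothetical values.

```python
# A minimal sketch of the target-area test: the operation position reported
# by the depth camera is checked against an axis-aligned 3D box.

TARGET_AREA = (
    (-0.20, 0.20),   # x range, meters
    (-0.15, 0.15),   # y range, meters
    ( 0.25, 0.60),   # z range, meters (distance from the glasses)
)

def in_target_area(point):
    """point: (x, y, z) position of the user's operation from the depth camera."""
    return all(lo <= coord <= hi for coord, (lo, hi) in zip(point, TARGET_AREA))
```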
In some embodiments, after detecting the operation of the user on the virtual reality interaction interface by the depth camera on the smart glasses, the method further includes:
and if the operation is not in the target operation area, displaying second prompting information on the virtual reality interaction interface according to the relative position relation between the operation and the target operation area, wherein the second prompting information is used for prompting a user to adjust the operation to the target operation area.
In the embodiment of the application, if the operation is not in the target operation area, the second prompt information can be displayed on the virtual reality interaction interface. This helps the user relate the three-dimensional map corresponding to the virtual reality interaction interface to the actual physical environment, locate the target operation area, and thus execute the operation more accurately.
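By way of illustration only, the second prompt information could be derived from the relative position as in the following sketch, which reuses the TARGET_AREA sketch above; the hint wording is hypothetical.

```python
# A minimal sketch: turn the offset between the operation and the target
# operation area into adjustment hints for the second prompt information.

def second_prompt(point):
    hints = []
    directions = (("move right", "move left"),
                  ("move up", "move down"),
                  ("move farther", "move closer"))
    for coord, (lo, hi), (below, above) in zip(point, TARGET_AREA, directions):
        if coord < lo:
            hints.append(below)
        elif coord > hi:
            hints.append(above)
    return "Please " + " and ".join(hints) + " to enter the operation area."
```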
In some embodiments, after displaying the display content associated with the voice command on the virtual reality interaction interface through the smart glasses, the method further includes:
and if a closing instruction for closing the virtual reality interaction interface is received, closing the virtual reality interaction interface and closing the depth camera.
In the embodiment of the present application, the closing instruction may be obtained in various manners. For example, the user may input the closing instruction through a virtual key in the virtual reality interactive interface, or may input the closing instruction through voice, or the like. In some examples, if the user completes the operation on the virtual reality interaction interface, the smart glasses may be instructed to close the virtual reality interaction interface and close the depth camera by inputting the closing instruction, so as to reduce power consumption. In some examples, after the virtual reality interaction interface is closed, the basic interaction mode of the smart glasses may be a voice interaction mode, so that the interaction mode of the smart glasses is more concise.
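By way of illustration only, handling of the closing instruction could be sketched as follows, continuing the placeholders used in the earlier sketches.

```python
# A minimal sketch of handling the closing instruction: both the interface
# and the depth camera are shut down to reduce power consumption, and the
# glasses fall back to the plain voice interaction mode.

def handle_close(glasses):
    glasses.vr_interface.hide()
    glasses.depth_camera.power_off()    # reduce power consumption
    glasses.current_mode = PRESET_MODE  # basic interaction mode: voice
```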
In the embodiment of the application, when the intelligent glasses are in the preset mode, a voice instruction is acquired through the intelligent glasses and the scene category corresponding to the voice instruction is determined, so that when the scene category does not belong to the voice control scene category, first prompt information is generated to ask the user whether to display the virtual reality interaction interface. In the preset mode the user can therefore interact with the intelligent glasses by voice first, and is prompted about the virtual reality interaction interface only when the scene involved in the user's voice goes beyond what voice control can handle. Because the interaction mode with the intelligent glasses is determined according to the scene category corresponding to the voice instruction, it can be adjusted to the needs of the user, a more efficient interaction mode can be selected for each scene, the interaction needs of different users and different scenes are met, and the use experience of the user is improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the above-mentioned intelligent glasses interaction method of the embodiment, fig. 3 shows a block diagram of an intelligent glasses interaction device according to an embodiment of the present application, and for convenience of explanation, only the parts related to the embodiment of the present application are shown.
Referring to fig. 3, the smart glasses interaction device 3 includes:
the obtaining module 301 is configured to obtain a voice command through the smart glasses when the smart glasses are in a preset mode;
a determining module 302, configured to determine a scene category corresponding to the voice command;
the prompting module 303 is configured to generate first prompting information if the scene category does not belong to the voice control scene category, where the first prompting information is used to prompt a user whether to display a virtual reality interaction interface;
and the display module 304 is configured to display, on the virtual reality interaction interface, display content associated with the voice command through the smart glasses if feedback information confirming display of the virtual reality interaction interface is received.
Optionally, the display module 304 includes:
the determining unit is used for determining the display area of the virtual reality interaction interface if feedback information confirming display of the virtual reality interaction interface is received;
and the display unit is used for displaying the display content on the virtual reality interaction interface through the intelligent glasses according to the display area.
Optionally, the smart glasses interaction device 3 further includes:
the detection module is used for detecting the operation of the user on the virtual reality interaction interface;
and the second determining module is used for determining an operation instruction according to the operation.
Optionally, the detection module is specifically configured to:
detecting the operation of a user on the virtual reality interaction interface through a depth camera on the intelligent glasses;
the second determining module is specifically configured to:
if the operation is in the target operation area, determining an operation instruction according to the operation.
Optionally, the smart glasses interaction device 3 further includes:
and the second prompting module is used for displaying second prompting information on the virtual reality interaction interface according to the relative position relation between the operation and the target operation area if the operation is not in the target operation area, and the second prompting information is used for prompting a user to adjust the operation to the target operation area.
Optionally, the smart glasses interaction device 3 further includes:
and the closing module is used for closing the virtual reality interaction interface and closing the depth camera if a closing instruction for closing the virtual reality interaction interface is received.
Optionally, the acquiring module 301 specifically includes:
the receiving unit is used for receiving voice information through the intelligent glasses when the intelligent glasses are in a preset mode;
and the recognition unit is used for carrying out voice recognition on the voice information to obtain a voice instruction.
In the embodiment of the application, when the intelligent glasses are in the preset mode, a voice instruction is acquired through the intelligent glasses and the scene category corresponding to the voice instruction is determined, so that when the scene category does not belong to the voice control scene category, first prompt information is generated to ask the user whether to display the virtual reality interaction interface. In the preset mode the user can therefore interact with the intelligent glasses by voice first, and is prompted about the virtual reality interaction interface only when the scene involved in the user's voice goes beyond what voice control can handle. Because the interaction mode with the intelligent glasses is determined according to the scene category corresponding to the voice instruction, it can be adjusted to the needs of the user, a more efficient interaction mode can be selected for each scene, the interaction needs of different users and different scenes are met, and the use experience of the user is improved.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Fig. 4 is a schematic structural diagram of smart glasses according to an embodiment of the application. As shown in fig. 4, the smart glasses 4 of this embodiment include: at least one processor 40 (only one is shown in fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40. The processor 40 implements the steps of any of the smart glasses interaction method embodiments described above when executing the computer program 42.
In the embodiment of the present application, the smart glasses 4 may include, but are not limited to, the processor 40 and the memory 41, and the specific structure of the smart glasses 4 may vary. By way of example, the smart glasses 4 may include a control circuit, a frame body, a first temple, a second temple, and one or more sensors (e.g., cameras, infrared sensors, ultrasonic sensors, hall sensors, etc.). In some examples the processor 40 and the memory 41 may be provided in the control circuit. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the smart glasses 4 and does not constitute a limitation on the smart glasses 4, which may include more or fewer components than shown, combine certain components, or use different components; for example, the smart glasses may also include input-output devices, network access devices, etc.
The processor 40 may be a central processing unit (Central Processing Unit, CPU), and the processor 40 may be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may, in some embodiments, be an internal storage unit of the control circuit, such as a hard disk or memory of the control circuit. In other embodiments, the memory 41 may be an external storage device attached to the control circuit, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card. Further, the memory 41 may include both an internal storage unit and an external storage device of the control circuit. The memory 41 is used to store an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program described above. The memory 41 may also be used to temporarily store data that has been output or is to be output.
In the embodiment of the present application, when the processor 40 executes the computer program 42 to implement the steps of any of the smart glasses interaction method embodiments, a voice instruction is acquired through the smart glasses when they are in the preset mode, and the scene category corresponding to the voice instruction is determined, so that when the scene category does not belong to the voice control scene category, first prompt information is generated to ask the user whether to display the virtual reality interaction interface. In the preset mode the user can thus interact with the smart glasses by voice first, and is prompted about the virtual reality interaction interface only when the scene involved in the user's voice goes beyond what voice control can handle. Because the interaction mode with the smart glasses is determined according to the scene category corresponding to the voice instruction, it can be adjusted to the needs of the user, a more efficient interaction mode can be selected for each scene, the interaction needs of different users and different scenes are met, and the use experience of the user is improved.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps in the intelligent glasses interaction method embodiments when being executed by a processor.
Embodiments of the present application provide a computer program product that, when run on smart glasses, enables the smart glasses to implement the steps in the various smart glasses interaction method embodiments described above when executed.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of modules or elements described above is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (9)
1. An intelligent glasses interaction method is characterized by comprising the following steps:
when the intelligent glasses are in a preset mode, voice instructions are acquired through the intelligent glasses;
determining a scene category corresponding to the voice instruction;
if the scene category does not belong to the voice control scene category, generating first prompt information, wherein the first prompt information is used for prompting a user whether to display a virtual reality interaction interface or not;
if feedback information confirming to display the virtual reality interaction interface is received, displaying display content associated with the voice instruction on the virtual reality interaction interface through the intelligent glasses;
detecting operation of a user on the virtual reality interaction interface;
and determining an operation instruction according to the operation.
2. The smart glasses interaction method according to claim 1, wherein displaying, by the smart glasses, the display content associated with the voice command on the virtual reality interaction interface if feedback information confirming that the virtual reality interaction interface is displayed is received, comprising:
if feedback information confirming display of the virtual reality interaction interface is received, determining a display area of the virtual reality interaction interface;
and displaying the display content on the virtual reality interaction interface through the intelligent glasses according to the display area.
3. The smart glasses interaction method according to claim 1, wherein the detecting the operation of the virtual reality interaction interface by the user includes:
detecting the operation of a user on the virtual reality interaction interface through a depth camera on the intelligent glasses;
the determining an operation instruction according to the operation comprises the following steps:
and if the operation is in the target operation area, determining an operation instruction according to the operation.
4. The smart glasses interaction method as claimed in claim 3, further comprising, after detecting the user's operation of the virtual reality interaction interface through a depth camera on the smart glasses:
and if the operation is not in the target operation area, displaying second prompt information on the virtual reality interaction interface according to the relative position relation between the operation and the target operation area, wherein the second prompt information is used for prompting a user to adjust the operation into the target operation area.
5. The smart glasses interaction method according to claim 3, further comprising, after displaying display content associated with the voice instruction on the virtual reality interaction interface through the smart glasses:
and if a closing instruction for closing the virtual reality interaction interface is received, closing the virtual reality interaction interface and closing the depth camera.
6. The smart glasses interaction method according to any one of claims 1 to 5, wherein, when the smart glasses are in a preset mode, acquiring a voice command through the smart glasses comprises:
when the intelligent glasses are in a preset mode, voice information is received through the intelligent glasses;
and carrying out voice recognition on the voice information to obtain a voice instruction.
7. An intelligent eyeglass interaction device, comprising:
the acquisition module is used for acquiring voice instructions through the intelligent glasses when the intelligent glasses are in a preset mode;
the determining module is used for determining the scene category corresponding to the voice instruction;
the prompting module is used for generating first prompting information if the scene category does not belong to the voice control scene category, wherein the first prompting information is used for prompting a user whether to display a virtual reality interaction interface or not;
the display module is used for displaying display content associated with the voice instruction on the virtual reality interaction interface through the intelligent glasses if feedback information confirming to display the virtual reality interaction interface is received;
the detection module is used for detecting the operation of the user on the virtual reality interaction interface;
and the second determining module is used for determining an operation instruction according to the operation.
8. A smart glasses comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the smart glasses interaction method according to any of claims 1 to 6 when executing the computer program.
9. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the smart glasses interaction method according to any one of claims 1 to 6.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010113481.2A | 2020-02-24 | 2020-02-24 | Intelligent glasses interaction method, intelligent glasses interaction device and intelligent glasses
Publications (2)

Publication Number | Publication Date
---|---
CN111367407A | 2020-07-03
CN111367407B | 2023-10-10
Family
ID=71206269
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010113481.2A | Intelligent glasses interaction method, intelligent glasses interaction device and intelligent glasses | 2020-02-24 | 2020-02-24

Country Status (1)

Country | Link
---|---
CN | CN111367407B (en)
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113329375B (en) * | 2021-05-27 | 2023-06-27 | Oppo广东移动通信有限公司 | Content processing method, device, system, storage medium and electronic equipment |
CN113421567A (en) * | 2021-08-25 | 2021-09-21 | 江西影创信息产业有限公司 | Terminal equipment control method and system based on intelligent glasses and intelligent glasses |
CN113900578A (en) * | 2021-09-08 | 2022-01-07 | 北京乐驾科技有限公司 | Method for interaction of AR glasses, and AR glasses |
CN115793848B (en) * | 2022-11-04 | 2023-11-24 | 浙江舜为科技有限公司 | Virtual reality information interaction method, virtual reality device and storage medium |
- 2020-02-24: Application CN202010113481.2A filed in China; granted as CN111367407B (legal status: Active)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017129148A1 (en) * | 2016-01-25 | 2017-08-03 | 亮风台(上海)信息科技有限公司 | Method and devices used for implementing augmented reality interaction and displaying |
US20180011531A1 (en) * | 2016-07-07 | 2018-01-11 | Google Inc. | Methods and apparatus to determine objects to present in virtual reality environments |
CN106293346A (en) * | 2016-08-11 | 2017-01-04 | 深圳市金立通信设备有限公司 | The changing method of a kind of virtual reality what comes into a driver's and terminal |
US20180067717A1 (en) * | 2016-09-02 | 2018-03-08 | Allomind, Inc. | Voice-driven interface to control multi-layered content in a head mounted display |
CN106558310A (en) * | 2016-10-14 | 2017-04-05 | 北京百度网讯科技有限公司 | Virtual reality sound control method and device |
US20180108357A1 (en) * | 2016-10-14 | 2018-04-19 | Beijing Baidu Netcom Science And Technology Co., L Td. | Virtual reality speech control method and apparatus |
CN107300970A (en) * | 2017-06-05 | 2017-10-27 | 百度在线网络技术(北京)有限公司 | Virtual reality exchange method and device |
CN107515674A (en) * | 2017-08-08 | 2017-12-26 | 山东科技大学 | A method for implementing multi-interaction in mining operations based on virtual reality and augmented reality |
CN108334199A (en) * | 2018-02-12 | 2018-07-27 | 华南理工大学 | The multi-modal exchange method of movable type based on augmented reality and device |
CN108874126A (en) * | 2018-05-30 | 2018-11-23 | 北京致臻智造科技有限公司 | Exchange method and system based on virtual reality device |
CN110634477A (en) * | 2018-06-21 | 2019-12-31 | 海信集团有限公司 | Context judgment method, device and system based on scene perception |
CN109960537A (en) * | 2019-03-29 | 2019-07-02 | 北京金山安全软件有限公司 | Interaction method and device and electronic equipment |
US20200027459A1 (en) * | 2019-09-09 | 2020-01-23 | Lg Electronics Inc. | Artificial intelligence apparatus and method for recognizing speech of user |
CN110825224A (en) * | 2019-10-25 | 2020-02-21 | 北京威尔文教科技有限责任公司 | Interaction method, interaction system and display device |
Also Published As
Publication number | Publication date |
---|---|
CN111367407A (en) | 2020-07-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111367407B (en) | Intelligent glasses interaction method, intelligent glasses interaction device and intelligent glasses | |
JP7089106B2 (en) | Image processing methods and equipment, electronic devices, computer-readable storage media and computer programs | |
EP3086275A1 (en) | Numerical value transfer method, terminal, cloud server, computer program and recording medium | |
US10198831B2 (en) | Method, apparatus and system for rendering virtual content | |
CN109086742A (en) | scene recognition method, scene recognition device and mobile terminal | |
CN110457963B (en) | Display control method, display control device, mobile terminal and computer-readable storage medium | |
CN109166156A (en) | A kind of generation method, mobile terminal and the storage medium of camera calibration image | |
CN113936089B (en) | Interface rendering method and device, storage medium and electronic device | |
CN109118447A (en) | Picture processing method, picture processing device and terminal equipment | |
CN105183571A (en) | Function calling method and device | |
CN105354005A (en) | Method and apparatus for renovating point ranking | |
CN115984126A (en) | Optical image correction method and device based on input instruction | |
CN111597009B (en) | Application program display method and device and terminal equipment | |
CN112633218B (en) | Face detection method, face detection device, terminal equipment and computer readable storage medium | |
CN112734015B (en) | Network generation method and device, electronic equipment and storage medium | |
CN107704884B (en) | Image tag processing method, image tag processing device and electronic terminal | |
CN110134370B (en) | Graph drawing method and device, electronic equipment and storage medium | |
CN107340962B (en) | Input method, device and virtual reality device based on virtual reality device | |
CN115578290A (en) | Image refining method and device based on high-precision shooting matrix | |
CN112203131B (en) | Prompting method and device based on display equipment and storage medium | |
CN111741222B (en) | Image generation method, image generation device and terminal equipment | |
CN116468883B (en) | High-precision image data volume fog recognition method and device | |
CN116664413B (en) | Image volume fog eliminating method and device based on Abbe convergence operator | |
CN116485912B (en) | Multi-module coordination method and device for light field camera | |
CN116088580B (en) | Flying object tracking method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 