CN110858097A - Interaction method and device - Google Patents
Interaction method and device
- Publication number
- CN110858097A CN110858097A CN201810979356.2A CN201810979356A CN110858097A CN 110858097 A CN110858097 A CN 110858097A CN 201810979356 A CN201810979356 A CN 201810979356A CN 110858097 A CN110858097 A CN 110858097A
- Authority
- CN
- China
- Prior art keywords
- head
- signal
- head movement
- user
- movement signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
Abstract
The application discloses an interaction method and an interaction device. The method comprises: acquiring a head movement signal through an acceleration sensor, the head movement signal being generated by the head movement of a user; determining the head action of the user according to the head movement signal; and executing, according to the head action of the user, the interactive operation associated with that head action. With this method, a user can interact with a virtual reality, augmented reality, or mixed reality scene through head actions. As a supplement to existing interaction modes, it can effectively improve the user's interactive experience.
Description
Technical Field
The application relates to the technical field of interaction, and in particular to an interaction method. The application also relates to an interaction device, an electronic device, and a computer-readable storage medium.
Background
Interactive technology is technology for realizing the transmission and exchange of data and information between interacting parties. It includes various types such as silent speech recognition, electro-tactile stimulation, and human-computer interfaces, and is currently widely applied in fields such as virtual reality and augmented reality.
Virtual reality technology is a computer simulation technology for creating and experiencing a virtual world: a computer generates an interactive, immersive simulation environment in which the user obtains an immersive experience. The interaction modes available to a user in a virtual reality scene are limited; at present, users mainly rely on keys and gestures to confirm or cancel operations, enter or exit a designated interface, and so on.
However, these interaction methods have the following disadvantages. Key operation requires locating the designated button on the control device, which makes the process cumbersome. Gesture operation must be performed in mid-air, so prolonged gesture operation causes limb fatigue. Moreover, in scenes where the user cannot use the arms for interaction, such as while flying an aircraft or playing a video game, the user's arms are already occupied and interaction through keys and gestures is impossible. Interaction via keys and gestures therefore degrades the user's interactive experience.
Disclosure of Invention
The application provides an interaction method, which aims to solve the problem that the existing interaction mode influences the interaction experience of a user. The application further provides an interaction device, an electronic device and a computer readable storage medium.
The application provides an interaction method, which is applied to a head-mounted display device and comprises the following steps:
acquiring a head movement signal through an acceleration sensor, wherein the head movement signal is generated through head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
Optionally, the determining the head movement of the user according to the head movement signal includes:
acquiring a characteristic value corresponding to the head movement signal;
and matching the characteristic value corresponding to the head movement signal according to a preset condition to determine the head movement of the user.
Optionally, the obtaining a characteristic value corresponding to the head movement signal includes:
extracting a characteristic value of the head movement signal in a time domain to obtain a time-domain signal characteristic value; and/or
and extracting the characteristic value of the head movement signal in a frequency domain to obtain the characteristic value of the frequency domain signal.
Optionally, the time-domain signal characteristic value includes at least one of the following:
the most values (the maximum and minimum values) of the head movement signal in each axial direction;
the extreme values (local maxima and minima) of the head movement signal in each axial direction;
the kurtosis of the head movement signal.
Optionally, the extracting the feature value of the head movement signal in the frequency domain includes:
performing frequency domain transformation on the head movement signal by a Fourier transformation method to obtain frequency domain energy distribution of the head movement signal;
and calculating according to the frequency domain energy distribution to obtain a frequency domain signal characteristic value.
Optionally, the frequency domain signal characteristic value includes:
a ratio of low frequency energy to high frequency energy of the head movement signal.
Optionally, matching the characteristic value corresponding to the head movement signal according to a preset condition includes:
comparing a characteristic value corresponding to a single-frame motion signal with a preset characteristic threshold value set to obtain a head motion matched with the single-frame motion signal;
the characteristic threshold value set is preset according to a head movement signal corresponding to a preset head movement.
Optionally, matching the characteristic value corresponding to the head movement signal according to a preset condition includes:
comparing a characteristic value corresponding to a first single-frame motion signal with a preset characteristic threshold value set to obtain a first head motion matched with the first single-frame motion signal;
comparing a characteristic value corresponding to at least one second single-frame motion signal generated within a preset time with the characteristic threshold value set to obtain a second head motion matched with the at least one second single-frame motion signal;
summarizing the first head action and the second head action to obtain a summarized result;
and determining the head action of the user according to the summary result.
Optionally, the method further includes:
pre-processing the head movement signal according to at least one of the following methods:
performing filtering operation on the head movement signal;
carrying out a wild value (outlier) elimination operation on the head movement signal;
and removing a direct current component from the head movement signal.
Optionally, the head movement signal is an analog signal, and after the head movement signal is acquired by the acceleration sensor, the method further includes:
the analog signal is converted to a digital signal.
Optionally, the interactive operation associated with the head action of the user includes one of the following:
determining or canceling current operation items;
entering or exiting a designated interface;
opening or closing a current page;
the specified function is adjusted.
Optionally, the head action of the user includes at least one of:
nodding with preset amplitude, preset direction and preset times;
a predetermined amplitude, a predetermined direction and a predetermined number of shaking movements;
a predetermined magnitude, a predetermined direction, and a predetermined number of head rotations.
Optionally, the acceleration sensor is a three-axis acceleration sensor.
The present application further provides an interaction device applied to a head-mounted display apparatus, including:
the head movement signal acquisition unit is used for acquiring a head movement signal through the acceleration sensor;
a head action determining unit for determining the head action of the user according to the head action signal;
and the interactive operation execution unit is used for executing the interactive operation associated with the head action of the user according to the head action of the user.
Optionally, the head motion determining unit includes:
the characteristic value acquisition subunit is used for acquiring a characteristic value corresponding to the head movement signal;
and the characteristic value matching subunit is used for matching the characteristic value corresponding to the head movement signal according to a preset condition and determining the head movement of the user.
Optionally, the obtaining a characteristic value corresponding to the head movement signal includes:
extracting a characteristic value of the head movement signal in a time domain to obtain a time-domain signal characteristic value; and/or
and extracting the characteristic value of the head movement signal in a frequency domain to obtain the characteristic value of the frequency domain signal.
Optionally, the feature value matching subunit includes:
and the head action obtaining subunit is used for comparing the characteristic value corresponding to the single-frame action signal with a preset characteristic threshold value set to obtain the head action matched with the single-frame action signal.
Optionally, the feature value matching subunit includes:
the first head action obtaining subunit is configured to compare a feature value corresponding to a first single-frame head action signal with a preset feature threshold value set, and obtain a first head action matched with the first single-frame head action signal;
a second head action obtaining subunit, configured to compare a feature value corresponding to at least one second single-frame head action signal generated within a predetermined time with the feature threshold set, and obtain a second head action matched with the at least one second single-frame head action signal;
the summarizing subunit is used for summarizing the first head action and the second head action to obtain a summarizing result;
and the head action determining subunit is used for determining the head action of the user according to the summary result.
The present application further provides an electronic device, comprising:
a processor;
a memory for storing an interactive program, which when read and executed by the processor, performs the following operations:
acquiring a head movement signal through an acceleration sensor; the head movement signal is generated through the head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a head movement signal through an acceleration sensor; the head movement signal is generated through the head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
Compared with the prior art, the method has the following advantages:
according to the interaction method, a head movement signal is obtained through an acceleration sensor, and the head movement signal is generated through head movement of a user; determining the head movement of the user according to the head movement signal; according to the head action of the user, the interactive operation associated with the head action is executed. By using the method, the interactive operation of the user in the virtual reality scene, the augmented reality scene or the mixed reality scene can be realized according to the head action of the user, and the interactive operation can be used as a supplement to the existing interactive operation mode, so that the interactive experience of the user can be effectively improved.
Drawings
FIG. 1 is a flow chart of a method provided in a first embodiment of the present application;
FIG. 2 is a flow chart for determining a head movement of a user according to a head movement signal according to a first embodiment of the present application;
FIG. 3 is a block diagram of the apparatus elements provided in a second embodiment of the present application;
fig. 4 is a schematic diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
For a Virtual Reality (VR) scene, an Augmented Reality (AR) scene, and a Mixed Reality (MR) scene, a user needs to operate on content provided by a simulation environment and obtain feedback information from the simulation environment, which is an interactive process of the user on the simulation environment. In order to improve the interaction experience of a user in the interaction process, the application provides an interaction method, an interaction device corresponding to the interaction method, electronic equipment and a computer readable storage medium. The following embodiments are provided to explain the method, apparatus, electronic device, and computer-readable storage medium in detail.
A first embodiment of the present application provides an interaction method that may be applied to a head-mounted display, i.e., a head-mounted display device. Different head-mounted display devices send optical signals to the eyes in different ways and thereby achieve effects such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). Fig. 1 is a flowchart of the interaction method provided in this first embodiment; the method is described in detail below with reference to Fig. 1. The embodiments that follow illustrate the principles of the method and are not intended to limit its actual use.
As shown in fig. 1, the interaction method provided by this embodiment includes the following steps:
and S101, acquiring a head movement signal through an acceleration sensor.
This step is for acquiring a head movement signal generated by a head movement of the user by the acceleration sensor.
The head movement signal refers to a vibration signal generated by the head movement of the user.
An acceleration sensor is a sensor that senses the acceleration of the measured object and converts it into a usable output signal. The main sensing principle is that a sensitive element inside the sensor deforms under the action of acceleration; the deformation is measured and converted into an output voltage by an associated circuit.
Acquiring the head movement signal through the acceleration sensor refers to acquiring the head movement signal through the acceleration sensor according to a preset sampling frequency. The acceleration sensor can be divided into an analog acceleration sensor and a digital acceleration sensor according to different types of output signals.
The output of an analog acceleration sensor is an analog signal, which is continuous in both time and amplitude. During subsequent transmission and processing, an analog signal is easily disturbed by the environment, making high-precision signal transmission and signal processing difficult, so analog-to-digital conversion is required. The process is as follows: a pre-filter first removes components above a certain frequency; an analog-to-digital (A/D) converter then periodically samples the signal amplitude at a preset sampling period, and the sampled signal, called a discrete-time signal, represents the signal values only at discrete time points, so sampling is the time discretization of the analog signal; finally, the hold circuit of the A/D converter quantizes the sampled values into a digital signal, completing both the time discretization and the amplitude quantization of the analog signal.
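The amplitude-quantization half of this A/D process can be sketched as follows. This is a minimal illustration only; the bit depth, voltage range, and function name are assumptions, not values from this application.

```python
import numpy as np

def quantize(samples, bits=8, vmax=1.0):
    """Toy amplitude quantization of already time-sampled values:
    clip to the input range, then map onto 2**bits integer levels."""
    levels = 2 ** bits
    x = np.clip(np.asarray(samples, dtype=float), -vmax, vmax)
    # Map [-vmax, vmax] onto the integer codes 0 .. levels-1.
    return np.round((x + vmax) / (2 * vmax) * (levels - 1)).astype(int)
```

Time discretization (sampling) followed by this amplitude quantization yields the digital signal described next.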
An analog-to-digital conversion circuit is integrated in an interface chip of the digital acceleration sensor, so that the output value of the digital acceleration sensor is a digital signal, the digital signal is a sequence of numbers, each number is represented by a finite number of binary numbers, and the digital signal is characterized in that: time discretization and signal amplitude quantization.
In this embodiment, the head motion of the user refers to a behavior of nodding the head, shaking the head, or the like performed by the user with respect to a designated operation object provided in the simulation environment, where the designated operation object may be a confirmation operation or a cancel operation button, an operation with respect to a designated interface, such as entering or exiting the designated interface, or an operation for adjusting a designated function, such as adjusting a volume and displaying a page.
In this embodiment, the acceleration sensor is a three-axis acceleration sensor, which is small and light, can measure spatial acceleration, and comprehensively reflects the nature of an object's motion. A three-axis sensor is used for two reasons: accurately determining the motion state of the user's head requires measuring the components of the acceleration signal along three coordinate axes; and the user's head movement has many possibilities, so its direction is not known in advance.
And S102, determining the head movement of the user according to the head movement signal.
After the head movement signal generated by the head movement of the user is obtained in the above step, the step is used for determining the head movement of the user according to the head movement signal.
A user's head motion is highly random, and its amplitude is small compared with the motion of other body parts, so a slight, unintentional nod or head shake might be captured and recognized by accident. To distinguish such head motions without operational meaning from deliberate head actions, this embodiment processes and analyzes the acquired head movement signal with the following specific method to determine the user's head action.
In order to enable the acquired head movement signal to be processed according to a predetermined requirement, the embodiment preprocesses the head movement signal in the following ways:
and carrying out filtering operation on the head movement signal. The filtering is to filter noise of the digital signal, and specifically, neighborhood smoothing filtering or median filtering of nonlinear filtering in linear filtering may be used, and the neighborhood smoothing filtering is performed by a simple averaging method, where the simple averaging method is to obtain an average brightness value of a neighboring pixel point. The size of the neighborhood is directly related to the smoothing effect, the larger the neighborhood is, the better the smoothing effect is, but the larger the neighborhood is, the larger the edge information loss is due to the fact that the smoothing effect is, and therefore the output image becomes fuzzy; the median filtering can overcome the image blurring problem caused by linear filtering, and better retains the edge information of the image while filtering noise.
And carrying out wild value elimination operation on the head movement signal. Due to the influence of the sensor, the converter and the transmission process, abnormal jumping points often occur in the signal, and such data points deviating from the signal change rule are called outliers. If the outliers are not removed in the signal preprocessing stage, the outliers can seriously affect the precision of signal processing, and the removal of the outliers can make the signal data more stable.
And performing a DC-removal operation on the head movement signal. De-DC means removing the DC component from the digital signal so that the energy characteristics of the signal become more prominent. A conventional method is to accumulate the signal samples, shift the accumulated value to obtain the DC component, and then subtract that component.
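The three preprocessing steps above can be sketched roughly as follows. This is a minimal illustration; the moving-average window, the 3-sigma outlier rule, and the function name are assumptions rather than the application's actual parameters.

```python
import numpy as np

def preprocess(signal, window=5, outlier_sigma=3.0):
    x = np.asarray(signal, dtype=float)

    # 1. Filtering: neighborhood smoothing by a simple moving average.
    x = np.convolve(x, np.ones(window) / window, mode="same")

    # 2. Wild value (outlier) elimination: clamp points that deviate
    #    from the mean by more than outlier_sigma standard deviations.
    mean, std = x.mean(), x.std()
    x = np.clip(x, mean - outlier_sigma * std, mean + outlier_sigma * std)

    # 3. De-DC: subtract the mean, i.e., remove the DC component.
    return x - x.mean()
```

A real system might use median filtering for the first step instead, as the text notes.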
After the preprocessing, the head movement of the user needs to be determined according to the preprocessed head movement signal, and the process is substantially to identify and match the head movement signal, as shown in fig. 2, the process specifically includes the following steps:
and S1021, acquiring a characteristic value corresponding to the head movement signal.
After the signal preprocessing is completed on the head movement signal obtained in step S101, this step is used to obtain a characteristic value of the head movement signal in a predetermined manner.
There are many methods for acquiring the feature value of the head movement signal, such as a template-based method, a spatial transformation-based method, and the like. In this embodiment, the characteristic value corresponding to the head movement signal may be obtained by the following method:
the method comprises the following steps: and extracting the characteristic value of the head movement signal in a time domain to obtain the characteristic value of the time domain signal. The process is essentially to perform time domain transformation on the head movement signal, and the time domain transformation method can be inverse folding, inverse phase transformation, translation, scale transformation and the like.
The time-domain signal characteristic value may be: the most value of the head movement signal in each axial direction, comprising the maximum and minimum values; the extreme values of the head movement signal in each axial direction, comprising local maxima and minima; or the kurtosis (also called the kurtosis coefficient) of the head movement signal, which describes the steepness of the distribution and can be obtained as the ratio of the square of the peak value to the signal energy, where the signal energy of a digital signal is the sum of the squared amplitudes at each point and the peak value is the extreme value of the head movement signal in each axial direction.
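As a sketch of these time-domain feature values for one axis of a frame (the kurtosis formula follows the text, squared peak over signal energy; the function name is an assumption):

```python
import numpy as np

def time_domain_features(frame):
    x = np.asarray(frame, dtype=float)
    peak = np.max(np.abs(x))      # extreme value on this axis
    energy = np.sum(x ** 2)       # sum of squared sample amplitudes
    return {
        "max": float(x.max()),
        "min": float(x.min()),
        # Kurtosis as described: ratio of squared peak to signal energy.
        "kurtosis": float(peak ** 2 / energy) if energy else 0.0,
    }
```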
Method 2: extracting the characteristic value of the head movement signal in the frequency domain to obtain the frequency-domain signal characteristic value. The most important property of the frequency domain is that it is not physically real: it is a mathematical construct, the Fourier transform of the time-domain signal. Converting the signal from the time domain to the frequency domain makes components that overlap and cannot be distinguished in the time domain clearly identifiable, so the information contained in the signal can be observed more conveniently and the signal can be matched, decomposed, or synthesized.
The method for extracting the feature value of the head movement signal in the frequency domain may be: performing frequency domain transformation on the head movement signal by a Fourier transformation method to obtain frequency domain energy distribution of the head movement signal, for example, acquiring acceleration data according to a 1kHz sampling rate, and performing Fourier transformation every 512 points to obtain the frequency domain energy distribution of the signal; and calculating according to the obtained frequency domain energy distribution of the head movement signal to obtain a frequency domain signal characteristic value. In this embodiment, the frequency-domain signal characteristic value is a low-high frequency energy ratio, which refers to a ratio of low-frequency energy to high-frequency energy of the head movement signal, for example, a ratio of a sum of signal energy with a frequency of 1Hz to 5Hz to a sum of signal energy with a frequency of 50Hz to 100 Hz.
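A sketch of this frequency-domain feature, following the example figures in the text (1 kHz sampling, one FFT per 512-point frame, low band 1-5 Hz, high band 50-100 Hz); the function name is an assumption:

```python
import numpy as np

def low_high_energy_ratio(frame, fs=1000, low=(1, 5), high=(50, 100)):
    x = np.asarray(frame, dtype=float)
    energy = np.abs(np.fft.rfft(x)) ** 2           # energy per frequency bin
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)    # bin frequencies in Hz
    low_e = energy[(freqs >= low[0]) & (freqs <= low[1])].sum()
    high_e = energy[(freqs >= high[0]) & (freqs <= high[1])].sum()
    return low_e / high_e if high_e else float("inf")
```

For a frame dominated by a low-frequency component the ratio is large; for one dominated by high frequencies it is small.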
In this embodiment, the characteristic value corresponding to the head movement signal is obtained by combining the above-described method 1 and method 2.
And S1022, matching the characteristic values corresponding to the head movement signals according to preset conditions, and determining the head movement of the user.
The step is used for matching the characteristic values of the head movement signals obtained through time domain transformation and frequency domain transformation according to preset conditions, and determining head movement matched with the head movement signals.
In this embodiment, the preset condition may be a prior classification of head movement signals by the range in which their corresponding feature values fall: each head action of the user corresponds to a certain range of feature values; different head actions generate different head movement signals, whose feature values fall in different ranges; each range can be expressed as a group of feature thresholds; and the feature thresholds for all preset head actions form a feature threshold set, preset according to the head movement signals corresponding to those preset head actions. The relation between the feature threshold set and head actions is shown in the following table:
in the above table, each category of features may be divided into different value ranges in advance according to different head movements, and if the obtained feature values of the kurtosis, the low-high frequency energy ratio, and the like of the head movement signal are within a value range corresponding to a certain head movement, it may be determined that the head movement is a head movement matched with the head movement signal.
The matching of the feature value corresponding to the head movement signal against the preset condition may proceed as follows: compare the feature value of a single-frame motion signal with the preset feature threshold set to find the value range in which the feature falls. A single-frame motion signal is the motion signal of one processing period; for example, with acceleration data sampled at 1 kHz and a Fourier transform performed every 512 points, one processing period is about 0.5 second. From the value range of the feature, the head action matched by the single-frame motion signal is obtained, and this can be taken as the user's head action.
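A minimal sketch of this single-frame matching, assuming a hypothetical feature threshold set (the action names, feature names, and value ranges below are illustrative, since the application's actual table is not reproduced here):

```python
# Hypothetical feature threshold set: for each head action, a
# (low, high) value range per feature.
THRESHOLDS = {
    "nod":   {"kurtosis": (0.4, 0.8), "lf_hf_ratio": (5.0, 50.0)},
    "shake": {"kurtosis": (0.1, 0.4), "lf_hf_ratio": (1.0, 5.0)},
}

def match_single_frame(features, threshold_set=THRESHOLDS):
    """Return the head action whose value ranges contain all of the
    frame's feature values, or None if no action matches."""
    for action, ranges in threshold_set.items():
        if all(lo <= features[name] <= hi
               for name, (lo, hi) in ranges.items()):
            return action
    return None
```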
The above is the matching process for a single-frame head movement signal. In an actual matching process, because the user's head motion is highly random and small in amplitude, a match on a single frame may result from an unintentional movement that does not represent the user's real intention and carries no operational meaning. In this embodiment, the feature values corresponding to the head movement signal are therefore matched as follows. First, compare the feature value of a first single-frame motion signal with the preset feature threshold set to obtain a first head action matched by that frame; for example, the single-frame matching above may determine that at least one of the features (most value, low/high frequency energy ratio, kurtosis, etc.) of the Nth frame satisfies the feature thresholds of the head movement signal corresponding to head shaking, the Nth frame being the first single-frame motion signal. Then, compare the feature values of at least one second single-frame motion signal generated within a predetermined time, for example frames N+1, N+2, and N+3 obtained within 2 seconds, with the same threshold set to obtain the second head action matched by each of those frames. Finally, summarize the first and second head actions and determine the user's head action from the summarized result: if the matching results of the second single-frame signals all agree with that of the first single-frame signal, that is, the first head action is consistent with the second head action, the user's head action is determined to be, in this example, the head-shaking action; if the first and second head actions are inconsistent, the second head action is taken as the user's head action.
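The multi-frame confirmation rule just described can be sketched as follows (function and variable names are assumptions):

```python
def confirm_action(first_action, second_actions):
    """If every later frame within the predetermined time matches the
    first frame's action, accept that action; otherwise take the
    second action, as described above."""
    if second_actions and all(a == first_action for a in second_actions):
        return first_action
    return second_actions[-1] if second_actions else None
```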
The correspondence between the head movement signal and the head movement can be represented by the following table:
S103: executing, according to the head action of the user, the interactive operation associated with the head action of the user.
This step executes, according to the head action of the user determined in the preceding step, the interactive operation associated with that head action.
In this embodiment, the user interacts with a head-mounted display device, such as a VR device or an AR device, through head motion. The head motion may be a nodding motion, a head-shaking motion, or a head-rotation motion, each with a predetermined amplitude, a predetermined direction, and a predetermined number of repetitions, or a combination of at least two of these motions, for example a nodding motion followed by a head-shaking motion, each with its own predetermined amplitude, direction, and number of repetitions.
Depending on the requirements of the actual scene, nodding may be defined as nodding one or more times toward a fixed direction; head shaking may be defined as shaking the head one or more times to the left, one or more times to the right, or first to the left and then to the right, at a predetermined motion amplitude.
In this embodiment, head movements must be associated in advance with the interactive operations included in the virtual reality scene, so that each head movement of the user has an associated interactive operation. When the user nods, shakes, or rotates the head, the head-mounted display device recognizes the motion and executes the interactive operation associated with it. The interactive operation may be: determining or canceling the current operation item; entering or exiting a designated interface; opening or closing the current page; or adjusting a specified function. Taking volume adjustment as an example, the user moves the focus onto the volume control menu through a head motion, enters the volume control interface by nodding, and completes the adjustment by shaking the head: shaking once to the left decreases the volume by one step, and shaking once to the right increases it by one step. Properties of the display interface such as definition and display conditions can be adjusted in a similar way.
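The volume-control example above can be sketched as a simple action-to-operation dispatch. The action names and the state layout are hypothetical, introduced only for this illustration; they are not from the patent text.

```python
def handle_action(action, state):
    """Apply the interactive operation associated with `action` to a
    simple volume-control state, mirroring the example in the text:
    nod to enter the volume control interface, then shake left/right
    to step the volume down/up by one unit."""
    if action == "nod" and not state["volume_mode"]:
        state["volume_mode"] = True          # enter volume control interface
    elif state["volume_mode"] and action == "shake_left":
        state["volume"] = max(0, state["volume"] - 1)   # one step down
    elif state["volume_mode"] and action == "shake_right":
        state["volume"] = min(10, state["volume"] + 1)  # one step up
    return state
```

A real device would bind these actions to whatever menu currently holds the focus; this sketch hard-codes a single volume menu for clarity.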
In the interaction method provided in this embodiment, a head movement signal generated by the user's head movement is acquired through a three-axis acceleration sensor, and the user's head movement is determined from that signal as follows. After preprocessing such as filtering, wild value (outlier) elimination, and direct-current removal is applied to the digital signal, feature values are extracted from the signal in the time domain and the frequency domain, for example the maximum and extreme values, kurtosis, and low-to-high frequency energy ratio of the signal. The feature value corresponding to a single-frame head movement signal is compared with a preset set of feature thresholds to establish its value range, and the head movement matched with that frame is obtained from the value range. Further, the feature values corresponding to at least one single-frame movement signal generated within a predetermined time are compared with the same threshold set to obtain the head movements matched with those frames; these matches are then summarized, and the user's head movement is determined from the summarized result. The head movement may be a nodding, head-shaking, or head-rotation motion with a predetermined amplitude, direction, and number of repetitions, or a combination of at least two of these. Finally, according to the determined head movement of the user, the interactive operation associated with it is executed, which may be at least one of: determining or canceling the current operation item, entering or exiting a specified interface, opening or closing the current page, or adjusting a specified function.
By using the method, various interactive operations of the user in the virtual reality scene, the augmented reality scene or the mixed reality scene can be realized according to the head action of the user, and the interactive experience of the user is effectively improved.
The second embodiment of the present application further provides an interaction device. Since the device embodiment is substantially similar to the method embodiment, it is described relatively briefly; for details of the relevant technical features, refer to the corresponding description of the method embodiment provided above. The following description of the device embodiment is merely illustrative.
Referring to fig. 3, which is a unit block diagram of the apparatus provided in this embodiment, the apparatus includes:
a head movement signal acquisition unit 201 for acquiring a head movement signal by an acceleration sensor;
a head motion determination unit 202, configured to determine a head motion of the user according to the head motion signal;
an interactive operation executing unit 203, configured to execute an interactive operation associated with the head action of the user according to the head action of the user.
The head motion determination unit 202 includes:
the characteristic value acquisition subunit is used for acquiring a characteristic value corresponding to the head movement signal;
and the characteristic value matching subunit is used for matching the characteristic value corresponding to the head movement signal according to a preset condition and determining the head movement of the user.
The characteristic value acquisition subunit is specifically configured to:
extracting a characteristic value of the head movement signal in a time domain to obtain a time domain signal characteristic value; and/or the presence of a gas in the gas,
and extracting the characteristic value of the head movement signal in a frequency domain to obtain the characteristic value of the frequency domain signal.
The time domain signal characteristic value comprises at least one of the following:
the maximum value and the minimum value of the head movement signal in each axial direction;
the kurtosis of the head movement signal.
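As a concrete illustration of the time-domain feature values listed above, the following sketch extracts the per-axis maximum, minimum, and kurtosis from one frame of three-axis accelerometer samples. The frame layout and function name are assumptions for the example, not part of the patent.

```python
import statistics

def time_domain_features(frame):
    """Extract per-axis time-domain features from one frame of
    3-axis accelerometer samples.

    `frame` is a list of (x, y, z) tuples. For each axis this returns
    the maximum, the minimum, and the fourth standardized moment as a
    kurtosis measure (defined as 0 for a constant signal)."""
    features = {}
    for i, axis in enumerate(("x", "y", "z")):
        samples = [s[i] for s in frame]
        mean = statistics.fmean(samples)
        var = statistics.pvariance(samples)
        # Fourth standardized moment: E[(v - mean)^4] / var^2.
        kurt = (sum((v - mean) ** 4 for v in samples)
                / (len(samples) * var ** 2)) if var > 0 else 0.0
        features[axis] = {"max": max(samples), "min": min(samples),
                          "kurtosis": kurt}
    return features
```

The extrema capture the amplitude of the head movement, while kurtosis distinguishes sharp, deliberate motions from slow drift.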
The extracting the feature value of the head movement signal in the frequency domain includes:
performing frequency domain transformation on the head movement signal by a Fourier transformation method to obtain frequency domain energy distribution of the head movement signal;
and calculating according to the frequency domain energy distribution to obtain a frequency domain signal characteristic value.
The frequency domain signal characteristic value includes: the ratio of the low frequency energy to the high frequency energy of the head movement signal.
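The low-to-high frequency energy ratio can be computed from the signal's discrete Fourier transform, as described above. The sketch below uses a direct DFT for self-containment; the 2 Hz cutoff is an illustrative assumption (deliberate head gestures concentrate energy below a few Hz), not a value from the patent.

```python
import cmath

def low_high_energy_ratio(samples, sample_rate_hz, cutoff_hz=2.0):
    """Split the one-sided DFT energy of a frame at `cutoff_hz` and
    return low-frequency energy divided by high-frequency energy.
    A large ratio suggests a slow, intentional head movement."""
    n = len(samples)
    low_energy = high_energy = 0.0
    for k in range(1, n // 2 + 1):          # skip k=0 (DC component)
        bin_hz = k * sample_rate_hz / n
        coeff = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        energy = abs(coeff) ** 2
        if bin_hz <= cutoff_hz:
            low_energy += energy
        else:
            high_energy += energy
    return low_energy / high_energy if high_energy > 0 else float("inf")
```

In practice an FFT routine would replace the O(n²) inner sum; the frequency-binning and ratio logic would be unchanged.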
The above feature value matching subunit includes:
the first head action obtaining subunit is configured to compare a feature value corresponding to a first single-frame head action signal with a preset feature threshold value set, and obtain a first head action matched with the first single-frame head action signal;
a second head action obtaining subunit, configured to compare a feature value corresponding to at least one second single-frame head action signal generated within a predetermined time with the feature threshold set, and obtain a second head action matched with the at least one second single-frame head action signal;
the summarizing subunit is used for summarizing the first head action and the second head action to obtain a summarizing result;
and the head action determining subunit is used for determining the head action of the user according to the summary result.
The device also includes: a preprocessing subunit, configured to preprocess the head movement signal according to at least one of the following methods:
performing filtering operation on the head movement signal;
carrying out wild value elimination operation on the head movement signal;
and removing the direct current component of the head movement signal.
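The three preprocessing operations listed above (filtering, wild value elimination, DC removal) can be sketched for a single axis as follows. The clamp limit and window size are assumed defaults for the illustration, not values from the patent.

```python
def preprocess(samples, window=3, outlier_limit=8.0):
    """Illustrative preprocessing chain for one axis of the head
    movement signal: wild-value (outlier) clamping, moving-average
    filtering, then DC (mean) removal."""
    # 1. Wild-value elimination: clamp samples beyond a plausible range.
    clamped = [max(-outlier_limit, min(outlier_limit, s)) for s in samples]
    # 2. Filtering: simple causal moving average to suppress jitter.
    filtered = []
    for i in range(len(clamped)):
        lo = max(0, i - window + 1)
        filtered.append(sum(clamped[lo:i + 1]) / (i + 1 - lo))
    # 3. DC removal: subtract the mean (e.g. the gravity offset).
    mean = sum(filtered) / len(filtered)
    return [v - mean for v in filtered]
```

A production implementation would more likely use a designed band-pass filter and a running DC estimate, but the order of the stages follows the text.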
In this embodiment, the head movement signal is an analog signal, and the apparatus further includes:
and the analog-to-digital conversion subunit is used for converting the analog signal into a digital signal.
The interactive operation execution unit is specifically configured to execute one of the following operations:
determining or canceling current operation items;
entering or exiting a designated interface;
opening or closing a current page;
adjusting the specified function.
The head action of the user comprises at least one of the following:
nodding with preset amplitude, preset direction and preset times;
a predetermined amplitude, a predetermined direction and a predetermined number of shaking movements;
a predetermined magnitude, a predetermined direction, and a predetermined number of head rotations.
The acceleration sensor is a three-axis acceleration sensor.
In the foregoing embodiment, an interaction method and an interaction apparatus are provided, and in addition, a third embodiment of the present application further provides an electronic device, where the embodiment of the electronic device is as follows:
Referring to fig. 4, which is a schematic view of the electronic device provided in this embodiment:
As shown in fig. 4, the electronic apparatus includes: a processor 301; a memory 302;
the memory 302 is used for storing an interactive program, and when the program is read and executed by the processor, the program performs the following operations:
acquiring a head movement signal through an acceleration sensor, wherein the head movement signal is generated through head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
Optionally, the determining the head movement of the user according to the head movement signal includes:
acquiring a characteristic value corresponding to the head movement signal;
and matching the characteristic value corresponding to the head movement signal according to a preset condition to determine the head movement of the user.
Optionally, the obtaining a characteristic value corresponding to the head movement signal includes:
extracting a characteristic value of the head movement signal in a time domain to obtain a time domain signal characteristic value; and/or the presence of a gas in the gas,
and extracting the characteristic value of the head movement signal in a frequency domain to obtain the characteristic value of the frequency domain signal.
Optionally, the time-domain signal characteristic value includes at least one of the following:
the maximum value and the minimum value of the head movement signal in each axial direction;
the kurtosis of the head movement signal.
Optionally, the extracting the feature value of the head movement signal in the frequency domain includes:
performing frequency domain transformation on the head movement signal by a Fourier transformation method to obtain frequency domain energy distribution of the head movement signal;
and calculating according to the frequency domain energy distribution to obtain a frequency domain signal characteristic value.
Optionally, the frequency domain signal characteristic value includes:
a ratio of low frequency energy to high frequency energy of the head movement signal.
Optionally, matching the characteristic value corresponding to the head movement signal according to a preset condition includes:
comparing a characteristic value corresponding to a single-frame motion signal with a preset characteristic threshold value set to obtain a head motion matched with the single-frame motion signal;
the characteristic threshold value set is preset according to a head movement signal corresponding to a preset head movement.
Optionally, matching the characteristic value corresponding to the head movement signal according to a preset condition includes:
comparing a characteristic value corresponding to a first single-frame motion signal with a preset characteristic threshold value set to obtain a first head motion matched with the first single-frame motion signal;
comparing a characteristic value corresponding to at least one second single-frame motion signal generated within a preset time with the characteristic threshold value set to obtain a second head motion matched with the at least one second single-frame motion signal;
summarizing the first head action and the second head action to obtain a summarized result;
and determining the head action of the user according to the summary result.
Optionally, the method further includes: pre-processing the head movement signal according to at least one of the following methods:
performing filtering operation on the head movement signal;
carrying out wild value elimination operation on the head movement signal;
and removing a direct current component from the head movement signal.
Optionally, the head movement signal is an analog signal, and after the head movement signal is acquired by the acceleration sensor, the method further includes:
the analog signal is converted to a digital signal.
Optionally, the interactive operation associated with the head action of the user includes one of the following: determining or canceling current operation items; entering or exiting a designated interface; opening or closing a current page; adjusting the specified function.
Optionally, the head action of the user includes at least one of:
nodding with preset amplitude, preset direction and preset times;
a predetermined amplitude, a predetermined direction and a predetermined number of shaking movements;
a predetermined magnitude, a predetermined direction, and a predetermined number of head rotations.
Optionally, the acceleration sensor is a three-axis acceleration sensor.
In the foregoing embodiments, an interaction method, an interaction apparatus, and an electronic device are provided, and a fourth embodiment of the present application further provides a computer-readable storage medium. Since this embodiment is substantially similar to the method embodiments, it is described relatively briefly; for relevant portions, refer to the corresponding descriptions of the method embodiments above. The embodiment described below is merely illustrative.
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of:
acquiring a head movement signal through an acceleration sensor, wherein the head movement signal is generated through head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
Optionally, the determining the head movement of the user according to the head movement signal includes:
acquiring a characteristic value corresponding to the head movement signal;
and matching the characteristic value corresponding to the head movement signal according to a preset condition to determine the head movement of the user.
Optionally, the obtaining a characteristic value corresponding to the head movement signal includes:
extracting a characteristic value of the head movement signal in a time domain to obtain a time domain signal characteristic value; and/or the presence of a gas in the gas,
and extracting the characteristic value of the head movement signal in a frequency domain to obtain the characteristic value of the frequency domain signal.
Optionally, the time-domain signal characteristic value includes at least one of the following:
the maximum value and the minimum value of the head movement signal in each axial direction;
the kurtosis of the head movement signal.
Optionally, the extracting the feature value of the head movement signal in the frequency domain includes:
performing frequency domain transformation on the head movement signal by a Fourier transformation method to obtain frequency domain energy distribution of the head movement signal;
and calculating according to the frequency domain energy distribution to obtain a frequency domain signal characteristic value.
Optionally, the frequency domain signal characteristic value includes:
a ratio of low frequency energy to high frequency energy of the head movement signal.
Optionally, matching the characteristic value corresponding to the head movement signal according to a preset condition includes:
comparing a characteristic value corresponding to a single-frame motion signal with a preset characteristic threshold value set to obtain a head motion matched with the single-frame motion signal;
the characteristic threshold value set is preset according to a head movement signal corresponding to a preset head movement.
Optionally, matching the characteristic value corresponding to the head movement signal according to a preset condition includes:
comparing a characteristic value corresponding to a first single-frame motion signal with a preset characteristic threshold value set to obtain a first head motion matched with the first single-frame motion signal;
comparing a characteristic value corresponding to at least one second single-frame motion signal generated within a preset time with the characteristic threshold value set to obtain a second head motion matched with the at least one second single-frame motion signal;
summarizing the first head action and the second head action to obtain a summarized result;
and determining the head action of the user according to the summary result.
Optionally, the method further includes: pre-processing the head movement signal according to at least one of the following methods:
performing filtering operation on the head movement signal;
carrying out wild value elimination operation on the head movement signal;
and removing a direct current component from the head movement signal.
Optionally, the head movement signal is an analog signal, and after the head movement signal is acquired by the acceleration sensor, the method further includes:
the analog signal is converted to a digital signal.
Optionally, the interactive operation associated with the head action of the user includes one of the following:
determining or canceling current operation items;
entering or exiting a designated interface;
opening or closing a current page;
adjusting the specified function.
Optionally, the head action of the user includes at least one of:
nodding with preset amplitude, preset direction and preset times;
a predetermined amplitude, a predetermined direction and a predetermined number of shaking movements;
a predetermined magnitude, a predetermined direction, and a predetermined number of head rotations.
Optionally, the acceleration sensor is a three-axis acceleration sensor.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Although the present application has been described with reference to the preferred embodiments, these embodiments are not intended to limit it. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the protection scope of the present application should be determined by the appended claims.
Claims (16)
1. An interaction method applied to a head-mounted display device, the method comprising:
acquiring a head movement signal through an acceleration sensor; the head movement signal is generated through the head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
2. The method of claim 1, wherein determining the head movement of the user from the head movement signal comprises:
acquiring a characteristic value corresponding to the head movement signal;
and matching the characteristic value corresponding to the head movement signal according to a preset condition to determine the head movement of the user.
3. The method of claim 2, wherein the obtaining the characteristic value corresponding to the head movement signal comprises:
extracting a characteristic value of the head movement signal in a time domain to obtain a time domain signal characteristic value; and/or the presence of a gas in the gas,
and extracting the characteristic value of the head movement signal in a frequency domain to obtain the characteristic value of the frequency domain signal.
4. The method of claim 3, wherein the time-domain signal characteristic value comprises at least one of:
the maximum value and the minimum value of the head movement signal in each axial direction;
the kurtosis of the head movement signal.
5. The method of claim 3, wherein the extracting the feature value of the head movement signal in the frequency domain comprises:
performing frequency domain transformation on the head movement signal by a Fourier transformation method to obtain frequency domain energy distribution of the head movement signal;
and calculating according to the frequency domain energy distribution to obtain a frequency domain signal characteristic value.
6. The method of claim 5, wherein the frequency domain signal feature values comprise:
a ratio of low frequency energy to high frequency energy of the head movement signal.
7. The method according to any one of claims 2 to 6, wherein the matching the feature value corresponding to the head movement signal according to the preset condition comprises:
comparing a characteristic value corresponding to a single-frame motion signal with a preset characteristic threshold value set to obtain a head motion matched with the single-frame motion signal;
the characteristic threshold value set is preset according to a head movement signal corresponding to a preset head movement.
8. The method according to any one of claims 2 to 6, wherein the matching the feature value corresponding to the head movement signal according to the preset condition comprises:
comparing a characteristic value corresponding to a first single-frame motion signal with a preset characteristic threshold value set to obtain a first head motion matched with the first single-frame motion signal;
comparing a characteristic value corresponding to at least one second single-frame motion signal generated within a preset time with the characteristic threshold value set to obtain a second head motion matched with the at least one second single-frame motion signal;
summarizing the first head action and the second head action to obtain a summarized result;
and determining the head action of the user according to the summary result.
9. The method of claim 1, further comprising: pre-processing the head movement signal according to at least one of the following methods:
performing filtering operation on the head movement signal;
carrying out wild value elimination operation on the head movement signal;
and removing the direct current component of the head movement signal.
10. The method of claim 1, wherein the head movement signal is an analog signal, and further comprising, after the acquiring the head movement signal by the acceleration sensor:
the analog signal is converted to a digital signal.
11. The method of claim 1, wherein the interactive operation associated with the head action of the user comprises one of:
determining or canceling current operation items;
entering or exiting a designated interface;
opening or closing a current page;
adjusting the specified function.
12. The method of claim 1, wherein the head action of the user comprises at least one of:
nodding with preset amplitude, preset direction and preset times;
a predetermined amplitude, a predetermined direction and a predetermined number of shaking movements;
a predetermined magnitude, a predetermined direction, and a predetermined number of head rotations.
13. The method of claim 1, wherein the acceleration sensor is a three-axis acceleration sensor.
14. An interaction device applied to a head-mounted display device, the device comprising:
the head movement signal acquisition unit is used for acquiring a head movement signal through the acceleration sensor;
a head action determining unit for determining the head action of the user according to the head action signal;
and the interactive operation execution unit is used for executing the interactive operation associated with the head action of the user according to the head action of the user.
15. An electronic device, comprising:
a processor;
a memory for storing an interactive program, which when read and executed by the processor, performs the following operations:
acquiring a head movement signal through an acceleration sensor; the head movement signal is generated through the head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
16. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, performing the steps of:
acquiring a head movement signal through an acceleration sensor; the head movement signal is generated through the head movement of a user;
determining the head action of the user according to the head action signal;
and according to the head action of the user, performing the interactive operation associated with the head action of the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810979356.2A CN110858097A (en) | 2018-08-22 | 2018-08-22 | Interaction method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810979356.2A CN110858097A (en) | 2018-08-22 | 2018-08-22 | Interaction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110858097A true CN110858097A (en) | 2020-03-03 |
Family
ID=69635683
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810979356.2A Pending CN110858097A (en) | 2018-08-22 | 2018-08-22 | Interaction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110858097A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024148793A1 (en) * | 2023-01-13 | 2024-07-18 | 魔门塔(苏州)科技有限公司 | Head gesture recognition method and apparatus |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106200899A (en) * | 2016-06-24 | 2016-12-07 | 北京奇思信息技术有限公司 | The method and system that virtual reality is mutual are controlled according to user's headwork |
CN106445176A (en) * | 2016-12-06 | 2017-02-22 | 腾讯科技(深圳)有限公司 | Man-machine interaction system and interaction method based on virtual reality technique |
CN107291222A (en) * | 2017-05-16 | 2017-10-24 | 阿里巴巴集团控股有限公司 | Interaction processing method, device, system and the virtual reality device of virtual reality device |
2018-08-22: CN application CN201810979356.2A filed; patent CN110858097A/en — status: active, Pending
Similar Documents
Publication | Title |
---|---|
CN108351984B (en) | Hardware-efficient deep convolutional neural network |
CN110288547A (en) | Method and apparatus for generating image denoising model |
CN110595602A (en) | Vibration detection method and related products |
CN104503593A (en) | Control information determination method and device |
WO2018078712A1 (en) | Pattern recognition apparatus, method and medium |
CN112489114A (en) | Image conversion method and device, computer readable storage medium and electronic equipment |
WO2017184274A1 (en) | System and method for determining and modeling user expression within a head mounted display |
CN110110666A (en) | Object detection method and device |
EP3940588A1 (en) | Fingerprint image processing methods and apparatuses |
CN111160251A (en) | Living body identification method and device |
WO2015126392A1 (en) | Emulating a user performing spatial gestures |
Bovik et al. | Basic linear filtering with application to image enhancement |
CN110858097A (en) | Interaction method and device |
CN109064464B (en) | Method and device for detecting burrs of battery pole piece |
CN111883151B (en) | Audio signal processing method, device, equipment and storage medium |
CN113170068A (en) | Video frame luminance filter |
Ruchay et al. | Removal of impulse noise clusters from color images with local order statistics |
WO2016197629A1 (en) | System and method for frequency estimation |
KR101909326B1 (en) | User interface control method and system using triangular mesh model according to the change in facial motion |
EP4076177A1 (en) | Method and apparatus for automatic cough detection |
CN117830102A (en) | Image super-resolution restoration method, device, computer equipment and storage medium |
WO2022181253A1 (en) | Joint point detection device, teaching model generation device, joint point detection method, teaching model generation method, and computer-readable recording medium |
CN116009712A (en) | Handwriting data processing method and device, electronic equipment and storage medium |
CN116543246A (en) | Training method of image denoising model, image denoising method, device and equipment |
US20160180143A1 (en) | Eye tracking with 3d eye position estimations and psf models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200303 |