CN113407046B - User action recognition method and device, electronic equipment and storage medium - Google Patents
User action recognition method and device, electronic equipment and storage medium
- Publication number
- CN113407046B (application CN202110726719.3A)
- Authority
- CN
- China
- Prior art keywords
- user
- sequence
- action
- data
- similarity
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Abstract
The disclosure relates to a user action recognition method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring motion data of a device held by a user at the moments of abrupt change of the user action; determining a user gesture data variation sequence based on that motion data; determining a first similarity between the user gesture data variation sequence and a preset standard action gesture data variation sequence; and determining a user action recognition result based on the first similarity. Because only the motion data captured at the abrupt-change moments of the user action is used as the basis for user action recognition, the amount of data that must be reported is substantially reduced, the sensor data reporting rate is lowered, and the performance requirements on the device held by the user and on the terminal are relaxed. The resulting data sequence is also shorter, which shortens the time spent calculating the first similarity and reduces the space required to store the sequence.
Description
Technical Field
The disclosure relates to the technical field of man-machine interaction, and in particular relates to a user action recognition method, a device, electronic equipment and a storage medium.
Background
User action recognition is a means by which a computer interprets the user's body language. Currently adopted user action recognition methods mainly rely on a camera, a millimeter-wave radar chip, or the like.
Relying on a millimeter-wave radar chip and/or a camera for user action recognition places high accuracy requirements on those components. Moreover, the final action recognition result is computed from the raw collected data, so the overall calculation process is complex and demands capable hardware in the computing device. These factors make the equipment needed for user action recognition expensive, which hinders large-scale deployment and popularization.
Disclosure of Invention
In order to solve the technical problems described above or at least partially solve the technical problems described above, the present disclosure provides a user action recognition method, a device, an electronic apparatus, and a storage medium.
In a first aspect, the present disclosure provides a user action recognition method, including:
acquiring motion data of equipment held by a user at a moment of abrupt change of user action;
determining a user gesture data variation sequence based on the motion data of the equipment held by the user at the moment of abrupt change of the user action;
determining a first similarity of the user gesture data variation sequence and a preset standard action gesture data variation sequence;
and determining a user action recognition result based on the first similarity.
In a second aspect, the present disclosure further provides a standard action recording method, including:
acquiring motion data of equipment held by a user at a user action abrupt change moment in the process of completing a preset standard action by the user;
determining a user gesture data variation sequence based on the motion data of the equipment held by the user at the moment of abrupt change of the user action;
and recording the user gesture data variation sequence as the standard action gesture data variation sequence.
In a third aspect, the present disclosure further provides a user action recognition apparatus, including:
The first acquisition module is used for acquiring motion data of equipment held by a user at the moment of abrupt change of user action;
the first sequence determining module is used for determining a user gesture data variation sequence based on the motion data of the equipment held by the user at the moment of abrupt change of the user action;
the similarity determining module is used for determining a first similarity between the user gesture data variation sequence and a preset standard action gesture data variation sequence;
and the recognition result determining module is used for determining a user action recognition result based on the first similarity.
In a fourth aspect, the present disclosure also provides a standard action recording apparatus, including:
the second acquisition module is used for acquiring motion data of equipment held by the user at the moment of abrupt change of the user action in the process of completing the preset standard action by the user;
the second sequence determining module is used for determining a user gesture data variation sequence based on the motion data of the equipment held by the user at the moment of abrupt change of the user action;
and the recording module is used for recording the user gesture data variation sequence as the standard action gesture data variation sequence.
In a fifth aspect, the present disclosure also provides an electronic device, including:
One or more processors;
a storage means for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the user action recognition method and/or the standard action recording method as described above.
In a sixth aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a user action recognition method and/or a standard action recording method as described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
According to the technical scheme provided by the embodiments of the disclosure, the whole process of user action recognition relies on neither a millimeter-wave radar chip nor a camera; the recognition calculation is simple and places low demands on the hardware of the computing device, so the scheme can meet the needs of large-scale deployment and popularization.
The technical scheme provided by the embodiments of the disclosure acquires motion data of the device held by the user at the moments of abrupt change of the user action, and determines a user gesture data variation sequence based on that motion data. In essence, only the motion data captured at the abrupt-change moments is used as the basis for user action recognition. This substantially reduces the amount of data that must be reported, lowers the sensor data reporting rate, and relaxes the performance requirements on the device held by the user and on the terminal. The resulting data sequence is shorter, which shortens the time spent calculating the first similarity and reduces the space required to store the sequence.
The technical scheme provided by the embodiments of the disclosure can identify specific actions completed by a user in the air, such as drawing a horizontal line, a vertical line, a wavy line, or a circle, or writing digits or English letters. The user action recognition method can be applied to device control through specific user actions during human-computer interaction, for example unlocking a device, switching a display interface, or directing a game character to complete certain tasks.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is an application scenario diagram of a user action recognition method provided in an embodiment of the present disclosure;
FIG. 2 is a flowchart of a method for identifying user actions according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another method for identifying user actions according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for recording standard actions according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a user action recognition device according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a standard motion recording device according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
Fig. 1 is an application scenario diagram of the user action recognition method provided in an embodiment of the present disclosure. The method can be applied in the environment shown in Fig. 1: it runs within a user action recognition system comprising a terminal 1 and a device 2 held by the user.
Optionally, the terminal 1 and the device 2 held by the user may be integrated into one unit or provided separately. If they are provided separately, the terminal 1 and the device 2 communicate through a network.
The user-held device 2 has mounted therein sensors that can be used to collect its own motion data. The user-held device 2 may be, but is not limited to, a smart phone, an air mouse, a gamepad, a wearable device, etc.
The terminal 1 is used to recognize user actions during human-computer interaction. Specifically, the terminal 1 is configured to: obtain motion data of the device held by the user at the user action abrupt-change moments; determine a user gesture data variation sequence based on that motion data; determine a first similarity between the user gesture data variation sequence and a preset standard action gesture data variation sequence; and determine a user action recognition result based on the first similarity. The terminal 1 may be, but is not limited to, a smartphone, a palmtop computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one computer, a smart home device, and the like.
Fig. 2 is a flowchart of a user action recognition method provided in an embodiment of the present disclosure. Referring to Fig. 2, the method includes:
S110, acquiring motion data of equipment held by a user at the moment of abrupt change of the user action.
An abrupt change of the user action means a large change in the movement direction of the user's limb while completing an action. Illustratively, when a user draws a wavy line in the air with the right hand, the moments at which the right hand reaches a crest or trough of the wavy line can be regarded as user action abrupt-change moments.
In acquiring motion data of a device held by a user, the motion data may be acquired by means of an inertial sensor mounted in the device held by the user. In this case, the motion data of the device held by the user refers to raw data collected by the inertial sensor.
This step can be implemented in various ways. Illustratively: the device held by the user periodically collects and records its own motion data through a built-in inertial sensor; it then selects, from all the collected motion data, the motion data at the user action abrupt-change moments; finally, it reports the selected motion data to the terminal, so that the terminal obtains the motion data of the device held by the user at the abrupt-change moments.
Further, selecting the motion data at the abrupt-change moments from all the collected motion data may specifically proceed as follows: the movement direction of the user's limb at each acquisition moment is determined from all the collected motion data; if the angle between the movement direction at the q-th acquisition moment and the movement direction at the (q-1)-th acquisition moment is larger than a preset angle, the q-th acquisition moment is determined to be a user action abrupt-change moment, and the motion data at the q-th acquisition moment is taken as motion data at an abrupt-change moment, where q is a positive integer greater than or equal to 2, as in the sketch below.
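As an illustration of the selection rule just described, the following Python sketch (all names and the unit-vector representation of movement direction are assumptions for illustration, not taken from the patent) marks the acquisition moments whose movement direction deviates from the previous one by more than the preset angle:

```python
import numpy as np

def mutation_indices(directions, angle_threshold_deg):
    """directions: list of unit 3-D movement-direction vectors, one per acquisition moment.

    Returns the indices of acquisition moments whose movement direction differs from
    the previous moment's by more than the preset angle; these are treated as user
    action abrupt-change moments. Indexing is 0-based, so the description's q-th
    moment (q >= 2) corresponds to index q - 1 here.
    """
    indices = []
    for q in range(1, len(directions)):
        # Clamp the dot product to avoid arccos domain errors from rounding.
        cos_angle = np.clip(np.dot(directions[q - 1], directions[q]), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) > angle_threshold_deg:
            indices.append(q)
    return indices
```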
Alternatively, this step may be implemented as follows: the device held by the user first judges whether the current moment is a user action abrupt-change moment; if so, it records the motion data at the current moment and reports it to the terminal, so that the terminal obtains the motion data at the abrupt-change moment.
The essence of this step is to capture the motion data of the device held by the user at the abrupt-change moments of the user action and to use only that data as the basis for user action recognition, rather than periodically and continuously collecting motion data and reporting all of it to the terminal.
S120, determining a user gesture data change sequence based on motion data of equipment held by a user at the moment of abrupt change of user action.
The user gesture data refers to data that reflects the user's limb movements. Optionally, the motion data of the device held by the user can be used directly as the user gesture data; attitude angle data, rotation matrix data, or rotation vector data may also be used. Attitude angle data, rotation matrix data, and rotation vector data can be regarded as the result of compressing the raw data collected by the sensor (i.e., the motion data of the device held by the user). Using one of them as the user gesture data reduces the complexity of the subsequent calculation of the first similarity between the user gesture data variation sequence and the preset standard action gesture data variation sequence, improving calculation efficiency.
Compared with rotation matrix data or rotation vector data, attitude angle data reflects changes in the user's limb motion more directly, which is why it is chosen here as the user gesture data.
Further, if attitude angle data is used as the user gesture data, this step may be implemented as follows: determine the attitude angle data of the device held by the user at the user action abrupt-change moments based on the motion data at those moments, where the attitude angle includes at least one of an azimuth angle, a pitch angle, and a roll angle; then determine a user attitude angle variation sequence based on the plurality of successively determined attitude angle data.
The azimuth, pitch, and roll angles are all defined with respect to the world coordinate system. Specifically, the azimuth (Azimuth) is the horizontal angle between the current pointing direction of the device held by the user and magnetic north; the pitch angle (Pitch) is the angle by which the device plane tilts front-to-back relative to the ground plane; and the roll angle (Roll) is the angle by which the device plane tilts side-to-side relative to the ground plane.
The "determining the attitude angle data of the device held by the user at the user action mutation moment" based on the motion data of the device held by the user at the user action mutation moment may specifically be that, based on a kalman filter, data fusion processing is performed on the raw data collected by the accelerometer, the gyroscope and the magnetometer built in the inertial sensor, so as to obtain the attitude angle data of the device held by the user at the user action mutation moment.
The user attitude angle variation sequence may specifically be determined as follows: among the successively determined attitude angle data, subtract the attitude angle data at the previous abrupt-change moment from the attitude angle data at each abrupt-change moment, yielding a series of attitude angle variations; then arrange these variations in the chronological order of the abrupt-change moments to obtain the user attitude angle variation sequence.
For example, suppose that while the user completes an action there are n user action abrupt-change moments T1, T2, T3, T4, ..., Tn, and let the azimuth, pitch, and roll angles at moment Ti be a_i, p_i, and r_i respectively. Then the azimuth variation sequence is a_2-a_1, a_3-a_2, a_4-a_3, ..., a_n-a_{n-1}; the pitch variation sequence is p_2-p_1, p_3-p_2, p_4-p_3, ..., p_n-p_{n-1}; and the roll variation sequence is r_2-r_1, r_3-r_2, r_4-r_3, ..., r_n-r_{n-1}.
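A minimal sketch of forming these variation sequences, assuming the attitude angles captured at the abrupt-change moments are already available as plain lists (the example values are made up):

```python
def delta_sequence(angles):
    """[a1, a2, ..., an] -> [a2 - a1, a3 - a2, ..., an - a(n-1)].

    For azimuth, a production version would also wrap each difference into
    (-180, 180] degrees; the description does not spell out that detail.
    """
    return [angles[i] - angles[i - 1] for i in range(1, len(angles))]

azimuth_deltas = delta_sequence([30.0, 45.0, 20.0, 60.0])  # a2-a1, a3-a2, a4-a3
pitch_deltas = delta_sequence([5.0, 12.0, 7.0, 9.0])       # p2-p1, p3-p2, p4-p3
roll_deltas = delta_sequence([-2.0, 3.0, 1.0, -4.0])       # r2-r1, r3-r2, r4-r3
```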
S130, determining a first similarity between the user gesture data variation sequence and a preset standard action gesture data variation sequence.
The preset standard action gesture data variation sequence refers to a standard action gesture data variation sequence stored in the terminal in advance, before the user action recognition method provided by the disclosure is executed.
The standard action gesture data variation sequence is obtained through a learning record, whose steps are similar to S110 through S130 above. Specifically, the terminal may instruct the user to complete a certain action, for example "please complete the right-hand circle-drawing action"; after receiving the instruction, the user draws a circle with the right hand. While the user draws the circle, motion data of the device held by the user is acquired at the user action abrupt-change moments; a user gesture data variation sequence is determined based on that motion data; and the sequence is recorded as the standard action gesture data variation sequence. This completes the learning record of the standard action gesture data variation sequence.
The execution subject of learning and recording the standard action posture data variation sequence may be the same as or different from the execution subject of the user action recognition method provided in the present disclosure. If the execution subject that learns to record the standard motion gesture data variation sequence is different from the execution subject of the user motion recognition method provided by the present disclosure, the execution subject that learns to record the standard motion gesture data variation sequence may send the recorded standard motion gesture data variation sequence to the execution subject of the user motion recognition method provided by the present disclosure.
Various algorithms may be used to determine the first similarity between the user gesture data variation sequence and the preset standard action gesture data variation sequence. For example, the first similarity may be determined by one or more of the Hausdorff distance algorithm, the dynamic time warping (DTW) algorithm, the discrete Fréchet distance algorithm, and the longest common subsequence algorithm.
Compared with the other algorithms, the first similarity determined with the dynamic time warping algorithm is more accurate. This is because, in practice, different users need different amounts of time to complete the same action. The dynamic time warping algorithm automatically warps the time series (i.e., locally scales it on the time axis) so that the shapes of the two sequences match as closely as possible, yielding the largest achievable similarity.
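A self-contained sketch of the classic DTW recurrence over two 1-D variation sequences, plus one possible way to map the warped distance to a similarity in (0, 1]; the description does not fix a particular mapping, so that part is an assumption:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D variation sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def dtw_similarity(seq_a, seq_b):
    """Smaller warped distance -> higher similarity, mapped into (0, 1]."""
    return 1.0 / (1.0 + dtw_distance(seq_a, seq_b))
```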
Further, if the attitude angle variation sequence comprises an azimuth variation sequence, a pitch variation sequence, and a roll variation sequence, this step may be implemented as follows: determine a second similarity between the azimuth variation sequence of the device held by the user and a preset standard action azimuth variation sequence; determine a third similarity between the pitch variation sequence and a preset standard action pitch variation sequence; determine a fourth similarity between the roll variation sequence and a preset standard action roll variation sequence; and determine the first similarity between the user gesture data variation sequence and the preset standard action gesture data variation sequence based on the second, third, and fourth similarities.
Further, the first similarity can be derived from the second, third, and fourth similarities in various ways, which the application does not limit. Optionally, the geometric mean of the second, third, and fourth similarities is taken as the first similarity; or their arithmetic mean is taken as the first similarity; or the sum of their squares is computed first, and the square root of that sum is taken as the first similarity.
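The three fusion options just listed, in one hedged sketch (the function and mode names are assumptions; note that the root-sum-square option can exceed 1 even when each input similarity lies in [0, 1]):

```python
import math

def combine_similarities(s2, s3, s4, mode="geometric"):
    """Fuse the azimuth (s2), pitch (s3), and roll (s4) similarities into
    the first similarity, using one of the options in the description."""
    if mode == "geometric":
        return (s2 * s3 * s4) ** (1.0 / 3.0)
    if mode == "arithmetic":
        return (s2 + s3 + s4) / 3.0
    if mode == "root_sum_square":
        # Sum of squares first, then the square root of that sum.
        return math.sqrt(s2 ** 2 + s3 ** 2 + s4 ** 2)
    raise ValueError(f"unknown mode: {mode}")
```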
S140, determining a user action recognition result based on the first similarity.
This step can be implemented in various ways, which the application does not limit. Optionally, a similarity threshold may be preset; if the first similarity is greater than the threshold, the user action is determined to be the standard action mentioned in S130; otherwise, the user action is determined not to be that standard action. This achieves the purpose of recognizing the user action.
Further, the database may store gesture data variation sequences for a plurality of different preset standard actions. When the technical scheme provided by the application is executed, the user gesture data variation sequence obtained in S120 is compared one by one with each standard action gesture data variation sequence, thereby determining which standard action in the database, if any, the user action corresponds to and finally recognizing the user action, as in the sketch below.
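Putting the threshold test of S140 and the database comparison together in one sketch; it reuses dtw_similarity from the sketch above, and the dict-based template store and the threshold value are assumptions:

```python
def recognize(user_deltas, template_db, threshold=0.8):
    """template_db: {action_name: standard action gesture data variation sequence}.

    Compares the user's variation sequence against every standard sequence and
    returns the best-matching action name if its first similarity exceeds the
    preset threshold, otherwise None.
    """
    best_name, best_sim = None, 0.0
    for name, template in template_db.items():
        sim = dtw_similarity(user_deltas, template)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim > threshold else None
```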
According to the technical scheme provided by the embodiments of the disclosure, the whole process of user action recognition relies on neither a millimeter-wave radar chip nor a camera; the recognition calculation is simple and places low demands on the hardware of the computing device, so the scheme can meet the needs of large-scale deployment and popularization.
It will be appreciated by those skilled in the art that, if the abrupt-change moments were not identified and the motion data of the device held by the user were instead collected continuously and periodically, with recognition performed on all of the collected data, the amount of data to be reported would be large, requiring a high sensor data reporting rate (greater than 100 Hz) and placing high performance demands on the device held by the user and on the terminal. The data sequence formed this way would also be excessively long, making the first similarity calculation slow and the sequence expensive to store, and thus increasing the computation and storage burden.
The technical scheme provided by the embodiments of the disclosure acquires motion data of the device held by the user at the moments of abrupt change of the user action, and determines a user gesture data variation sequence based on that motion data. In essence, only the motion data captured at the abrupt-change moments is used as the basis for user action recognition. This substantially reduces the amount of data that must be reported, lowers the sensor data reporting rate, and relaxes the performance requirements on the device held by the user and on the terminal. The resulting data sequence is shorter, which shortens the time spent calculating the first similarity and reduces the space required to store the sequence.
The technical scheme can identify specific actions completed by a user in the air, such as drawing a horizontal line, a vertical line, a wavy line, or a circle, or writing digits or English letters.
The user action recognition method can be applied to scenarios in which a user controls a device by completing specific actions during human-computer interaction.
For example, the user first writes "M" in the air and enrolls the writing-"M" action in the terminal system as the unlocking action (this is the learning-record procedure for the standard action gesture data variation sequence). Later, when the user needs to unlock the terminal, the user writes "M" in the air again; if the terminal recognizes that the user action matches the standard action, the terminal is unlocked.
For another example, when setting up correspondences between actions and instructions, the user writes "V" in the air and assigns the writing-"V" action to the "confirm" instruction; draws a semicircle and assigns that action to the "return" instruction; and draws a line segment upward, downward, leftward, and rightward, assigning those actions to the instructions that move the system focus up, down, left, and right, respectively. After this setup (i.e., the learning record of the standard action gesture data variation sequences) is complete, six standard actions are stored in the database: writing "V", drawing a semicircle, and drawing a line segment upward, downward, leftward, and rightward. Later, suppose the terminal displays a page of an e-book and the user again draws a line segment leftward in the air. The terminal recognizes the action with the user action recognition method provided by the disclosure; since drawing a line segment leftward corresponds to the "move system focus left" instruction, the terminal switches the displayed content to the next page of the e-book.
The user action recognition method can also be applied to human-computer interaction games. Illustratively, a game package includes multiple standard action gesture data variation sequences. During the game, the system continually prompts the user to perform actions (the prompted actions are standard actions included in the game package); the user completes the corresponding actions according to the prompts, and the system recognizes the completed actions and scores according to whether they match the required actions.
The user action recognition method can also be applied to character input. Note, however, that in the above technical scheme recognition relies on the preset standard action gesture data variation sequences, so the number of recognizable actions depends mainly on the number of standard actions in the database. In that case, the gesture data variation sequences of the actions corresponding to all the characters to be written must be learned during the recording process.
Those skilled in the art will appreciate that a zero crossing of the angular velocity of the device held by the user often indicates a substantial change in the movement direction of the user's limb. Here, "angular velocity zero crossing" means that the angular velocity changes from a positive value to a negative value, or from a negative value to a positive value. Accordingly, the zero-crossing moments of the angular velocity of the device held by the user can be taken as the user action abrupt-change moments. Since the angular velocity is collected directly by the gyroscope in the inertial sensor and is the gyroscope's raw measurement, whether a moment is an abrupt-change moment can be judged directly, without further data processing, which reduces the complexity of the user action recognition method.
Further, the gyroscope in the inertial sensor collects the angular velocity of the device held by the user about each coordinate axis of the gyroscope's three-dimensional coordinate system, and a zero crossing of the angular velocity about any one of the three axes is treated as a user action abrupt-change moment. Compared with a scheme that treats a moment as an abrupt-change moment only when the angular velocities about all three axes cross zero, this arrangement ensures that important motion data is not missed and improves the accuracy of subsequent user action recognition.
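A minimal sketch of the per-axis zero-crossing test on two consecutive gyroscope samples (a strict sign change is used; how exact zeros are treated is an assumption, since the description does not say):

```python
def is_mutation_moment(prev_gyro, curr_gyro):
    """prev_gyro, curr_gyro: (wx, wy, wz) angular velocities in rad/s.

    True if the angular velocity about ANY of the three gyroscope axes changed
    sign between the two samples, which the scheme treats as a user action
    abrupt-change moment.
    """
    return any(p * c < 0.0 for p, c in zip(prev_gyro, curr_gyro))
```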
Fig. 3 is a flowchart of another user action recognition method according to an embodiment of the present disclosure. Fig. 3 is a specific example of fig. 2. An inertial sensor is mounted in a device held by the user. Inertial sensors include accelerometers, gyroscopes, and magnetometers. Referring to fig. 3, the method includes:
S210, while the user completes an action, the inertial sensor in the device held by the user continuously collects motion data.
S220, the device held by the user judges whether at least one of the three angular velocities collected by the gyroscope crosses zero at the current moment; if so, S230 is executed.
S230, the device held by the user reports the motion data collected at the angular velocity zero-crossing moment to the terminal.
S240, the terminal receives the motion data of the angular velocity zero-crossing moment and, based on it, determines the attitude angle data of the device held by the user at that moment; the attitude angles include azimuth, pitch, and roll.
S250, determining a user attitude angle change sequence based on the continuously determined multiple attitude angle data.
S260, determining a first similarity between the user gesture data variation sequence and a preset standard action gesture data variation sequence.
S270, determining a user action recognition result based on the first similarity.
In the above technical scheme, the zero-crossing moments of the angular velocity of the device held by the user are used as the user action abrupt-change moments. This way of determining the abrupt-change moments is simple, shortens the time needed to identify them, and thus shortens the overall time taken to execute the user action recognition method.
Fig. 4 is a flowchart of a standard action recording method provided in an embodiment of the present disclosure. The embodiment is applicable to the case where the terminal records standard actions before human-computer interaction. The method may be performed by a standard action recording device, which may be implemented in software and/or hardware and configured in an electronic device such as a terminal, including but not limited to a smartphone, a palmtop computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one computer, a smart home device, and the like.
As shown in fig. 4, the method specifically may include:
S310, acquiring motion data of equipment held by a user at the moment of abrupt change of the user action in the process of completing the preset standard action.
In this step, the preset standard action refers to a pre-specified action. Illustratively, the user is asked to complete a right-hand circle-drawing action: the terminal sends the instruction "please complete the right-hand circle-drawing action", and after receiving it the user draws a circle with the right hand. While the user draws the circle, motion data of the device held by the user is acquired at the user action abrupt-change moments.
Optionally, the zero crossing point moment of the angular velocity of the device held by the user is used as the abrupt change moment of the action of the user.
Optionally, the inertial sensor is used for acquiring motion data of equipment held by the user at the moment of abrupt change of the user action.
Optionally, the gyroscope in the inertial sensor is used for collecting the angular velocity of the rotation of the equipment held by the user around each coordinate axis in the three-dimensional coordinate system of the gyroscope; the zero crossing point moment of the angular velocity of the equipment held by the user rotating around any coordinate axis in the three-dimensional coordinate system of the gyroscope is the abrupt change moment of the action of the user.
The specific implementation method of this step is similar to S110, and will not be repeated here.
S320, determining a user gesture data change sequence based on motion data of equipment held by a user at the moment of abrupt change of user action.
Further, the attitude angle data of the equipment held by the user at the user action abrupt-change moments is determined based on the motion data at those moments; the attitude angle includes at least one of an azimuth angle, a pitch angle, and a roll angle; and a user attitude angle variation sequence is determined based on the plurality of successively determined attitude angle data.
The specific implementation method of this step is similar to S120, and will not be repeated here.
S330, recording the user gesture data variation sequence as the standard action gesture data variation sequence.
The essence of this technical scheme is that the user is first asked to complete a preset standard action, and the terminal learns the standard action gesture data variation sequence while the user performs it. In subsequent human-computer interaction, that sequence serves as the reference against which user actions are recognized. This substantially reduces the amount of data that must be reported, lowers the sensor data reporting rate, and relaxes the performance requirements on the device held by the user and on the terminal. The resulting data sequence is shorter, which shortens the time spent calculating the first similarity during recognition and reduces the space required to store the sequence.
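A hedged end-to-end sketch of the recording step, reusing delta_sequence from the earlier sketch; the per-axis dict layout of the template store is an assumption:

```python
def record_standard_action(action_name, mutation_attitude_angles, template_db):
    """mutation_attitude_angles: attitude angles captured at the abrupt-change
    moments while the user performs the prescribed action, keyed per axis,
    e.g. {"azimuth": [...], "pitch": [...], "roll": [...]}.

    Stores the resulting variation sequences as the standard action template.
    """
    template_db[action_name] = {
        axis: delta_sequence(angles)
        for axis, angles in mutation_attitude_angles.items()
    }
```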
The technical scheme can identify specific actions completed by a user in the air, such as drawing a horizontal line, a vertical line, a wavy line, or a circle, or writing digits or English letters. The user action recognition method can be applied to device control through specific user actions during human-computer interaction, for example unlocking a device, switching a display interface, or directing a game character to complete certain tasks.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
Fig. 5 is a schematic structural diagram of a user action recognition device in an embodiment of the disclosure. The user action recognition device provided by the embodiment of the disclosure can be configured in a terminal. Referring to fig. 5, the user action recognition apparatus specifically includes:
a first obtaining module 410, configured to obtain motion data of the device held by the user at the user action abrupt-change moments;
a first sequence determining module 420, configured to determine a sequence of user gesture data change amounts based on motion data of a device held by a user at the moment of abrupt change of user action;
a similarity determining module 430, configured to determine a first similarity between the user gesture data variation sequence and a preset standard action gesture data variation sequence;
the recognition result determining module 440 is configured to determine a user action recognition result based on the first similarity.
Further, the zero crossing point moment of the angular velocity of the equipment held by the user is used as the abrupt change moment of the action of the user.
Further, the first obtaining module 410 is configured to obtain, by using an inertial sensor, motion data of a device held by the user at the moment of abrupt change of motion of the user.
Further, the gyroscope in the inertial sensor is used for collecting the angular speed of the equipment held by the user rotating around each coordinate axis in the three-dimensional coordinate system of the gyroscope;
and the zero crossing point moment of the angular velocity of the equipment held by the user rotating around any coordinate axis in the three-dimensional coordinate system of the gyroscope is the abrupt change moment of the action of the user.
Further, the first sequence determining module 420 is configured to determine, based on the motion data of the device held by the user at the user action mutation moment, attitude angle data of the device held by the user at the user action mutation moment; the attitude angle includes at least one of an azimuth angle, a pitch angle, and a roll angle;
and to determine a user attitude angle variation sequence based on the continuously determined plurality of attitude angle data.
Further, if the attitude angle variation sequence comprises an azimuth angle variation sequence, a pitch angle variation sequence, and a roll angle variation sequence, the similarity determining module 430 is configured to:
determine a second similarity between the azimuth angle variation sequence of the equipment held by the user and the preset standard action azimuth angle variation sequence;
determine a third similarity between the pitch angle variation sequence of the equipment held by the user and the preset standard action pitch angle variation sequence;
determine a fourth similarity between the roll angle variation sequence of the equipment held by the user and the preset standard action roll angle variation sequence;
and determine the first similarity between the user gesture data variation sequence and the preset standard action gesture data variation sequence based on the second similarity, the third similarity, and the fourth similarity.
Further, the similarity determining module 430 is configured to:
determine the first similarity between the user gesture data variation sequence and a preset standard action gesture data variation sequence based on a dynamic time warping algorithm.
The user action recognition device provided by the embodiment of the present disclosure may perform the steps of the user action recognition method provided by the embodiment of the present disclosure, and has the performing steps and beneficial effects, which are not described herein again.
Fig. 6 is a schematic structural diagram of a standard motion recording device in an embodiment of the disclosure. The standard action recording device provided by the embodiment of the disclosure can be configured in a terminal. Referring to fig. 6, the standard motion recording device specifically includes:
A second obtaining module 510, configured to obtain motion data of a device held by a user at a moment when a user action abrupt change occurs in a process of completing a preset standard action;
A second sequence determining module 520, configured to determine a sequence of user gesture data change amounts based on the motion data of the device held by the user at the moment of abrupt change of user action;
a recording module 530, configured to record the user gesture data variation sequence as the standard action gesture data variation sequence.
Further, the zero crossing point moment of the angular velocity of the equipment held by the user is used as the abrupt change moment of the action of the user.
Further, a second obtaining module 510 is configured to obtain, by using an inertial sensor, motion data of a device held by the user at the moment of abrupt change of motion of the user.
Further, the gyroscope in the inertial sensor is used for collecting the angular speed of the equipment held by the user rotating around each coordinate axis in the three-dimensional coordinate system of the gyroscope;
and the zero crossing point moment of the angular velocity of the equipment held by the user rotating around any coordinate axis in the three-dimensional coordinate system of the gyroscope is the abrupt change moment of the action of the user.
Further, the second sequence determining module 520 is configured to:
determine attitude angle data of the equipment held by the user at the user action abrupt-change moment based on the motion data of the equipment at that moment, where the attitude angle includes at least one of an azimuth angle, a pitch angle, and a roll angle;
and determine a user attitude angle variation sequence based on the continuously determined plurality of attitude angle data.
The standard action recording device provided by the embodiment of the present disclosure may execute the steps of the standard action recording method provided by the embodiment of the present disclosure, and has the executing steps and beneficial effects, which are not described herein again.
Fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now in particular to fig. 7, a schematic diagram of an electronic device 1000 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable electronic devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic apparatus 1000 may include a processing device (e.g., a central processing unit, a graphic processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage device 1008 into a Random Access Memory (RAM) 1003 to implement a user action recognition method or a standard action recording method according to an embodiment of the present disclosure. In the RAM 1003, various programs and information necessary for the operation of the electronic apparatus 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
In general, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1007 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, etc.; storage 1008 including, for example, magnetic tape, hard disk, etc.; and communication means 1009. The communication means 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange information. While fig. 7 shows an electronic device 1000 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program containing program code for performing the method shown in the flowcharts, thereby implementing the user action recognition method or the standard action recording method as above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The above-described functions defined in the method of the embodiment of the present disclosure are performed when the computer program is executed by the processing device 1001.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include an information signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquire motion data of a device held by a user at user action abrupt change moments;
determine a user attitude data change amount sequence based on the motion data of the device held by the user at the user action abrupt change moments;
determine a first similarity between the user attitude data change amount sequence and a preset standard action attitude data change amount sequence; and
determine a user action recognition result based on the first similarity.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquire motion data of a device held by a user at user action abrupt change moments in a process in which the user completes a preset standard action;
determine a user attitude data change amount sequence based on the motion data of the device held by the user at the user action abrupt change moments; and
record the user attitude data change amount sequence as the standard action attitude data change amount sequence.
In addition, when the one or more programs are executed by the electronic device, the electronic device may also perform the other steps described in the above embodiments. A minimal sketch of the two program flows is given below.
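For orientation before the claims, the two program flows can be read as a pair of small functions. The sketch below is illustrative only: the use of NumPy, the function names, and the final thresholding step are assumptions, not elements of the disclosure.

```python
# A minimal sketch of the two program flows; all names are hypothetical
# and chosen for readability, not taken from the disclosure itself.
import numpy as np

def attitude_change_sequence(attitude_angles: np.ndarray) -> np.ndarray:
    """Difference consecutive attitude samples taken at the user action
    abrupt change moments (rows: moments, columns: angle components)."""
    return np.diff(attitude_angles, axis=0)

def record_standard_action(demo_angles: np.ndarray) -> np.ndarray:
    """Recording flow: store the change amount sequence of one
    demonstration of the preset standard action as the reference."""
    return attitude_change_sequence(demo_angles)

def recognize_action(user_angles, standard_seq, similarity_fn, threshold=0.8):
    """Recognition flow: change amount sequence -> first similarity -> result.
    The thresholding rule is an assumption; the disclosure only states that
    the recognition result is determined based on the first similarity."""
    user_seq = attitude_change_sequence(user_angles)
    first_similarity = similarity_fn(user_seq, standard_seq)
    return first_similarity >= threshold
```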
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an electronic device comprising:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement any of the user action recognition methods or standard action recording methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is further provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the user action recognition methods or standard action recording methods provided by the present disclosure.
Embodiments of the present disclosure further provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the user action recognition method or the standard action recording method described above.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or action from another, and do not necessarily require or imply any such actual relationship or order between these entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is merely of specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the present disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (12)
1. A user action recognition method, comprising:
acquiring a plurality of pieces of motion data of a device held by a user, wherein the pieces of motion data are collected at a plurality of consecutive user action abrupt change moments;
determining attitude angle data of the device held by the user at each user action abrupt change moment based on the plurality of pieces of motion data;
subtracting, among the plurality of continuously determined pieces of attitude angle data, the attitude angle data at a previous user action abrupt change moment from the attitude angle data at any user action abrupt change moment, to obtain a series of attitude angle data change amounts;
arranging the series of attitude angle data change amounts in chronological order of the user action abrupt change moments, to obtain a user attitude angle change amount sequence;
determining a first similarity between the user attitude angle change amount sequence and a preset standard action attitude angle change amount sequence; and
determining a user action recognition result based on the first similarity.
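Steps two through four of claim 1 amount to ordering the attitude-angle samples by abrupt change moment and taking first-order differences. A hedged sketch of that reading follows; the names and the array layout are assumptions.

```python
import numpy as np

def user_change_amount_sequence(moments, attitude_angles):
    """moments: 1-D array of user action abrupt change timestamps;
    attitude_angles: (N, 3) array of attitude angle data determined at
    those moments. Returns the (N-1, 3) user attitude angle change
    amount sequence."""
    order = np.argsort(moments)                  # chronological order of moments
    ordered = np.asarray(attitude_angles)[order]
    return np.diff(ordered, axis=0)              # angle[i] - angle[i-1]
```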
2. The method according to claim 1, wherein a zero-crossing moment of an angular velocity of the device held by the user is taken as the user action abrupt change moment.
3. The method according to claim 2, wherein the acquiring the plurality of pieces of motion data of the device held by the user comprises:
acquiring the motion data of the device held by the user through an inertial sensor.
4. The method according to claim 3, wherein
a gyroscope in the inertial sensor is configured to collect angular velocities of the device held by the user rotating around respective coordinate axes of a three-dimensional coordinate system of the gyroscope; and
a zero-crossing moment of the angular velocity of the device held by the user rotating around any coordinate axis of the three-dimensional coordinate system of the gyroscope is a user action abrupt change moment.
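Claims 2 through 4 place the abrupt change moments at zero crossings of the gyroscope angular velocity on any axis. A sketch of that detection under assumed discrete sampling, where a sign flip between adjacent samples is treated as a zero crossing:

```python
import numpy as np

def zero_crossing_moments(timestamps, angular_velocity):
    """timestamps: (N,) sample times; angular_velocity: (N, 3) gyroscope
    samples, one column per coordinate axis of the gyroscope's
    three-dimensional coordinate system."""
    w = np.asarray(angular_velocity)
    flipped = np.signbit(w[1:]) != np.signbit(w[:-1])  # per-axis sign change
    any_axis = flipped.any(axis=1)                     # around any coordinate axis
    return np.asarray(timestamps)[1:][any_axis]
```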
5. The method according to claim 4, wherein the attitude angle comprises at least one of an azimuth angle, a pitch angle, and a roll angle.
6. The method according to claim 5, wherein the attitude angle change amount sequence comprises an azimuth angle change amount sequence, a pitch angle change amount sequence, and a roll angle change amount sequence; and
the determining the first similarity between the user attitude angle change amount sequence and the preset standard action attitude angle change amount sequence comprises:
determining a second similarity between the azimuth angle change amount sequence of the device held by the user and a preset standard action azimuth angle change amount sequence;
determining a third similarity between the pitch angle change amount sequence of the device held by the user and a preset standard action pitch angle change amount sequence;
determining a fourth similarity between the roll angle change amount sequence of the device held by the user and a preset standard action roll angle change amount sequence; and
determining the first similarity between the user attitude angle change amount sequence and the preset standard action attitude angle change amount sequence based on the second similarity, the third similarity, and the fourth similarity.
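Claim 6 does not fix how the second, third, and fourth similarities combine into the first. A weighted average is one plausible fusion; the sketch below is purely an assumption, not the claimed mechanism.

```python
def fuse_similarities(s_azimuth, s_pitch, s_roll, weights=(1/3, 1/3, 1/3)):
    """Fuse the second, third, and fourth similarities into the first
    similarity. Equal weights are an assumption, not taken from the claim."""
    w_a, w_p, w_r = weights
    return w_a * s_azimuth + w_p * s_pitch + w_r * s_roll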
7. The method according to claim 1, wherein the determining the first similarity between the user attitude angle change amount sequence and the preset standard action attitude angle change amount sequence comprises:
determining the first similarity between the user attitude angle change amount sequence and the preset standard action attitude angle change amount sequence based on a dynamic time warping algorithm.
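Claim 7 names the dynamic time warping algorithm. A compact textbook DTW over two change amount sequences is sketched below; mapping the DTW distance to a similarity via 1/(1+d) is an assumption, since the claim does not specify the conversion.

```python
import numpy as np

def dtw_similarity(seq_a, seq_b):
    """Dynamic time warping similarity between two change amount sequences.
    seq_a: (n, d) or (n,); seq_b: (m, d) or (m,)."""
    a = np.asarray(seq_a, dtype=float)
    b = np.asarray(seq_b, dtype=float)
    if a.ndim == 1:
        a = a[:, None]
    if b.ndim == 1:
        b = b[:, None]
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return 1.0 / (1.0 + cost[n, m])  # assumed distance-to-similarity mapping
```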
8. A standard action recording method, comprising:
acquiring motion data of a device held by a user, wherein the motion data are collected at a plurality of consecutive user action abrupt change moments in a process in which the user completes a preset standard action;
determining a user attitude angle change amount sequence based on the motion data of the device held by the user; and
recording the user attitude angle change amount sequence as a standard action attitude angle change amount sequence.
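Claims 1, 7, and 8 compose naturally: record a template once, then score later attempts against it. A hypothetical end-to-end use of the sketches above, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
standard_angles = rng.uniform(-np.pi, np.pi, size=(8, 3))  # synthetic demonstration
standard_seq = np.diff(standard_angles, axis=0)            # claim 8: recorded template

user_angles = standard_angles + rng.normal(scale=0.05, size=(8, 3))
user_seq = np.diff(user_angles, axis=0)                    # claim 1: user sequence

print(dtw_similarity(user_seq, standard_seq))              # claim 7: first similarity
```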
9. A user action recognition apparatus, comprising:
a first acquisition module configured to acquire a plurality of pieces of motion data of a device held by a user, wherein the pieces of motion data are collected at a plurality of consecutive user action abrupt change moments;
a first sequence determining module configured to determine attitude angle data of the device held by the user at each user action abrupt change moment based on the plurality of pieces of motion data; subtract, among the plurality of continuously determined pieces of attitude angle data, the attitude angle data at a previous user action abrupt change moment from the attitude angle data at any user action abrupt change moment, to obtain a series of attitude angle data change amounts; and arrange the series of attitude angle data change amounts in chronological order of the user action abrupt change moments, to obtain a user attitude angle change amount sequence;
a similarity determining module configured to determine a first similarity between the user attitude angle change amount sequence and a preset standard action attitude angle change amount sequence; and
a recognition result determining module configured to determine a user action recognition result based on the first similarity.
10. A standard action recording apparatus, comprising:
a second acquisition module configured to acquire motion data of a device held by a user, wherein the motion data are collected at a plurality of consecutive user action abrupt change moments in a process in which the user completes a preset standard action;
a second sequence determining module configured to determine a user attitude angle change amount sequence based on the motion data of the device held by the user; and
a recording module configured to record the user attitude angle change amount sequence as a standard action attitude angle change amount sequence.
11. An electronic device, comprising:
one or more processors; and
a storage device configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8.
12. A computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110726719.3A CN113407046B (en) | 2021-06-29 | 2021-06-29 | User action recognition method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113407046A CN113407046A (en) | 2021-09-17 |
CN113407046B true CN113407046B (en) | 2024-09-06 |
Family
ID=77680094
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110726719.3A Active CN113407046B (en) | 2021-06-29 | 2021-06-29 | User action recognition method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113407046B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319421A (en) * | 2018-01-29 | 2018-07-24 | Vivo Mobile Communication Co., Ltd. | Display triggering method and mobile terminal |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108108015A (en) * | 2017-11-20 | 2018-06-01 | University of Electronic Science and Technology of China | Action gesture recognition method based on mobile phone gyroscope and dynamic time warping |
CN111750919A (en) * | 2020-07-02 | 2020-10-09 | Shaanxi Normal University | Identification method and device using multi-axis sensor and accelerometer |
CN112212861B (en) * | 2020-09-21 | 2022-05-06 | Harbin Institute of Technology (Shenzhen) | Trajectory restoration method based on a single inertial sensor |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP4056952A1 (en) | Map fusion method, apparatus, device, and storage medium | |
US9865029B2 (en) | Contextual rendering of process graphics | |
CN112965592A (en) | Equipment interaction method, device and system | |
CN111368668A (en) | Three-dimensional hand recognition method and device, electronic equipment and storage medium | |
CN115511779A (en) | Image detection method, device, electronic equipment and storage medium | |
US20230093983A1 (en) | Control method and device, terminal and storage medium | |
CN113706606B (en) | Method and device for determining position coordinates of spaced hand gestures | |
CN110069126B (en) | Virtual object control method and device | |
US20240411438A1 (en) | Display method and apparatus, electronic device, and storage medium | |
CN108875901B (en) | Neural network training method and universal object detection method, device and system | |
CN113407046B (en) | User action recognition method and device, electronic equipment and storage medium | |
CN114202799B (en) | Method, device, electronic device and storage medium for determining change speed of controlled object | |
CN117130518A (en) | Control display method, head display device, electronic device and readable storage medium | |
CN113741750B (en) | Cursor position updating method and device and electronic equipment | |
CN113253906B (en) | Input data processing method, device and equipment | |
CN115880719A (en) | Gesture depth information generation method, device, device and computer readable medium | |
CN116301496A (en) | Special effect information display method, device, electronic device and storage medium | |
CN110263743B (en) | Method and device for recognizing images | |
CN113741749A (en) | Cursor position updating method and device and electronic equipment | |
CN118409661B (en) | Gesture control method, device, equipment and storage medium based on display screen | |
US20230418072A1 (en) | Positioning method, apparatus, electronic device, head-mounted display device, and storage medium | |
CN111103967A (en) | Control method and device of virtual object | |
CN112306223B (en) | Information interaction method, device, equipment and medium | |
CN111259694B (en) | Gesture moving direction identification method, device, terminal and medium based on video | |
CN114115536A (en) | Interaction method, interaction device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |