
CN111603162B - Myoelectric signal processing method and device, intelligent wearable equipment and storage medium - Google Patents


Info

Publication number
CN111603162B
CN111603162B (application CN202010378837.5A)
Authority
CN
China
Prior art keywords
electromyographic signal
action potential
potential segment
gesture
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010378837.5A
Other languages
Chinese (zh)
Other versions
CN111603162A (en)
Inventor
李红红
韩久琦
姚秀军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd
Priority to CN202010378837.5A
Publication of CN111603162A
Application granted
Publication of CN111603162B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802: Sensor mounted on worn items
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

Embodiments of the invention provide an electromyographic signal processing method and apparatus, an intelligent wearable device, and a storage medium. The method comprises the following steps: acquiring a first electromyographic signal and a second electromyographic signal, wherein, for any gesture action in a preset gesture action set, the first electromyographic signal corresponding to a first position of the arm and the second electromyographic signal corresponding to a second position of the arm are acquired in any one acquisition; determining a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal; extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment; splicing the first features and the second features based on a preset splicing rule to generate a plurality of feature vectors; and projecting each feature vector by using a feature projection calculation algorithm, then performing gesture action classification and identification on each projection result by using a preset classifier.

Description

Myoelectric signal processing method and device, intelligent wearable equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent wearable equipment, in particular to an electromyographic signal processing method and device, intelligent wearable equipment and a storage medium.
Background
In recent years, with the continued development of industry and transportation, the number of amputees has risen year by year owing to industrial production, engineering construction, traffic accidents, and similar causes, and intelligent wearable devices such as multi-degree-of-freedom myoelectric prostheses with bionic control functions have emerged. Such prostheses can, to a certain extent, enable amputees to better integrate into society and daily life, and have therefore gradually attracted attention.
In the related art, a multi-degree-of-freedom myoelectric prosthesis is driven by the electromyographic signals of the user's muscles to perform different gesture actions. A key link in this process is processing the electromyographic signals so that the corresponding gesture actions can be identified. The signals are generally processed by threshold switch control, single-degree-of-freedom proportional control, and similar schemes.
At present, processing electromyographic signals by threshold switch control, single-degree-of-freedom proportional control, and the like offers only a small number of controllable degrees of freedom. For example, threshold switch control can only distinguish a fist-making gesture action from a non-fist-making gesture action. Such limited control cannot match the flexible motion capability of current multi-degree-of-freedom myoelectric prostheses.
Disclosure of Invention
Embodiments of the invention aim to provide an electromyographic signal processing method and apparatus, an intelligent wearable device, and a storage medium, so as to achieve control of multiple degrees of freedom and match the flexible motion capability of current multi-degree-of-freedom myoelectric prostheses. The specific technical scheme is as follows:
in a first aspect of the embodiment of the present invention, there is provided an electromyographic signal processing method, including:
acquiring a first electromyographic signal and a second electromyographic signal, wherein, for any gesture action in a preset gesture action set, the first electromyographic signal corresponding to a first position of the arm and the second electromyographic signal corresponding to a second position of the arm are acquired in any one acquisition;
determining a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal;
extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment;
splicing the first features and the second features based on a preset splicing rule to generate a plurality of feature vectors;
and projecting each feature vector by using a feature projection calculation algorithm, and performing gesture action classification and identification on each projection result by using a preset classifier.
In an optional embodiment, the first electromyographic signal corresponding to the first position of the arm and the second electromyographic signal corresponding to the second position, acquired in any one acquisition, include:
the first electromyographic signal corresponding to the extensor muscles of the forearm and the second electromyographic signal corresponding to the flexor muscles, acquired in any one acquisition.
In an alternative embodiment, the determining the first action potential segment in the first electromyographic signal and the second action potential segment in the second electromyographic signal includes:
preprocessing the first electromyographic signal and the second electromyographic signal respectively to generate a corresponding first preprocessed electromyographic signal and a corresponding second preprocessed electromyographic signal, wherein the preprocessing comprises power-line interference removal and band-pass filtering;
determining a first action potential segment in the first pre-processed electromyographic signal and a second action potential segment in the second pre-processed electromyographic signal.
In an alternative embodiment, the determining the first action potential segment in the first pre-processed electromyographic signal and the second action potential segment in the second pre-processed electromyographic signal includes:
correcting the first preprocessed electromyographic signal and the second preprocessed electromyographic signal respectively to obtain a first corrected electromyographic signal and a second corrected electromyographic signal;
respectively carrying out integral operation on the first correcting electromyographic signal and the second correcting electromyographic signal, and extracting a plurality of corresponding first envelope signals and a plurality of corresponding second envelope signals;
a first action potential segment in the first corrected electromyographic signal is determined based on the plurality of first envelope signals, and a second action potential segment in the second corrected electromyographic signal is determined based on the plurality of second envelope signals.
In an alternative embodiment, the determining a first action potential segment in the first corrected electromyographic signal based on a plurality of the first envelope signals, and determining a second action potential segment in the second corrected electromyographic signal based on a plurality of the second envelope signals, includes:
determining a first starting position and a first ending position of a first action potential segment in the first corrected electromyographic signal based on a plurality of the first envelope signals;
determining a first action potential segment in the first corrected electromyographic signal based on the first starting position and the first ending position;
determining a second starting position and a second ending position of a second action potential segment in the second corrected electromyographic signal based on a plurality of the second envelope signals;
a second action potential segment in the second corrected electromyographic signal is determined based on the second starting position and the second ending position.
In an alternative embodiment, the extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment includes:
splitting the first action potential segment to obtain a plurality of first action potential subsections;
extracting corresponding first features for each first action potential subsection;
splitting the second action potential segment to obtain a plurality of second action potential subsections;
extracting corresponding second features for each second action potential subsection;
the features comprise waveform length, number of zero crossings, number of slope sign changes, AR model coefficients, and skewness.
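As an illustrative sketch (not part of the patent text), four of the listed features can be computed per action potential sub-segment in pure Python as below. The AR model coefficients are omitted, since the patent does not state the model order or estimation method; all function names are illustrative.

```python
def waveform_length(x):
    # sum of absolute differences between consecutive samples
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def zero_crossings(x):
    # number of sign changes of the signal itself
    return sum(1 for i in range(len(x) - 1) if x[i] * x[i + 1] < 0)

def slope_sign_changes(x):
    # number of times the first difference (slope) changes sign
    return sum(1 for i in range(1, len(x) - 1)
               if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0)

def skewness(x):
    # third standardized central moment (population form)
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    return m3 / m2 ** 1.5
```

Computing each of these for both the extensor and flexor sub-segments yields the first and second features that are later spliced into feature vectors.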
In an alternative embodiment, said projecting each of said feature vectors using a feature projection calculation algorithm comprises:
inputting each of the feature vectors into a feature projection calculation algorithm, the feature projection calculation algorithm comprising:
y = W^T x;
wherein W is the projection matrix, x is the feature vector, and y is the projection result.
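The projection y = W^T x can be sketched as follows. The patent does not state how the projection matrix W is trained (linear discriminant analysis is a common choice for separating gesture classes), so the matrix and vector values here are purely illustrative:

```python
import numpy as np

def project_features(W, x):
    """y = W^T x: map a feature vector x into the projection space."""
    return W.T @ x

# illustrative 4-D feature vector projected onto 2 directions
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5],
              [0.0, 0.0]])  # hypothetical trained projection matrix
x = np.array([2.0, 4.0, 2.0, 7.0])
y = project_features(W, x)  # 2-D projection result
```

In this sketch the number of columns of W fixes the dimensionality of the projection result that the classifier receives.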
In an optional embodiment, the performing gesture classification recognition on each projection result by using a preset classifier includes:
and carrying out gesture action classification and identification on each projection result by using a preset nearest neighbor classifier.
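A minimal sketch of nearest-neighbour classification over projection results. The patent only names a "nearest neighbor classifier"; the training projections, labels, and use of Euclidean distance are assumptions for illustration:

```python
def nearest_neighbor_classify(train_projections, labels, query):
    """Assign the label of the training projection closest to the query
    (squared Euclidean distance, which is monotone with Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_projections)),
               key=lambda i: dist2(train_projections[i], query))
    return labels[best]
```

Each projected feature vector from a new acquisition would be classified against stored projections of the training gesture actions.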
In an alternative embodiment, the method further comprises:
based on a preset posterior probability calculation formula, calculating posterior probability of any gesture classification to which each feature vector belongs;
and determining the gesture action classification corresponding to the highest posterior probability as the classification of the gesture action corresponding to the feature vectors.
In an alternative embodiment, the preset posterior probability calculation formula includes:
[Formula image: the preset posterior probability calculation formula; not recoverable from the text.]
wherein p(C_i | w_1, w_2, w_3, …, w_N) is the posterior probability, p(C_i | w_n) is the conditional probability that each feature vector belongs to the given gesture classification, δ is a normalization constant, and k_j is obtained from the following function:
[Formula image: the function from which k_j is obtained; not recoverable from the text.]
where j = 1, 2, 3, …, m+1.
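Because the formula images are not recoverable from the text, the following is only a loosely related sketch: it combines the per-feature-vector conditional probabilities p(C_i | w_n) by product and then normalizes (the role that the constant δ appears to play), and it omits the k_j weights, whose defining function is lost. All names are illustrative:

```python
def fuse_posteriors(per_vector_probs):
    """Multiply the conditional probabilities p(C_i | w_n) over all feature
    vectors for each class, then normalize so the posteriors sum to 1."""
    classes = per_vector_probs[0].keys()
    unnormalized = {}
    for c in classes:
        p = 1.0
        for probs in per_vector_probs:
            p *= probs[c]
        unnormalized[c] = p
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

def classify_by_posterior(per_vector_probs):
    # pick the gesture classification with the highest fused posterior
    posteriors = fuse_posteriors(per_vector_probs)
    return max(posteriors, key=posteriors.get)
```

The final step then matches the patent text: the gesture classification with the highest posterior probability is selected.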
In a second aspect of the embodiment of the present invention, there is also provided an electromyographic signal processing apparatus, the apparatus including:
the signal acquisition module is used for acquiring a first electromyographic signal and a second electromyographic signal, wherein, for any gesture action in a preset gesture action set, the first electromyographic signal corresponding to a first position of the arm and the second electromyographic signal corresponding to a second position of the arm are acquired in any one acquisition;
the potential segment determining module is used for determining a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal;
a feature extraction module for extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment;
the feature splicing module is used for splicing the plurality of first features and the plurality of second features based on a preset splicing rule to generate a plurality of feature vectors;
the vector projection module is used for projecting each characteristic vector by utilizing a characteristic projection calculation algorithm;
and the classification recognition module is used for carrying out gesture action classification recognition on each projection result by utilizing a preset classifier.
In a third aspect of the embodiments of the present invention, there is also provided an intelligent wearable device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
A memory for storing a computer program;
and the processor is used for realizing the electromyographic signal processing method in any one of the first aspects when executing the program stored in the memory.
In a fourth aspect of the embodiments of the present invention, there is further provided a storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the electromyographic signal processing method of any one of the first aspects described above.
In a fifth aspect of the embodiments of the present invention, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the electromyographic signal processing method of any one of the above first aspects.
According to the technical scheme provided by the embodiments of the invention, for any gesture action, the first electromyographic signal corresponding to a first position of the arm and the second electromyographic signal corresponding to a second position are acquired in any one acquisition. A first action potential segment is determined in the first electromyographic signal and a second action potential segment in the second electromyographic signal; a plurality of first features are extracted from the first action potential segment and a plurality of second features from the second action potential segment; the first features and the second features are spliced based on a preset splicing rule to generate a plurality of feature vectors; each feature vector is projected by a feature projection calculation algorithm; and gesture action classification and identification are performed on each projection result by a preset classifier. By acquiring, multiple times and for different gesture actions, the first electromyographic signal corresponding to the first position and the second electromyographic signal corresponding to the second position of the arm, and processing each acquisition through feature extraction and projection, different gesture actions are separated to the greatest extent. Control of multiple degrees of freedom can thus be realized, matching the flexible motion capability of current multi-degree-of-freedom myoelectric prostheses.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an implementation of an electromyographic signal processing method in an embodiment of the invention;
FIG. 2 is a schematic diagram of an implementation flow for determining a first action potential segment and a second action potential segment according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a first action potential segment in a first corrected electromyographic signal according to an embodiment of the invention;
fig. 4 is a schematic diagram showing a first feature in a first action potential segment and a second feature in a second action potential segment in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electromyographic signal processing device according to an embodiment of the invention;
Fig. 6 is a schematic structural diagram of an intelligent wearable device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in fig. 1, a schematic implementation flow chart of an electromyographic signal processing method provided by an embodiment of the present invention may specifically include the following steps:
s101, acquiring a first electromyographic signal and a second electromyographic signal, wherein the first electromyographic signal and the second electromyographic signal are included in a preset gesture action set, and the first electromyographic signal and the second electromyographic signal corresponding to a first position of an arm part and a second electromyographic signal corresponding to a second position are acquired at any time according to any gesture action;
in the embodiment of the present invention, a gesture action set may be preset, where the gesture action set may include the following gesture actions: fist making, wrist bending, wrist stretching, palm stretching, thumb bending, index finger bending, middle finger bending, ring finger bending, little finger bending, thumb and index finger combined bending and the like.
The user wears the electromyography acquisition module (an armband, electromyography acquisition electrodes, or the like) at the relevant positions as required, and, for any gesture action in the gesture action set, the first electromyographic signal corresponding to the first position of the arm and the second electromyographic signal corresponding to the second position are acquired multiple times.
Specifically, the relevant positions may be the flexor and extensor positions, in which case the first electromyographic signal corresponding to the extensor muscles of the forearm and the second electromyographic signal corresponding to the flexor muscles are acquired multiple times.
For example, for gesture actions such as fist making, wrist flexion, wrist extension, palm stretching, thumb bending, index finger bending, middle finger bending, ring finger bending, little finger bending, and combined thumb-and-index-finger bending, the fist-making gesture is first collected 6 times, each acquisition including the first electromyographic signal corresponding to the extensor muscles of the forearm and the second electromyographic signal corresponding to the flexor muscles, with 5 seconds between acquisitions. To avoid fatigue, after a 30-second rest the wrist-flexion gesture is collected 6 times in the same way, again with 5 seconds between acquisitions, and so on.
According to the embodiment of the invention, the first electromyographic signal corresponding to the extensor muscles of the forearm and the second electromyographic signal corresponding to the flexor muscles can thus be acquired for any gesture action, which facilitates subsequent processing.
The transmission modes of the first electromyographic signal and the second electromyographic signal may be wireless or wired, which is not limited in the embodiment of the present invention.
S102, determining a first action potential section in the first electromyographic signal and a second action potential section in the second electromyographic signal;
Since an electromyographic signal comprises a resting potential segment and an action potential segment, and the action potential segment is generated when the muscle contracts, it is necessary to determine the first action potential segment in the first electromyographic signal and the second action potential segment in the second electromyographic signal.
As shown in fig. 2, the first action potential segment in the first electromyographic signal and the second action potential segment in the second electromyographic signal are specifically determined through the following steps:
s201, respectively preprocessing the first electromyographic signal and the second electromyographic signal to generate a corresponding first preprocessed electromyographic signal and a corresponding second preprocessed electromyographic signal, wherein the preprocessing comprises power interference removal and band-pass filtering;
for example, for the fist-making gesture, the first electromyographic signal corresponding to the extensor muscles and the second electromyographic signal corresponding to the flexor muscles collected in the first acquisition are processed as follows: 50 Hz power-line interference is first removed with a notch filter, and 20-450 Hz band-pass filtering is then applied to each signal, generating the first preprocessed electromyographic signal and the second preprocessed electromyographic signal corresponding to this acquisition.
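As an illustrative sketch (not part of the patent text), this preprocessing step can be approximated with SciPy. The 1000 Hz sampling rate, notch quality factor, and Butterworth filter order are assumptions, since the patent does not specify them:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_emg(raw, fs=1000.0, notch_freq=50.0, band=(20.0, 450.0)):
    """Remove 50 Hz power-line interference with a notch filter,
    then apply 20-450 Hz band-pass filtering (zero-phase)."""
    b_notch, a_notch = iirnotch(notch_freq, Q=30.0, fs=fs)
    x = filtfilt(b_notch, a_notch, raw)
    b_band, a_band = butter(4, band, btype="bandpass", fs=fs)
    return filtfilt(b_band, a_band, x)
```

With a 1000 Hz sampling rate the 450 Hz passband edge stays below the Nyquist frequency, and zero-phase filtering (filtfilt) avoids shifting the action potential segment boundaries detected later.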
S202, correcting the first preprocessed electromyographic signal and the second preprocessed electromyographic signal respectively to obtain a first corrected electromyographic signal and a second corrected electromyographic signal;
s203, respectively performing integral operation on the first correction electromyographic signal and the second correction electromyographic signal, and extracting a plurality of corresponding first envelope signals and a plurality of corresponding second envelope signals;
for any gesture, a first myoelectric signal corresponding to extensor at the forearm portion of the arm and a second myoelectric signal corresponding to flexor are acquired at any time, after preprocessing, a corresponding first preprocessed myoelectric signal and a corresponding second preprocessed myoelectric signal are generated, the first preprocessed myoelectric signal and the second preprocessed myoelectric signal are respectively corrected to obtain a first corrected myoelectric signal and a second corrected myoelectric signal, and for any gesture, the first myoelectric signal corresponding to extensor at the forearm portion of the arm and the second myoelectric signal corresponding to flexor are acquired at any time, and the first corrected myoelectric signal and the second corrected myoelectric signal can be obtained through correction;
and respectively carrying out integral operation on the first correction electromyographic signal and the second correction electromyographic signal to extract a plurality of corresponding first envelope signals and a plurality of corresponding second envelope signals, so that for any gesture action, the first electromyographic signal corresponding to extensor muscle of the forearm part of the arm and the second electromyographic signal corresponding to flexor muscle of the arm, which are acquired at any time, can be used for extracting a plurality of corresponding first envelope signals and a plurality of corresponding second envelope signals.
For example, take the first acquisition, for the fist-making gesture, of the first electromyographic signal corresponding to the extensor muscles of the forearm and the second electromyographic signal corresponding to the flexor muscles. The two signals are preprocessed to obtain the first preprocessed electromyographic signal and the second preprocessed electromyographic signal, which are corrected to obtain the first corrected electromyographic signal and the second corrected electromyographic signal. The correction is essentially a baseline calibration, reducing the overall influence of individual differences;
then, respectively carrying out integral operation on the first correction electromyographic signal and the second correction electromyographic signal, and extracting a plurality of corresponding first envelope signals and a plurality of corresponding second envelope signals, wherein the integral operation is as follows:
for the first corrected electromyographic signal with sample indices 1 to 1000, an integral operation over the interval 1-100 extracts the first first-envelope signal, an integral operation over the interval 2-101 extracts the second first-envelope signal, an integral operation over the interval 3-102 extracts the third first-envelope signal, and so on;
likewise, for the second corrected electromyographic signal with sample indices 1 to 1000, an integral operation over the interval 1-100 extracts the first second-envelope signal, over the interval 2-101 the second second-envelope signal, over the interval 3-102 the third second-envelope signal, and so on.
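The sliding-window integral operation above can be sketched as follows. The 100-sample window matches the intervals in the example; the function name and the use of a plain Python list are illustrative choices, and the input is assumed to be already rectified (corrected):

```python
def envelope(rectified, window=100):
    """Sliding-window integration of a rectified (corrected) EMG signal.

    Window k covers samples k .. k + window - 1; each envelope value is the
    sum (discrete integral) of the rectified signal over that window.
    """
    return [sum(rectified[start:start + window])
            for start in range(len(rectified) - window + 1)]
```

A 1000-sample corrected signal therefore yields 901 envelope values, one per sliding window position.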
S204, determining a first action potential section in the first corrected electromyographic signal based on the first envelope signals, and determining a second action potential section in the second corrected electromyographic signal based on the second envelope signals.
For any gesture action, after the first electromyographic signal corresponding to the extensor muscles of the forearm and the second electromyographic signal corresponding to the flexor muscles are acquired in any one acquisition, a plurality of first envelope signals and a plurality of second envelope signals are extracted through the foregoing steps. The first action potential segment in the first corrected electromyographic signal can then be determined based on the plurality of first envelope signals, and the second action potential segment in the second corrected electromyographic signal based on the plurality of second envelope signals.
Specifically, determining a first starting position and a first ending position of a first action potential segment in the first corrected electromyographic signal based on a plurality of the first envelope signals; determining a first action potential segment in the first corrected electromyographic signal based on the first starting position and the first ending position; determining a second starting position and a second ending position of a second action potential segment in the second corrected electromyographic signal based on a plurality of the second envelope signals; a second action potential segment in the second corrected electromyographic signal is determined based on the second starting position and the second ending position.
For example, for a fist-making gesture, the first electromyographic signal corresponding to the extensor and the second electromyographic signal corresponding to the flexor at the forearm portion of the arm, acquired the 1st time, are preprocessed to obtain a first preprocessed electromyographic signal and a second preprocessed electromyographic signal respectively; these are corrected to obtain a first corrected electromyographic signal A and a second corrected electromyographic signal B; and integral operations are performed on A and B to extract a plurality of corresponding first envelope signals and second envelope signals. That is, a plurality of first envelope signals are extracted from the first corrected electromyographic signal A and a plurality of second envelope signals are extracted from the second corrected electromyographic signal B, as shown in Table 1 below.
[Table 1 image: the plurality of first envelope signals extracted from corrected electromyographic signal A and the plurality of second envelope signals extracted from corrected electromyographic signal B; not reproduced]
TABLE 1
Determining a first action potential segment in the first corrected electromyographic signal a based on the plurality of first envelope signals extracted from the first corrected electromyographic signal a:
1. First judge whether the amplitude of the first first envelope signal is larger than a first threshold. If it is not, continue to judge whether the amplitude of the second first envelope signal is larger than the first threshold. When an amplitude exceeds the first threshold for the first time (here, the second first envelope signal), the first starting position of the action potential segment is determined: the position corresponding to the 101st myoelectric data sample in the interval 2-101 that participated in the integral operation of the second first envelope signal;
2. After the first starting position of the action potential segment is determined, the test is inverted: judge whether the amplitude of the third first envelope signal is smaller than the first threshold (i.e., no longer judge whether it is larger). If it is not smaller, judge the fourth first envelope signal, then the fifth, and so on, until the amplitude of the Nth first envelope signal is smaller than the first threshold (i.e., the first time after the second first envelope signal that an amplitude falls below the first threshold). This determines the first ending position of the action potential segment: the position corresponding to the Nth myoelectric data sample in the interval N-99 to N that participated in the integral operation of the Nth first envelope signal;
3. Based on the first starting position and the first ending position, the first action potential segment in the first corrected electromyographic signal is determined; as shown in fig. 3, this segment lies between samples 101 and N.
The specific flow of determining the second action potential segment in the second corrected electromyographic signal B based on the plurality of second envelope signals is similar to the flow, described above, of determining the first action potential segment in the first corrected electromyographic signal A based on the plurality of first envelope signals, and is not repeated here.
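The start/end detection walked through in steps 1-3 above can be sketched as follows. The threshold value and the envelope data are illustrative assumptions, and the sketch returns indices in envelope-sample units rather than the raw-sample positions (101, N) used in the text.

```python
# Threshold-based onset/offset detection on the envelope: the first value
# exceeding the threshold marks the onset; the first later value dropping
# below it marks the offset.
def find_action_potential(envelope, threshold):
    start = end = None
    for i, amp in enumerate(envelope):
        if start is None:
            if amp > threshold:       # step 1: first crossing above threshold
                start = i
        elif amp < threshold:         # step 2: first later drop below threshold
            end = i
            break
    return start, end

env = [0.2, 0.3, 1.5, 2.0, 1.8, 0.4, 0.1]   # toy envelope values
start, end = find_action_potential(env, threshold=1.0)
# start == 2 (first value above 1.0), end == 5 (first later value below 1.0)
```

The action potential segment (step 3) is then the slice of the corrected signal between the raw-sample positions corresponding to `start` and `end`.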
S103, extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment;
Through the above steps, for any gesture action, the first electromyographic signal corresponding to the extensor and the second electromyographic signal corresponding to the flexor at the forearm portion of the arm, acquired at any one time, are corrected to obtain the first corrected electromyographic signal and the second corrected electromyographic signal respectively, and then the first action potential segment in the first corrected electromyographic signal and the second action potential segment in the second corrected electromyographic signal are determined.
For a second action potential segment in the second corrected electromyographic signal and a first action potential segment in the first corrected electromyographic signal, embodiments of the present invention extract a plurality of first features from the first action potential segment and extract a plurality of second features from the second action potential segment.
Specifically, the first action potential segment is split to obtain a plurality of first action potential sub-segments, and corresponding first features are extracted for each first action potential sub-segment; the second action potential segment is split to obtain a plurality of second action potential sub-segments, and corresponding second features are extracted for each second action potential sub-segment. The features include waveform length, number of zero crossings, number of slope sign changes, AR model coefficients and skewness.
For example, taking the first action potential segment in the first corrected electromyographic signal A, spanning the signal interval 1-1000, it can be split into 10 first action potential sub-segments: 1-100, 101-200, 201-300, … For each first action potential sub-segment, the corresponding first features are extracted, specifically the following:
A. Waveform length
WL = Σ_{i=1}^{K−1} |x_{i+1} − x_i|
The waveform length WL is the simple accumulated length of a K-point signal; it reflects the complexity of the electromyographic waveform and the combined effect of the signal's amplitude, frequency, duration and the like. The range of K can be the interval of each first action potential sub-segment, e.g. 1-100, 101-200, 201-300, …
B. Zero crossing point number
x_i · x_{i+1} ≤ 0, |x_i − x_{i+1}| ≥ ε;
A simple frequency statistic: count the number of times the signal waveform crosses the time axis (i.e. zero) over a period of time. Given two adjacent samples x_i and x_{i+1} that satisfy the above condition, the zero-crossing count is incremented by 1. The corresponding statistical interval can be the interval of each first action potential sub-segment, e.g. 1-100, 101-200, 201-300, …
C. Slope sign change number
(x_{i+1} − x_i) · (x_i − x_{i−1}) ≤ 0, |x_i − x_{i+1}| ≥ ε, |x_i − x_{i−1}| ≥ ε;
This statistic is another feature describing the frequency information of the signal. Given three consecutive sample values x_{i−1}, x_i, x_{i+1} that satisfy the above condition, the change count is incremented by 1. The corresponding statistical interval can be the interval of each first action potential sub-segment, e.g. 1-100, 101-200, 201-300, …
D. AR model coefficients
The AR model is a commonly used time series model as follows:
s(n) = Σ_{i=1}^{p} a_i · s(n − i) + w(n)
wherein s(n) represents the electromyographic signal, a_i (i = 1, 2, 3, … p) denotes the AR model coefficients, p denotes the model order, and w(n) denotes random white noise.
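The patent does not specify how the AR model coefficients a_i are estimated. One common approach is to solve the Yule-Walker equations, sketched below in pure Python; the model order p and the toy noise-free AR(1) signal are illustrative assumptions.

```python
# Yule-Walker estimation of AR coefficients for the model
# s(n) = sum_i a_i * s(n - i) + w(n).
def autocorr(x, lag):
    n = len(x)
    return sum(x[i] * x[i + lag] for i in range(n - lag)) / n

def yule_walker(x, p):
    """Solve the p-th order Yule-Walker system R a = r by Gaussian elimination."""
    r = [autocorr(x, k) for k in range(p + 1)]
    # Augmented system: R[i][j] = r[|i-j|], right-hand side r[1..p].
    A = [[r[abs(i - j)] for j in range(p)] + [r[i + 1]] for i in range(p)]
    for col in range(p):
        pivot = max(range(col, p), key=lambda k: abs(A[k][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for row in range(col + 1, p):
            f = A[row][col] / A[col][col]
            for j in range(col, p + 1):
                A[row][j] -= f * A[col][j]
    a = [0.0] * p
    for i in reversed(range(p)):
        a[i] = (A[i][p] - sum(A[i][j] * a[j] for j in range(i + 1, p))) / A[i][i]
    return a

# Toy AR(1) process with known coefficient 0.9 (noise-free for illustration).
signal = [1.0]
for _ in range(499):
    signal.append(0.9 * signal[-1])

coeffs = yule_walker(signal, p=1)
# coeffs[0] should be close to the true coefficient 0.9
```

In practice a higher order (e.g. p = 4) is often used for electromyographic features, and the resulting coefficient vector joins the other per-sub-segment features.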
E. Skewness
Sk = (1/n) Σ_{i=1}^{n} ((x_i − x_mean) / sd)^3
The skewness measures the direction and degree of asymmetry of the data, where Sk is the skewness, x_i is the i-th sample observation, n is the sample size, x_mean is the mean of the n sample observations, and sd is the sample standard deviation. The corresponding statistical interval can be the interval of each first action potential sub-segment, e.g. 1-100, 101-200, 201-300, …
Thus, for each first action potential sub-segment, the corresponding A, B, C, D, E features are extracted, and likewise the corresponding A, B, C, D, E features are extracted for each second action potential sub-segment.
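The A, B, C and E features above (waveform length, zero crossings, slope sign changes, skewness) can be computed per sub-segment as sketched below; the dead-zone threshold eps and the toy sub-segment values are illustrative assumptions.

```python
import math

def waveform_length(x):
    # A: accumulated length of the waveform over the sub-segment.
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def zero_crossings(x, eps=0.01):
    # B: count sign changes between adjacent samples, with a dead zone eps.
    return sum(1 for i in range(len(x) - 1)
               if x[i] * x[i + 1] <= 0 and abs(x[i] - x[i + 1]) >= eps)

def slope_sign_changes(x, eps=0.01):
    # C: count slope-sign reversals over three consecutive samples.
    return sum(1 for i in range(1, len(x) - 1)
               if (x[i + 1] - x[i]) * (x[i] - x[i - 1]) <= 0
               and abs(x[i] - x[i + 1]) >= eps and abs(x[i] - x[i - 1]) >= eps)

def skewness(x):
    # E: (1/n) * sum of cubed standardised deviations.
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return sum(((v - mean) / sd) ** 3 for v in x) / n

segment = [0.0, 1.0, -1.0, 2.0, -2.0, 0.5]   # toy sub-segment
features = (waveform_length(segment), zero_crossings(segment),
            slope_sign_changes(segment), skewness(segment))
```

Each sub-segment thus yields one scalar per feature (plus the AR coefficients of feature D), which together form that sub-segment's feature entry.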
S104, splicing the plurality of first features and the plurality of second features based on a preset splicing rule to generate a plurality of feature vectors;
Through the above steps, the corresponding A, B, C, D, E features are extracted for each first action potential sub-segment and for each second action potential sub-segment, so that each first feature and each second feature includes the corresponding A, B, C, D, E components; the plurality of first features and the plurality of second features are then spliced into a plurality of feature vectors based on a preset splicing rule.
Specifically, the first features and the second features are spliced in a one-to-one correspondence manner based on a preset splicing rule to generate a plurality of feature vectors.
For example, for the plurality of first features extracted from the first action potential segment in the first corrected electromyographic signal A and the plurality of second features extracted from the second action potential segment in the second corrected electromyographic signal B, as shown in fig. 4, the 1st first feature and the 1st second feature are directly spliced to generate the 1st feature vector, the 2nd first feature and the 2nd second feature are directly spliced to generate the 2nd feature vector, the 3rd first feature and the 3rd second feature are directly spliced to generate the 3rd feature vector, and so on; the resulting plurality of feature vectors may be denoted x_1, x_2, x_3, … x_N.
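The one-to-one splicing rule can be sketched as a simple concatenation; the toy per-channel feature lists below are illustrative assumptions.

```python
# One-to-one splicing: the i-th first feature (extensor channel) and the
# i-th second feature (flexor channel) are concatenated into the i-th
# combined feature vector.
first_features = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]       # toy extensor features
second_features = [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]   # toy flexor features

feature_vectors = [f1 + f2 for f1, f2 in zip(first_features, second_features)]
# feature_vectors[0] == [1.0, 2.0, 7.0, 8.0]
```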
S105, projecting each feature vector by using a feature projection calculation algorithm, and classifying and identifying gesture actions on each projection result by using a preset classifier.
The plurality of feature vectors may be denoted x_1, x_2, x_3, … x_N. Let the number of gesture action categories be NC, and let each gesture action category w_i contain nc_i feature vectors:
w_i = { x_1^(i), x_2^(i), …, x_{nc_i}^(i) }
the feature vector average for each gesture action category:
m_i = (1 / nc_i) Σ_{x ∈ w_i} x
overall eigenvector average:
m = (1 / N) Σ_{n=1}^{N} x_n
discrete matrices within each gesture motion category:
S_W = Σ_{i=1}^{NC} Σ_{x ∈ w_i} (x − m_i)(x − m_i)^T
a dispersion matrix between different gesture motion categories:
S_B = Σ_{i=1}^{NC} nc_i · (m_i − m)(m_i − m)^T
It can be shown that, provided the matrix S_W is nonsingular, the problem of maximizing J(W) can be converted into an eigenvalue decomposition problem, and the column vectors of the optimal LDA projection matrix W can be obtained by substituting S_W and S_B into the following formula:
S_W^(−1) S_B v_i = λ_i v_i, the v_i being the column vectors of W
the projection matrix W can be obtained through the above steps, and each feature vector is input into a feature projection calculation algorithm, where the feature projection calculation algorithm includes:
y = W^T x;
wherein, W is a projection matrix, x is the feature vector, and y is the projection result.
The feature projection calculation algorithm reduces the dimensionality of each feature vector and extracts the important information (such as discriminative information and variance information) from the redundant features, so as to improve the generalization performance of the subsequent classifier.
For each feature vector, a corresponding projection result can be obtained through the feature projection calculation algorithm, and gesture action classification and identification are carried out on each projection result by using a preset nearest neighbor classifier. After the feature vector is subjected to dimension reduction in this way, the KNN can be used for carrying out gesture action classification recognition on each projection result.
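As a simplified illustration of the project-then-classify pipeline, the sketch below uses the closed-form two-class Fisher/LDA direction w = S_W^(−1)(m_1 − m_2) (a special case of the multi-class eigendecomposition described above), projects each vector with y = w^T x, and applies a 1-nearest-neighbour rule on the projected values. The gesture labels and two-dimensional feature vectors are illustrative assumptions.

```python
# Two-class, two-feature LDA projection followed by 1-nearest-neighbour.
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def within_class_scatter(classes):
    s = [[0.0, 0.0], [0.0, 0.0]]
    for vectors in classes:
        m = mean(vectors)
        for v in vectors:
            d = [v[0] - m[0], v[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    return s

def lda_direction(class_a, class_b):
    # w = S_W^-1 (m_a - m_b), with the 2x2 inverse written out explicitly.
    sw = within_class_scatter([class_a, class_b])
    ma, mb = mean(class_a), mean(class_b)
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    return [(sw[1][1] * diff[0] - sw[0][1] * diff[1]) / det,
            (-sw[1][0] * diff[0] + sw[0][0] * diff[1]) / det]

def project(w, x):
    return w[0] * x[0] + w[1] * x[1]          # y = w^T x

class_a = [[1.0, 1.1], [1.2, 0.9], [0.9, 1.0]]   # toy "fist" vectors
class_b = [[3.0, 3.2], [3.1, 2.9], [2.9, 3.0]]   # toy "open hand" vectors
w = lda_direction(class_a, class_b)

# 1-nearest-neighbour on the projected values.
train = ([(project(w, x), "fist") for x in class_a] +
         [(project(w, x), "open") for x in class_b])

def classify(x):
    y = project(w, x)
    return min(train, key=lambda t: abs(t[0] - y))[1]
```

With more gesture categories, W has multiple columns (the eigenvectors of S_W^(−1) S_B) and the nearest-neighbour rule operates on the multi-dimensional projections.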
In the embodiment of the invention, in order to further improve the performance of the classifier, the false misclassification can be eliminated through Bayesian fusion.
For each feature vector input into the classifier, a conditional probability p(C_i | w_n) (i = 1, 2, 3, …, M) is obtained, where M represents the number of gesture action types and w_n is the n-th feature vector; p(C_i | w_n) represents the probability that the vector belongs to a certain class C_i.
When the first feature vector reaches the classifier, the probability that it belongs to a class C_i is denoted p(C_i | w_1); when the second feature vector reaches the classifier, the probability that it belongs to a class C_i is denoted p(C_i | w_2), and so on. The posterior probability that, after the second feature vector, the gesture belongs to class C_i is given by p(C_i | w_1, w_2), which by the Bayes rule is:
p(C_i | w_1, w_2) = p(w_1, w_2 | C_i) · p(C_i) / p(w_1, w_2)
Considering the randomness of the electromyographic signal and the disjoint positions of the feature vectors on the time axis, the correlation between feature vectors is very weak, which supports the statistical-independence assumption of the invention; the above formula can then be simplified as:
p(C_i | w_1, w_2) = p(w_1 | C_i) · p(w_2 | C_i) · p(C_i) / p(w_1, w_2)
= [p(C_i | w_1) p(w_1) / p(C_i)] · [p(C_i | w_2) p(w_2) / p(C_i)] · p(C_i) / p(w_1, w_2)
= Δ · p(C_i | w_1) · p(C_i | w_2) / p(C_i)
From the above result, the gesture-type posterior probability p(C_i | w_1, w_2) is practically equal to p(C_i | w_1) and p(C_i | w_2) multiplied by a constant value, and this can be extended to the Nth feature vector as follows:
p(C_i | w_1, w_2, …, w_N) = Δ · Π_{n=1}^{N} p(C_i | w_n)
where Δ is a normalization constant; the gesture action category with the highest probability is taken as the best classification for the feature vectors.
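The fusion result above — multiplying the per-vector probabilities p(C_i | w_n) across feature vectors and renormalising (the role of Δ) — can be sketched as follows; the per-vector probabilities for two gesture categories are illustrative assumptions.

```python
# Bayesian fusion of per-feature-vector class probabilities.
def fuse_posteriors(per_vector_probs):
    """per_vector_probs: list of [p(C_1|w_n), ..., p(C_M|w_n)], one per vector."""
    m = len(per_vector_probs[0])
    fused = [1.0] * m
    for probs in per_vector_probs:
        for i in range(m):
            fused[i] *= probs[i]       # product over feature vectors
    total = sum(fused)                 # normalisation plays the role of delta
    return [p / total for p in fused]

# Three feature vectors, two gesture categories.
frames = [[0.6, 0.4], [0.7, 0.3], [0.55, 0.45]]
posterior = fuse_posteriors(frames)
best = max(range(len(posterior)), key=lambda i: posterior[i])
```

Fusing evidence this way lets a category that is only mildly favoured per vector dominate once several vectors agree, which is how misclassifications of individual vectors get eliminated.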
In addition, when switching from one gesture action category to another, a switching delay arises, so a weight is added to the above equation as follows:
[weighted form of the above posterior probability formula, with weights k_j; equation image not reproduced]
The posterior probability that each feature vector belongs to any gesture action classification is calculated through the preset posterior probability calculation formula, the gesture action classification corresponding to the highest posterior probability is determined, and each feature vector is classified accordingly.
For example, the posterior probabilities that the feature vector w_N belongs to each gesture action classification are calculated through the above preset posterior probability calculation formula: p(C_1 | w_1, w_2, w_3, … w_N), p(C_2 | w_1, w_2, w_3, … w_N), p(C_3 | w_1, w_2, w_3, … w_N), … p(C_i | w_1, w_2, w_3, … w_N); the gesture action classification corresponding to the highest posterior probability is determined as the gesture action classification of the feature vector w_N.
Wherein p(C_i | w_1, w_2, w_3, … w_N) is the posterior probability, p(C_i | w_n) is the conditional probability that each feature vector belongs to any gesture action classification, Δ is a normalization constant, and k_j is obtained from the following function:
[function defining the weights k_j; equation image not reproduced]
where j = 1, 2, 3, …, M + 1.
Through the description of the technical scheme provided by the embodiment of the invention, for any gesture, a first electromyographic signal corresponding to a first position of an arm part and a second electromyographic signal corresponding to a second position are acquired at any time, a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal are determined by acquiring the first electromyographic signal and the second electromyographic signal, a plurality of first features are extracted from the first action potential segment, a plurality of second features are extracted from the second action potential segment, a plurality of first features and a plurality of second features are spliced to generate a plurality of feature vectors based on preset splicing rules, each feature vector is projected by using a feature projection calculation algorithm, and gesture action classification identification is performed on each projection result by using a preset classifier.
The first electromyographic signals corresponding to the first position and the second electromyographic signals corresponding to the second position of the arm part in different gesture actions are acquired for a plurality of times, the first electromyographic signals corresponding to the first position and the second electromyographic signals corresponding to the second position of the arm part in each acquisition are processed in a characteristic extraction and projection mode, so that different gesture actions are separated to the greatest extent, the control of multiple degrees of freedom can be realized, and the flexible motion function of the current multiple degrees of freedom electromyographic artificial limb is adapted. In addition, through Bayesian fusion post-processing, the performance of the classifier is improved by eliminating false misclassification, accurate conversion delay from one class to another class is avoided through a weight scheme, electromyographic signals belonging to different motions are correctly classified to the maximum extent, the control of multiple degrees of freedom is realized, and higher accuracy is obtained.
Corresponding to the above method embodiment, the embodiment of the present invention further provides an electromyographic signal processing device, as shown in fig. 5, where the device may include: the device comprises a signal acquisition module 510, a potential segment determination module 520, a feature extraction module 530, a feature stitching module 540, a vector projection module 550 and a classification recognition module 560.
The signal obtaining module 510 is configured to obtain a first electromyographic signal and a second electromyographic signal, where, for any gesture action included in a preset gesture action set, the first electromyographic signal corresponding to a first position of an arm part and the second electromyographic signal corresponding to a second position are acquired at any one time;
a potential segment determining module 520, configured to determine a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal;
a feature extraction module 530, configured to extract a plurality of first features from the first action potential segment and extract a plurality of second features from the second action potential segment;
the feature stitching module 540 is configured to stitch the plurality of first features and the plurality of second features to generate a plurality of feature vectors based on a preset stitching rule;
A vector projection module 550, configured to project each of the feature vectors using a feature projection calculation algorithm;
the classification recognition module 560 is configured to perform gesture classification recognition on each projection result by using a preset classifier.
The embodiment of the invention also provides an intelligent wearable device, as shown in fig. 6, which comprises a processor 61, a communication interface 62, a memory 63 and a communication bus 64, wherein the processor 61, the communication interface 62 and the memory 63 complete the communication with each other through the communication bus 64,
a memory 63 for storing a computer program;
the processor 61 is configured to execute the program stored in the memory 63, and implement the following steps:
acquiring a first electromyographic signal and a second electromyographic signal, wherein the first electromyographic signal and the second electromyographic signal are included in a preset gesture action set, and the first electromyographic signal and the second electromyographic signal corresponding to a first position of an arm part and a second position of the arm part are acquired at any time according to any gesture action; determining a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal; extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment; splicing the first features and the second features based on a preset splicing rule to generate a plurality of feature vectors; and projecting each feature vector by using a feature projection calculation algorithm, and classifying and identifying gesture actions on each projection result by using a preset classifier.
The communication bus mentioned by the smart wearable device may be a peripheral component interconnect standard (Peripheral Component Interconnect, abbreviated as PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, abbreviated as EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, the figures are shown with only one bold line, but not with only one bus or one type of bus.
The communication interface is used for communication between the intelligent wearable device and other devices.
The memory may include random access memory (Random Access Memory, RAM) or non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment of the present invention, there is also provided a storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the electromyographic signal processing method of any of the above embodiments.
In a further embodiment of the present invention, a computer program product comprising instructions which, when run on a computer, cause the computer to perform the electromyographic signal processing method of any of the above embodiments is also provided.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another, for example, from one website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means. The storage media may be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (13)

1. A method of processing an electromyographic signal, the method comprising:
acquiring a first electromyographic signal and a second electromyographic signal, wherein the first electromyographic signal and the second electromyographic signal are included in a preset gesture action set, and the first electromyographic signal and the second electromyographic signal corresponding to a first position of an arm part and a second position of the arm part are acquired at any time according to any gesture action;
determining a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal;
extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment;
splicing the first features and the second features based on a preset splicing rule to generate a plurality of feature vectors;
and projecting each feature vector by using a feature projection calculation algorithm, and classifying and identifying gesture actions on each projection result by using a preset classifier.
2. The method of claim 1, wherein the first electromyographic signal corresponding to the first position and the second electromyographic signal corresponding to the second position of the arm portion acquired at any one time comprises:
and the first myoelectric signal corresponding to extensor at the forearm part of the arm and the second myoelectric signal corresponding to flexor are acquired at any time.
3. The method of claim 1, wherein the determining a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal comprises:
preprocessing the first electromyographic signal and the second electromyographic signal respectively to generate a corresponding first preprocessed electromyographic signal and a corresponding second preprocessed electromyographic signal, wherein the preprocessing comprises power interference removal and band-pass filtering;
determining a first action potential segment in the first pre-processed electromyographic signal and a second action potential segment in the second pre-processed electromyographic signal.
4. A method according to claim 3, wherein said determining a first action potential segment in said first pre-processed electromyographic signal and a second action potential segment in said second pre-processed electromyographic signal comprises:
Correcting the first preprocessed electromyographic signal and the second preprocessed electromyographic signal respectively to obtain a first corrected electromyographic signal and a second corrected electromyographic signal;
respectively carrying out integral operation on the first correcting electromyographic signal and the second correcting electromyographic signal, and extracting a plurality of corresponding first envelope signals and a plurality of corresponding second envelope signals;
a first action potential segment in the first corrected electromyographic signal is determined based on the plurality of first envelope signals, and a second action potential segment in the second corrected electromyographic signal is determined based on the plurality of second envelope signals.
5. The method of claim 4, wherein the determining a first action potential segment in the first corrected electromyographic signal based on the plurality of first envelope signals and a second action potential segment in the second corrected electromyographic signal based on the plurality of second envelope signals comprises:
determining a first starting position and a first ending position of a first action potential segment in the first corrected electromyographic signal based on a plurality of the first envelope signals;
determining a first action potential segment in the first corrected electromyographic signal based on the first starting position and the first ending position;
Determining a second starting position and a second ending position of a second action potential segment in the second corrected electromyographic signal based on a plurality of the second envelope signals;
a second action potential segment in the second corrected electromyographic signal is determined based on the second starting position and the second ending position.
6. The method of claim 1, wherein the extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment comprises:
splitting the first action potential segment to obtain a plurality of first action potential subsections;
extracting corresponding first features for each first action potential subsection;
splitting the second action potential segment to obtain a plurality of second action potential subsections;
extracting corresponding second features for each second action potential subsection;
the characteristics comprise wavelength, zero crossing point number, slope sign change number, AR model coefficient and skewness.
7. The method of claim 1, wherein said projecting each of said feature vectors using a feature projection calculation algorithm comprises:
inputting each of the feature vectors into a feature projection calculation algorithm, the feature projection calculation algorithm comprising:
y = W^T x;
Wherein, W is a projection matrix, x is the feature vector, and y is the projection result.
8. The method of claim 1, wherein the performing gesture classification recognition on each projection result using a preset classifier comprises:
and carrying out gesture action classification and identification on each projection result by using a preset nearest neighbor classifier.
9. The method according to any one of claims 1 to 8, further comprising:
based on a preset posterior probability calculation formula, calculating posterior probability of any gesture classification to which each feature vector belongs;
and determining gesture action classification corresponding to the highest posterior probability, and classifying the gesture action corresponding to each feature vector.
10. The method of claim 9, wherein the predetermined posterior probability calculation formula comprises:
p(C_i | w_1, w_2, w_3, …, w_N) = (1/δ) · Σ_{j=1}^{N} k_j · p(C_i | w_j)
wherein p(C_i | w_1, w_2, w_3, …, w_N) is the posterior probability, p(C_i | w_n) is the conditional probability that each feature vector belongs to any gesture action classification, δ is a normalization constant, w_n is each feature vector, C_i (i = 1, 2, 3, …, M) is the gesture action type, M represents the number of gesture action types, w_N is the N-th feature vector, and k_j is the weight.
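Reading claims 9 and 10 as a normalized weighted combination of the per-feature-vector conditional probabilities, the fusion-and-argmax step could be sketched as follows (the weighted-sum structure, function names, and array layout are assumptions inferred from the symbols the claim defines, not taken verbatim from the patent's figure):

```python
import numpy as np

def fused_posterior(cond_probs, weights):
    """Fuse per-feature-vector conditional probabilities p(C_i | w_j).

    cond_probs: array of shape (N, M); row j holds p(C_i | w_j) over the
    M gesture classes. weights: the k_j of claim 10, shape (N,).
    Normalization over classes plays the role of the constant delta."""
    p = cond_probs.T @ weights      # weighted sum over feature vectors
    return p / p.sum()

def classify_gesture(cond_probs, weights, classes):
    """Pick the gesture class with the highest fused posterior (claim 9)."""
    return classes[int(np.argmax(fused_posterior(cond_probs, weights)))]
```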
11. An electromyographic signal processing device, comprising:
a signal acquisition module, configured to acquire a first electromyographic signal and a second electromyographic signal, wherein for any gesture action in a preset gesture action set, the first electromyographic signal corresponding to a first position of an arm and the second electromyographic signal corresponding to a second position of the arm are acquired at any moment;
the potential segment determining module is used for determining a first action potential segment in the first electromyographic signal and a second action potential segment in the second electromyographic signal;
a feature extraction module for extracting a plurality of first features from the first action potential segment and a plurality of second features from the second action potential segment;
the feature splicing module is used for splicing the plurality of first features and the plurality of second features based on a preset splicing rule to generate a plurality of feature vectors;
the vector projection module is used for projecting each characteristic vector by utilizing a characteristic projection calculation algorithm;
and the classification recognition module is used for carrying out gesture action classification recognition on each projection result by utilizing a preset classifier.
12. An intelligent wearable device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-10 when executing the program stored in the memory.
13. A storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-10.
CN202010378837.5A 2020-05-07 2020-05-07 Myoelectric signal processing method and device, intelligent wearable equipment and storage medium Active CN111603162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378837.5A CN111603162B (en) 2020-05-07 2020-05-07 Myoelectric signal processing method and device, intelligent wearable equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111603162A CN111603162A (en) 2020-09-01
CN111603162B true CN111603162B (en) 2023-05-30

Family

ID=72194833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010378837.5A Active CN111603162B (en) 2020-05-07 2020-05-07 Myoelectric signal processing method and device, intelligent wearable equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111603162B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113288532B (en) * 2021-05-31 2023-04-07 北京京东乾石科技有限公司 Myoelectric control method and device
CN113616222A (en) * 2021-07-28 2021-11-09 复旦大学 Occlusion movement condition monitoring and analyzing system based on high-density myoelectricity acquisition array
CN114138111B (en) * 2021-11-11 2022-09-23 深圳市心流科技有限公司 Full-system control interaction method of myoelectric intelligent bionic hand
CN113986017B (en) * 2021-12-27 2022-05-17 深圳市心流科技有限公司 Myoelectric gesture template generation method and device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110420025A (en) * 2019-09-03 2019-11-08 北京海益同展信息科技有限公司 Surface electromyogram signal processing method, device and wearable device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4219151B2 (en) * 2002-10-23 2009-02-04 グローリー株式会社 Image collation apparatus, image collation method, and image collation program
JP5285575B2 (en) * 2009-11-04 2013-09-11 日本放送協会 Human behavior determination device and program thereof
TWI489317B (en) * 2009-12-10 2015-06-21 Tatung Co Method and system for operating electric apparatus
CN101961529B (en) * 2010-08-13 2013-06-05 中国科学院深圳先进技术研究院 Myoelectricity feedback training and function evaluation teleoperation device and method
CA2835460C (en) * 2011-05-10 2018-05-29 Foteini AGRAFIOTI System and method for enabling continuous or instantaneous identity recognition based on physiological biometric signals
CN103440498A (en) * 2013-08-20 2013-12-11 华南理工大学 Surface electromyogram signal identification method based on LDA algorithm
US9483123B2 (en) * 2013-09-23 2016-11-01 Thalmic Labs Inc. Systems, articles, and methods for gesture identification in wearable electromyography devices
CN104660549B (en) * 2013-11-19 2017-12-15 深圳市腾讯计算机系统有限公司 Auth method and device
US10327670B2 (en) * 2014-03-26 2019-06-25 GestureLogic Inc. Systems, methods and devices for exercise and activity metric computation
CN104572029B (en) * 2014-12-26 2017-06-30 中国科学院自动化研究所 A kind of sliceable property of state machine and the regular decision method of splicing and device
CN105361880B (en) * 2015-11-30 2018-06-26 上海乃欣电子科技有限公司 The identifying system and its method of muscular movement event
CN107273798A (en) * 2017-05-11 2017-10-20 华南理工大学 A kind of gesture identification method based on surface electromyogram signal
CN108416367B (en) * 2018-02-08 2021-12-10 南京理工大学 Sleep staging method based on multi-sensor data decision-level fusion
CN108537123A (en) * 2018-03-08 2018-09-14 四川大学 Electrocardiogram recognition method based on multi-feature extraction
CN109859570A (en) * 2018-12-24 2019-06-07 中国电子科技集团公司电子科学研究院 A kind of brain training method and system
CN110825232B (en) * 2019-11-07 2022-10-21 中国航天员科研训练中心 Gesture recognition human-computer interaction device based on aerospace medical supervision and medical insurance signals



Similar Documents

Publication Publication Date Title
CN111603162B (en) Myoelectric signal processing method and device, intelligent wearable equipment and storage medium
Subasi et al. Surface EMG signal classification using TQWT, Bagging and Boosting for hand movement recognition
Betthauser et al. Stable responsive EMG sequence prediction and adaptive reinforcement with temporal convolutional networks
Abbaspour et al. Evaluation of surface EMG-based recognition algorithms for decoding hand movements
CN111844032B (en) Electromyographic signal processing and exoskeleton robot control method and device
CN111700718B (en) Method and device for recognizing holding gesture, artificial limb and readable storage medium
Zabidi et al. Detection of asphyxia in infants using deep learning convolutional neural network (CNN) trained on Mel frequency cepstrum coefficient (MFCC) features extracted from cry sounds
CN111103976A (en) Gesture recognition method, device and electronic device
KR20170091963A (en) Gesture classification apparatus and method using electromyogram signals
Wang et al. Deep Feature Learning Using Target Priors with Applications in ECoG Signal Decoding for BCI.
Jahani Fariman et al. Simple and computationally efficient movement classification approach for EMG-controlled prosthetic hand: ANFIS vs. artificial neural network
Antonius et al. Electromyography gesture identification using CNN-RNN neural network for controlling quadcopters
Hossain et al. Left and right hand movements EEG signals classification using wavelet transform and probabilistic neural network
Zhang et al. Ready for use: subject-independent movement intention recognition via a convolutional attention model
Kaburlasos The Lattice Computing (LC) Paradigm.
Koçer et al. Classifying neuromuscular diseases using artificial neural networks with applied Autoregressive and Cepstral analysis
CN117171708B (en) Multimode fusion method, system, equipment and medium in hybrid BCI system
Dolopikos et al. Electromyography signal-based gesture recognition for human-machine interaction in real-time through model calibration
Hlavica et al. Assessment of Parkinson's disease progression using neural network and ANFIS models
Andronache et al. Towards extending real-time EMG-based gesture recognition system
CN111714121A (en) Electromyographic data classification model construction method, electromyographic data classification model classification device and server
CN118692691B (en) Intelligent pre-examination and triage system and method for emergency patients
Ison et al. Beyond user-specificity for emg decoding using multiresolution muscle synergy analysis
Hayashi et al. A Neural Network Based on the Johnson S U Translation System and Related Application to Electromyogram Classification
Pandian et al. Effect of data preprocessing in the detection of epilepsy using machine learning techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant