
CN114469465B - A control method, device and medium based on intelligent prosthesis - Google Patents

A control method, device and medium based on intelligent prosthesis

Info

Publication number
CN114469465B
Authority
CN
China
Prior art keywords
action
action execution
execution target
grabbing
mode
Prior art date
Legal status
Active
Application number
CN202111628374.4A
Other languages
Chinese (zh)
Other versions
CN114469465A (en)
Inventor
齐鹏涛
商广勇
胡立军
李佳
Current Assignee
Inspur Industrial Internet Co Ltd
Original Assignee
Inspur Industrial Internet Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Industrial Internet Co Ltd
Priority to CN202111628374.4A
Publication of CN114469465A
Application granted
Publication of CN114469465B
Status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F: FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 2/00: Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F 2/50: Prostheses not implantable in the body
    • A61F 2/68: Operating or control means
    • A61F 2/70: Operating or control means electrical
    • A61F 2/72: Bioelectric control, e.g. myoelectric
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14: Arrangements specially adapted for eye photography
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F: FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 2/00: Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F 2/50: Prostheses not implantable in the body
    • A61F 2/68: Operating or control means
    • A61F 2002/6809: Operating or control means acoustic

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Transplantation (AREA)
  • Vascular Medicine (AREA)
  • Prostheses (AREA)

Abstract

The present application discloses a control method, device, and medium based on an intelligent prosthesis. The method includes: the intelligent prosthesis collects electromyographic signals at the patient's residual limb through an electromyographic signal acquisition device, processes and analyzes the electromyographic signals, and determines a first action mode according to the analysis results; it collects environmental images and human eye images through an image acquisition device and determines the action execution target in the environmental image according to the human eye image; it performs recognition processing on the action execution target through a pre-trained first recognition model to determine a second action mode; and it performs recognition processing on the action execution target through a pre-trained second recognition model to determine the action execution strength. The grasping target is accurately identified, and the intelligent prosthesis can be controlled to apply different grasping strengths to different grasping targets, avoiding damage to fragile items caused by an excessive grip; at the same time, the grasping strength is adjusted in real time, which also keeps the grasped target from falling.

Description

Control method, equipment and medium based on intelligent artificial limb
Technical Field
The application relates to the field of intelligent artificial limbs, in particular to a control method, equipment and medium based on an intelligent artificial limb.
Background
With the development of society and the continuous rise of industrialization, the number of patients who lose limbs to machinery or traffic accidents keeps growing, and amputation brings many inconveniences to their daily lives. Accordingly, to address these inconveniences, more and more intelligent prostheses have been developed.
At present, an intelligent artificial limb usually works by myoelectric sensing. When the muscles of the patient's residual limb contract, a complex biochemical reaction occurs and produces a small, measurable potential difference on the skin surface. This myoelectric signal is picked up by a micro-sensor and amplified by an amplifier in the electrode into a control signal, which is input to a microcomputer; the microcomputer then issues movement instructions and, through a drive system such as a micro-motor, opens or closes the joints of the intelligent artificial limb.
However, when a patient grabs an article with an intelligent artificial limb, the grabbing force is difficult to control, so fragile articles are easily broken by an excessive grip; at the same time, the patient also has difficulty perceiving how hard the intelligent artificial limb is gripping.
Disclosure of Invention
In order to solve the above problems, the present application provides a control method, device and medium based on intelligent artificial limb, including:
In a first aspect, the present application proposes a control method based on an intelligent prosthesis, comprising: the intelligent artificial limb acquires the electromyographic signals at the stump of the patient through an electromyographic signal acquisition device, and processes and analyzes the electromyographic signals to obtain an analysis result; determining a first action mode according to the analysis result, wherein the first action mode comprises at least one of the following: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode; collecting an environment image and a human eye image through an image acquisition device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image; performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode comprises at least one of the following: side grabbing, upper grabbing and bottom grabbing; and performing recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining the action execution strength according to the second recognition result.
In one example, the recognition processing is performed on the action execution target through the pre-trained first recognition model to obtain a first recognition result, and the second action mode is determined according to the first recognition result, which specifically includes: inputting an environment image containing the action execution target into the pre-trained first recognition model, so as to extract, through the first recognition model, the graphic features of the action execution target and the contact surface features surrounding the action execution target, wherein the contact surface features are the appearance features of a contact object supporting or pulling the action execution target; constructing a three-dimensional image corresponding to the action execution target according to the graphic features, and acquiring appearance parameters of the three-dimensional image; determining a selection range of the second action mode according to the appearance parameters and pre-stored hand parameters of the intelligent artificial limb, wherein the selection range comprises: side grabbing, upper grabbing and bottom grabbing; and selecting an optimal execution mode as the second action mode within the selection range according to the contact surface features.
In one example, the recognition processing is performed on the action execution target through the pre-trained second recognition model to obtain a second recognition result, and the action execution strength is determined according to the second recognition result, which specifically includes: inputting an environment image containing the action execution target into the pre-trained second recognition model to extract characteristic parameters of the action execution target through the second recognition model; according to the characteristic parameters, querying a database for article parameters whose similarity to the characteristic parameters reaches a preset threshold, wherein the article parameters comprise at least one of the following: article type, article weight, and article material; and analyzing and calculating the article parameters to obtain the minimum grabbing static friction force of the action execution target, and taking the minimum grabbing static friction force as the action execution force.
In one example, after analyzing and calculating the article parameters to obtain the minimum grabbing static friction force of the action execution target and taking the minimum grabbing static friction force as the action execution force, the method further includes: obtaining a force adjustment value according to the minimum grabbing static friction force and a pre-stored number of force adjustment gears; attempting to grab the action execution target according to the first action mode, the second action mode and the action execution force; collecting slip signals in real time through a slip sensor of the intelligent prosthetic hand, and judging whether sliding exists according to the slip signals; if not, keeping the action execution force unchanged; if yes, increasing the action execution force step by step by the force adjustment value, according to the pre-stored number of force adjustment gears, until the slip signals indicate that sliding no longer exists.
In one example, the image acquisition device on the intelligent artificial limb acquires an environment image and a human eye image, and determines an action execution target in the environment image according to the human eye image, which specifically comprises: collecting human eye images of a patient through an image collecting device on the intelligent artificial limb, obtaining a cross section and a vertical section of an eyeball of the patient according to the human eye images, and taking the extending direction of the cross section as a gazing direction; constructing a reference shaft parallel to the vertical section by taking the position of the image acquisition device as an origin; obtaining a target acquisition direction of the image acquisition device according to the gazing direction and the reference axis, and controlling the image acquisition device to rotate to the target acquisition direction; and acquiring an environment image at the target acquisition direction by the image acquisition device, and taking an object target which coincides with the center point of the environment image as an action execution target.
In one example, determining the first action mode according to the analysis result specifically includes: according to the analysis result, inquiring an action mode with the highest matching degree with the analysis result in a database through a pre-trained fuzzy matching model, wherein the action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode; and taking the action mode with the highest matching degree as a first action mode.
In one example, the intelligent artificial limb collects the electromyographic signals of the stump of the patient through the electromyographic signal collection device, and processes and analyzes the electromyographic signals, and before the analysis result is obtained, the method further comprises: the intelligent artificial limb acquires the action intention of a patient and acquires a plurality of training electromyographic signals at the stump of the patient according to the action intention, wherein the action intention at least comprises one of the following: ball grabbing, column grabbing and fingertip pinching; inputting the training electromyographic signals and the corresponding action intentions into an initial fuzzy matching model to perform fuzzy rule training to obtain a plurality of fuzzy rules; and constructing a fuzzy matching model according to the fuzzy rules.
In one example, after performing recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result and determining action execution strength according to the second recognition result, the method further includes: obtaining corresponding bioelectric strength according to the action execution force; simulating a stimulation current consistent with the bioelectric strength through an electrode unit on the intelligent artificial limb; the stimulation current is directed to the patient at the stump.
In another aspect, the present application provides a control device based on an intelligent prosthesis, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the following instructions: collecting an electromyographic signal at the residual limb of a patient through an electromyographic signal collecting device, and processing and analyzing the electromyographic signal to obtain an analysis result; determining a first action mode according to the analysis result, wherein the first action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode; collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image; performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing; and carrying out recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result.
In another aspect, the present application provides a non-volatile computer storage medium storing computer-executable instructions configured to: collecting an electromyographic signal at the residual limb of a patient through an electromyographic signal collecting device, and processing and analyzing the electromyographic signal to obtain an analysis result; determining a first action mode according to the analysis result, wherein the first action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode; collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image; performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing; and carrying out recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result.
The control method, device, and medium based on an intelligent artificial limb provided by the application have the following beneficial effects: the grabbing target is accurately identified, and the intelligent artificial limb can be controlled to apply different grabbing forces to different grabbing targets, avoiding damage to fragile articles caused by an excessive grip; at the same time, the grabbing force is adjusted in real time, which also prevents the grabbing target from dropping. In addition, a corresponding stimulation electric signal is sent to the patient according to the grabbing force, so that the patient can perceive the current grabbing force.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic flow chart of a control method based on an intelligent artificial limb in an embodiment of the application;
FIG. 2 is a schematic diagram of an image capturing device according to an embodiment of the present application;
fig. 3 is a schematic diagram of a control device based on an intelligent artificial limb according to an embodiment of the present application.
Description of the drawings:
10-intelligent artificial limb, 20-image acquisition device, 30-eyeball, 40-cross section, 50-vertical section, 60-action execution target, 70-reference axis, 80-target acquisition direction and 90-rotation angle.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that, in the control method based on the intelligent artificial limb according to the present application, the control method may be stored in a corresponding system in a program or algorithm mode, the system may be installed in the intelligent artificial limb, and in order to support the operation of the system, the intelligent artificial limb should be equipped with corresponding elements, such as a processor, a memory, a communication module, etc., so as to implement support for the program or algorithm. Meanwhile, other hardware should be included in the intelligent artificial limb to support all the technical schemes described in the application, for example: an electromyographic signal acquisition device, an image acquisition device, a sliding sense sensor, an electrode unit and the like.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, a control method based on an intelligent artificial limb provided by an embodiment of the application includes:
S101: the intelligent artificial limb collects the electromyographic signals of the stump of the patient through the electromyographic signal collecting device, and processes and analyzes the electromyographic signals to obtain an analysis result.
The intelligent artificial limb is connected with the residual limb of the patient and has a myoelectric sensing function. It is provided with an electromyographic signal acquisition device that can be attached to the skin of the patient's residual limb; when the patient forms a grabbing intention, the corresponding electromyographic signals on the residual limb can be collected.
Further, a processor is arranged in the intelligent artificial limb to analyze and process the collected electromyographic signals; before processing, the electromyographic signals are amplified by an amplifier to ensure sufficient signal strength.
Further, an analysis result is obtained.
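For illustration only (this sketch is not part of the patent text), the processing-and-analysis step might resemble the following Python fragment, which rectifies one channel of amplified EMG samples and computes a few common time-domain descriptors; the window length and the specific features are assumptions, not the patent's specification:

```python
import numpy as np

def emg_features(raw_emg: np.ndarray, window: int = 200) -> dict:
    """Hypothetical analysis of one amplified EMG channel.

    raw_emg: 1-D array of amplified samples
    window:  samples per RMS-envelope window (assumed)
    """
    centered = raw_emg - raw_emg.mean()            # remove DC offset
    rectified = np.abs(centered)                   # full-wave rectification
    # Moving root-mean-square envelope over the analysis window
    rms = np.sqrt(np.convolve(rectified ** 2, np.ones(window) / window, mode="valid"))
    return {
        "mav": float(rectified.mean()),            # mean absolute value
        "rms_peak": float(rms.max()),              # peak of the RMS envelope
        "zero_crossings": int((np.diff(np.sign(centered)) != 0).sum()),
    }
```

The resulting feature dictionary stands in for the "analysis result" that the fuzzy matching in S102 consumes.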
S102: determining a first action mode according to the analysis result, wherein the first action mode at least comprises one of the following steps: a bulb grabbing mode, a column grabbing mode and a fingertip pinching mode.
Specifically, according to the analysis result, the intelligent artificial limb queries an action mode with the highest matching degree with the analysis result in the database through a pre-trained fuzzy matching model, and the action mode at least comprises one of the following steps: a bulb grabbing mode, a column grabbing mode and a fingertip pinching mode.
Further, the action mode with the highest matching degree is taken as the first action mode, that is, any one of the bulb grabbing mode, the column grabbing mode, and the fingertip pinching mode.
It should be noted that, the intelligent artificial limb may be provided with a memory as a database, and the intelligent artificial limb may also establish a wireless connection with the remote server and the database through the communication module, so as to achieve access to the database, and the specific implementation form is not specifically limited in the embodiment of the present application.
In addition, when a patient consciously uses a certain grabbing mode, a specific myoelectric signal is generated at the stump; however, for the same grabbing mode the generated myoelectric signals are not completely consistent, so in the embodiment of the application a fuzzy matching model is introduced to realize this scheme.
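A minimal sketch of what such fuzzy matching could look like, assuming each action mode is stored as a prototype feature vector with a Gaussian membership function; the prototype table and the membership shape are illustrative assumptions, not the patent's actual model:

```python
import numpy as np

ACTION_MODES = ("ball_grab", "column_grab", "fingertip_pinch")

def match_action_mode(features, rules):
    """Return the action mode whose stored rule gives the highest membership.

    features: 1-D feature vector from the EMG analysis
    rules:    {mode: (prototype_vector, spread)} built during training
    """
    def membership(x, proto, spread):
        # Gaussian membership: 1.0 at the prototype, decaying with distance
        return float(np.exp(-np.linalg.norm(x - proto) ** 2 / (2 * spread ** 2)))

    scores = {m: membership(np.asarray(features), *rules[m]) for m in ACTION_MODES}
    return max(scores, key=scores.get)
```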
S103: and acquiring an environment image and a human eye image through the image acquisition device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image.
Specifically, as shown in fig. 2, the intelligent prosthesis 10 is provided with an image acquisition device 20, where the image acquisition device 20 includes a camera and a rotating device, and the camera can rotate according to the rotating device to acquire images at different angles.
It should be noted that, the image acquisition device 20 herein can automatically track the position of the eyeball 30 of the patient after the intelligent artificial limb determines the first action mode according to the analysis result, that is, the position of the eyeball 30 is determined by a 360-degree rotation mode, and the human eye image can be acquired after the camera is determined to be aligned with the eyeball 30.
Further, the intelligent prosthesis 10 collects an image of the eye of the patient, that is, an image of the eyeball 30, through the image collecting device 20 on the intelligent prosthesis, and the collecting mode can adopt characteristic identification of the eyeball so as to align the camera.
It should be noted that the intelligent artificial limb 10 should contain a processing unit with sufficient computing power, storing a program for recognizing eyeball features; that is, the processing unit controls the image acquisition device 20 and, after determining that the camera is aligned with the eyeball 30, controls the rotating device to stop. In addition, if the processing unit does not recognize the eyeball 30 after rotating the device a full 360 degrees, for example because the patient's arm is currently hanging down so that the eyeball 30 cannot be seen, a voice prompt device arranged on the intelligent artificial limb 10 asks the patient to lift the residual limb, raising the intelligent artificial limb 10 until the human eye image is acquired in the manner above; alternatively, the intelligent artificial limb 10 is raised a certain distance automatically under the control of the processing unit until the human eye image is acquired.
It should be further noted that the above technical solution relies on the processing unit inside the intelligent artificial limb 10 for the corresponding processing and calculation; in addition, a connection with a remote server may be established through the communication module disposed inside the intelligent artificial limb 10, so that the computing power of the remote server supports and implements the above technical solution.
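The rotate-and-detect behavior described above might be sketched as follows; `capture`, `rotate`, and `detect_eye` are hypothetical stand-ins for the camera, the rotating device, and an eyeball-feature recognizer, none of which the patent specifies at this level:

```python
def scan_for_eye(capture, rotate, detect_eye, step_deg=15.0):
    """Rotate the camera in fixed steps until an eyeball is detected.

    capture():     returns the current camera frame
    rotate(deg):   turns the camera by deg degrees
    detect_eye(f): True if an eyeball is visible in frame f
    Returns the angle at which the eye was found, or None after a full
    revolution (the caller then voice-prompts the patient to lift the limb).
    """
    angle = 0.0
    while angle < 360.0:
        if detect_eye(capture()):
            return angle          # stop rotating; camera is aligned
        rotate(step_deg)
        angle += step_deg
    return None
```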
Further, the cross section 40 and the vertical section 50 of the eyeball are obtained from the human eye image, and the extending direction of the cross section 40 is taken as the gazing direction; the extending direction here is the forward-looking direction of the eyeball, i.e., the direction that will intersect the action execution target 60.
Further, the intelligent prosthesis constructs a reference axis 70 parallel to the vertical section with the position of the image acquisition device as the origin.
Furthermore, the target acquisition direction 80 of the image acquisition device is obtained according to the gazing direction (i.e., the extending direction of the cross section 40) and the reference axis 70: taking the reference axis 70 as the baseline, the rotation angle 90 is calculated by a trigonometric function, and the camera is rotated by the rotation angle 90 to align with the target acquisition direction 80.
Further, the image acquisition device 20 acquires the environmental image in the target acquisition direction 80, and the object coinciding with the center point of the environmental image is taken as the action execution target. That is, when the image acquisition device 20 has rotated to the target acquisition direction 80, the direction in which the camera is aimed intersects the patient's gaze direction, and a certain action execution target lies at the center of the acquired environmental image.
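One plausible two-dimensional reading of this geometry (an assumption for illustration; the patent does not give the formula): place the camera at the origin with the reference axis as the x-axis, treat the eye position and gazing direction as known, pick a point along the gaze ray, and aim the camera at it with `atan2`:

```python
import math

def camera_rotation_angle(eye_pos, gaze_dir, reach=0.6):
    """Rotation angle 90 (degrees) from the reference axis 70, sketched in 2-D.

    eye_pos:  (x, y) of the eyeball 30 in the camera frame (camera at origin)
    gaze_dir: unit vector of the gazing direction (cross section 40)
    reach:    assumed distance along the gaze ray to the target (meters)
    """
    # Presumed location of the action execution target 60 on the gaze ray
    tx = eye_pos[0] + reach * gaze_dir[0]
    ty = eye_pos[1] + reach * gaze_dir[1]
    return math.degrees(math.atan2(ty, tx))

# e.g. eye 10 cm above the camera, gazing 45 degrees downward-forward
angle = camera_rotation_angle((0.0, 0.10),
                              (math.cos(-math.pi / 4), math.sin(-math.pi / 4)))
```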
S104: performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing.
Specifically, the intelligent prosthesis inputs an environmental image containing the action execution target to a first recognition model trained in advance to extract the graphic features of the action execution target and the contact surface features surrounding the action execution target through the first recognition model.
Note that the graphic features are the various kinds of elements constituting the action execution target, including but not limited to: circular arcs, circles, rectangles, trapezoids, etc. The contact surface features are the appearance features of the contact object that supports or pulls the action execution target, including but not limited to: table tops, pull ropes, etc.
Further, a three-dimensional image corresponding to the action execution target is constructed according to the graphic features, and appearance parameters of the three-dimensional image are acquired, wherein the appearance parameters include, but are not limited to: length, width, height, diameter, cross-sectional area, etc.
Further, according to the appearance parameters and pre-stored hand parameters of the intelligent artificial limb, a selection range of the second action mode is determined, wherein the selection range comprises: side grabbing, upper grabbing and bottom grabbing. For example, the action execution target is determined to be a cylinder based on the appearance parameters, while the hand parameters show that the hand can fully or partially wrap around the cylinder, so the selection range at this time may include all of the above modes.
Further, an optimal execution mode is selected as the second action mode within the selection range according to the contact surface features. For example, when the contact surface features show that the cylinder is placed on a table top, bottom grabbing can be filtered out; and since for a cylinder the stability of a side grab is far higher than that of a grab from above, side grabbing can be selected as the second action mode according to the grabbing preset for this type of article.
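A hedged sketch of the filter-then-rank selection this example describes; the shape labels, contact labels, and per-shape preference table are illustrative assumptions:

```python
def select_second_action_mode(shape, diameter_mm, hand_max_open_mm, contact):
    """Pick a grasp approach from appearance, hand, and contact-surface data.

    shape:   e.g. "cylinder" (from the reconstructed three-dimensional image)
    contact: e.g. "tabletop" or "rope" (from the contact surface features)
    """
    candidates = {"side_grab", "top_grab", "bottom_grab"}
    if diameter_mm > hand_max_open_mm:    # hand cannot wrap around the object
        candidates.discard("side_grab")
    if contact == "tabletop":             # base is inaccessible on a table
        candidates.discard("bottom_grab")
    # Assumed stability preference: side grip of a cylinder beats a top grip
    preference = {"cylinder": ["side_grab", "top_grab", "bottom_grab"]}
    for mode in preference.get(shape, ["top_grab", "side_grab", "bottom_grab"]):
        if mode in candidates:
            return mode
    return None
```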
S105: and carrying out recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result.
Specifically, the intelligent artificial limb inputs an environment image containing the action execution target to a pre-trained second recognition model to extract characteristic parameters of the action execution target through the second recognition model.
Further, according to the characteristic parameters, article parameters whose similarity to the characteristic parameters reaches a preset threshold are queried from a database and/or the internet, wherein the article parameters comprise at least one of the following: article type, article weight, and article material.
Further, the article parameters are analyzed and calculated to obtain the minimum grabbing static friction force of the action execution target, and the minimum grabbing static friction force is taken as the action execution force.
The above-mentioned article parameters may reflect the relevant parameters of the action execution target.
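As a rough physics sketch of what "minimum grabbing static friction" implies for the grip force (a textbook static-friction bound under stated assumptions, not necessarily the patent's exact calculation): for a side grab, the friction at the contacts must jointly carry the article's weight.

```python
G = 9.81  # gravitational acceleration, m/s^2

def min_grip_force(mass_kg, mu, contacts=2, safety=1.0):
    """Static-friction bound for a side grab: mu * N per contact must
    jointly support m * g, so N_min = m * g / (mu * contacts).

    mass_kg:  article weight from the queried article parameters
    mu:       friction coefficient implied by the article material (assumed)
    contacts: gripping surfaces sharing the load (2 for a two-finger pinch)
    safety:   optional margin multiplier (> 1 to grip slightly harder)
    """
    return safety * mass_kg * G / (mu * contacts)

# e.g. a 0.3 kg glass with mu ~ 0.4, pinched on two sides -> about 3.7 N per contact
force = min_grip_force(0.3, 0.4)
```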
In addition, by adopting two recognition models, namely the first recognition model and the second recognition model, the two recognition passes can run in parallel, reducing the reaction time of the intelligent artificial limb.
In addition, in one embodiment, after the analysis and calculation are performed on the article parameters to obtain the minimum grabbing static friction force of the action execution target, and the minimum grabbing static friction force is used as the action execution force, the method can further comprise the following steps:
A force adjustment value is obtained according to the minimum grabbing static friction force and the pre-stored number of force adjustment gears. For example, if the number of pre-stored force adjustment gears is 5 and the minimum grabbing static friction force is 10 N, the minimum grabbing static friction force can be divided equally across the gears, giving a force adjustment value of 2 N.
Further, an attempt is made to grab the action execution target according to the first action mode, the second action mode and the action execution force.
Slip signals are acquired in real time through the slip sensor of the intelligent prosthetic hand, and whether sliding exists is judged according to the slip signals.
If not, the action execution force is kept unchanged, and the action execution target is grabbed and brought in front of the patient's eyes.
If yes, sliding exists at this time and the action execution force needs to be increased: based on the pre-stored number of force adjustment gears, the force adjustment value is used to raise the action execution force step by step until the slip signals indicate that no sliding exists. For example, if the current action execution force is 10 N, 2 N is added each time, at most 5 times, until sliding stops.
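The stepwise adjustment loop might be sketched as follows, using the 10 N / 5-gear example above; `apply_force` and `slipping` are hypothetical stand-ins for the prosthesis's actuator and slip-sensor interfaces:

```python
import time

def grasp_with_slip_control(apply_force, slipping, base_force_n=10.0,
                            gears=5, poll_s=0.02):
    """Raise the grip force one gear at a time while sliding persists.

    The minimum grabbing static friction is split into `gears` equal
    increments (10 N / 5 gears = 2 N per step in the example above).
    """
    step = base_force_n / gears
    force = base_force_n
    apply_force(force)                 # attempt the grab at the minimum force
    for _ in range(gears):             # at most `gears` increases
        time.sleep(poll_s)             # let the slip sensor settle
        if not slipping():
            break                      # stable grip: keep the current force
        force += step
        apply_force(force)
    return force
```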
Through this technical scheme, fragile articles are prevented from being crushed by an excessive grabbing force, while it is also ensured that articles do not drop because the grabbing force is too small.
In one embodiment, the intelligent artificial limb collects the electromyographic signals of the stump of the patient through the electromyographic signal collection device, processes and analyzes the electromyographic signals, and before obtaining the analysis result, the intelligent artificial limb further comprises:
The intelligent artificial limb acquires the action intention of a patient and acquires a plurality of training electromyographic signals at the stump of the patient according to the action intention, wherein the action intention at least comprises one of the following: ball grabbing, column grabbing and fingertip pinching.
It should be noted that the action intention may be conveyed by the patient's voice or other means so that the intelligent prosthesis acquires it; at the same time, the action intention should be of a specific type, i.e., a type the intelligent prosthesis supports.
And the intelligent artificial limb inputs a plurality of training electromyographic signals and corresponding action intentions into the initial fuzzy matching model to perform fuzzy rule training so as to obtain a plurality of fuzzy rules.
Further, a fuzzy matching model is constructed according to a plurality of fuzzy rules. The corresponding first action mode can be determined through the electromyographic signals of the patient and the fuzzy matching model.
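A minimal sketch of how such rules could be learned from the labeled training signals, deliberately simplified to one prototype per action intention; the output feeds the matching sketch shown under S102:

```python
import numpy as np

def train_fuzzy_rules(samples):
    """Build one (prototype, spread) rule per action intention.

    samples: iterable of (feature_vector, intention) pairs, e.g.
             (np.array([...]), "ball_grab")
    Returns {intention: (mean_vector, spread)}.
    """
    by_intent = {}
    for features, intent in samples:
        by_intent.setdefault(intent, []).append(np.asarray(features, dtype=float))
    rules = {}
    for intent, vectors in by_intent.items():
        stacked = np.stack(vectors)
        mean = stacked.mean(axis=0)
        # Spread = average distance of the training samples from the prototype
        spread = float(np.mean(np.linalg.norm(stacked - mean, axis=1))) or 1.0
        rules[intent] = (mean, spread)
    return rules
```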
In one embodiment, after performing recognition processing on the action execution target through the pre-trained second recognition model to obtain the second recognition result and determining the action execution strength according to the second recognition result, the method further includes:
Obtaining the corresponding bioelectric strength according to the action execution force.
Furthermore, the stimulating current consistent with the bioelectric strength is simulated through the electrode unit on the intelligent artificial limb.
Further, a stimulation current is introduced at the stump of the patient. For example, the stimulation current may vary from 0.05A to 0.25A.
Through the technical scheme, the patient can sense the strength of the stimulation current at the stump so as to sense the execution strength of the action.
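A simple linear mapping consistent with the example range above; the full-scale force `force_max_n` is an assumed parameter, and the 0.05 A to 0.25 A bounds come from the example in the description:

```python
def stimulation_current(force_n, force_max_n=50.0, i_min_a=0.05, i_max_a=0.25):
    """Map the action execution force onto the stimulation-current range."""
    ratio = max(0.0, min(1.0, force_n / force_max_n))   # clamp to [0, 1]
    return i_min_a + ratio * (i_max_a - i_min_a)        # amperes
```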
In one embodiment, as shown in fig. 3, the present application further provides a control device based on an intelligent artificial limb, including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the following instructions:
Collecting an electromyographic signal at the residual limb of a patient through an electromyographic signal collecting device, and processing and analyzing the electromyographic signal to obtain an analysis result;
Determining a first action mode according to the analysis result, wherein the first action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode;
collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image;
Performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing;
And carrying out recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result.
In one embodiment, the present application also provides a non-volatile computer storage medium storing computer-executable instructions configured to:
Collecting an electromyographic signal at the residual limb of a patient through an electromyographic signal collecting device, and processing and analyzing the electromyographic signal to obtain an analysis result;
Determining a first action mode according to the analysis result, wherein the first action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode;
collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image;
Performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing;
And carrying out recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result.
The embodiments of the present application are described in a progressive manner, and the same and similar parts of the embodiments are all referred to each other, and each embodiment is mainly described in the differences from the other embodiments. In particular, for the apparatus and medium embodiments, the description is relatively simple, as it is substantially similar to the method embodiments, with reference to the section of the method embodiments being relevant.
The devices and media provided in the embodiments of the present application are in one-to-one correspondence with the methods, so that the devices and media also have similar beneficial technical effects as the corresponding methods, and since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the devices and media are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (6)

1. A control method based on an intelligent artificial limb, characterized by comprising the following steps:
The intelligent artificial limb acquires the electromyographic signals of the stump of the patient through the electromyographic signal acquisition device, and processes and analyzes the electromyographic signals to obtain an analysis result;
Determining a first action mode according to the analysis result, wherein the first action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode;
collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image;
Performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing;
Performing recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result;
The method comprises the steps of collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image, wherein the action execution target comprises the following specific steps:
Collecting human eye images of a patient through an image collecting device on the intelligent artificial limb, obtaining a cross section and a vertical section of an eyeball of the patient according to the human eye images, and taking the extending direction of the cross section as a gazing direction;
constructing a reference shaft parallel to the vertical section by taking the position of the image acquisition device as an origin;
Obtaining a target acquisition direction of the image acquisition device according to the gazing direction and the reference axis, and controlling the image acquisition device to rotate to the target acquisition direction;
Acquiring an environment image in the target acquisition direction by the image acquisition device, and taking an object target which coincides with the center point of the environment image as an action execution target;
Performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the method specifically comprises the following steps of:
Inputting an environment image containing the action execution target into a pre-trained first recognition model, so as to extract the graphic characteristics of the action execution target through the first recognition model and the contact surface characteristics surrounding the action execution target, wherein the contact surface characteristics are appearance characteristics of a contact object supporting or towing the action execution target;
Constructing a three-dimensional image corresponding to the action execution target according to the graphic features, and acquiring appearance parameters of the three-dimensional image;
determining a selection range of a second action mode according to the appearance parameters and the pre-stored hand parameters of the intelligent artificial limb, wherein the selection range comprises: side grabbing, upper grabbing and bottom grabbing;
Selecting an optimal execution mode as the second action mode in the selection range according to the contact surface characteristics;
Performing recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result, wherein the method specifically comprises the following steps of:
Inputting an environment image containing the action execution target to a pre-trained second recognition model to extract characteristic parameters of the action execution target through the second recognition model;
According to the characteristic parameters, querying a database for article parameters whose similarity to the characteristic parameters reaches a preset threshold, wherein the article parameters comprise at least one of the following: article type, article weight, and article material;
Analyzing and calculating the object parameters to obtain the minimum grabbing static friction force of the action execution target, and taking the minimum grabbing static friction force as the action execution force;
Analyzing and calculating the object parameters to obtain the minimum grabbing static friction force of the action execution target, and taking the minimum grabbing static friction force as the action execution force, wherein the method further comprises the following steps:
obtaining a force adjustment value according to the minimum grabbing static friction force and the pre-stored number of force adjustment gears;
according to the first action mode, the second action mode and the action execution force, the action execution target is tried to be grabbed;
the sliding signals are collected in real time through the sliding sensor of the intelligent prosthetic hand, and whether sliding exists or not is judged according to the sliding signals;
If not, keeping the execution force of the action unchanged;
If yes, based on the pre-stored number of force adjustment gears, the force adjustment value is used to increase the action execution force step by step until the sliding signals indicate that sliding does not exist.
2. The method according to claim 1, wherein determining the first motion mode according to the analysis result comprises:
according to the analysis result, inquiring an action mode with the highest matching degree with the analysis result in a database through a pre-trained fuzzy matching model, wherein the action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode;
and taking the action mode with the highest matching degree as a first action mode.
3. The method according to claim 1, wherein the intelligent artificial limb collects the electromyographic signals of the stump of the patient through the electromyographic signal collection device, and processes and analyzes the electromyographic signals, and before obtaining the analysis result, the method further comprises:
The intelligent artificial limb acquires the action intention of a patient and acquires a plurality of training electromyographic signals at the stump of the patient according to the action intention, wherein the action intention at least comprises one of the following: ball grabbing, column grabbing and fingertip pinching;
inputting the training electromyographic signals and the corresponding action intentions into an initial fuzzy matching model to perform fuzzy rule training to obtain a plurality of fuzzy rules;
and constructing a fuzzy matching model according to the fuzzy rules.
4. The control method according to claim 1, wherein after performing recognition processing on the action execution target by a second recognition model trained in advance to obtain a second recognition result and determining action execution strength according to the second recognition result, the method further comprises:
obtaining corresponding bioelectric strength according to the action execution force;
Simulating a stimulation current consistent with the bioelectric strength through an electrode unit on the intelligent artificial limb;
The stimulation current is directed to the patient at the stump.
5. A control device based on intelligent prostheses, characterized by comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the following instructions:
Collecting an electromyographic signal at the residual limb of a patient through an electromyographic signal collecting device, and processing and analyzing the electromyographic signal to obtain an analysis result;
Determining a first action mode according to the analysis result, wherein the first action mode at least comprises one of the following steps: a bulb grasping mode, a column grasping mode, and a fingertip pinching mode;
collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image;
Performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing;
Performing recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result;
wherein collecting the environment image and the human eye image through the image collection device on the intelligent prosthesis and determining the action execution target in the environment image according to the human eye image specifically comprises (a geometric sketch of the camera-pointing step follows this claim):
collecting a human eye image of the patient through the image collection device on the intelligent prosthesis, obtaining a cross section and a vertical section of an eyeball of the patient according to the human eye image, and taking the extension direction of the cross section as a gazing direction;
constructing a reference axis parallel to the vertical section with the position of the image collection device as the origin;
obtaining a target collection direction of the image collection device according to the gazing direction and the reference axis, and controlling the image collection device to rotate to the target collection direction;
collecting an environment image in the target collection direction through the image collection device, and taking the object coinciding with the center point of the environment image as the action execution target;
wherein performing recognition processing on the action execution target through the pre-trained first recognition model to obtain the first recognition result and determining the second action mode according to the first recognition result specifically comprises (a feasibility sketch of the grasp-range step follows this claim):
inputting the environment image containing the action execution target into the pre-trained first recognition model, so as to extract, through the first recognition model, the graphic features of the action execution target and the contact-surface features around the action execution target, wherein the contact-surface features are appearance features of the contact object that supports or holds the action execution target;
constructing a three-dimensional image corresponding to the action execution target according to the graphic features, and obtaining appearance parameters of the three-dimensional image;
determining a selection range of the second action mode according to the appearance parameters and pre-stored hand parameters of the intelligent prosthesis, wherein the selection range comprises: side grasping, top grasping, and bottom grasping;
selecting, within the selection range and according to the contact-surface features, an optimal execution mode as the second action mode;
wherein performing recognition processing on the action execution target through the pre-trained second recognition model to obtain the second recognition result and determining the action execution force according to the second recognition result specifically comprises (a worked grip-force sketch follows this claim):
inputting the environment image containing the action execution target into the pre-trained second recognition model, so as to extract feature parameters of the action execution target through the second recognition model;
querying, in a database according to the feature parameters, article parameters whose similarity to the feature parameters reaches a preset threshold, wherein the article parameters comprise at least one of the following: article type, article weight, and article material;
analyzing and calculating the article parameters to obtain a minimum grasping static friction force for the action execution target, and taking the minimum grasping static friction force as the action execution force;
wherein after the minimum grasping static friction force is obtained and taken as the action execution force, the instructions further comprise (a slip-feedback sketch follows this claim):
obtaining a force adjustment value according to the minimum grasping static friction force and a pre-stored number of force adjustment levels;
attempting to grasp the action execution target according to the first action mode, the second action mode, and the action execution force;
collecting a slip signal in real time through a slip sensor on the hand of the intelligent prosthesis, and judging whether slippage exists according to the slip signal;
if not, keeping the action execution force unchanged;
if yes, increasing the action execution force step by step by the force adjustment value according to the pre-stored number of force adjustment levels, until the slip signal indicates that no slippage exists.
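The camera-pointing step in claim 5 reduces, in the simplest planar reading, to rotating the image collection device until its optical axis is parallel to the gazing direction. The sketch below computes that pan angle; treating the gazing direction and reference axis as 2-D vectors is an assumption made for illustration.

    # Signed pan angle from the camera's reference axis to the gaze
    # direction, in a simplified 2-D geometry (an assumption).
    import math

    def target_acquisition_angle(gaze_dir, reference_axis):
        gx, gy = gaze_dir
        rx, ry = reference_axis
        # atan2 of the 2-D cross product and the dot product gives the
        # signed angle from the reference axis to the gaze direction.
        return math.atan2(rx * gy - ry * gx, rx * gx + ry * gy)

    # Gaze 30 degrees to the left of the camera's reference axis:
    gaze = (math.cos(math.radians(30)), math.sin(math.radians(30)))
    print(math.degrees(target_acquisition_angle(gaze, (1.0, 0.0))))  # ~30.0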
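The selection-range step can be read as a feasibility test of each candidate grasp against the object's appearance parameters and the hand parameters. The following sketch encodes one plausible set of tests; the dimension names, the aperture check, and the flat-support condition are all assumptions, not the patented rules.

    # Hypothetical feasibility tests for the side / top / bottom grasp
    # range, given bounding dimensions and the hand's maximum aperture.
    def grasp_mode_range(width_m: float, depth_m: float, height_m: float,
                         max_aperture_m: float,
                         contact_is_flat_support: bool = True) -> list:
        modes = []
        if min(width_m, depth_m) <= max_aperture_m:
            modes.append("side grasping")    # fingers span the girth
        if height_m <= max_aperture_m:
            modes.append("top grasping")     # fingers span the height
        if not contact_is_flat_support:      # object hangs or protrudes
            modes.append("bottom grasping")  # palm can pass underneath
        return modes

    print(grasp_mode_range(0.06, 0.06, 0.12, 0.09))  # ['side grasping']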
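The minimum grasping static friction force admits a standard worked form: for an opposed two-surface grasp, friction at both contacts must at least carry the object's weight, so the minimum normal force is N_min = m·g / (2·μ). The material table and safety factor below are illustrative assumptions standing in for the database query of claims 5 and 6.

    # Worked minimum-grip-force calculation: N_min = m * g / (2 * mu)
    # for two opposed contacts. The material mu values are assumed.
    G = 9.81  # gravitational acceleration, m/s^2

    FRICTION = {"glass": 0.4, "cardboard": 0.6, "steel": 0.5}

    def min_grip_force_n(mass_kg: float, material: str,
                         safety_factor: float = 1.2) -> float:
        """Smallest normal force (N) per contact that keeps static
        friction at two opposed contacts above the object's weight."""
        mu = FRICTION[material]
        return safety_factor * mass_kg * G / (2.0 * mu)

    print(round(min_grip_force_n(0.3, "glass"), 2))  # 4.41 N for a 300 g glass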
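Finally, the slip-feedback adjustment is a simple closed loop: start at the estimated minimum force and raise it one level at a time until the slip sensor goes quiet. In this sketch the adjustment value is the minimum force divided by the pre-stored number of levels, and the sensor is a callback; both are assumed readings of details the claims leave open.

    # Closed-loop force adjustment driven by a (simulated) slip sensor.
    from typing import Callable

    def grasp_with_slip_feedback(min_force_n: float,
                                 slipping: Callable[[float], bool],
                                 levels: int = 10,
                                 max_force_n: float = 40.0) -> float:
        """Return the force at which the grasp holds without slipping."""
        step = min_force_n / levels      # one force-adjustment level
        force = min_force_n
        while slipping(force) and force < max_force_n:
            force += step                # increase one level at a time
        return force

    # Fake sensor: this object stops slipping once the force reaches 6 N.
    print(grasp_with_slip_feedback(4.4, lambda f: f < 6.0))  # ~6.16 N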
6. A non-transitory computer storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to perform the following:
collecting an electromyographic signal at the stump of a patient through an electromyographic signal collection device, and processing and analyzing the electromyographic signal to obtain an analysis result;
determining a first action mode according to the analysis result, wherein the first action mode comprises at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode;
collecting an environment image and a human eye image through an image collection device on the intelligent prosthesis, and determining an action execution target in the environment image according to the human eye image;
performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode comprises at least one of the following: side grasping, top grasping, and bottom grasping;
performing recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining an action execution force according to the second recognition result;
wherein collecting the environment image and the human eye image through the image collection device on the intelligent prosthesis and determining the action execution target in the environment image according to the human eye image specifically comprises:
collecting a human eye image of the patient through the image collection device on the intelligent prosthesis, obtaining a cross section and a vertical section of an eyeball of the patient according to the human eye image, and taking the extension direction of the cross section as a gazing direction;
constructing a reference axis parallel to the vertical section with the position of the image collection device as the origin;
obtaining a target collection direction of the image collection device according to the gazing direction and the reference axis, and controlling the image collection device to rotate to the target collection direction;
collecting an environment image in the target collection direction through the image collection device, and taking the object coinciding with the center point of the environment image as the action execution target;
wherein performing recognition processing on the action execution target through the pre-trained first recognition model to obtain the first recognition result and determining the second action mode according to the first recognition result specifically comprises:
inputting the environment image containing the action execution target into the pre-trained first recognition model, so as to extract, through the first recognition model, the graphic features of the action execution target and the contact-surface features around the action execution target, wherein the contact-surface features are appearance features of the contact object that supports or holds the action execution target;
constructing a three-dimensional image corresponding to the action execution target according to the graphic features, and obtaining appearance parameters of the three-dimensional image;
determining a selection range of the second action mode according to the appearance parameters and pre-stored hand parameters of the intelligent prosthesis, wherein the selection range comprises: side grasping, top grasping, and bottom grasping;
selecting, within the selection range and according to the contact-surface features, an optimal execution mode as the second action mode;
wherein performing recognition processing on the action execution target through the pre-trained second recognition model to obtain the second recognition result and determining the action execution force according to the second recognition result specifically comprises:
inputting the environment image containing the action execution target into the pre-trained second recognition model, so as to extract feature parameters of the action execution target through the second recognition model;
querying, in a database according to the feature parameters, article parameters whose similarity to the feature parameters reaches a preset threshold, wherein the article parameters comprise at least one of the following: article type, article weight, and article material;
analyzing and calculating the article parameters to obtain a minimum grasping static friction force for the action execution target, and taking the minimum grasping static friction force as the action execution force;
wherein after the minimum grasping static friction force is obtained and taken as the action execution force, the computer-executable instructions are further configured to perform the following:
obtaining a force adjustment value according to the minimum grasping static friction force and a pre-stored number of force adjustment levels;
attempting to grasp the action execution target according to the first action mode, the second action mode, and the action execution force;
collecting a slip signal in real time through a slip sensor on the hand of the intelligent prosthesis, and judging whether slippage exists according to the slip signal;
if not, keeping the action execution force unchanged;
if yes, increasing the action execution force step by step by the force adjustment value according to the pre-stored number of force adjustment levels, until the slip signal indicates that no slippage exists.
CN202111628374.4A 2021-12-28 2021-12-28 A control method, device and medium based on intelligent prosthesis Active CN114469465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111628374.4A CN114469465B (en) 2021-12-28 2021-12-28 A control method, device and medium based on intelligent prosthesis

Publications (2)

Publication Number Publication Date
CN114469465A (en) 2022-05-13
CN114469465B (en) 2024-11-19

Family

ID=81495539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111628374.4A Active CN114469465B (en) 2021-12-28 2021-12-28 A control method, device and medium based on intelligent prosthesis

Country Status (1)

Country Link
CN (1) CN114469465B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030536B (en) * 2023-03-27 2023-06-09 国家康复辅具研究中心 Data collection and evaluation system for use state of upper limb prosthesis
CN118561004B (en) * 2024-05-21 2024-11-05 昆山纽杰伦智能机器科技有限公司 An intelligent gripper stacker for flexible production lines

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102309366A (en) * 2011-07-21 2012-01-11 山东科技大学 Control system and control method for controlling upper prosthesis to move by using eye movement signals
CN105012057A (en) * 2015-07-30 2015-11-04 沈阳工业大学 Intelligent artificial limb based on double-arm electromyogram and attitude information acquisition and motion classifying method
CN106236336A (en) * 2016-08-15 2016-12-21 中国科学院重庆绿色智能技术研究院 A kind of myoelectric limb gesture and dynamics control method
CN110298286A (en) * 2019-06-24 2019-10-01 中国科学院深圳先进技术研究院 Virtual reality recovery training method and system based on surface myoelectric and depth image

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002236195A1 (en) * 2001-03-13 2002-09-24 Wide Horizon Holdings Inc. Cerebral programming
CN101120894B (en) * 2006-08-08 2010-05-12 上海市机械制造工艺研究所有限公司 Technology method relating to component in manufacturing artificial limb knee-joint
WO2008052166A2 (en) * 2006-10-26 2008-05-02 Wicab, Inc. Systems and methods for altering brain and body functions and treating conditions and diseases
US9271660B2 (en) * 2010-07-02 2016-03-01 Gangming Luo Virtual prosthetic limb system
ES2661538T3 (en) * 2013-06-12 2018-04-02 Otto Bock Healthcare Gmbh Limb device control
CN108815804B (en) * 2018-06-12 2020-06-09 深圳睿瀚医疗科技有限公司 VR upper limb rehabilitation training platform and method based on MYO arm ring and mobile terminal
CN111700718B (en) * 2020-07-13 2023-06-27 京东科技信息技术有限公司 Method and device for recognizing holding gesture, artificial limb and readable storage medium

Also Published As

Publication number Publication date
CN114469465A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN114469465B (en) A control method, device and medium based on intelligent prosthesis
Simon et al. The target achievement control test: Evaluating real-time myoelectric pattern recognition control of a multifunctional upper-limb prosthesis
Soares et al. The development of a virtual myoelectric prosthesis controlled by an EMG pattern recognition system based on neural networks
Dalley et al. A method for the control of multigrasp myoelectric prosthetic hands
Ahkami et al. Electromyography-based control of lower limb prostheses: a systematic review
US11596346B2 (en) Signal processing for decoding intended movements from electromyographic signals
US20230107850A1 (en) Systems and methods for controlling a device using detected changes in a neural-related signal
CN102309366B (en) Control system and control method for controlling movement of upper prosthesis using eye movement signals
Yu et al. Multi-DoF continuous estimation for wrist torques using stacked autoencoder
CN102499797B (en) Artificial limb control method and system
WO2020118797A1 (en) Prosthesis control method, apparatus, system and device, and storage medium
WO2008137346A2 (en) System and method for brain machine interface (bmi) control using reinforcement learning
CN101947152A (en) Electroencephalogram-voice control system and working method of humanoid artificial limb
US11644899B2 (en) Biometric enabled virtual reality systems and methods for detecting user intentions and modulating virtual avatar control based on the user intentions for creation of virtual avatars or objects in holographic space, two-dimensional (2D) virtual space, or three-dimensional (3D) virtual space
US20220051586A1 (en) System and method of generating control commands based on operator's bioelectrical data
CN103358311B (en) Robot control method and system based on bioelectricity signal decoding
Jaber et al. HD-sEMG gestures recognition by SVM classifier for controlling prosthesis
CN116999291A (en) Rehabilitation robot control method and electronic equipment based on multi-source information perception
Kakoty et al. Real-time EMG based prosthetic hand controller realizing neuromuscular constraint
Jaber et al. Using the robust high density-surface electromyography features for real-time hand gestures classification
CN202161439U (en) Control system capable of controlling movement of upper artificial limbs through eye movement signals
US11775066B2 (en) Biometric enabled virtual reality systems and methods for detecting user intentions and manipulating virtual avatar control based on user intentions for providing kinematic awareness in holographic space, two-dimensional (2D), or three-dimensional (3D) virtual space
Soares et al. Virtual and augmented reality: A new approach to aid users of myoelectric prostheses
WO2024197457A1 (en) Closed-loop adaptive brain‑machine interface decoding method and device based on reinforcement learning
Herle et al. Hierarchical myoelectric control of a human upper limb prosthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China
Address after: 250101 19th floor, building S02, Inspur Science Park, No. 1036, Inspur Road, high tech Zone, Jinan, Shandong Province
Applicant after: Inspur industrial Internet Co.,Ltd.
Address before: 250101 19th floor, building S02, Inspur Science Park, No. 1036, Inspur Road, high tech Zone, Jinan, Shandong Province
Applicant before: Shandong Inspur industrial Internet Industry Co.,Ltd.
Country or region before: China

GR01 Patent grant