CN114140607A - Machine vision positioning method and system for upper arm prosthesis control - Google Patents
Machine vision positioning method and system for upper arm prosthesis control
- CN114140607A CN114140607A CN202111462738.6A CN202111462738A CN114140607A CN 114140607 A CN114140607 A CN 114140607A CN 202111462738 A CN202111462738 A CN 202111462738A CN 114140607 A CN114140607 A CN 114140607A
- Authority
- CN
- China
- Prior art keywords
- image
- target
- gradient
- upper arm
- machine vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
Abstract
The invention provides a machine vision positioning method and system for upper arm prosthesis control, comprising the following steps: step 1: a head camera recognizes the target object and completes target positioning with a visual algorithm that combines Adaboost cascade classification with a depth image; step 2: a camera on the artificial limb reduces illumination interference through a CNN-trained color name conversion matrix, obtains the planar two-dimensional pose of the target with the Canny edge detection algorithm, and the artificial limb is then controlled to grab the target. Starting from the basic theory of muscle synergy, the invention constructs a synergy activation model and a multi-joint synchronous proportional myoelectric control system for the upper limb, realizing synchronous and continuous multi-degree-of-freedom motion control of the artificial limb and making prosthesis motion flexible, natural, convenient and efficient to use.
Description
Technical Field
The invention relates to the technical field of vision positioning, in particular to a machine vision positioning method and system for upper arm prosthesis control.
Background
A robot vision system, also called a robot eye system, integrates visual sensor information into the robot control system. Depending on where the camera is mounted, the hand-eye configuration of the system falls into two types: Eye-in-Hand, where the camera is mounted at the end of the manipulator and is not fixed relative to the environment; and Eye-to-Hand, where the camera is mounted in a fixed position, usually above the robot arm.
The Eye-in-Hand scheme places low demands on the calibration accuracy of the monocular camera, but it cannot guarantee that the target always stays in the field of view, so an additional camera is needed to obtain complete environmental information. The camera in the Eye-to-Hand scheme can observe the entire environment, but because the moving arm tends to occlude the target area, extra requirements on target placement or motion planning may be introduced.
Many methods exist for object recognition. Deep learning, popular in recent years, is one of them, but it requires a large number of samples and a long training time.
Patent document CN103271784B (application number CN201310223530.8) discloses a binocular-vision-based human-computer interactive manipulator control system and method consisting of four parts: a real-time image acquisition device, a laser guidance device, a programmable controller and a drive device. The programmable controller consists of a binocular stereo vision module, a three-dimensional coordinate transformation module, an inverse-kinematics joint-angle module and a control module. Color features in the binocular images from the real-time image acquisition device are extracted as the signal source for controlling the manipulator, and the three-dimensional coordinates of the red laser feature points in the real-time field-of-view images are obtained through the binocular stereo vision system and the coordinate transformation, so that the manipulator is controlled to perform human-computer interactive target tracking.
Disclosure of Invention
In view of the deficiencies in the prior art, it is an object of the present invention to provide a machine vision positioning method and system for upper arm prosthesis control.
The invention provides a machine vision positioning method for upper arm prosthesis control, which comprises the following steps:
step 1: recognizing a target object and completing target positioning by adopting a visual algorithm matched with Adaboost cascade classification and a depth image through a head camera;
step 2: the illumination interference is reduced by a camera on the artificial limb through a color name conversion matrix trained by the CNN, the plane two-dimensional attitude of the target is obtained by using a Canny edge detection algorithm, and the artificial limb is further controlled to grab the target.
Preferably, the step 1 comprises:
step 1.1: training classifiers with the Adaboost cascade classification method based on Haar, LBP and HOG features;
step 1.2: testing the classifiers in experiments under both a simple background and a complex background, and selecting, through comparative analysis, the classifier and feature type that meet the preset conditions for detecting the target sample.
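As an illustration of steps 1.1 and 1.2, the following is a minimal detection sketch assuming OpenCV (not named in the patent); the cascade file name and parameters are hypothetical, and the cascade itself would be trained offline (e.g. with OpenCV's cascade training tools) on Haar, LBP or HOG features.

```python
import cv2

def detect_target(frame_bgr, cascade_path="target_cascade.xml"):
    """Run a trained Adaboost cascade over a frame and return candidate target boxes."""
    cascade = cv2.CascadeClassifier(cascade_path)   # hypothetical trained cascade file
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window detection; the parameters below are illustrative defaults.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(32, 32))
    return boxes  # list of (x, y, w, h) regions of interest for the later steps

# Example usage: boxes = detect_target(cv2.imread("scene.png"))
```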
Preferably, the step 2 includes:
step 2.1: identifying the target with the trained classifier and taking the identified target area as the region of interest;
step 2.2: after the region of interest is obtained, preprocessing the image information, using the trained color name conversion matrix as the first processing step to achieve illumination robustness;
step 2.3: aggregating regional information with graph-based image segmentation, keeping the information of the target object complete while separating the target object from other objects;
step 2.4: obtaining the threshold of the maximum connected domain from the labelled pixels, performing binarization segmentation with this threshold, and finally applying morphological filtering to obtain a region containing only the target object.
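A minimal sketch of steps 2.2 and 2.4, assuming OpenCV and NumPy (not named in the patent). The variable `color_name_matrix` stands in for the CNN-trained color name conversion matrix, the graph-based aggregation of step 2.3 is omitted for brevity, and all names and parameter values are illustrative.

```python
import cv2
import numpy as np

def segment_target(roi_bgr, color_name_matrix):
    """Return a binary mask containing only the target object inside the region of interest."""
    # Step 2.2: illumination-robust color naming -- project each RGB pixel through the
    # trained 3x11 conversion matrix and keep the index of the most likely color name.
    rgb = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2RGB).reshape(-1, 3).astype(np.float32) / 255.0
    labels = (rgb @ color_name_matrix).argmax(axis=1).reshape(roi_bgr.shape[:2])

    # Step 2.4: the target occupies the largest area in the ROI, so keep the dominant label,
    # binarize, keep the largest connected component, and clean the mask morphologically.
    dominant = np.bincount(labels.ravel()).argmax()
    mask = np.uint8(labels == dominant) * 255
    n_cc, cc, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n_cc > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # label 0 is the background
        mask = np.uint8(cc == largest) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```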
Preferably, target boundary information is obtained by adopting a Canny edge detection method, and then the 2D posture is calculated;
reducing the influence of noise on the edge detection result through Gaussian smoothing filtering, wherein each element of the Gaussian filter template is generated as

Hij = 1/(2πσ²) · exp( -[ (i - k - 1)² + (j - k - 1)² ] / (2σ²) ),  1 ≤ i, j ≤ 2k+1

where Hij is the element in row i and column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and hence the filter weights; and 2k+1 is the size of the window template;
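A minimal NumPy sketch of the template generation above (NumPy and the function name are assumptions, not from the patent), using the (2k+1)×(2k+1) window of the formula:

```python
import numpy as np

def gaussian_template(k, sigma):
    """Build the (2k+1)x(2k+1) Gaussian filter template H described above."""
    idx = np.arange(1, 2 * k + 2)                 # i, j = 1 .. 2k+1
    d2 = (idx - (k + 1)) ** 2
    H = np.exp(-(d2[:, None] + d2[None, :]) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return H / H.sum()                            # normalize so the weights sum to one

# Example: a 5x5 template with sigma = 1.4 -> gaussian_template(2, 1.4)
```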
calculating the intensity gradient of the image: a Sobel operator returns the first derivative values in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and the direction θ of each pixel are determined as

G = √(Gx² + Gy²)

θ = arctan(Gy/Gx)

where Sx denotes the Sobel operator in the x direction, used to detect edges in the y direction, and Sy denotes the Sobel operator in the y direction, used to detect edges in the x direction (the edge direction is perpendicular to the gradient direction):

Sx = [ -1 0 1; -2 0 2; -1 0 1 ],  Sy = [ 1 2 1; 0 0 0; -1 -2 -1 ]

if a 3×3 pixel window of the image is A = [ a b c; d e f; g h i ] and the gradient of its centre pixel e is to be calculated, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:

Gx = Sx * A,  Gy = Sy * A

where a to i denote the values of the elements of the window A.
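A small NumPy sketch of the gradient computation above for a single 3×3 window A (an illustrative helper, not from the patent); it is written as a correlation, which is how the convolution is normally implemented in image processing:

```python
import numpy as np

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)    # Sobel operator, x direction
SY = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]], dtype=float)  # Sobel operator, y direction

def gradient_at_center(A):
    """Gradient of the centre pixel e of the 3x3 window A = [[a,b,c],[d,e,f],[g,h,i]]."""
    Gx = float((SX * A).sum())          # first derivative in the horizontal direction
    Gy = float((SY * A).sum())          # first derivative in the vertical direction
    G = np.hypot(Gx, Gy)                # gradient magnitude G = sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(Gy, Gx)          # gradient direction theta
    return G, theta
```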
Preferably, non-maxima suppression is applied to eliminate spurious responses due to edge detection;
after non-maximum suppression, the remaining pixels represent candidate edges in the image; two hysteresis thresholds Tmin and Tmax are set, and if the grey-level gradient of a pixel is higher than Tmax, the pixel is judged to be a true edge point; if its gradient is below Tmin, it is discarded; if its gradient lies between Tmin and Tmax, the pixel is kept as an edge point only if it is connected to a pixel already determined to be a true edge point, and is otherwise discarded.
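For reference, the whole edge-detection stage above can be reproduced with OpenCV's built-in Canny implementation, which performs the Sobel gradient, non-maximum suppression and hysteresis linking internally; OpenCV and the threshold values shown are assumptions, not taken from the patent.

```python
import cv2

def detect_edges(roi_gray, t_min=50, t_max=150):
    """Gaussian smoothing followed by Canny edge detection with hysteresis thresholds."""
    smoothed = cv2.GaussianBlur(roi_gray, (5, 5), 1.4)   # suppress noise before computing gradients
    return cv2.Canny(smoothed, t_min, t_max)             # t_min/t_max are the hysteresis thresholds
```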
According to the present invention there is provided a machine vision positioning system for upper arm prosthesis control comprising:
module M1: recognizing a target object and completing target positioning by adopting a visual algorithm matched with Adaboost cascade classification and a depth image through a head camera;
module M2: the illumination interference is reduced by a camera on the artificial limb through a color name conversion matrix trained by the CNN, the plane two-dimensional attitude of the target is obtained by using a Canny edge detection algorithm, and the artificial limb is further controlled to grab the target.
Preferably, the module M1 includes:
module M1.1: based on Haar characteristics, LBP characteristics and HOG characteristics, an Adaboost cascade classification method is adopted for classifier training;
module M1.2: and testing the classifier through experiments under a single background and a complex background, and selecting the classifier which accords with the preset condition and is used for detecting the characteristic type of the target sample through comparative analysis.
Preferably, the module M2 includes:
module M2.1: identifying the target through the trained classifier, and taking the identified target area as an interested area;
module M2.2: after the region of interest is obtained, preprocessing image information, and using a trained color name conversion matrix as a first processing step to realize illumination robustness;
module M2.3: carrying out regional information aggregation by using image segmentation based on a graph, ensuring that the information of a target object is complete and simultaneously segmenting the target object from other objects;
module M2.4: and obtaining a maximum connected domain threshold value by adopting the marked pixel points, carrying out binarization segmentation by utilizing the threshold value, and finally carrying out morphological filtering to obtain a region only containing the target object.
Preferably, target boundary information is obtained by adopting a Canny edge detection method, and then the 2D posture is calculated;
reducing the influence of noise on the edge detection result through Gaussian smoothing filtering, wherein each element of the Gaussian filter template is generated as

Hij = 1/(2πσ²) · exp( -[ (i - k - 1)² + (j - k - 1)² ] / (2σ²) ),  1 ≤ i, j ≤ 2k+1

where Hij is the element in row i and column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and hence the filter weights; and 2k+1 is the size of the window template;

calculating the intensity gradient of the image: a Sobel operator returns the first derivative values in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and the direction θ of each pixel are determined as

G = √(Gx² + Gy²)

θ = arctan(Gy/Gx)

where Sx denotes the Sobel operator in the x direction, used to detect edges in the y direction, and Sy denotes the Sobel operator in the y direction, used to detect edges in the x direction (the edge direction is perpendicular to the gradient direction):

Sx = [ -1 0 1; -2 0 2; -1 0 1 ],  Sy = [ 1 2 1; 0 0 0; -1 -2 -1 ]

if a 3×3 pixel window of the image is A = [ a b c; d e f; g h i ] and the gradient of its centre pixel e is to be calculated, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:

Gx = Sx * A,  Gy = Sy * A

where a to i denote the values of the elements of the window A.
Preferably, non-maxima suppression is applied to eliminate spurious responses due to edge detection;
after non-maximum suppression, the remaining pixels represent candidate edges in the image; two hysteresis thresholds Tmin and Tmax are set, and if the grey-level gradient of a pixel is higher than Tmax, the pixel is judged to be a true edge point; if its gradient is below Tmin, it is discarded; if its gradient lies between Tmin and Tmax, the pixel is kept as an edge point only if it is connected to a pixel already determined to be a true edge point, and is otherwise discarded.
Compared with the prior art, the invention has the following beneficial effects:
the invention combines two schemes of Eye-in-Hand and Eye-to-Hand, selects an Adaboost cascade classifier to detect a target sample, and constructs a cooperative activation model and an upper limb multi-joint synchronous proportion myoelectric control system from a muscle cooperation basic theory to realize the synchronous continuous motion control of multiple degrees of freedom of the artificial limb, and simultaneously facilitates the human-computer interaction between an amputation patient and the artificial limb and familiarizes the function and action mode of the artificial limb as soon as possible.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flow chart of the operation of the present invention;
fig. 2 is a schematic external view of the present invention, wherein 1 is an Eye-to-Hand machine vision positioning camera, and 2 is an Eye-in-Hand machine vision positioning camera on an upper arm prosthesis.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that it would be obvious to those skilled in the art that various changes and modifications can be made without departing from the spirit of the invention. All falling within the scope of the present invention.
Example (b):
referring to fig. 1 and fig. 2, a machine vision positioning system firstly adopts Adaboost cascade classification through an Eye-to-Hand camera of a head, and is mainly characterized by the following steps: firstly, carrying out sample processing on different types of samples, then training a classifier by adopting an Adaboost cascade classification method based on 3 different feature types of Haar features, LBP features and HOG features, and testing the classifier by experiments under a single background and a complex background; and finally, selecting the classifier which is most suitable for detecting the feature type of the target sample through comparative analysis. In the process, a user can control the artificial limb in a large range through the man-machine interaction system.
Then, acquiring a planar two-dimensional posture of the target through an Eye-in-Hand camera on the artificial limb, and mainly comprising the following steps of: and identifying the target by using the trained classifier, and taking the identified target area as the area of interest. After the region of interest is obtained, image information is preprocessed. Firstly, using a trained color name conversion matrix as a first processing step to realize illumination robustness; secondly, carrying out regional information aggregation by using image segmentation based on a graph, ensuring that the information of a target object is complete and simultaneously being well segmented from other objects; and finally, aiming at the characteristic that the occupied area of the target object in the region of interest is the largest, marking pixel points to obtain the maximum connected domain threshold value, carrying out binarization segmentation by using the threshold value, and finally carrying out morphological filtering to obtain the region only containing the target object. And then, acquiring target boundary information by adopting a Canny edge detection method, and then calculating the 2D posture. The first step, the influence of noise on the edge detection result is reduced as much as possible through Gaussian smoothing filtering, and the step smoothes the image and reduces the obvious noise influence. The following equation is a generation equation of each element value of the gaussian filter template:
Hij = 1/(2πσ²) · exp( -[ (i - k - 1)² + (j - k - 1)² ] / (2σ²) ),  1 ≤ i, j ≤ 2k+1

wherein Hij is the element in row i and column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and hence the filter weights; and 2k+1 is the window template size. In the second step, the intensity gradient of the image is calculated: the Sobel operator returns the first derivative values in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and the direction θ of each pixel can be determined:

G = √(Gx² + Gy²)

θ = arctan(Gy/Gx)

The Sobel operators in the x and y directions are

Sx = [ -1 0 1; -2 0 2; -1 0 1 ],  Sy = [ 1 2 1; 0 0 0; -1 -2 -1 ]

where Sx detects edges in the y direction and Sy detects edges in the x direction (the edge direction is perpendicular to the gradient direction). If a 3×3 pixel window of the image is A = [ a b c; d e f; g h i ] and the gradient of its centre pixel e is to be calculated, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:

Gx = Sx * A,  Gy = Sy * A
and thirdly, applying non-maximum suppression to eliminate spurious response caused by edge detection. And fourthly, hysteresis threshold. After non-maxima suppression, the remaining pixels may represent actual edges in the image, but some edge pixels still exist. Two thresholds Tmin and Tmax need to be set at this time. If the gray gradient of the image is higher than Tmax, the image is regarded as a true boundary; discard if less than Tmin; if the point is between the two points, whether the point is connected with the determined real boundary point needs to be judged, if the point is connected with the determined real boundary point, the point is a boundary, otherwise, the point is discarded. Edge information point sets with different density degrees can be obtained by adjusting the high and low threshold values of the lag threshold value, and the edge information can comprise a strong edge for describing the outline of the object and a weak edge for describing the illumination information in a proper amount by selecting a proper threshold value, so that the distribution of the points can reflect the posture of the object, and the artificial limb is controlled to grab the target object.
According to the present invention there is provided a machine vision positioning system for upper arm prosthesis control comprising:
module M1: recognizing a target object and completing target positioning by adopting a visual algorithm matched with Adaboost cascade classification and a depth image through a head camera;
module M2: the illumination interference is reduced by a camera on the artificial limb through a color name conversion matrix trained by the CNN, the plane two-dimensional attitude of the target is obtained by using a Canny edge detection algorithm, and the artificial limb is further controlled to grab the target.
Preferably, the module M1 includes:
module M1.1: based on Haar characteristics, LBP characteristics and HOG characteristics, an Adaboost cascade classification method is adopted for classifier training;
module M1.2: and testing the classifier through experiments under a single background and a complex background, and selecting the classifier which accords with the preset condition and is used for detecting the characteristic type of the target sample through comparative analysis.
Preferably, the module M2 includes:
module M2.1: identifying the target through the trained classifier, and taking the identified target area as an interested area;
module M2.2: after the region of interest is obtained, preprocessing image information, and using a trained color name conversion matrix as a first processing step to realize illumination robustness;
module M2.3: carrying out regional information aggregation by using image segmentation based on a graph, ensuring that the information of a target object is complete and simultaneously segmenting the target object from other objects;
module M2.4: and obtaining a maximum connected domain threshold value by adopting the marked pixel points, carrying out binarization segmentation by utilizing the threshold value, and finally carrying out morphological filtering to obtain a region only containing the target object.
Preferably, target boundary information is obtained by adopting a Canny edge detection method, and then the 2D posture is calculated;
reducing the influence of noise on the edge detection result through Gaussian smoothing filtering, wherein each element of the Gaussian filter template is generated as

Hij = 1/(2πσ²) · exp( -[ (i - k - 1)² + (j - k - 1)² ] / (2σ²) ),  1 ≤ i, j ≤ 2k+1

where Hij is the element in row i and column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and hence the filter weights; and 2k+1 is the size of the window template;

calculating the intensity gradient of the image: a Sobel operator returns the first derivative values in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and the direction θ of each pixel are determined as

G = √(Gx² + Gy²)

θ = arctan(Gy/Gx)

where Sx denotes the Sobel operator in the x direction, used to detect edges in the y direction, and Sy denotes the Sobel operator in the y direction, used to detect edges in the x direction (the edge direction is perpendicular to the gradient direction):

Sx = [ -1 0 1; -2 0 2; -1 0 1 ],  Sy = [ 1 2 1; 0 0 0; -1 -2 -1 ]

if a 3×3 pixel window of the image is A = [ a b c; d e f; g h i ] and the gradient of its centre pixel e is to be calculated, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:

Gx = Sx * A,  Gy = Sy * A

where a to i denote the values of the elements of the window A.
Preferably, non-maxima suppression is applied to eliminate spurious responses due to edge detection;
after the non-maximum value is suppressed, the remaining pixels represent the actual edge in the image, two hysteresis thresholds Tmin and Tmax are set, and if the gray gradient of the image is higher than Tmax, the image is judged to be a true boundary; discarding if the gray scale gradient of the image is below Tmin; if the gray gradient of the image is between Tmin and Tmax, judging whether the point is connected with the determined real boundary point, if so, determining the point is a boundary, otherwise, discarding the point.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (10)
1. A machine vision positioning method for upper arm prosthesis control, comprising:
step 1: recognizing a target object and completing target positioning by adopting a visual algorithm matched with Adaboost cascade classification and a depth image through a head camera;
step 2: the illumination interference is reduced by a camera on the artificial limb through a color name conversion matrix trained by the CNN, the plane two-dimensional attitude of the target is obtained by using a Canny edge detection algorithm, and the artificial limb is further controlled to grab the target.
2. The machine vision positioning method for upper arm prosthesis control according to claim 1, wherein said step 1 comprises:
step 1.1: based on Haar characteristics, LBP characteristics and HOG characteristics, an Adaboost cascade classification method is adopted for classifier training;
step 1.2: and testing the classifier through experiments under a single background and a complex background, and selecting the classifier which accords with the preset condition and is used for detecting the characteristic type of the target sample through comparative analysis.
3. The machine vision positioning method for upper arm prosthesis control according to claim 2, characterized in that said step 2 includes:
step 2.1: identifying the target through the trained classifier, and taking the identified target area as an interested area;
step 2.2: after the region of interest is obtained, preprocessing image information, and using a trained color name conversion matrix as a first processing step to realize illumination robustness;
step 2.3: carrying out regional information aggregation by using image segmentation based on a graph, ensuring that the information of a target object is complete and simultaneously segmenting the target object from other objects;
step 2.4: and obtaining a maximum connected domain threshold value by adopting the marked pixel points, carrying out binarization segmentation by utilizing the threshold value, and finally carrying out morphological filtering to obtain a region only containing the target object.
4. The machine vision positioning method for upper arm prosthesis control according to claim 1, characterized in that a Canny edge detection method is adopted to obtain target boundary information, and then a 2D pose is calculated;
reducing the influence of noise on the edge detection result through Gaussian smoothing filtering, wherein each element of the Gaussian filter template is generated as

Hij = 1/(2πσ²) · exp( -[ (i - k - 1)² + (j - k - 1)² ] / (2σ²) ),  1 ≤ i, j ≤ 2k+1

where Hij is the element in row i and column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and hence the filter weights; and 2k+1 is the size of the window template;

calculating the intensity gradient of the image: a Sobel operator returns the first derivative values in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and the direction θ of each pixel are determined as

G = √(Gx² + Gy²)

θ = arctan(Gy/Gx)

where Sx denotes the Sobel operator in the x direction, used to detect edges in the y direction, and Sy denotes the Sobel operator in the y direction, used to detect edges in the x direction (the edge direction is perpendicular to the gradient direction):

Sx = [ -1 0 1; -2 0 2; -1 0 1 ],  Sy = [ 1 2 1; 0 0 0; -1 -2 -1 ]

if a 3×3 pixel window of the image is A = [ a b c; d e f; g h i ] and the gradient of its centre pixel e is to be calculated, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:

Gx = Sx * A,  Gy = Sy * A

where a to i denote the values of the elements of the window A.
5. The machine vision localization method for upper arm prosthesis control of claim 1, wherein non-maxima suppression is applied to eliminate spurious responses due to edge detection;
after the non-maximum value is suppressed, the remaining pixels represent the actual edge in the image, two hysteresis thresholds Tmin and Tmax are set, and if the gray gradient of the image is higher than Tmax, the image is judged to be a true boundary; discarding if the gray scale gradient of the image is below Tmin; if the gray gradient of the image is between Tmin and Tmax, judging whether the point is connected with the determined real boundary point, if so, determining the point is a boundary, otherwise, discarding the point.
6. A machine vision positioning system for upper arm prosthesis control, comprising:
module M1: recognizing a target object and completing target positioning by adopting a visual algorithm matched with Adaboost cascade classification and a depth image through a head camera;
module M2: the illumination interference is reduced by a camera on the artificial limb through a color name conversion matrix trained by the CNN, the plane two-dimensional attitude of the target is obtained by using a Canny edge detection algorithm, and the artificial limb is further controlled to grab the target.
7. The machine vision positioning system for upper arm prosthesis control of claim 6, wherein said module M1 includes:
module M1.1: based on Haar characteristics, LBP characteristics and HOG characteristics, an Adaboost cascade classification method is adopted for classifier training;
module M1.2: and testing the classifier through experiments under a single background and a complex background, and selecting the classifier which accords with the preset condition and is used for detecting the characteristic type of the target sample through comparative analysis.
8. The machine vision positioning system for upper arm prosthesis control of claim 7, wherein said module M2 includes therein:
module M2.1: identifying the target through the trained classifier, and taking the identified target area as an interested area;
module M2.2: after the region of interest is obtained, preprocessing image information, and using a trained color name conversion matrix as a first processing step to realize illumination robustness;
module M2.3: carrying out regional information aggregation by using image segmentation based on a graph, ensuring that the information of a target object is complete and simultaneously segmenting the target object from other objects;
module M2.4: and obtaining a maximum connected domain threshold value by adopting the marked pixel points, carrying out binarization segmentation by utilizing the threshold value, and finally carrying out morphological filtering to obtain a region only containing the target object.
9. The machine vision positioning system for upper arm prosthesis control according to claim 6, wherein a Canny edge detection method is adopted to obtain target boundary information, and then a 2D pose is calculated;
reducing the influence of noise on the edge detection result through Gaussian smoothing filtering, wherein each element of the Gaussian filter template is generated as

Hij = 1/(2πσ²) · exp( -[ (i - k - 1)² + (j - k - 1)² ] / (2σ²) ),  1 ≤ i, j ≤ 2k+1

where Hij is the element in row i and column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and hence the filter weights; and 2k+1 is the size of the window template;

calculating the intensity gradient of the image: a Sobel operator returns the first derivative values in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and the direction θ of each pixel are determined as

G = √(Gx² + Gy²)

θ = arctan(Gy/Gx)

where Sx denotes the Sobel operator in the x direction, used to detect edges in the y direction, and Sy denotes the Sobel operator in the y direction, used to detect edges in the x direction (the edge direction is perpendicular to the gradient direction):

Sx = [ -1 0 1; -2 0 2; -1 0 1 ],  Sy = [ 1 2 1; 0 0 0; -1 -2 -1 ]

if a 3×3 pixel window of the image is A = [ a b c; d e f; g h i ] and the gradient of its centre pixel e is to be calculated, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:

Gx = Sx * A,  Gy = Sy * A

where a to i denote the values of the elements of the window A.
10. The machine vision positioning system for upper arm prosthesis control of claim 6, wherein non-maximum suppression is applied to eliminate spurious responses from edge detection;
after the non-maximum value is suppressed, the remaining pixels represent the actual edge in the image, two hysteresis thresholds Tmin and Tmax are set, and if the gray gradient of the image is higher than Tmax, the image is judged to be a true boundary; discarding if the gray scale gradient of the image is below Tmin; if the gray gradient of the image is between Tmin and Tmax, judging whether the point is connected with the determined real boundary point, if so, determining the point is a boundary, otherwise, discarding the point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111462738.6A CN114140607A (en) | 2021-12-02 | 2021-12-02 | Machine vision positioning method and system for upper arm prosthesis control |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111462738.6A CN114140607A (en) | 2021-12-02 | 2021-12-02 | Machine vision positioning method and system for upper arm prosthesis control |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114140607A true CN114140607A (en) | 2022-03-04 |
Family
ID=80387378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111462738.6A Pending CN114140607A (en) | 2021-12-02 | 2021-12-02 | Machine vision positioning method and system for upper arm prosthesis control |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114140607A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103271784A (en) * | 2013-06-06 | 2013-09-04 | 山东科技大学 | Man-machine interactive manipulator control system and method based on binocular vision |
CN104552285A (en) * | 2013-10-28 | 2015-04-29 | 精工爱普生株式会社 | Robot, robot control device, and robot system |
CN105128012A (en) * | 2015-08-10 | 2015-12-09 | 深圳百思拓威机器人技术有限公司 | Open type intelligent service robot system and multiple controlling methods thereof |
CN109079777A (en) * | 2018-08-01 | 2018-12-25 | 北京科技大学 | A kind of mechanical arm hand eye coordination operating system |
CN110852173A (en) * | 2019-10-15 | 2020-02-28 | 山东大学 | A visual positioning method and system for fuzzy welds |
CN112200821A (en) * | 2020-09-07 | 2021-01-08 | 天津津航技术物理研究所 | Detection and positioning method for assembly line multi-partition subpackage targets |
CN113524172A (en) * | 2021-05-27 | 2021-10-22 | 中国科学院深圳先进技术研究院 | Robot, article grabbing method thereof and computer-readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |