CN112497219A - Columnar workpiece classification positioning method based on target detection and machine vision - Google Patents
Columnar workpiece classification positioning method based on target detection and machine vision
- Publication number
- CN112497219A (application CN202011419779.2A)
- Authority
- CN
- China
- Prior art keywords
- workpiece
- target
- eye
- detection
- precision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a high-precision method for classifying and positioning columnar workpieces based on target detection and machine vision. The method comprises two parts: yolov3-based target detection, defect detection and coarse positioning; and machine-vision-based high-precision positioning. The yolov3 part covers data set preparation, network structure improvement, candidate box parameter adjustment, real-time recognition and positioning, and defect detection. Workpiece images are acquired by an eye-to-hand camera, an image enhancement algorithm is fused in, and the candidate box parameters are improved with a vector similarity measurement method. In the machine vision part, the coarse position output by the yolov3 algorithm guides an eye-in-hand camera to acquire an image; features are extracted from the image, abnormal features are removed with a maximum-value constraint, and finally the contour features of the workpiece are fitted to obtain the high-precision position of the target workpiece.
Description
Technical Field
The invention relates to industrial robots and machine vision applications, and in particular to a columnar workpiece classification and high-precision positioning method based on target detection and machine vision.
Background
With the development of intelligent manufacturing, industrial robots offer good versatility, high repeatable positioning accuracy and similar advantages, and most of the industrial automation field still relies on robot teaching. However, truly intelligent manufacturing is still a long way off, and traditional teaching cannot meet its requirements. Machine vision technology addresses the robot's position-control needs well, but recognition flexibility and precision remain difficult to reconcile. Deep-learning-based target detection meets the flexibility requirement of multi-target recognition but suffers from insufficient positioning precision, while traditional machine vision detection offers high recognition precision but handles only a single type of recognition feature.
The patent published as CN111238450A discloses a visual positioning method and device in which multiple frames of images are collected for a single target workpiece, and the visual positioning information of each frame must satisfy the pose transformation relation recorded when that frame was collected; as a result, it cannot recognize and position multiple target workpieces. The patent published as CN106272416A discloses a robot slender-shaft precision assembly system and method based on force sensing and vision; because it depends on several kinds of sensors (vision, position, force, etc.) to achieve precision assembly, it has certain limitations.
Target detection can be realized with deep learning, but its positioning precision is poor; traditional machine vision recognition has high positioning precision, but its detection targets are too narrow. Classifying and precisely positioning multi-target workpieces therefore remains an open problem in industrial robotics and machine vision applications.
Disclosure of Invention
The invention provides a columnar workpiece classification and high-precision positioning method based on target detection and machine vision. Deep learning performs target detection on the target workpieces to complete classification and coarse positioning; the coarse target position guides the manipulator above the workpiece, and machine vision then completes high-precision positioning. Classification recognition and high-precision positioning of multi-target workpieces are thereby achieved.
To this end, the invention provides a columnar workpiece classification and high-precision positioning method based on target detection and machine vision, comprising the following steps:
The multi-target recognition, coarse positioning and defect recognition process based on the yolov3 target detection algorithm is as follows:
and acquiring images of the multi-target workpiece by using an Eye-To-Hand camera of the experimental platform. The experimental platform comprises a mechanical arm, a visual control system, an Eye-To-Hand camera and an Eye-In-Hand camera. The Eye-To-Hand camera is fixed right above the test bed, and the camera has a higher working distance from the experimental platform so as To image different types of multi-target workpieces on a visual field surface. The Eye-To-Hand camera has lower coarse identification and positioning accuracy due To larger working distance.
S1: the Eye-To-Hand camera acquires images of multiple target workpieces on a test bed, inputs the acquired images into a yolov3 algorithm of an improved network structure, trains a yolov3 algorithm model of the improved network structure, and performs target detection by using the trained yolov3 algorithm multi-target detection model To obtain image coordinates of each category and coarse precision of the multiple target workpieces.
S2: and based on coordinate transformation, performing Hand-Eye calibration on the Eye-To-Hand camera and the tail end of the manipulator by a calibration plate calibration method, solving world coordinates of each target workpiece by combining the obtained image coordinates of the multi-target workpiece coarse precision with Hand-Eye calibration parameters, and returning the category of each target workpiece.
S3: the yolov3 algorithm model of the improved network structure trains multiple target workpiece types during training, and trains typical defects of each workpiece at the same time. When the trained yolov3 algorithm for improving the network structure is used for target detection, key defects of targets such as scratches, unfilled corners and the like are identified.
The high-precision positioning process of the target workpiece based on machine vision comprises the following steps:
S4: The coordinates of the coarsely positioned workpiece are obtained from the improved-network yolov3 model, transmitted to the vision control system over a communication protocol, and forwarded by the control system to the manipulator. The vision control system runs on an industrial personal computer; the Eye-In-Hand camera is mounted at the end of the manipulator and moves with it to a position above the target workpiece.
S5: the Eye-In-Hand camera moves to the position above a workpiece to acquire images of the workpiece, the workpiece is placed above a test bed, the system performs image processing and feature extraction on the acquired image to acquire key feature coordinates of the workpiece, and high-precision world coordinates of the workpiece are acquired and sent to a vision system In combination with Hand-Eye calibration parameters of the Eye-In-Hand camera.
S6: and the system processor guides the manipulator to clamp, carry or assemble according to the high-precision coordinates.
S7: and repeating the steps S4-S6 to perform high-precision positioning on the target workpieces of different types, so as to realize the high-precision positioning of the multi-target workpieces.
The workpieces are shaft parts, and the multi-target workpieces comprise four different types. The cameras and the vision system communicate over the GigE protocol to transmit images; the vision system and the manipulator communicate over TCP/IP to transmit position coordinates.
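The patent fixes only the transport (GigE for images, TCP/IP for coordinates), not the message format. A minimal sketch of the coordinate hand-off to the manipulator, with the controller address and the plain-text message layout as assumptions:

```python
# Sketch of the TCP/IP coordinate hand-off. The controller address and the
# plain-text message layout are assumptions; a real manipulator controller
# defines its own protocol.
import socket

ROBOT_HOST, ROBOT_PORT = "192.168.1.10", 5000   # assumed controller address

def send_coordinates(category: str, x: float, y: float, z: float) -> str:
    msg = f"{category},{x:.3f},{y:.3f},{z:.3f}\n"
    with socket.create_connection((ROBOT_HOST, ROBOT_PORT), timeout=2.0) as sock:
        sock.sendall(msg.encode("ascii"))
        return sock.recv(64).decode("ascii")    # controller acknowledgement

# e.g. send_coordinates("shaft_a", 412.503, 96.120, 35.000)
```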
Further, the step S1 is specifically:
s11: and acquiring images of the target To be detected by using an Eye-To-Hand camera on the test workbench, and marking and classifying different types of workpieces after acquisition To prepare a training data set. The workpiece marks are classified into five categories, including four different types of shaft parts and four different types of workpieces with defects.
S12: and (3) performing enhancement processing on the training data set, and inputting the enhanced data set into an improved yolov3 algorithm model for training to obtain a parameter model.
S13: and inputting the original multi-target workpiece image to be identified into a yolov3 model of the trained improved network, and outputting corresponding defect detection and classification identification coarse positioning results.
S14: and measuring the parameters of the candidate frame in the training set by adopting a vector similarity measurement method, performing statistical analysis on the parameters according to the standardized Euclidean distance, performing statistical analysis on the parameters of the candidate frame according to the standardized Euclidean distance, writing the parameter with the minimum error into a configuration file, and improving the yolv 3 target detection frame.
The yolov3 model with the improved network structure is modified from the darknet53 network structure to meet the detection requirements of multi-target workpieces. In the target detection and defect recognition method provided by the invention, the optimization and improvement of the yolov3 network structure model is specifically as follows:
the original network model of the Yolov3 target detection algorithm obtains detection results under three scales of 13 × 13 × 75, 26 × 26 × 75 and 52 × 52 × 75 by a series of downsampling processes, wherein 13, 26 and 52 represent sampling scales. 75 is divided into 3 × (4+1+20), 3 represents three scales of detection boxes, 4 represents position information of each detection box, which includes the width and height of the detection box and the center position coordinates of the detection box, 1 represents the probability of recognition, and 20 represents the kind of target that can be detected. The yolov3 algorithm of the improved network structure is that the modified network structure can meet the target detection of four different types of multi-target workpieces, and can identify different types of defective workpieces to obtain the outputs of three different scales of 13 multiplied by 39, 26 multiplied by 39 and 52 multiplied by 39.
Further, the step S2 is specifically:
S21: The Eye-To-Hand camera is calibrated with a halcon-based calibration-plate method;
S22: Hand-eye calibration yields the external parameters of the Eye-To-Hand camera, which are normalized into matrix form;
S23: The image coordinates obtained by the yolov3 target detection model are combined with the external parameter matrix and converted into the world coordinates of the robot.
Further, the step S5 is specifically:
S51: After photographing the single target workpiece, the Eye-In-Hand camera image undergoes preprocessing, noise reduction and similar operations; adaptive binarization of the preprocessed image then yields the edge feature information of the columnar workpiece.
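A possible rendering of the S51 chain with standard OpenCV calls; the smoothing kernel and the adaptive-threshold block size are assumed tuning values, not values given in the patent:

```python
# Possible S51 chain in OpenCV: denoise, adaptively binarize, extract edge
# contours. The 5x5 kernel and 31/5 threshold parameters are assumed values.
import cv2

def extract_edge_contours(gray):
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)                  # noise reduction
    binary = cv2.adaptiveThreshold(smooth, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)    # adaptive binarization
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)       # edge features
    return contours
```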
S52: according to the edge characteristic information of the circle, the circle contour of the columnar workpiece is fitted based on an abnormal value detection method, the maximum excircle contour of the columnar workpiece is obtained by adopting a selection _ max _ length _ contourer method constrained by a maximum value, and high precision of visual positioning is realized.
The select_max_length_contour method applies a maximum-value constraint to the concentric circle contours obtained after fitting the critical information of the workpiece and returns the contour feature information of the columnar workpiece. It initializes the longest length and its index, traverses the lengths of the acquired contour features, stores the length and index of the longest contour, and finally returns the index of the longest contour.
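The traversal described above is simple enough to state directly. A sketch in Python (the patent gives only the operator's behavior, not an implementation):

```python
# Sketch of the described traversal: keep the index of the longest contour,
# i.e. the maximum outer circle of the columnar workpiece.
def select_max_length_contour(contour_lengths):
    max_length, max_index = -1.0, -1        # initialize longest length and index
    for i, length in enumerate(contour_lengths):
        if length > max_length:             # store the longest profile seen so far
            max_length, max_index = length, i
    return max_index                        # index of the longest contour

# e.g. select_max_length_contour([12.0, 48.5, 30.1]) -> 1
```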
The method achieves micron-level classification and positioning accuracy for columnar workpieces and can recognize several different types of columnar workpieces, with a recognition accuracy above 90% and a recognition speed above 50 fps.
Compared with the prior art, the invention has the following advantages:
1. The method performs high-precision positioning of multi-target workpieces on the test bench and, in cooperation with the manipulator, fully automates the clamping, carrying and assembly of multi-target workpieces with no manual intervention, greatly improving production efficiency.
2. The Eye-To-Hand camera fixed above the test bench automatically performs target detection and coarse positioning of the multi-target workpieces with the improved-network yolov3 model, and simultaneously performs defect detection on defective workpieces.
3. The coordinate position returned by coarse positioning is transmitted to the manipulator, which drives the Eye-In-Hand camera above the target workpiece for high-precision positioning. This combination overcomes the complementary weaknesses of the two approaches: deep-learning-based target detection meets the flexibility requirement of multi-target recognition but lacks positioning precision, while traditional machine vision detection offers high recognition precision but handles only a single recognition feature.
Drawings
FIG. 1 is a schematic view of a camera layout according to the present invention.
Fig. 2 is a schematic flow chart of the high-precision columnar workpiece classification positioning method based on target detection and machine vision provided by the invention.
Fig. 3 is a schematic diagram of the improved yolov3 target detection model structure according to the present invention.
FIG. 4 is a flow chart of the select_max_length_contour algorithm employed in the present invention.
FIG. 5 shows the results of the columnar workpiece classification and high-precision positioning method based on target detection and machine vision.
FIG. 6 is a flow chart of the present invention.
Claims (9)
1. A columnar workpiece classification and high-precision positioning method based on target detection and machine vision, characterized by comprising the following steps:
acquiring images of the multi-target workpieces with an Eye-To-Hand camera of the experimental platform; the experimental platform comprises a manipulator, a vision control system, an Eye-To-Hand camera and an Eye-In-Hand camera; the Eye-To-Hand camera is fixed directly above the test bed, at a distance from the platform such that multi-target workpieces of different types are all imaged within the field of view;
S1: the Eye-To-Hand camera acquires images of the multi-target workpieces on the test bed; the acquired images are input to the yolov3 algorithm with an improved network structure to train the improved-network yolov3 model, and the trained multi-target detection model performs target detection to obtain the category and coarse image coordinates of each target workpiece;
S2: based on coordinate transformation, hand-eye calibration between the Eye-To-Hand camera and the manipulator end is performed with a calibration-plate method; the coarse image coordinates of the multi-target workpieces are combined with the hand-eye calibration parameters to solve the world coordinates of each target workpiece, and the category of each target workpiece is returned;
S3: during training, the improved-network yolov3 model is trained on the multiple workpiece types and, at the same time, on the typical defects of each workpiece; when the trained model performs target detection, the key scratch and unfilled-corner defects of the targets are identified;
S4: the coordinates of the coarsely positioned workpiece are obtained from the improved-network yolov3 model, transmitted to the vision control system over a communication protocol, and forwarded by the control system to the manipulator; the vision control system runs on an industrial personal computer; the Eye-In-Hand camera is mounted at the end of the manipulator and moves with it to a position above the target workpiece;
S5: the Eye-In-Hand camera moves above the workpiece, which rests on the test bed, and acquires its image; the system performs image processing and feature extraction on the acquired image to obtain the key feature coordinates of the workpiece, and combines them with the hand-eye calibration parameters of the Eye-In-Hand camera to obtain the high-precision world coordinates of the workpiece, which are sent to the vision system;
S6: the system processor guides the manipulator to clamp, carry or assemble the workpiece according to the high-precision coordinates;
S7: steps S4-S6 are repeated for target workpieces of different types, achieving high-precision positioning of the multi-target workpieces.
2. The columnar workpiece classification and high-precision positioning method based on target detection and machine vision of claim 1, wherein the workpieces are shaft parts; the multi-target workpieces comprise four different types of workpieces; the cameras and the vision system communicate over the GigE protocol to transmit images; and the vision system and the manipulator communicate over TCP/IP to transmit position coordinates.
3. The method as claimed in claim 1, wherein the step S1 specifically comprises:
S11: acquiring images of the targets to be detected with the Eye-To-Hand camera above the test bench, and labeling and classifying the different workpiece types after acquisition to build a training data set;
S12: enhancing the training data set, and inputting the enhanced data set into the improved yolov3 algorithm model for training to obtain the model parameters;
S13: inputting the original multi-target workpiece image to be recognized into the trained improved-network yolov3 model, and outputting the corresponding defect detection results and the classification and coarse positioning results;
S14: measuring the candidate box parameters over the training set with a vector similarity measurement method, statistically analyzing the parameters with the standardized Euclidean distance, writing the minimum-error parameters into the configuration file, and thereby improving the yolov3 detection boxes.
4. The method as claimed in claim 1, wherein the yolov3 model with the improved network structure is modified from the darknet53 network structure to meet the detection requirements of multi-target workpieces.
5. The columnar workpiece classification and high-precision positioning method based on target detection and machine vision of claim 1, wherein the original network model of the yolov3 target detection algorithm obtains detection results at three scales, 13 × 13 × 75, 26 × 26 × 75 and 52 × 52 × 75, through a series of downsampling steps, where 13, 26 and 52 are the sampling scales; the 75 channels decompose as 3 × (4 + 1 + 20), where 3 is the number of detection boxes per scale, 4 is the position information of each box (its width, height and center coordinates), 1 is the recognition probability, and 20 is the number of detectable target classes; the yolov3 algorithm with the improved network structure is modified so that it detects the four different types of multi-target workpieces and recognizes the different types of defective workpieces, giving outputs at the three scales 13 × 13 × 39, 26 × 26 × 39 and 52 × 52 × 39.
6. The method as claimed in claim 1, wherein the step S2 specifically comprises:
S21: calibrating the Eye-To-Hand camera with a halcon-based calibration-plate method;
S22: performing hand-eye calibration to obtain the external parameters of the Eye-To-Hand camera, and normalizing the parameters into matrix form;
S23: combining the image coordinates obtained by the yolov3 target detection model with the external parameter matrix, and converting the obtained image coordinates into the world coordinates of the robot.
7. The method as claimed in claim 1, wherein the step S5 specifically comprises:
S51: after photographing the single target workpiece, the Eye-In-Hand camera image undergoes preprocessing, noise reduction and similar operations; adaptive binarization of the preprocessed image yields the edge feature information of the columnar workpiece;
S52: from the circular edge feature information, the circle contours of the columnar workpiece are fitted with an outlier detection method, and the maximum-value-constrained select_max_length_contour method extracts the maximum outer circle contour of the columnar workpiece, achieving high-precision visual positioning.
8. The columnar workpiece classification and high-precision positioning method based on target detection and machine vision of claim 1, wherein the select_max_length_contour method applies a maximum-value constraint to the concentric circle contours obtained after fitting the critical information of the workpiece and returns the contour feature information of the columnar workpiece; the method initializes the longest length and its index, traverses the lengths of the acquired contour features, stores the length and index of the longest contour, and finally returns the index of the longest contour.
9. The columnar workpiece classification and high-precision positioning method based on target detection and machine vision of claim 1, wherein the method achieves micron-level classification and positioning accuracy for columnar workpieces and can recognize multiple different types of columnar workpieces.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011419779.2A CN112497219B (en) | 2020-12-06 | 2020-12-06 | Columnar workpiece classifying and positioning method based on target detection and machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011419779.2A CN112497219B (en) | 2020-12-06 | 2020-12-06 | Columnar workpiece classifying and positioning method based on target detection and machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112497219A true CN112497219A (en) | 2021-03-16 |
CN112497219B CN112497219B (en) | 2023-09-12 |
Family
ID=74971073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011419779.2A Active CN112497219B (en) | 2020-12-06 | 2020-12-06 | Columnar workpiece classifying and positioning method based on target detection and machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112497219B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110103679A1 (en) * | 2009-10-29 | 2011-05-05 | Mitutoyo Corporation | Autofocus video tool and method for precise dimensional inspection |
CN102229146A (en) * | 2011-04-27 | 2011-11-02 | 北京工业大学 | Remote control humanoid robot system based on exoskeleton human posture information acquisition technology |
CN105690386A (en) * | 2016-03-23 | 2016-06-22 | 北京轩宇智能科技有限公司 | Teleoperation system and teleoperation method for novel mechanical arm |
CN108555908A (en) * | 2018-04-12 | 2018-09-21 | 同济大学 | A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras |
CN109448054A (en) * | 2018-09-17 | 2019-03-08 | 深圳大学 | Target step-by-step positioning method, application, device and system based on visual fusion |
CN109483554A (en) * | 2019-01-22 | 2019-03-19 | 清华大学 | Robotic Dynamic grasping means and system based on global and local vision semanteme |
Non-Patent Citations (1)
Title |
---|
陈春谋 (Chen Chunmou): "Workpiece recognition and classification system based on image resolution processing and convolutional neural networks", 系统仿真技术 (System Simulation Technology), vol. 15, no. 2, pp. 99-106 *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113134683A (en) * | 2021-05-13 | 2021-07-20 | 兰州理工大学 | Laser marking method and device based on machine learning |
CN113538417A (en) * | 2021-08-24 | 2021-10-22 | 安徽顺鼎阿泰克科技有限公司 | Transparent container defect detection method and device based on multi-angle and target detection |
CN113657551A (en) * | 2021-09-01 | 2021-11-16 | 陕西工业职业技术学院 | Robot grabbing posture task planning method for sorting and stacking multiple targets |
CN113657551B (en) * | 2021-09-01 | 2023-10-20 | 陕西工业职业技术学院 | Robot grabbing gesture task planning method for sorting and stacking multiple targets |
CN113814987A (en) * | 2021-11-24 | 2021-12-21 | 季华实验室 | Multi-camera robot hand-eye calibration method and device, electronic equipment and storage medium |
CN113814987B (en) * | 2021-11-24 | 2022-06-03 | 季华实验室 | Multi-camera robot hand-eye calibration method, device, electronic device and storage medium |
CN115159149A (en) * | 2022-07-28 | 2022-10-11 | 深圳市罗宾汉智能装备有限公司 | Material taking and unloading method and device based on visual positioning |
CN115159149B (en) * | 2022-07-28 | 2024-05-24 | 深圳市罗宾汉智能装备有限公司 | Visual positioning-based material taking and unloading method and device |
Also Published As
Publication number | Publication date |
---|---|
CN112497219B (en) | 2023-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112497219A (en) | Columnar workpiece classification positioning method based on target detection and machine vision | |
CN108765378B (en) | Machine vision detection method for workpiece contour flash bulge under guidance of G code | |
CN111537517A (en) | An unmanned intelligent stamping defect identification method | |
CN109840900B (en) | A fault online detection system and detection method applied to intelligent manufacturing workshops | |
CN110146017B (en) | Industrial robot repeated positioning precision measuring method | |
WO2015120734A1 (en) | Special testing device and method for correcting welding track based on machine vision | |
CN114758236B (en) | Non-specific shape object identification, positioning and manipulator grabbing system and method | |
CN114355953B (en) | High-precision control method and system of multi-axis servo system based on machine vision | |
CN112561886A (en) | Automatic workpiece sorting method and system based on machine vision | |
CN108460552B (en) | A steel storage control system based on machine vision and PLC | |
CN112509063A (en) | Mechanical arm grabbing system and method based on edge feature matching | |
CN118314138B (en) | Laser processing method and system based on machine vision | |
CN113146172A (en) | Multi-vision-based detection and assembly system and method | |
CN118386258B (en) | Packaging mechanical arm control system | |
CN113822810A (en) | Method for positioning workpiece in three-dimensional space based on machine vision | |
CN111784688A (en) | Flower automatic grading method based on deep learning | |
CN116465335A (en) | Automatic thickness measurement method and system based on point cloud matching | |
CN114913346B (en) | An intelligent sorting system and method based on product color and shape recognition | |
CN118788992A (en) | A CNC lathe machining auxiliary system based on machine vision | |
CN113936291A (en) | Aluminum template quality inspection and recovery method based on machine vision | |
CN114851206A (en) | Method for grabbing stove based on visual guidance mechanical arm | |
CN110021027B (en) | Edge cutting point calculation method based on binocular vision | |
CN113589776A (en) | Special steel bar quality monitoring and diagnosing method based on big data technology | |
CN117260003B (en) | Automatic arranging, steel stamping and coding method and system for automobile seat framework | |
CN112275847A (en) | Bending system and method for processing by using robot and machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||