
CN109101966A - Workpiece recognition, positioning and pose estimation system and method based on deep learning - Google Patents

Workpiece recognition, positioning and pose estimation system and method based on deep learning

Info

Publication number
CN109101966A
CN109101966A
Authority
CN
China
Prior art keywords
workpiece
positioning
deep learning
training
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810591858.8A
Other languages
Chinese (zh)
Other versions
CN109101966B (en)
Inventor
卜伟
张波
徐显兵
彭成斌
肖江剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Institute of Material Technology and Engineering of CAS
Original Assignee
Ningbo Institute of Material Technology and Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Institute of Material Technology and Engineering of CAS filed Critical Ningbo Institute of Material Technology and Engineering of CAS
Priority to CN201810591858.8A priority Critical patent/CN109101966B/en
Publication of CN109101966A publication Critical patent/CN109101966A/en
Application granted granted Critical
Publication of CN109101966B publication Critical patent/CN109101966B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/242Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a workpiece recognition, positioning and pose estimation system and method based on deep learning. The system comprises, connected in sequence, a network construction module, a data acquisition module, a model training module, and a workpiece recognition, positioning and pose estimation module. With the deep-learning-based workpiece recognition, positioning and pose estimation system provided by the invention, the classification of different workpiece types, the determination of workpiece positions, and the spatial pose estimation of individual workpieces can be detected simultaneously, which substantially improves production-line efficiency.

Description

Workpiece recognition, positioning and pose estimation system and method based on deep learning
Technical field
The present invention relates to a workpiece recognition, positioning and pose estimation system and method, and in particular to a workpiece recognition, positioning and pose estimation system and method based on deep learning, belonging to the field of target recognition and detection.
Background technique
With the development of science and technology, more and more industrial robots are being deployed in production to replace humans in repetitive production activities. An industrial robot is a multi-joint manipulator or multi-degree-of-freedom machine designed for industrial use; it can execute work automatically and realizes various functions through its own power and control capability. It can act on human command or run a pre-programmed sequence, and modern industrial robots can also act according to strategies formulated with artificial intelligence techniques.
To improve the degree of automation of industrial robots, a robot must be able to intelligently recognize, locate and estimate the pose of the workpieces in production, so that it can adaptively adjust its motion trajectory and grasping angle according to the different poses of different workpieces when sorting them.
In recent years deep learning algorithms have made great leaps in every field of computer vision, and many excellent deep learning algorithms have emerged, particularly in object detection, recognition and classification, such as GoogLeNet, VGG, Faster R-CNN and YOLO. Applying powerful deep learning algorithms to workpiece detection, recognition and positioning can therefore effectively improve algorithm reliability and increase detection and positioning accuracy and dimensionality, thereby raising the degree of automation of industrial robots and greatly enhancing actual production performance. However, prior-art workpiece detection still has shortcomings; for example, it cannot simultaneously deliver satisfactory detection results for classifying the different workpiece types on the same production line, determining workpiece positions, and estimating the spatial pose of a single workpiece.
Summary of the invention
The primary object of the present invention is to provide a workpiece recognition, positioning and pose estimation system and method based on deep learning, so as to overcome the deficiencies of the prior art.
To achieve the aforementioned object, an embodiment of the invention provides a deep-learning-based workpiece recognition, positioning and pose estimation system, which may include:
a network construction module, at least used to design a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, the design including adding an extra output branch after the fully connected layer, the output branch being used to obtain angle information;
a data acquisition module, at least used to construct a training set, the construction process including acquiring pictures of workpieces in different poses as training samples and annotating the training samples with angle information, class information and position information;
a model training module, at least used to train the workpiece recognition, positioning and pose estimation network on the training set constructed by the data acquisition module, wherein when the loss value reaches a preset threshold, training ends and the workpiece recognition, positioning and pose estimation model is obtained; and
a workpiece recognition, positioning and pose estimation module, at least used to perform recognition, positioning and pose estimation on pictures of real workpieces according to the workpiece recognition, positioning and pose estimation model.
Preferably, the model training module further includes a loss-value calculation submodule for computing the loss value of the workpiece recognition, positioning and pose estimation network currently being trained, the loss value being computed with a loss function that simultaneously fuses workpiece classification error, workpiece position coordinate error and workpiece pose error.
An embodiment of the invention also provides a deep-learning-based workpiece recognition, positioning and pose estimation method, which may include:
S1. designing a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, including adding an extra output branch after the fully connected layer for obtaining angle information;
S2. acquiring pictures of workpieces in different poses as training samples to construct a training set, including annotating the training samples with angle information, class information and position information;
S3. training the workpiece recognition, positioning and pose estimation network with the training set constructed in step S2, wherein when the loss value reaches a preset threshold, training ends and the workpiece recognition, positioning and pose estimation model is obtained;
S4. invoking the workpiece recognition, positioning and pose estimation model to perform recognition, positioning and pose estimation on pictures of real workpieces.
Preferably, the angle information annotation includes: choosing a particular workpiece pose as the reference, set to (0°, 0°, 0°) about the x, y and z axes; setting an angle interval for rotation about the x, y and z axes respectively; and labelling each training sample picture with the midpoint of the interval in which its rotation angle about the x, y and z axes falls.
Preferably, the class information annotation and position information annotation include: labelling the class information with a number to distinguish different classes; and obtaining the bounding box of the workpiece from its minimum enclosing rectangle.
Preferably, the loss value is computed with a loss function that simultaneously fuses workpiece classification error, workpiece position coordinate error and workpiece pose error.
Preferably, training the workpiece recognition, positioning and pose estimation network in step S3 specifically includes:
S31. training the YOLO deep learning network with its unmodified structure, optimizing the variables with a gradient descent optimizer, and training repeatedly until the loss value reaches the preset threshold, thereby obtaining updated weights;
S32. loading the weights trained in step S31 into the modified workpiece recognition, positioning and pose estimation network, optimizing the variables related to angle prediction with a gradient descent optimizer, and training repeatedly until the loss value reaches the preset threshold.
Preferably, the rotation angles of the training sample pictures about the x axis and the y axis lie in the range [-15°, 14°], and the rotation angle about the z axis lies in the range [0°, 90°].
Preferably, when a training sample picture is rotated about the x or y axis, the angle interval is set to 5°; when it is rotated about the z axis, the angle interval is set to 10°.
Preferably, the loss function comprises an angle error loss function, a coordinate error loss function, an IoU error loss function and a classification error loss function;
the angle error loss function is

$$L_a = \sum_{i=0}^{S^2} \mathbb{1}_i^{obj}\left[(A_x - \hat{A}_x)^2 + (A_y - \hat{A}_y)^2 + (A_z - \hat{A}_z)^2\right]$$

where $A_x$, $A_y$ and $A_z$ are the rotation angles about the x, y and z axes predicted by the network, $\hat{A}_x$, $\hat{A}_y$ and $\hat{A}_z$ are the corresponding annotated values, and $\mathbb{1}_i^{obj}$ indicates that an object centre falls in grid cell $i$;
the coordinate error loss function is

$$L_c = \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2\right] + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(\sqrt{w_i}-\sqrt{\hat{w}_i})^2 + (\sqrt{h_i}-\sqrt{\hat{h}_i})^2\right]$$

the IoU error loss function is

$$L_{IoU} = \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(C_i-\hat{C}_i)^2 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(C_i-\hat{C}_i)^2$$

the classification error loss function is

$$L_{cls} = \sum_{i=0}^{S^2}\mathbb{1}_i^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2$$

where $x$, $y$, $w$, $h$, $C$ and $p$ are network predictions, $\hat{x}$, $\hat{y}$, $\hat{w}$, $\hat{h}$, $\hat{C}$ and $\hat{p}$ are the annotated values, $\mathbb{1}_i^{obj}$ indicates that an object centre falls in grid cell $i$, $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ indicate whether an object centre falls into the $j$-th prediction box of the $i$-th grid cell, and $\lambda_{coord}$ and $\lambda_{noobj}$ are the weighting factors of the original YOLO loss;
the loss function is $L = L_a + L_c + L_{IoU} + L_{cls}$.
Further, the deep-learning-based workpiece recognition, positioning and pose estimation method is implemented on the basis of the deep-learning-based workpiece recognition, positioning and pose estimation system described above.
Compared with the prior art, the invention has the advantage that, with the technical solution provided by the invention, the classification of different workpiece types, the determination of workpiece positions and the spatial pose estimation of individual workpieces can be detected simultaneously, which substantially improves production-line efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of a deep-learning-based workpiece recognition, positioning and pose estimation method in an exemplary embodiment of the invention;
Fig. 2 is a schematic diagram of a workpiece recognition, positioning and pose estimation network obtained by modifying the YOLO deep learning network in an exemplary embodiment of the invention.
Detailed description of the embodiments
In view of the deficiencies in the prior art, the inventors, through long-term research and extensive practice, have arrived at the technical solution of the invention. The technical solution, its implementation process and its principles are further explained below.
An embodiment of the invention provides a deep-learning-based workpiece recognition, positioning and pose estimation system, which includes:
a network construction module, at least used to design a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, the design including adding an extra output branch after the fully connected layer, the output branch being used to obtain angle information;
a data acquisition module, at least used to construct a training set, the construction process including acquiring pictures of workpieces in different poses as training samples and annotating the training samples with angle information, class information and position information;
a model training module, at least used to train the workpiece recognition, positioning and pose estimation network on the training set constructed by the data acquisition module, wherein when the loss value reaches a preset threshold, training ends and the workpiece recognition, positioning and pose estimation model is obtained; and
a workpiece recognition, positioning and pose estimation module, at least used to perform recognition, positioning and pose estimation on pictures of real workpieces according to the workpiece recognition, positioning and pose estimation model.
Further, the network construction module, the data acquisition module, the model training module and the workpiece recognition, positioning and pose estimation module are connected in sequence to form the deep-learning-based workpiece recognition, positioning and pose estimation system.
Further, the model training module further includes a loss-value calculation submodule for computing the loss value of the workpiece recognition, positioning and pose estimation network currently being trained, the loss value being computed with a loss function that simultaneously fuses workpiece classification error, workpiece position coordinate error and workpiece pose error.
Referring to Fig. 1, an embodiment of the invention also provides a deep-learning-based workpiece recognition, positioning and pose estimation method, which may comprise the following steps.
Step 101: designing a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network.
The YOLO deep learning network is modified so that it additionally outputs angle information.
Step 102: acquiring and annotating workpiece training sample pictures for different poses.
Pictures of workpieces in different poses are acquired and annotated with angle information, class information and position information.
Step 103: training the workpiece recognition, positioning and pose estimation model with the training set constructed in step 102.
The training process uses a loss function that simultaneously fuses workpiece classification error, workpiece position coordinate error and workpiece pose error.
Step 104: invoking the workpiece recognition, positioning and pose estimation model to perform recognition, positioning and pose estimation on pictures of real workpieces.
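By way of illustration only, the following sketch shows how the grid outputs produced in step 104 could be decoded into a class, a bounding box and a pose for a single workpiece. PyTorch and the standard YOLO v1 output layout (S = 7 grid cells, B = 2 boxes per cell, 3 workpiece classes, each box stored as [x, y, w, h, confidence]) are assumptions; the patent itself does not prescribe an implementation.

```python
import torch

# Decoding sketch for step 104 under the assumed layout: detection grid
# 7x7x(2*5+3), laid out as [x, y, w, h, conf] * B followed by class scores;
# angle grid 7x7x3. det and ang would come from the trained network;
# random tensors stand in here so the snippet runs on its own.
S, B, C = 7, 2, 3
det = torch.rand(S, S, B * 5 + C)
ang = torch.rand(S, S, 3) * 90.0                          # angle_x, angle_y, angle_z per cell

conf = det[..., [4, 9]]                                   # confidences of the two boxes
cell = torch.argmax(conf.max(dim=-1).values)              # grid cell holding the best box
i, j = divmod(int(cell), S)
box = det[i, j, 0:5] if conf[i, j, 0] >= conf[i, j, 1] else det[i, j, 5:10]
workpiece_class = int(torch.argmax(det[i, j, B * 5:]))    # recognition
x, y, w, h = box[:4].tolist()                             # localisation (cell-relative, as in YOLO)
angle_x, angle_y, angle_z = ang[i, j].tolist()            # pose estimation
```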
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be appreciated that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
In some more specific embodiments, a workpiece recognition, positioning and pose estimation method may include the following steps.
1. Designing the workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network
Referring to Fig. 2, Fig. 2 shows the workpiece recognition, positioning and pose estimation network of an exemplary embodiment of the invention, obtained by modifying the YOLO deep learning network. The original YOLO network yields the class and position information of a workpiece, whereas in this embodiment the pose of the workpiece, i.e. its angle information, must be obtained in addition to its class and position, so the original network has to be modified to also output angle values; the modified structure is shown in Fig. 2. As can be seen from Fig. 2, the improved YOLO network essentially retains the original structure for computing class and position information; the modification consists of adding an extra output branch after the fully connected layer, i.e. attaching a further fully connected layer to the fully connected layer that outputs a 4096-dimensional vector. This added layer is used to obtain the angle information, and its output size is 7×7×3, where 3 is the number of output angles, namely angle_x, angle_y and angle_z.
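As an illustration (the patent does not name an implementation framework), the following PyTorch sketch shows one possible realisation of the modified head; the class name, the default parameter values and the assumption that the detection branch keeps the YOLO v1 output layout are all illustrative.

```python
import torch
import torch.nn as nn

class YoloPoseHead(nn.Module):
    """One possible realisation of the modified head of Fig. 2: the original
    YOLO v1 detection branch (S x S x (B*5 + C)) is kept, and a second fully
    connected layer is attached to the same 4096-d feature vector to regress
    three rotation angles per grid cell (S x S x 3: angle_x, angle_y, angle_z)."""

    def __init__(self, s: int = 7, boxes: int = 2, num_classes: int = 3):
        super().__init__()
        self.s, self.boxes, self.num_classes = s, boxes, num_classes
        self.det_fc = nn.Linear(4096, s * s * (boxes * 5 + num_classes))   # original output
        self.angle_fc = nn.Linear(4096, s * s * 3)                         # added output branch

    def forward(self, feat: torch.Tensor):
        # feat: (N, 4096) output of the existing fully connected layer
        n = feat.size(0)
        det = self.det_fc(feat).view(n, self.s, self.s, self.boxes * 5 + self.num_classes)
        ang = self.angle_fc(feat).view(n, self.s, self.s, 3)
        return det, ang

# Usage example: head = YoloPoseHead(); det, ang = head(torch.rand(8, 4096))
```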
2. Acquiring and annotating training images for workpieces in different poses
Three kinds of workpiece are used in this test; their shapes and sizes all differ and none of them is highly symmetric. For workpieces produced on a production line, several kinds are not usually mixed together, which would increase the sorting difficulty, so in line with the practical application scenario this embodiment only considers one kind of workpiece at a time. Taking the first kind of workpiece as an example, in order to obtain its pose information, pictures of each of its poses must be acquired when building the training set; a chosen pose is taken as the reference and set to (0°, 0°, 0°) about the x, y and z axes. As a preferred scheme, the images are obtained by rotating a picture of the three-dimensional CAD model of the workpiece about the x, y and z axes using OpenGL.
In an exemplary embodiment of the invention, we take a subset of poses for training and testing, from which the test cases for the other poses can be extrapolated. The rotation angles about the x axis and the y axis lie within [-15°, 14°], and the rotation angle about the z axis lies within [0°, 90°]. When a picture is rotated about the x or y axis, the angle interval is set to 5°, and the picture is labelled with the midpoint of the interval in which its rotation angle about the x or y axis falls. Rotation about the z axis can be regarded as an in-plane rotation that changes the workpiece pose only slightly, so its angle interval is set to 10° and the picture is labelled with the midpoint of the interval in which its rotation angle about the z axis falls. For example, a workpiece rotated between 6° and 10° about the x and y axes and between 11° and 20° about the z axis is uniformly labelled (8, 8, 15); all workpieces within this pose range are regarded as having the same pose, with rotation angle (8, 8, 15).
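For illustration, the binning can be reproduced as follows; the exact bin boundaries and the handling of negative angles are not spelled out in the text, so the boundaries below are an assumption chosen to reproduce the worked example (8, 8, 15).

```python
import math

def angle_label(angle_deg: float, interval: int) -> int:
    """Midpoint label of the interval the rotation angle falls in. Bins are
    assumed to be 1..interval, interval+1..2*interval, ... so that a rotation
    of 6-10 deg with a 5 deg interval maps to 8 and 11-20 deg with a 10 deg
    interval maps to 15, matching the (8, 8, 15) example above."""
    lo = math.floor((angle_deg - 1) / interval) * interval + 1   # lower edge of the bin
    hi = lo + interval - 1                                       # upper edge of the bin
    return (lo + hi) // 2

# Label for a sample rotated 7 deg about x, 9 deg about y and 13 deg about z:
label = (angle_label(7, 5), angle_label(9, 5), angle_label(13, 10))   # -> (8, 8, 15)
```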
In an exemplary embodiment of the invention, 250 pictures are acquired for each pose for training. Once the training-set pictures have been collected, they must be annotated: the class information of each training picture is recorded and the target to be trained is outlined. The class information is labelled with a number, e.g. 1 for the first class; the bounding box of the workpiece can be obtained from its minimum enclosing rectangle, and the four values Xmin, Xmax, Ymin and Ymax are written to the annotation file in a common format.
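As a sketch of this annotation step (OpenCV is assumed; the patent does not name a library, and the fixed binary threshold is an assumption), the Xmin, Xmax, Ymin and Ymax values could be derived from the workpiece silhouette as follows.

```python
import cv2          # OpenCV >= 4 assumed (findContours returns two values)
import numpy as np

def workpiece_bbox(image_path: str, thresh: int = 127):
    """Axis-aligned minimum enclosing rectangle of the workpiece silhouette,
    returned as (Xmin, Xmax, Ymin, Ymax) for the annotation file. Any
    segmentation that separates the workpiece from the background would
    serve the same purpose as the simple threshold used here."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = np.vstack(contours)              # all contour points of the workpiece
    x, y, w, h = cv2.boundingRect(points)     # minimum enclosing (axis-aligned) rectangle
    return x, x + w, y, y + h
```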
3. Designing a loss function that simultaneously fuses workpiece classification error, workpiece position coordinate error and workpiece pose error
Because an angle regression branch has been introduced, an angle error loss function must be added to the original loss function. Its formula is

$$L_a = \sum_{i=0}^{S^2} \mathbb{1}_i^{obj}\left[(A_x - \hat{A}_x)^2 + (A_y - \hat{A}_y)^2 + (A_z - \hat{A}_z)^2\right]$$

where $A_x$, $A_y$ and $A_z$ are the rotation angles about the x, y and z axes predicted by the network, $\hat{A}_x$, $\hat{A}_y$ and $\hat{A}_z$ are the corresponding annotated values, and $\mathbb{1}_i^{obj}$ indicates that an object centre falls in grid cell $i$.
Besides the angle loss, the loss function also comprises a coordinate error loss function, an IoU error loss function and a classification error loss function, whose formulas are as follows.
Coordinate error loss function:

$$L_c = \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2\right] + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(\sqrt{w_i}-\sqrt{\hat{w}_i})^2 + (\sqrt{h_i}-\sqrt{\hat{h}_i})^2\right]$$

IoU error loss function:

$$L_{IoU} = \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(C_i-\hat{C}_i)^2 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(C_i-\hat{C}_i)^2$$

Classification error loss function:

$$L_{cls} = \sum_{i=0}^{S^2}\mathbb{1}_i^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2$$

where $x$, $y$, $w$, $h$, $C$ and $p$ are network predictions, $\hat{x}$, $\hat{y}$, $\hat{w}$, $\hat{h}$, $\hat{C}$ and $\hat{p}$ are the annotated values, $\mathbb{1}_i^{obj}$ indicates that an object centre falls in grid cell $i$, $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ indicate whether an object centre falls into the $j$-th prediction box of the $i$-th grid cell, and $\lambda_{coord}$ and $\lambda_{noobj}$ are the weighting factors of the original YOLO loss.
For the whole workpiece recognition, positioning and pose estimation network, the total loss is

$$L = L_a + L_c + L_{IoU} + L_{cls}$$
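A minimal PyTorch sketch of the added angle term is given below; it assumes that the coordinate, IoU and classification terms are implemented exactly as in the original YOLO loss, and that the object mask follows the formulas above.

```python
import torch

def angle_loss(pred_ang: torch.Tensor, gt_ang: torch.Tensor, obj_mask: torch.Tensor) -> torch.Tensor:
    """Angle term L_a of the total loss L = L_a + L_c + L_IoU + L_cls.
    pred_ang, gt_ang: (N, S, S, 3) predicted / annotated (A_x, A_y, A_z);
    obj_mask: (N, S, S) with 1 where a workpiece centre falls in grid cell i."""
    sq_err = ((pred_ang - gt_ang) ** 2).sum(dim=-1)   # (A - A_hat)^2 summed over the 3 axes
    return (obj_mask * sq_err).sum()

# total = angle_loss(...) + coord_loss(...) + iou_loss(...) + cls_loss(...),
# where the last three terms are the standard YOLO v1 losses kept from the
# original network.
```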
4. Training the workpiece recognition, positioning and pose estimation model with the acquired training images
Because an angle regression layer has been added to the network, training and optimizing all of its variables at once would make the loss function hard to converge, so a two-step training scheme can be adopted. First, the YOLO network with its unmodified structure is trained: the initial learning rate is 0.01, the batch size is 30 and the number of epochs is 11; the variables are optimized with a gradient descent optimizer, and through repeated training an accurate test result, without angle measurement, is finally obtained.
After the first-step training and testing is complete, the trained weights are loaded into the modified workpiece recognition, positioning and pose estimation network for retraining. A gradient descent optimizer is still used, but only the newly added variables related to angle prediction are optimized. The batch size remains 30, the learning rate starts at 0.01 and is gradually reduced to 0.0001, the number of epochs is again 11, and training is repeated.
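By way of illustration only, the two-stage schedule could look as follows in PyTorch; `model` (the modified network returning detection and angle grids), `train_loader`, `detection_loss` and the exact learning-rate decay curve are placeholders or assumptions, and `angle_loss` is the term sketched above.

```python
import torch

# Stage 1: train the detection part only (the angle branch is left untouched,
# which amounts to training the unmodified YOLO and loading its weights later).
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for epoch in range(11):                                    # 11 epochs, batch size 30
    for images, targets in train_loader:
        det, _ = model(images)
        loss = detection_loss(det, targets)                # L_c + L_IoU + L_cls
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: keep the stage-1 weights, optimise only the added angle branch.
for p in model.parameters():
    p.requires_grad = False
for p in model.angle_fc.parameters():                      # the added output layer
    p.requires_grad = True
opt = torch.optim.SGD(model.angle_fc.parameters(), lr=0.01)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.66)  # ~0.01 -> 0.0001 over 11 epochs
for epoch in range(11):
    for images, targets in train_loader:
        _, ang = model(images)
        loss = angle_loss(ang, targets["angles"], targets["obj_mask"])
        opt.zero_grad(); loss.backward(); opt.step()
    sched.step()
```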
5. Invoking the model to recognize, locate and estimate the pose of workpieces
The test set consists of real workpiece photographs taken with a camera. They differ from the training pictures in size, and factors such as the lighting on the workpiece surface and rust also set them apart from the training pictures, but the angular range is still -15° to 14° of rotation about the x and y axes and 0° to 90° about the z axis; 1600 pictures are used per kind of workpiece, 4800 in total. The training and test results for each kind of workpiece are compared in Table 1. From the statistics in Table 1 it can be seen that, for both training and testing, the classification results are all excellent, and the bounding-box errors in the x and y directions are very low, with the overall test-set error around 1 mm. On the training set, the loss value could not be reduced further once the rotation-angle errors about the x, y and z axes averaged 4.038°, 4.334° and 8.464° respectively. The reason for this result is that the rotation angles about the x and y axes in the training set are labelled with the midpoints of 5° intervals and the angle about the z axis with the midpoints of 10° intervals, so the annotation itself carries errors of 5°, 5° and 10° respectively. The test-set errors are further increased by workpiece surface rust, lighting and workpiece size, which causes a difference from the training set, but the errors remain within an acceptable normal range and have little influence on the test results.
Table 1: Statistics of the training and test results for the three kinds of workpiece samples
With the deep-learning-based workpiece recognition, positioning and pose estimation system provided by the invention, detection results for classifying the different workpiece types on the same production line, determining workpiece positions and estimating the spatial pose of individual workpieces can be provided simultaneously. This enables an industrial robot to adaptively adjust its motion trajectory and grasping angle according to the different poses of different workpieces and to carry out automatic operations such as sorting, greatly improving production-line efficiency.
It should be appreciated that the technical concepts and features of the above embodiments are only intended to illustrate the invention so that those skilled in the art can understand its content and implement it accordingly; they are not intended to limit the scope of protection of the invention. Any equivalent change or modification made according to the spirit and essence of the present invention shall be covered by its scope of protection.

Claims (10)

1. A deep-learning-based workpiece recognition, positioning and pose estimation system, characterized by comprising:
a network construction module, at least used to design a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, the design including adding an extra output branch after the fully connected layer, the output branch being used to obtain angle information;
a data acquisition module, at least used to construct a training set, the construction process including acquiring pictures of workpieces in different poses as training samples and annotating the training samples with angle information, class information and position information;
a model training module, at least used to train the workpiece recognition, positioning and pose estimation network on the training set constructed by the data acquisition module, wherein when the loss value reaches a preset threshold, training ends and the workpiece recognition, positioning and pose estimation model is obtained; and
a workpiece recognition, positioning and pose estimation module, at least used to perform recognition, positioning and pose estimation on pictures of real workpieces according to the workpiece recognition, positioning and pose estimation model.
2. The deep-learning-based workpiece recognition, positioning and pose estimation system according to claim 1, characterized in that the model training module further comprises a loss-value calculation submodule, at least used to compute the loss value of the workpiece recognition, positioning and pose estimation network currently being trained, the loss value being computed with a loss function that simultaneously fuses workpiece classification error, workpiece position coordinate error and workpiece pose error.
3. A deep-learning-based workpiece recognition, positioning and pose estimation method, characterized by comprising:
S1. designing a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, including adding an extra output branch after the fully connected layer for obtaining angle information;
S2. acquiring pictures of workpieces in different poses as training samples to construct a training set, including annotating the training samples with angle information, class information and position information;
S3. training the workpiece recognition, positioning and pose estimation network with the training set constructed in step S2, wherein when the loss value reaches a preset threshold, training ends and the workpiece recognition, positioning and pose estimation model is obtained; and
S4. invoking the workpiece recognition, positioning and pose estimation model to perform recognition, positioning and pose estimation on pictures of real workpieces.
4. The deep-learning-based workpiece recognition, positioning and pose estimation method according to claim 3, characterized in that the angle information annotation comprises: choosing a particular workpiece pose as the reference, set to (0°, 0°, 0°) about the x, y and z axes; setting an angle interval for rotation about the x, y and z axes respectively; and labelling each training sample picture with the midpoint of the interval in which its rotation angle about the x, y and z axes falls.
5. The deep-learning-based workpiece recognition, positioning and pose estimation method according to claim 3, characterized in that the class information annotation and position information annotation comprise: labelling the class information with a number to distinguish different classes, and obtaining the bounding box of the workpiece from its minimum enclosing rectangle; and/or the loss value is computed with a loss function that simultaneously fuses workpiece classification error, workpiece position coordinate error and workpiece pose error.
6. The method according to claim 3, characterized in that training the workpiece recognition, positioning and pose estimation network in step S3 specifically comprises:
S31. training the YOLO deep learning network with its unmodified structure, optimizing the variables with a gradient descent optimizer, and training repeatedly until the loss value reaches the preset threshold, thereby obtaining updated weights;
S32. loading the weights trained in step S31 into the modified workpiece recognition, positioning and pose estimation network, optimizing the variables related to angle prediction with a gradient descent optimizer, and training repeatedly until the loss value reaches the preset threshold.
7. The deep-learning-based workpiece recognition, positioning and pose estimation method according to claim 4, characterized in that the rotation angles of the training sample pictures about the x axis and the y axis lie in the range [-15°, 14°], and the rotation angle about the z axis lies in the range [0°, 90°].
8. The deep-learning-based workpiece recognition, positioning and pose estimation method according to claim 4, characterized in that when a training sample picture is rotated about the x or y axis, the angle interval is set to 5°, and when it is rotated about the z axis, the angle interval is set to 10°.
9. The deep-learning-based workpiece recognition, positioning and pose estimation method according to claim 5, characterized in that the loss function comprises an angle error loss function, a coordinate error loss function, an IoU error loss function and a classification error loss function;
the angle error loss function is
$$L_a = \sum_{i=0}^{S^2} \mathbb{1}_i^{obj}\left[(A_x - \hat{A}_x)^2 + (A_y - \hat{A}_y)^2 + (A_z - \hat{A}_z)^2\right]$$
where $A_x$, $A_y$ and $A_z$ are the rotation angles about the x, y and z axes predicted by the network, $\hat{A}_x$, $\hat{A}_y$ and $\hat{A}_z$ are the corresponding annotated values, and $\mathbb{1}_i^{obj}$ indicates that an object centre falls in grid cell $i$;
the coordinate error loss function is
$$L_c = \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2\right] + \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(\sqrt{w_i}-\sqrt{\hat{w}_i})^2 + (\sqrt{h_i}-\sqrt{\hat{h}_i})^2\right]$$
the IoU error loss function is
$$L_{IoU} = \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(C_i-\hat{C}_i)^2 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(C_i-\hat{C}_i)^2$$
the classification error loss function is
$$L_{cls} = \sum_{i=0}^{S^2}\mathbb{1}_i^{obj}\sum_{c\in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2$$
where $x$, $y$, $w$, $h$, $C$ and $p$ are network predictions, $\hat{x}$, $\hat{y}$, $\hat{w}$, $\hat{h}$, $\hat{C}$ and $\hat{p}$ are the annotated values, $\mathbb{1}_i^{obj}$ indicates that an object centre falls in grid cell $i$, and $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ indicate whether an object centre falls into the $j$-th prediction box of the $i$-th grid cell;
and the loss function is $L = L_a + L_c + L_{IoU} + L_{cls}$.
10. The deep-learning-based workpiece recognition, positioning and pose estimation method according to claim 3, characterized in that the deep-learning-based workpiece recognition, positioning and pose estimation method is implemented on the basis of the deep-learning-based workpiece recognition, positioning and pose estimation system according to any one of claims 1-2.
CN201810591858.8A 2018-06-08 2018-06-08 Workpiece recognition, positioning and pose estimation system and method based on deep learning Active CN109101966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810591858.8A CN109101966B (en) 2018-06-08 2018-06-08 Workpiece recognition, positioning and pose estimation system and method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810591858.8A CN109101966B (en) 2018-06-08 2018-06-08 Workpiece recognition, positioning and pose estimation system and method based on deep learning

Publications (2)

Publication Number Publication Date
CN109101966A (en) 2018-12-28
CN109101966B CN109101966B (en) 2022-03-08

Family

ID=64796782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810591858.8A Active CN109101966B (en) 2018-06-08 2018-06-08 Workpiece recognition, positioning and pose estimation system and method based on deep learning

Country Status (1)

Country Link
CN (1) CN109101966B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858530A (en) * 2019-01-14 2019-06-07 苏州长风航空电子有限公司 One kind being based on compound pyramidal rolling target detection method
CN109902629A (en) * 2019-03-01 2019-06-18 成都康乔电子有限责任公司 A Real-time Vehicle Object Detection Model in Complex Traffic Scenarios
CN109948514A (en) * 2019-03-15 2019-06-28 中国科学院宁波材料技术与工程研究所 Fast workpiece identification and localization method based on single-target 3D reconstruction
CN110223352A (en) * 2019-06-14 2019-09-10 浙江明峰智能医疗科技有限公司 A kind of medical image scanning automatic positioning method based on deep learning
CN110338835A (en) * 2019-07-02 2019-10-18 深圳安科高技术股份有限公司 A kind of intelligent scanning stereoscopic monitoring method and system
CN110826499A (en) * 2019-11-08 2020-02-21 上海眼控科技股份有限公司 Object space parameter detection method and device, electronic equipment and storage medium
CN110948489A (en) * 2019-12-04 2020-04-03 国电南瑞科技股份有限公司 A method and system for limiting safe working space of a live working robot
CN111667510A (en) * 2020-06-17 2020-09-15 常州市中环互联网信息技术有限公司 Rock climbing action evaluation system based on deep learning and attitude estimation
CN111784767A (en) * 2020-06-08 2020-10-16 珠海格力电器股份有限公司 Method and device for determining target position
CN112800856A (en) * 2021-01-06 2021-05-14 南京通盛弘数据有限公司 Livestock position and posture recognition method and device based on YOLOv3
CN113111712A (en) * 2021-03-11 2021-07-13 稳健医疗用品股份有限公司 AI identification positioning method, system and device for bagged product
CN113102882A (en) * 2021-06-16 2021-07-13 杭州景业智能科技股份有限公司 Geometric error compensation model training method and geometric error compensation method
CN113723217A (en) * 2021-08-09 2021-11-30 南京邮电大学 Object intelligent detection method and system based on yolo improvement
CN114385322A (en) * 2020-10-21 2022-04-22 沈阳中科数控技术股份有限公司 Edge collaborative data distribution method applied to industrial Internet of things
CN114708484A (en) * 2022-03-14 2022-07-05 中铁电气化局集团有限公司 Pattern analysis method suitable for identifying defects
CN116468998A (en) * 2022-09-09 2023-07-21 国网湖北省电力有限公司超高压公司 Visual characteristic-based power transmission line small part and hanging point part detection method
CN117368000A (en) * 2023-10-13 2024-01-09 昆山美仑工业样机有限公司 Static torsion test stand provided with self-adaptive clamping mechanism

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5459636A (en) * 1994-01-14 1995-10-17 Hughes Aircraft Company Position and orientation estimation neural network system and method
CN106683091A (en) * 2017-01-06 2017-05-17 北京理工大学 Target classification and attitude detection method based on depth convolution neural network
CN107451568A (en) * 2017-08-03 2017-12-08 重庆邮电大学 Use the attitude detecting method and equipment of depth convolutional neural networks
CN108121986A (en) * 2017-12-29 2018-06-05 深圳云天励飞技术有限公司 Object detection method and device, computer installation and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Joseph Redmon et al.: "You Only Look Once: Unified, Real-Time Object Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858530A (en) * 2019-01-14 2019-06-07 苏州长风航空电子有限公司 One kind being based on compound pyramidal rolling target detection method
CN109858530B (en) * 2019-01-14 2022-06-28 苏州长风航空电子有限公司 Composite pyramid-based rotating target detection method
CN109902629A (en) * 2019-03-01 2019-06-18 成都康乔电子有限责任公司 A Real-time Vehicle Object Detection Model in Complex Traffic Scenarios
CN109948514A (en) * 2019-03-15 2019-06-28 中国科学院宁波材料技术与工程研究所 Fast workpiece identification and localization method based on single-target 3D reconstruction
CN110223352B (en) * 2019-06-14 2021-07-02 浙江明峰智能医疗科技有限公司 Medical image scanning automatic positioning method based on deep learning
CN110223352A (en) * 2019-06-14 2019-09-10 浙江明峰智能医疗科技有限公司 A kind of medical image scanning automatic positioning method based on deep learning
CN110338835A (en) * 2019-07-02 2019-10-18 深圳安科高技术股份有限公司 A kind of intelligent scanning stereoscopic monitoring method and system
CN110826499A (en) * 2019-11-08 2020-02-21 上海眼控科技股份有限公司 Object space parameter detection method and device, electronic equipment and storage medium
CN110948489A (en) * 2019-12-04 2020-04-03 国电南瑞科技股份有限公司 A method and system for limiting safe working space of a live working robot
CN110948489B (en) * 2019-12-04 2022-11-04 国电南瑞科技股份有限公司 Method and system for limiting safe working space of live working robot
CN111784767A (en) * 2020-06-08 2020-10-16 珠海格力电器股份有限公司 Method and device for determining target position
CN111667510A (en) * 2020-06-17 2020-09-15 常州市中环互联网信息技术有限公司 Rock climbing action evaluation system based on deep learning and attitude estimation
CN114385322A (en) * 2020-10-21 2022-04-22 沈阳中科数控技术股份有限公司 Edge collaborative data distribution method applied to industrial Internet of things
CN112800856A (en) * 2021-01-06 2021-05-14 南京通盛弘数据有限公司 Livestock position and posture recognition method and device based on YOLOv3
CN113111712A (en) * 2021-03-11 2021-07-13 稳健医疗用品股份有限公司 AI identification positioning method, system and device for bagged product
CN113102882A (en) * 2021-06-16 2021-07-13 杭州景业智能科技股份有限公司 Geometric error compensation model training method and geometric error compensation method
CN113723217A (en) * 2021-08-09 2021-11-30 南京邮电大学 Object intelligent detection method and system based on yolo improvement
CN113723217B (en) * 2021-08-09 2025-01-14 南京邮电大学 An improved object intelligent detection method and system based on Yolo
CN114708484A (en) * 2022-03-14 2022-07-05 中铁电气化局集团有限公司 Pattern analysis method suitable for identifying defects
CN116468998A (en) * 2022-09-09 2023-07-21 国网湖北省电力有限公司超高压公司 Visual characteristic-based power transmission line small part and hanging point part detection method
CN117368000A (en) * 2023-10-13 2024-01-09 昆山美仑工业样机有限公司 Static torsion test stand provided with self-adaptive clamping mechanism
CN117368000B (en) * 2023-10-13 2024-05-07 昆山美仑工业样机有限公司 Static torsion test stand provided with self-adaptive clamping mechanism

Also Published As

Publication number Publication date
CN109101966B (en) 2022-03-08

Similar Documents

Publication Publication Date Title
CN109101966A (en) Workpiece identification positioning and posture estimation system and method based on deep learning
CN108171748B (en) Visual identification and positioning method for intelligent robot grabbing application
CN110322510B (en) 6D pose estimation method using contour information
CN112297013B (en) A robot intelligent grasping method based on digital twin and deep neural network
CN111260649B (en) Close-range mechanical arm sensing and calibrating method
CN114011608B (en) Spraying process optimization system based on digital twinning and spraying optimization method thereof
CN115816460B (en) Mechanical arm grabbing method based on deep learning target detection and image segmentation
CN110969660B (en) Robot feeding system based on three-dimensional vision and point cloud deep learning
CN110378325B (en) Target pose identification method in robot grabbing process
CN115861999B (en) A robot grasping detection method based on multimodal visual information fusion
CN113034575B (en) Model construction method, pose estimation method and object picking device
Chang et al. A lightweight appearance quality assessment system based on parallel deep learning for painted car body
CN115330734A (en) Automatic robot repair welding system based on three-dimensional target detection and point cloud defect completion
CN110428464A (en) Multi-class out-of-order workpiece robot based on deep learning grabs position and orientation estimation method
CN109948514A (en) Fast workpiece identification and localization method based on single-target 3D reconstruction
CN111310637A (en) A scale-invariant network-based detection method for robot object grasping
CN116665312A (en) A Human-Machine Collaboration Method Based on Multi-scale Graph Convolutional Neural Network
Gonçalves et al. Grasp planning with incomplete knowledge about the object to be grasped
CN118990489A (en) Double-mechanical-arm cooperative carrying system based on deep reinforcement learning
Frank et al. Stereo-vision for autonomous industrial inspection robots
Hosseini et al. Multi-modal robust geometry primitive shape scene abstraction for grasp detection
CN114972948A (en) Neural detection network-based identification and positioning method and system
Manawadu et al. Object recognition and pose estimation from rgb-d data using active sensing
CN112634367A (en) Anti-occlusion object pose estimation method based on deep neural network
CN118071828B (en) Intelligent non-contact chip surface temperature measurement method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant