CN109101966B - Workpiece recognition, positioning and pose estimation system and method based on deep learning - Google Patents
- Publication number
- CN109101966B (publication of application CN201810591858.8A)
- Authority
- CN
- China
- Prior art keywords
- workpiece
- positioning
- training
- loss function
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
Abstract
The invention provides a deep learning-based workpiece recognition, positioning and pose estimation system and method. The system comprises a network construction module, a data acquisition module, a model training module and a workpiece recognition, positioning and pose estimation module, connected in sequence. With this system, classification and recognition of different types of workpieces, determination of their positions, and estimation of the spatial pose of a single workpiece can be performed simultaneously, greatly improving the automation efficiency of a production line.
Description
Technical Field
The invention relates to a workpiece recognition, positioning and pose estimation system and method, in particular to a deep learning-based workpiece recognition, positioning and pose estimation system and method, and belongs to the field of target recognition and detection.
Background
With the advancement of science and technology, more and more industrial robots are deployed in production to perform repetitive tasks in place of human workers. An industrial robot is a multi-joint manipulator or multi-degree-of-freedom machine oriented to the industrial field; it executes work automatically, realizing various functions by means of its own power and control capability. It accepts human commands and runs preset programs, and modern industrial robots can also act according to strategies formulated with artificial intelligence technology.
To raise the degree of automation of industrial robots, a robot must intelligently recognize, position and estimate the pose of workpieces during production, and must sort workpieces by adaptively adjusting its motion trajectory and grasping angle to the different poses of different workpieces.
In recent years, deep learning algorithms have achieved major breakthroughs across computer vision; excellent deep learning algorithms have emerged in large numbers in target detection, recognition and classification, such as GoogLeNet, VGG, Faster R-CNN and YOLO. Applying these powerful deep learning algorithms to workpiece detection, recognition and positioning can therefore effectively improve algorithm reliability and increase detection and positioning precision and dimensionality, thereby raising the automation of industrial robots and greatly enhancing actual production efficiency. However, the prior art has certain defects in workpiece detection: classification and recognition of different workpieces on the same production line, determination of their positions, and estimation of their spatial poses cannot all yield satisfactory detection results simultaneously.
Disclosure of Invention
The invention mainly aims to provide a workpiece recognition positioning and posture estimation system and method based on deep learning so as to overcome the defects of the prior art.
To achieve the foregoing object, an embodiment of the present invention provides a deep learning based workpiece recognition positioning and posture estimation system, which includes:
the network construction module, at least used for designing a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, wherein the network design comprises an output item added after the fully connected layer, the output item being used for obtaining angle information;
the data acquisition module is at least used for constructing a training set, and the construction process comprises the steps of acquiring workpiece pictures with different postures as training samples, and carrying out angle information labeling, classification information labeling and position information labeling on the training samples;
the model training module is at least used for training the workpiece recognition positioning and attitude estimation network according to a training set constructed by the data acquisition module, and when the loss value reaches a preset threshold value, the training is finished and a workpiece recognition positioning and attitude estimation model is obtained;
and the workpiece recognition, positioning and posture estimation module is at least used for carrying out recognition, positioning and posture estimation on the workpiece object picture according to the workpiece recognition, positioning and posture estimation model.
Preferably, the model training module further includes a loss value calculation submodule, configured to calculate the loss value of the workpiece recognition, positioning and pose estimation network being trained, where the loss calculation employs a loss function that simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece pose error.
The embodiment of the invention also provides a workpiece recognition positioning and posture estimation method based on deep learning, which comprises the following steps:
s1, carrying out workpiece recognition positioning and posture estimation network design based on a YOLO deep learning network, wherein an output item is added behind a full connection layer and is used for obtaining angle information;
s2, collecting workpiece pictures with different postures as training samples to construct a training set, wherein angle information labeling, classification information labeling and position information labeling are carried out on the training samples;
s3, training the workpiece recognition positioning and posture estimation network by using the training set constructed in the step S2; when the loss value reaches a preset threshold value, finishing training and obtaining a workpiece recognition positioning and posture estimation model;
and S4, calling the workpiece identification, positioning and posture estimation model to perform identification, positioning and posture estimation on the workpiece object picture.
Preferably, the angle information labeling includes:
selecting a certain workpiece pose as the reference pose, defined as (0°, 0°, 0°) about the x, y and z axes; setting angle intervals for rotation about the x, y and z axes respectively; and labeling each training sample picture with the midpoint of the interval containing its rotation angles about the x, y and z axes.
Preferably, the classification information labeling and the position information labeling include: labeling the classification information with numbers to distinguish categories; and obtaining the bounding box of the workpiece by computing its minimum enclosing rectangle.
Preferably, the loss value is calculated by using a loss function which simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece attitude error.
Preferably, the process of training the workpiece recognition positioning and pose estimation network in step S3 specifically includes:
s31, training a YOLO deep learning network before the network structure is not changed, optimizing variables by adopting a gradient descent optimizer, repeatedly training until the loss value reaches a preset threshold value, and acquiring updated weight;
and S32, loading the weight trained in the step S31 into the modified workpiece recognition positioning and posture estimation network, optimizing variables related to the prediction angle by adopting a gradient descent optimizer, and repeatedly training until the loss value reaches a preset threshold value.
Preferably, the training sample pictures are rotated about the x-axis and y-axis within the range [-15°, 14°] and about the z-axis within the range [0°, 90°].
Preferably, when the training sample picture rotates around the x and y axes, the angle interval is set to be 5 °; the angular interval is set to 10 ° when the training sample picture is rotated around the z-axis.
Preferably, the loss function includes an angle error loss function, a coordinate error loss function, an IoU error loss function and a classification error loss function;

the angle error loss function is formulated as:

$$L_a = \lambda_a \sum_{i=0}^{S^2} \mathbb{1}_i^{obj} \left[ (A_{x,i} - \hat{A}_{x,i})^2 + (A_{y,i} - \hat{A}_{y,i})^2 + (A_{z,i} - \hat{A}_{z,i})^2 \right]$$

where the input image is divided into S × S grids; $A_{x,i}$, $A_{y,i}$, $A_{z,i}$ are the rotation angles about the x, y and z axes predicted by grid i; $\hat{A}_{x,i}$, $\hat{A}_{y,i}$, $\hat{A}_{z,i}$ are the corresponding labeled values; and $\mathbb{1}_i^{obj}$ indicates that the object center falls within grid i;

the coordinate error loss function is formulated as:

$$L_c = \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 + \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right]$$

the IoU error loss function is formulated as:

$$L_{IoU} = \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left(C_i - \hat{C}_i\right)^2 + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} \left(C_i - \hat{C}_i\right)^2$$

the classification error loss function is formulated as:

$$L_{cls} = \sum_{i=0}^{S^2} \mathbb{1}_i^{obj} \sum_{c \in classes} \left( p_i(c) - \hat{p}_i(c) \right)^2$$

where B is the number of boxes predicted by each grid; x, y, w, h, C and p are network prediction values; $\hat{x}$, $\hat{y}$, $\hat{w}$, $\hat{h}$, $\hat{C}$ and $\hat{p}$ are the corresponding labeled values; $\mathbb{1}_i^{obj}$ indicates that the object center falls within grid i; and $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ respectively indicate whether or not the object center falls within the j-th predicted box of grid i;

the loss function is: $L = L_a + L_c + L_{IoU} + L_{cls}$.
Further, the workpiece recognition positioning and posture estimation method based on deep learning is realized based on the workpiece recognition positioning and posture estimation system based on deep learning.
Compared with the prior art, the invention has the advantage that, with the provided technical scheme, classification and recognition of different types of workpieces, determination of their positions, and estimation of the spatial pose of a single workpiece can be performed simultaneously, greatly improving the automation efficiency of a production line.
Drawings
FIG. 1 is a flow chart of a method for deep learning based workpiece recognition positioning and pose estimation in an exemplary embodiment of the present invention;
FIG. 2 is a schematic diagram of a workpiece recognition positioning and pose estimation network based on a YOLO deep learning network improvement in an exemplary embodiment of the present invention.
Detailed Description
In view of the deficiencies in the prior art, the inventors of the present invention have made extensive studies and extensive practices to provide technical solutions of the present invention. The technical solution, its implementation and principles, etc. will be further explained as follows.
The embodiment of the invention provides a workpiece recognition positioning and posture estimation system based on deep learning, which comprises:
the network construction module, at least used for designing a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, wherein the network design comprises an output item added after the fully connected layer, the output item being used for obtaining angle information;
the data acquisition module is at least used for constructing a training set, and the construction process comprises the steps of acquiring workpiece pictures with different postures as training samples, and carrying out angle information labeling, classification information labeling and position information labeling on the training samples;
the model training module is at least used for training the workpiece recognition positioning and attitude estimation network according to a training set constructed by the data acquisition module, and when the loss value reaches a preset threshold value, the training is finished and a workpiece recognition positioning and attitude estimation model is obtained;
and the workpiece recognition, positioning and posture estimation module is at least used for carrying out recognition, positioning and posture estimation on the workpiece object picture according to the workpiece recognition, positioning and posture estimation model.
Furthermore, the network construction module, the data acquisition module, the model training module and the workpiece recognition, positioning and attitude estimation module are sequentially connected and arranged to form the deep learning-based workpiece recognition, positioning and attitude estimation system.
Furthermore, the model training module further comprises a loss value calculation submodule, which is used for calculating the loss value of the workpiece recognition, positioning and pose estimation network being trained, wherein the loss calculation adopts a loss function that simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece pose error.
Referring to fig. 1, an embodiment of the present invention further provides a method for identifying, positioning and estimating an orientation of a workpiece based on deep learning, which includes the following steps:
101, carrying out workpiece identification and positioning and posture estimation network design based on a YOLO deep learning network;
and improving the deep learning network based on the YOLO, and increasing output angle information.
102, acquiring and labeling workpiece training sample pictures in different postures;
and acquiring workpiece pictures of different postures, and carrying out angle information labeling, classification information labeling and position information labeling.
103, training a workpiece recognition positioning and posture estimation model by using the training set constructed in the step 102;
and a loss function which simultaneously integrates the classification error of the workpiece, the position coordinate error of the workpiece and the attitude error of the workpiece is adopted in the training process.
And step 104, calling the workpiece identification, positioning and posture estimation model to perform identification, positioning and posture estimation on the workpiece object picture.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In some more specific embodiments, a method of workpiece recognition positioning and pose estimation may comprise the steps of:
workpiece recognition positioning and attitude estimation network design based on YOLO deep learning network
Referring to fig. 2, fig. 2 illustrates the improved workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network in an exemplary embodiment of the invention. The original YOLO deep learning network yields the classification and position information of a workpiece. In an embodiment of the invention, besides the workpiece type and position, the pose of the workpiece, i.e. its angle information, must also be obtained, so the original network is improved to output angle values; the modified network structure is shown in fig. 2. As can be seen from fig. 2, the improved YOLO network largely retains the original structure for computing classification and location information. The improvement consists of adding an output item after the fully connected layer, i.e. attaching a further fully connected layer to the layer that outputs a 4096-dimensional vector, in order to obtain the angle information; its output size is 7 × 3, where 3 corresponds to the 3 output angles anglex, angley and anglez.
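As a minimal sketch of the added output item, the extra head can be modeled as one affine (fully connected) layer on the 4096-dimensional feature vector; the weight initialization and the flattened 21-value output (interpreting the stated 7 × 3 size) are assumptions for illustration, not details from the patent.

```python
import numpy as np

def angle_head(features, W, b):
    """Hypothetical sketch of the added output item: one extra fully
    connected (affine) layer applied to the 4096-dimensional feature
    vector of the original YOLO fully connected layer, producing a
    7 x 3 output whose 3 columns stand for anglex, angley, anglez."""
    return (features @ W + b).reshape(7, 3)

rng = np.random.default_rng(0)
features = rng.standard_normal(4096)
W = 0.01 * rng.standard_normal((4096, 21))   # 21 = 7 * 3 output values
b = np.zeros(21)
angles = angle_head(features, W, b)
print(angles.shape)                           # (7, 3)
```

In a real network this layer would be trained jointly with the loss described below; here it only demonstrates the shape of the added output.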
Secondly, training image acquisition and annotation are carried out on workpieces with different postures
Suppose the workpieces tested here are of three types, differing in shape and size and without high symmetry. On a production line, several workpiece types are not mixed together in a way that would increase sorting difficulty, so in line with the actual application scenario this embodiment considers only one workpiece type at a time. Taking the first workpiece as an example, to obtain its pose information, pictures of each of its poses need to be collected when making the training set; a certain pose is chosen as the reference and defined as (0°, 0°, 0°) about the x, y and z axes. Preferably, pictures of the three-dimensional CAD model rotated about the x, y and z axes can be rendered with OpenGL.
In an exemplary embodiment of the invention, a subset of the poses is used for training and testing, standing in for tests on the remaining poses. The rotation angles about the x-axis and y-axis lie in the range [-15°, 14°], and the rotation angle about the z-axis lies in [0°, 90°]. For rotation about the x and y axes, the angle interval is set to 5°, and each picture is labeled with the midpoint of the interval containing its x- and y-rotation angles. Because rotation about the z-axis is an in-plane rotation that does not greatly change the workpiece's pose, its angle interval is set to 10°, and each picture is labeled with the midpoint of the interval containing its z-rotation angle. For example, a workpiece rotated between 6° and 10° about the x and y axes and between 11° and 20° about the z axis is labeled (8, 8, 15); workpieces within this range of poses are considered to share the same pose, with rotation angles (8, 8, 15).
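The interval-midpoint labeling above can be sketched as a small helper. The interval boundaries are inferred from the worked example (6°–10° → 8 at a 5° interval; 11°–20° → 15 at a 10° interval); how the patent bins negative angles is not spelled out, so this is a sketch for the positive cases only.

```python
import math

def interval_midpoint(angle, width):
    """Label a rotation angle with the integer midpoint of its labeling
    interval. Intervals are taken as (k*width, (k+1)*width], matching the
    text's examples: 6..10 deg at width 5 -> 8, 11..20 deg at width 10 -> 15."""
    k = math.ceil(angle / width) - 1          # index of the interval
    lo, hi = k * width + 1, (k + 1) * width   # e.g. k=1, width=5 -> [6, 10]
    return (lo + hi) // 2                     # integer midpoint of the interval

# Workpiece rotated 7 deg about x, 9 deg about y, 15 deg about z:
label = (interval_midpoint(7, 5), interval_midpoint(9, 5), interval_midpoint(15, 10))
print(label)  # (8, 8, 15)
```

Every pose inside one interval receives the same label, which is exactly why the mean angle errors reported later cannot fall below the interval quantization.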
In a typical embodiment of the invention, 250 training pictures are acquired per pose. After the training-set pictures are collected, they must be annotated: the classification information and the target box to be trained are extracted. The classification label 1 denotes the 1st class; the bounding box of the workpiece is obtained by computing its minimum enclosing rectangle, and the four values Xmin, Xmax, Ymin and Ymax are written into an annotation file in a uniform format.
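For an axis-aligned box, the minimum enclosing rectangle reduces to coordinate extrema; a minimal sketch of producing the four annotation values (the contour points are made up for illustration):

```python
def bounding_box(points):
    """Axis-aligned minimum enclosing rectangle of a workpiece contour,
    returned in the (Xmin, Xmax, Ymin, Ymax) order written to the
    annotation file. `points` is any iterable of (x, y) pixel coordinates."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), max(xs), min(ys), max(ys)

contour = [(12, 40), (87, 33), (95, 120), (10, 118)]  # hypothetical contour
print(bounding_box(contour))  # (10, 95, 33, 120)
```

A rotated (oriented) minimum rectangle would need a rotating-calipers step instead, but the annotation format above only stores the axis-aligned extrema.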
Thirdly, designing a loss function simultaneously fusing the classification error of the workpiece, the position coordinate error of the workpiece and the attitude error of the workpiece
Because an angle regression network is introduced, an angle error loss function must be added to the original loss function. Its formula is:

$$L_a = \lambda_a \sum_{i=0}^{S^2} \mathbb{1}_i^{obj} \left[ (A_{x,i} - \hat{A}_{x,i})^2 + (A_{y,i} - \hat{A}_{y,i})^2 + (A_{z,i} - \hat{A}_{z,i})^2 \right]$$

where the input image is divided into S × S grids; $A_{x,i}$, $A_{y,i}$, $A_{z,i}$ are the rotation angles about the x, y and z axes predicted by grid i; $\hat{A}_{x,i}$, $\hat{A}_{y,i}$, $\hat{A}_{z,i}$ are the corresponding labeled values; and $\mathbb{1}_i^{obj}$ indicates that the object center falls within grid i.

In addition to the angle loss function, the loss function includes a coordinate error loss function, an IoU error loss function and a classification error loss function, formulated respectively as follows.

Coordinate error loss function:

$$L_c = \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 + \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right]$$

IoU error loss function:

$$L_{IoU} = \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left(C_i - \hat{C}_i\right)^2 + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} \left(C_i - \hat{C}_i\right)^2$$

Classification error loss function:

$$L_{cls} = \sum_{i=0}^{S^2} \mathbb{1}_i^{obj} \sum_{c \in classes} \left( p_i(c) - \hat{p}_i(c) \right)^2$$

where B is the number of boxes predicted by each grid; x, y, w, h, C and p are network prediction values; $\hat{x}$, $\hat{y}$, $\hat{w}$, $\hat{h}$, $\hat{C}$ and $\hat{p}$ are the corresponding labeled values; $\mathbb{1}_i^{obj}$ indicates that the object center falls within grid i; and $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ respectively indicate whether or not the object center falls within the j-th predicted box of grid i.

For the whole workpiece recognition, positioning and pose estimation network, the total loss is:

$$L = L_a + L_c + L_{IoU} + L_{cls}$$
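The fused total loss L = La + Lc + LIoU + Lcls can be sketched numerically as follows. For brevity this sketch predicts one box per cell (the patent uses B boxes per cell), and the lambda weights follow the usual YOLO values (5.0, 0.5) rather than values stated in the patent.

```python
import numpy as np

def fused_loss(pred, label, obj, lam_coord=5.0, lam_noobj=0.5, lam_a=1.0):
    """Toy version of L = La + Lc + LIoU + Lcls over S*S grid cells.
    pred/label map field names to arrays over the cells; obj is 1 where
    an object center falls in the cell. Field names follow the formulas."""
    noobj = 1.0 - obj
    d = lambda k: pred[k] - label[k]
    La = lam_a * np.sum(obj * (d("ax")**2 + d("ay")**2 + d("az")**2))
    Lc = lam_coord * np.sum(obj * (d("x")**2 + d("y")**2
                                   + (np.sqrt(pred["w"]) - np.sqrt(label["w"]))**2
                                   + (np.sqrt(pred["h"]) - np.sqrt(label["h"]))**2))
    LIoU = np.sum(obj * d("C")**2) + lam_noobj * np.sum(noobj * d("C")**2)
    Lcls = np.sum(obj * np.sum((pred["p"] - label["p"])**2, axis=-1))
    return La + Lc + LIoU + Lcls

S2 = 49                                              # 7 x 7 grid
fields = dict(ax=0.0, ay=0.0, az=0.0, x=0.5, y=0.5, w=0.2, h=0.2, C=1.0)
label = {k: np.full(S2, v) for k, v in fields.items()}
label["p"] = np.tile(np.array([1.0, 0.0, 0.0]), (S2, 1))  # 3 workpiece classes
pred = {k: v.copy() for k, v in label.items()}
obj = np.zeros(S2); obj[24] = 1.0                    # object center in one cell
print(fused_loss(pred, label, obj))                  # perfect prediction -> 0.0
```

A one-degree angle error in the occupied cell raises the loss by lam_a · 1², which is how the angle head contributes gradients during the second training step.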
fourthly, training a workpiece recognition positioning and posture estimation model by using the acquired training images
Because an angle regression layer has been added to the network, training and optimizing all network variables simultaneously makes the loss function difficult to converge, so a two-step training scheme is adopted. First, the YOLO network is trained before its structure is modified, with an initial learning rate of 0.01, a batch size of 30 and a period of 11; variables are optimized with a gradient descent optimizer, and repeated training finally yields reasonably accurate test results (without angle testing).
After this first training and testing step, the trained weights are loaded into the modified workpiece recognition, positioning and pose estimation network for retraining. A gradient descent optimizer is still used, but only the newly added variables related to angle prediction are optimized. The batch size remains 30; the learning rate starts at 0.01 and gradually decays from 0.01 to 0.0001 with a period of 11, and training is repeated.
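The second training step can be sketched as two small helpers: one selecting only the angle-head variables for the optimizer, and one for the learning-rate decay. The "angle/" name prefix is an assumption used for illustration, and the patent states only the endpoints (0.01 → 0.0001 over a period of 11), not the decay curve, so geometric interpolation is assumed.

```python
def stage2_variables(all_variables):
    """Second training step: freeze the pretrained YOLO weights and keep
    only the newly added angle-prediction variables for the optimizer.
    The 'angle/' prefix is a hypothetical naming convention."""
    return [v for v in all_variables if v.startswith("angle/")]

def learning_rate(epoch, lr0=0.01, lr_end=0.0001, period=11):
    """Hypothetical geometric decay from 0.01 to 0.0001 over the stated
    11-epoch period; the patent gives the endpoints but not the curve."""
    if epoch >= period - 1:
        return lr_end
    return lr0 * (lr_end / lr0) ** (epoch / (period - 1))

variables = ["conv1/w", "fc1/w", "angle/fc/w", "angle/fc/b"]
print(stage2_variables(variables))        # ['angle/fc/w', 'angle/fc/b']
print(learning_rate(0), learning_rate(10))  # 0.01 0.0001
```

Freezing the pretrained weights in this way keeps the converged detection behavior of step one intact while the angle head is fitted.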
Fifthly, calling the model to identify, position and estimate the attitude of the workpiece
For the test set, workpiece object images captured by a camera are used. These images differ from the training images in size, and conditions such as illumination and rust on the workpiece surface also differ, but the rotation angles still lie within -15° to 14° about the x and y axes and 0° to 90° about the z axis; with 1600 images per workpiece, there are 4800 test images in total. The training and test results for each workpiece are compared in Table 1. The statistics in Table 1 show that the classification results are excellent in both training and testing, and the bounding-box errors in the x and y directions are very low, with a test-set error of about 1 mm. The loss value stops decreasing when the training-set rotation-angle errors about the x, y and z axes average 4.038°, 4.334° and 8.464° respectively. This follows from the labeling scheme: the x- and y-axis rotation angles are labeled with interval midpoints at 5° intervals and the z-axis at 10° intervals, giving label quantization of 5°, 5° and 10° respectively. The test-set errors may be further increased by rust on the workpiece surface, illumination and workpiece size, which make the test images differ from the training set, but the errors remain within an acceptable range and have little effect on the test results.
Table 1 shows the training and test result statistics of 3 workpiece test samples
With the deep learning-based workpiece recognition, positioning and pose estimation system provided by the invention, classification and recognition of different types of workpieces on the same production line, determination of their positions, and estimation of the spatial pose of a single workpiece can all yield detection results simultaneously. This makes it easier for an industrial robot to adaptively adjust its motion trajectory and grasping angle for different workpieces when sorting them, and can greatly improve the production efficiency of the line.
It should be understood that the above-mentioned embodiments are merely illustrative of the technical concepts and features of the present invention, which are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and therefore, the protection scope of the present invention is not limited thereby. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810591858.8A CN109101966B (en) | 2018-06-08 | 2018-06-08 | Workpiece recognition, positioning and pose estimation system and method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109101966A CN109101966A (en) | 2018-12-28 |
CN109101966B true CN109101966B (en) | 2022-03-08 |
Family
ID=64796782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810591858.8A Active CN109101966B (en) | 2018-06-08 | 2018-06-08 | Workpiece recognition, positioning and pose estimation system and method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109101966B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858530B (en) * | 2019-01-14 | 2022-06-28 | 苏州长风航空电子有限公司 | Composite pyramid-based rotating target detection method |
CN109902629A (en) * | 2019-03-01 | 2019-06-18 | 成都康乔电子有限责任公司 | A Real-time Vehicle Object Detection Model in Complex Traffic Scenarios |
CN109948514A (en) * | 2019-03-15 | 2019-06-28 | 中国科学院宁波材料技术与工程研究所 | Fast workpiece identification and localization method based on single-target 3D reconstruction |
CN110223352B (en) * | 2019-06-14 | 2021-07-02 | 浙江明峰智能医疗科技有限公司 | Medical image scanning automatic positioning method based on deep learning |
CN110338835B (en) * | 2019-07-02 | 2023-04-18 | 深圳安科高技术股份有限公司 | Intelligent scanning three-dimensional monitoring method and system |
CN110826499A (en) * | 2019-11-08 | 2020-02-21 | 上海眼控科技股份有限公司 | Object space parameter detection method and device, electronic equipment and storage medium |
CN110948489B (en) * | 2019-12-04 | 2022-11-04 | 国电南瑞科技股份有限公司 | Method and system for limiting safe working space of live working robot |
CN111784767B (en) * | 2020-06-08 | 2024-06-18 | 珠海格力电器股份有限公司 | Method and device for determining target position |
CN111667510A (en) * | 2020-06-17 | 2020-09-15 | 常州市中环互联网信息技术有限公司 | Rock climbing action evaluation system based on deep learning and attitude estimation |
CN114385322A (en) * | 2020-10-21 | 2022-04-22 | 沈阳中科数控技术股份有限公司 | Edge collaborative data distribution method applied to industrial Internet of things |
CN112800856A (en) * | 2021-01-06 | 2021-05-14 | 南京通盛弘数据有限公司 | Livestock position and posture recognition method and device based on YOLOv3 |
CN113111712A (en) * | 2021-03-11 | 2021-07-13 | 稳健医疗用品股份有限公司 | AI identification positioning method, system and device for bagged product |
CN113102882B (en) * | 2021-06-16 | 2021-08-24 | 杭州景业智能科技股份有限公司 | Geometric error compensation model training method and geometric error compensation method |
CN113723217B (en) * | 2021-08-09 | 2025-01-14 | 南京邮电大学 | An improved object intelligent detection method and system based on Yolo |
CN114708484B (en) * | 2022-03-14 | 2023-04-07 | 中铁电气化局集团有限公司 | Pattern analysis method suitable for identifying defects |
CN116468998A (en) * | 2022-09-09 | 2023-07-21 | 国网湖北省电力有限公司超高压公司 | Visual characteristic-based power transmission line small part and hanging point part detection method |
CN117368000B (en) * | 2023-10-13 | 2024-05-07 | 昆山美仑工业样机有限公司 | Static torsion test stand provided with self-adaptive clamping mechanism |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5459636A (en) * | 1994-01-14 | 1995-10-17 | Hughes Aircraft Company | Position and orientation estimation neural network system and method |
CN106683091A (en) * | 2017-01-06 | 2017-05-17 | 北京理工大学 | Target classification and attitude detection method based on depth convolution neural network |
CN107451568A (en) * | 2017-08-03 | 2017-12-08 | 重庆邮电大学 | Use the attitude detecting method and equipment of depth convolutional neural networks |
CN108121986A (en) * | 2017-12-29 | 2018-06-05 | 深圳云天励飞技术有限公司 | Object detection method and device, computer installation and computer readable storage medium |
- 2018
  - 2018-06-08 CN CN201810591858.8A patent/CN109101966B/en active Active
Non-Patent Citations (1)
Title |
---|
You Only Look Once: Unified, Real-Time Object Detection; Joseph Redmon; 2016 IEEE Conference on Computer Vision and Pattern Recognition; 2016-12-30; p. 786, col. 1, last paragraph; p. 781, col. 2, last paragraph * |
Also Published As
Publication number | Publication date |
---|---|
CN109101966A (en) | 2018-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109101966B (en) | Workpiece recognition, positioning and pose estimation system and method based on deep learning | |
CN112297013B (en) | A robot intelligent grasping method based on digital twin and deep neural network | |
RU2700246C1 (en) | Method and system for capturing an object using a robot device | |
CN109807882A (en) | Holding system, learning device and holding method | |
Zhou et al. | Imitating tool-based garment folding from a single visual observation using hand-object graph dynamics | |
CN115330734A (en) | Automatic robot repair welding system based on three-dimensional target detection and point cloud defect completion | |
CN110065068A (en) | A kind of robotic asssembly operation programming by demonstration method and device based on reverse-engineering | |
CN109948514A (en) | Fast workpiece identification and localization method based on single-target 3D reconstruction | |
CN117769724A (en) | Synthetic dataset creation using deep-learned object detection and classification | |
Yang et al. | Automation of SME production with a Cobot system powered by learning-based vision | |
Deng et al. | A human–robot collaboration method using a pose estimation network for robot learning of assembly manipulation trajectories from demonstration videos | |
Zhang et al. | Deep learning-based robot vision: High-end tools for smart manufacturing | |
Zhang et al. | Vision-based six-dimensional peg-in-hole for practical connector insertion | |
RU2745380C1 (en) | Method and system for capturing objects using robotic device | |
Frank et al. | Stereo-vision for autonomous industrial inspection robots | |
Wu et al. | A novel approach for porcupine crab identification and processing based on point cloud segmentation | |
CN118322214A (en) | Mechanical arm imitation learning method and device based on single teaching | |
CN118097790A (en) | Manual operation method for AI training of robot | |
CN116580084B (en) | Industrial part rapid pose estimation method based on deep learning and point cloud | |
CN116690988A (en) | 3D printing system and method for large building model | |
Hosseini et al. | Multi-modal robust geometry primitive shape scene abstraction for grasp detection | |
CN115270399A (en) | An industrial robot attitude recognition method, device and storage medium | |
CN111002292B (en) | Robot arm humanoid motion teaching method based on similarity measurement | |
Naik et al. | Robotic task success evaluation under multi-modal non-parametric object pose uncertainty | |
Chatterjee et al. | Utilizing Inpainting for Training Keypoint Detection Algorithms Towards Markerless Visual Servoing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||