CN109101966A - Workpiece identification positioning and posture estimation system and method based on deep learning - Google Patents
Workpiece identification positioning and posture estimation system and method based on deep learning
- Publication number
- CN109101966A CN109101966A CN201810591858.8A CN201810591858A CN109101966A CN 109101966 A CN109101966 A CN 109101966A CN 201810591858 A CN201810591858 A CN 201810591858A CN 109101966 A CN109101966 A CN 109101966A
- Authority
- CN
- China
- Prior art keywords
- workpiece
- positioning
- deep learning
- training
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
Abstract
The present invention provides a workpiece recognition, positioning and pose estimation system and method based on deep learning. The system comprises a network construction module, a data acquisition module, a model training module, and a workpiece recognition, positioning and pose estimation module connected in sequence. With the system provided by the invention, the classification of different kinds of workpieces, the determination of their positions and the estimation of the spatial pose of an individual workpiece can be detected simultaneously, which substantially improves production-line efficiency.
Description
Technical field
The present invention relates to a workpiece recognition, positioning and pose estimation system and method, and in particular to a workpiece recognition, positioning and pose estimation system and method based on deep learning, belonging to the field of target recognition and detection.
Background art
With the development of science and technology, more and more industrial robots are being applied in production to replace humans in repetitive production activities. An industrial robot is a multi-joint manipulator or multi-degree-of-freedom machine oriented to the industrial field; it can perform work automatically and realizes various functions through its own power and control capability. It can accept human command, run according to a pre-programmed sequence, or, in the case of modern industrial robots, act according to rules formulated with artificial intelligence techniques.
In order to improve the degree of automation of industrial robots, a robot must be able to intelligently recognize, locate and estimate the pose of workpieces in production, so that it can adapt its motion trajectory and grasping angle to the different poses of different workpieces when sorting and grasping them.
In recent years, deep learning algorithms have achieved breakthroughs in every field of computer vision; in particular, many excellent deep learning algorithms, such as GoogLeNet, VGG, Faster R-CNN and YOLO, have emerged in object detection, recognition and classification. Applying powerful deep learning algorithms to workpiece detection, recognition and positioning can therefore effectively improve algorithm reliability and increase detection and positioning accuracy and dimensionality, thereby raising the automation level of industrial robots and greatly enhancing actual production performance. However, workpiece detection in the prior art still has shortcomings: for example, the classification of different kinds of workpieces on the same production line, the determination of their positions and the estimation of the spatial pose of an individual workpiece cannot all be delivered simultaneously with satisfactory results.
Summary of the invention
The main object of the present invention is to provide a workpiece recognition, positioning and pose estimation system and method based on deep learning, so as to overcome the deficiencies of the prior art.
To achieve the above object, an embodiment of the present invention provides a workpiece recognition, positioning and pose estimation system based on deep learning, which may include:
a network construction module, at least configured to design a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, the design including adding an output item after the fully connected layer, the output item being used to obtain angle information;
a data acquisition module, at least configured to construct a training set, the construction including collecting workpiece pictures of different poses as training samples and annotating the training samples with angle information, class information and position information;
a model training module, at least configured to train the workpiece recognition, positioning and pose estimation network with the training set constructed by the data acquisition module; when the loss value reaches a preset threshold, training ends and a workpiece recognition, positioning and pose estimation model is obtained;
a workpiece recognition, positioning and pose estimation module, at least configured to recognize, locate and estimate the pose of a workpiece in a real workpiece picture according to the workpiece recognition, positioning and pose estimation model.
Preferably, the model training module further includes a loss value calculation submodule for calculating the loss value of the workpiece recognition, positioning and pose estimation network being trained, the loss value being calculated with a loss function that simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece pose error.
An embodiment of the present invention also provides a workpiece recognition, positioning and pose estimation method based on deep learning, which may include:
S1. designing a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, including adding an output item after the fully connected layer for obtaining angle information;
S2. collecting workpiece pictures of different poses as training samples to construct a training set, including annotating the training samples with angle information, class information and position information;
S3. training the workpiece recognition, positioning and pose estimation network with the training set constructed in step S2; when the loss value reaches a preset threshold, training ends and a workpiece recognition, positioning and pose estimation model is obtained;
S4. calling the workpiece recognition, positioning and pose estimation model to recognize, locate and estimate the pose of a real workpiece picture.
Preferably, the angle information annotation includes: choosing a certain workpiece pose as the reference and setting it to (0°, 0°, 0°) about the x, y and z axes; setting angle intervals for rotation about the x, y and z axes respectively; and labelling each training sample picture with the median of the interval that contains its rotation angles about the x, y and z axes.
Preferably, the class information annotation and the position information annotation include: labelling the class information with a number to distinguish different classes; and obtaining the bounding box of the workpiece as its minimum enclosing rectangle.
Preferably, the loss value is calculated with a loss function that simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece pose error.
Preferably, the process of training the workpiece recognition, positioning and pose estimation network in step S3 specifically includes:
S31. training the YOLO deep learning network before its structure is modified, optimizing the variables with a gradient descent optimizer, and repeating training until the loss value reaches a preset threshold, thereby obtaining updated weights;
S32. loading the weights obtained in step S31 into the modified workpiece recognition, positioning and pose estimation network, optimizing the variables related to angle prediction with a gradient descent optimizer, and repeating training until the loss value reaches a preset threshold.
Preferably, the rotation angles of the training sample pictures about the x axis and the y axis are within [−15°, 14°], and the rotation angle about the z axis is within [0°, 90°].
Preferably, when a training sample picture is rotated about the x or y axis, the angle interval is set to 5°; when it is rotated about the z axis, the angle interval is set to 10°.
Preferably, the loss function includes an angle error loss function, a coordinate error loss function, an IoU error loss function and a classification error loss function.
The angle error loss function is:
$$L_a = \sum_{i=0}^{S^2} \mathbb{1}_i^{obj}\left[(A_{x,i}-\hat{A}_{x,i})^2+(A_{y,i}-\hat{A}_{y,i})^2+(A_{z,i}-\hat{A}_{z,i})^2\right]$$
where $A_x$, $A_y$, $A_z$ are the rotation angles about the x, y and z axes predicted by the network, $\hat{A}_x$, $\hat{A}_y$, $\hat{A}_z$ are the corresponding annotated values, and $\mathbb{1}_i^{obj}$ indicates that an object center falls in grid cell i.
The coordinate error loss function is:
$$L_c = \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2+(\sqrt{w_i}-\sqrt{\hat{w}_i})^2+(\sqrt{h_i}-\sqrt{\hat{h}_i})^2\right]$$
The IoU error loss function is:
$$L_{IoU} = \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(C_i-\hat{C}_i)^2+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(C_i-\hat{C}_i)^2$$
The classification error loss function is:
$$L_{cls} = \sum_{i=0}^{S^2}\mathbb{1}_i^{obj}\sum_{c\in classes}(p_i(c)-\hat{p}_i(c))^2$$
where x, y, w, h, C, p are network predictions, $\hat{x},\hat{y},\hat{w},\hat{h},\hat{C},\hat{p}$ are the annotated values, $\mathbb{1}_i^{obj}$ indicates that an object center falls in grid cell i, $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ indicate whether an object center falls into the j-th predicted box of the i-th grid cell, and $\lambda_{coord}$ and $\lambda_{noobj}$ are the balancing weights of the original YOLO loss.
The overall loss function is: L = La + Lc + LIoU + Lcls.
Further, the workpiece recognition, positioning and pose estimation method based on deep learning is implemented on the basis of the workpiece recognition, positioning and pose estimation system based on deep learning described above.
Compared with the prior art, the present invention has the advantage that, with the provided technical solution, the classification of different kinds of workpieces, the determination of their positions and the estimation of the spatial pose of an individual workpiece can be detected simultaneously, which substantially improves production-line efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of a workpiece recognition, positioning and pose estimation method based on deep learning in an exemplary embodiment of the invention;
Fig. 2 is a schematic diagram of a workpiece recognition, positioning and pose estimation network improved from the YOLO deep learning network in an exemplary embodiment of the invention.
Specific embodiment
In view of the deficiencies of the prior art, the inventors, through long-term research and extensive practice, have arrived at the technical solution of the present invention. The technical solution, its implementation process and its principle are further explained below.
An embodiment of the present invention provides a workpiece recognition, positioning and pose estimation system based on deep learning, including:
a network construction module, at least configured to design a workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network, the design including adding an output item after the fully connected layer, the output item being used to obtain angle information;
a data acquisition module, at least configured to construct a training set, the construction including collecting workpiece pictures of different poses as training samples and annotating the training samples with angle information, class information and position information;
a model training module, at least configured to train the workpiece recognition, positioning and pose estimation network with the training set constructed by the data acquisition module; when the loss value reaches a preset threshold, training ends and a workpiece recognition, positioning and pose estimation model is obtained;
a workpiece recognition, positioning and pose estimation module, at least configured to recognize, locate and estimate the pose of a workpiece in a real workpiece picture according to the workpiece recognition, positioning and pose estimation model.
Further, the network construction module, the data acquisition module, the model training module and the workpiece recognition, positioning and pose estimation module are connected in sequence to form the workpiece recognition, positioning and pose estimation system based on deep learning.
Further, the model training module also includes a loss value calculation submodule for calculating the loss value of the workpiece recognition, positioning and pose estimation network being trained, the loss value being calculated with a loss function that simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece pose error.
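To make the module division concrete, the following is a minimal Python sketch of how the four modules could be wired in sequence; the class and attribute names are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WorkpiecePoseSystem:
    """Sketch of the four sequentially connected modules described above."""
    network_builder: Callable   # network construction module: YOLO-based net with an angle output
    data_collector: Callable    # data acquisition module: collects and annotates pose pictures
    trainer: Callable           # model training module: trains until the loss threshold is reached
    estimator: Callable         # recognition, positioning and pose estimation module

    def run(self, image):
        network = self.network_builder()
        train_set = self.data_collector()
        model = self.trainer(network, train_set)
        return self.estimator(model, image)  # class, bounding box and (anglex, angley, anglez)
```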
Referring to Fig. 1, an embodiment of the present invention also provides a workpiece recognition, positioning and pose estimation method based on deep learning, which may include the following steps:
Step 101: design the workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network; the YOLO deep learning network is improved so that it additionally outputs angle information.
Step 102: collect and annotate workpiece training sample pictures of different poses; workpiece pictures of different poses are collected and annotated with angle information, class information and position information.
Step 103: train the workpiece recognition, positioning and pose estimation model with the training set constructed in step 102; during training, a loss function that simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece pose error is used.
Step 104: call the workpiece recognition, positioning and pose estimation model to recognize, locate and estimate the pose of a real workpiece picture.
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
In some more specific embodiments, a workpiece recognition, positioning and pose estimation method may include the following steps.
1. Design of the workpiece recognition, positioning and pose estimation network based on the YOLO deep learning network
Referring to Fig. 2, Fig. 2 shows the workpiece recognition, positioning and pose estimation network obtained by improving the YOLO deep learning network in an exemplary embodiment of the invention. The original YOLO deep learning network yields the class and position information of a workpiece; in the embodiment of the present invention, however, the pose of the workpiece, i.e. its angle information, must be obtained in addition to its class and position information. The original network is therefore improved to output angle values, and the modified network structure is shown in Fig. 2. As can be seen from Fig. 2, the improved YOLO network essentially retains the original network structure for computing class and position information. The improvement consists of adding an output item after the fully connected layer: an additional fully connected layer is attached to the fully connected layer whose output is a 4096-dimensional vector and is used to obtain the angle information; its output size is 7*7*3, where 3 corresponds to the three output angles anglex, angley and anglez. A minimal sketch of this modification is given below.
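The following PyTorch sketch illustrates the kind of modification described above: an extra fully connected branch attached to the 4096-dimensional feature that outputs a 7*7*3 angle map alongside the original detection output. The backbone, layer names and grid/box/class counts are assumptions for illustration, not the patented implementation.

```python
import torch
import torch.nn as nn

class YoloWithAngleHead(nn.Module):
    """Sketch: YOLO-style detector with an additional angle output branch."""

    def __init__(self, backbone: nn.Module, grid: int = 7, boxes: int = 2, classes: int = 3):
        super().__init__()
        self.backbone = backbone                    # assumed to return a 4096-dimensional feature
        # original YOLO head: bounding boxes, confidences and class scores
        self.det_head = nn.Linear(4096, grid * grid * (boxes * 5 + classes))
        # added output item: three rotation angles (anglex, angley, anglez) per grid cell
        self.angle_head = nn.Linear(4096, grid * grid * 3)
        self.grid = grid

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)                     # shape (N, 4096)
        det = self.det_head(feat)                   # class + position + confidence predictions
        angles = self.angle_head(feat).view(-1, self.grid, self.grid, 3)  # 7*7*3 angle map
        return det, angles
```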
2. Collection and annotation of training pictures of workpieces in different poses
Three kinds of workpieces are used in this test; their shapes and sizes all differ, and none of them is highly symmetric. For workpieces produced on a production line, several kinds of workpieces are not usually mixed together, as this would increase the sorting difficulty, so in line with the practical application scenario only one kind of workpiece is considered in this embodiment. Taking the first workpiece as an example, in order to obtain its pose information, pictures of each of its poses must be collected when the training set is produced, with a certain pose taken as the reference and set to (0°, 0°, 0°) about the x, y and z axes. As a preferred implementation, the pictures are generated by rotating a three-dimensional CAD model of the workpiece about the x, y and z axes with OpenGL.
In an exemplary embodiment of the invention, a subset of poses is used for training and testing, from which the test behaviour of the other poses can be extrapolated. The rotation angles about the x axis and the y axis lie within [−15°, 14°], and the rotation angle about the z axis lies within [0°, 90°]. For rotation about the x and y axes, the angle interval is set to 5°, and each picture is labelled with the median of the interval that contains its rotation angle about the x or y axis. Rotation about the z axis can be regarded as an in-plane rotation, which changes the workpiece pose only slightly, so its angle interval is set to 10°, and each picture is labelled with the median of the interval that contains its rotation angle about the z axis. For example, a workpiece rotated between 6° and 10° about the x and y axes and between 11° and 20° about the z axis is uniformly labelled (8, 8, 15); all workpieces within this pose range are regarded as having the same pose, with rotation angles (8, 8, 15).
In an exemplary embodiment of the invention, 250 pictures are collected for each pose for training. After the training set pictures have been collected, they must be annotated: the class information of each training picture is recorded and the target to be trained is outlined. The class information is labelled 1, representing the 1st class; the bounding box of the workpiece can be obtained as its minimum enclosing rectangle, and the four values Xmin, Xmax, Ymin and Ymax are written into the annotation file in a uniform format. A small sketch of this labelling step follows.
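As an illustration of the annotation step, the sketch below bins an exact rotation angle to the median of its labelling interval and derives the bounding box (Xmin, Xmax, Ymin, Ymax) from a binary workpiece mask. The interval convention in `interval_median` is an assumption chosen to reproduce the example above (6° to 10° about x/y maps to 8, 11° to 20° about z maps to 15); the function and field names are illustrative.

```python
import math
import numpy as np

def interval_median(angle_deg: float, step: int) -> int:
    """Label an angle with the integer median of its step-wide interval.
    Assumed convention: intervals of the form (k*step, (k+1)*step], e.g. 6..10 for step=5."""
    k = math.ceil(angle_deg / step) - 1
    low, high = k * step + 1, (k + 1) * step
    return (low + high) // 2                 # 6..10 deg -> 8, 11..20 deg -> 15

def make_annotation(ax: float, ay: float, az: float, mask: np.ndarray, cls: int = 1) -> dict:
    """Build one annotation record: class id, bounding box and binned rotation angles."""
    ys, xs = np.nonzero(mask)                # pixels belonging to the workpiece
    xmin, xmax = int(xs.min()), int(xs.max())
    ymin, ymax = int(ys.min()), int(ys.max())
    angles = (interval_median(ax, 5), interval_median(ay, 5), interval_median(az, 10))
    return {"class": cls, "bbox": (xmin, xmax, ymin, ymax), "angles": angles}
```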
3. Design of a loss function that simultaneously fuses the workpiece classification error, the workpiece position coordinate error and the workpiece pose error
Since an angle regression branch is introduced, an angle error loss function must be added to the original loss function. Its formula is:
$$L_a = \sum_{i=0}^{S^2} \mathbb{1}_i^{obj}\left[(A_{x,i}-\hat{A}_{x,i})^2+(A_{y,i}-\hat{A}_{y,i})^2+(A_{z,i}-\hat{A}_{z,i})^2\right]$$
where Ax, Ay, Az are the rotation angles about the x, y and z axes predicted by the network, $\hat{A}_x$, $\hat{A}_y$, $\hat{A}_z$ are the corresponding annotated values, and $\mathbb{1}_i^{obj}$ indicates that an object center falls in grid cell i.
In addition to the angle loss function, the loss function also includes the coordinate error loss function, the IoU error loss function and the classification error loss function, given respectively by:
Coordinate error loss function:
$$L_c = \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2+(\sqrt{w_i}-\sqrt{\hat{w}_i})^2+(\sqrt{h_i}-\sqrt{\hat{h}_i})^2\right]$$
IoU error loss function:
$$L_{IoU} = \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(C_i-\hat{C}_i)^2+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(C_i-\hat{C}_i)^2$$
Classification error loss function:
$$L_{cls} = \sum_{i=0}^{S^2}\mathbb{1}_i^{obj}\sum_{c\in classes}(p_i(c)-\hat{p}_i(c))^2$$
where x, y, w, h, C, p are network predictions, $\hat{x},\hat{y},\hat{w},\hat{h},\hat{C},\hat{p}$ are the annotated values, $\mathbb{1}_i^{obj}$ indicates that an object center falls in grid cell i, $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ indicate whether an object center falls into the j-th predicted box of the i-th grid cell, and $\lambda_{coord}$ and $\lambda_{noobj}$ are the balancing weights of the original YOLO loss.
For the entire workpiece recognition, positioning and pose estimation network, the total loss is:
L = La + Lc + LIoU + Lcls
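A minimal sketch of the fused loss follows, assuming the angle predictions and targets are arranged as (N, S, S, 3) tensors and that a separate routine supplies the original YOLO terms Lc + LIoU + Lcls; the tensor layouts and the `yolo_loss_fn` argument are assumptions.

```python
import torch

def angle_loss(pred_angles: torch.Tensor, gt_angles: torch.Tensor, obj_mask: torch.Tensor) -> torch.Tensor:
    """Angle term La: squared error of (anglex, angley, anglez), accumulated only
    over grid cells whose obj_mask entry is 1 (an object center falls in the cell)."""
    sq_err = (pred_angles - gt_angles).pow(2).sum(dim=-1)   # (N, S, S)
    return (obj_mask * sq_err).sum()

def total_loss(pred: dict, target: dict, yolo_loss_fn) -> torch.Tensor:
    """Fused loss L = La + Lc + LIoU + Lcls (sketch)."""
    la = angle_loss(pred["angles"], target["angles"], target["obj_mask"])
    return la + yolo_loss_fn(pred["det"], target["det"])    # remaining original YOLO terms
```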
4. Training the workpiece recognition, positioning and pose estimation model with the collected training pictures
Because an angle regression layer has been added to the network, optimizing all variables of the network at once makes the loss function difficult to converge, so training can be carried out in two steps. First, the YOLO network before the structural modification is trained: the initial learning rate is 0.01, the batch size is 30 and the number of epochs is 11; the variables are optimized with a gradient descent optimizer, and accurate detection results without angle estimation are finally obtained through repeated training.
After the first-step training and testing are completed, the trained weights are loaded into the modified workpiece recognition, positioning and pose estimation network and training is resumed. A gradient descent optimizer is still used, but only the newly added variables related to angle prediction are optimized. The batch size remains 30, the learning rate starts at 0.01 and is gradually reduced to 0.0001, the number of epochs is 11, and training is repeated. A sketch of this second step is given below.
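The sketch below illustrates the second training step under the assumptions of the earlier `YoloWithAngleHead` and `angle_loss` sketches: the pretrained detection weights are frozen and only the angle branch is optimized with plain gradient descent while the learning rate decays from 0.01 to 0.0001 over 11 epochs. The data-loader format and attribute names are assumptions.

```python
import torch

def train_angle_head(model, loader, epochs: int = 11, lr_start: float = 0.01, lr_end: float = 0.0001):
    """Second training step (sketch): optimize only the newly added angle-prediction variables."""
    for p in model.parameters():
        p.requires_grad_(False)                      # freeze the original YOLO variables
    for p in model.angle_head.parameters():
        p.requires_grad_(True)                       # train only the angle branch
    optimizer = torch.optim.SGD(model.angle_head.parameters(), lr=lr_start)
    gamma = (lr_end / lr_start) ** (1.0 / max(epochs - 1, 1))   # smooth decay 0.01 -> 0.0001
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)
    for _ in range(epochs):
        for images, target in loader:                # batches of 30 pictures and their labels
            optimizer.zero_grad()
            _, pred_angles = model(images)
            loss = angle_loss(pred_angles, target["angles"], target["obj_mask"])
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```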
5. Calling the model to recognize, locate and estimate the pose of workpieces
The test set consists of real workpiece pictures taken with a camera. Their sizes differ from those of the training pictures, and the workpieces also differ from the training pictures in factors such as surface illumination and rust, but the angle ranges are still −15° to 14° about the x and y axes and 0° to 90° about the z axis. There are 1600 pictures of each kind of workpiece, 4800 in total. A minimal inference sketch is given below.
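The following inference sketch reuses the assumed `YoloWithAngleHead` model: it loads a camera photo, runs the network once and reports the class scores, bounding box and (anglex, angley, anglez) of the most confident grid cell. The input size, prediction layout and decoding indices are assumptions.

```python
import torch
from PIL import Image
import torchvision.transforms as T

def recognize_workpiece(model, image_path: str, grid: int = 7, boxes: int = 2):
    """Run recognition, positioning and pose estimation on one real workpiece photo (sketch)."""
    tf = T.Compose([T.Resize((448, 448)), T.ToTensor()])    # YOLO-v1-style input size (assumed)
    img = tf(Image.open(image_path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        det, angles = model(img)
    det = det.view(1, grid, grid, -1)                       # S x S grid of predictions
    conf = det[..., 4]                                      # assumed: first box confidence at index 4
    i, j = divmod(int(torch.argmax(conf.view(-1))), grid)
    cell = det[0, i, j]
    return {
        "class_scores": cell[boxes * 5:].tolist(),          # class scores after the box predictions
        "box": cell[0:4].tolist(),                          # (x, y, w, h) of the first predicted box
        "angles_deg": angles[0, i, j].tolist(),             # predicted (anglex, angley, anglez)
    }
```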
The training and test results for each kind of workpiece are compared in Table 1. From the statistics in Table 1 it can be seen that, for both training and testing, the classification results are excellent and the bounding-box errors in the x and y directions are very low, with an overall test-set error of about 1 mm. On the training set, the loss value can no longer decrease once the average rotation-angle errors about the x, y and z axes reach 4.038°, 4.334° and 8.464°, respectively. The reason is that the rotation angles about the x and y axes in the training set are labelled with the median of 5° intervals and those about the z axis with the median of 10° intervals, so the labels themselves carry errors of 5°, 5° and 10°, respectively. The test-set errors are further increased by workpiece surface rust, illumination and workpiece size, which causes the difference from the training set, but the errors remain within an acceptable range and have little influence on the test results.
Table 1: Training and test result statistics for the three kinds of workpieces
With the workpiece recognition, positioning and pose estimation system based on deep learning provided by the invention, the classification of the different kinds of workpieces on the same production line, the determination of their positions and the estimation of the spatial pose of an individual workpiece can all be detected simultaneously. This makes it easy for an industrial robot to adapt its motion trajectory and grasping angle to the different poses of different workpieces and to carry out automatic operations such as sorting, greatly improving the production efficiency of the production line.
It should be understood that the technical concepts and features of the above embodiments are only intended to illustrate the invention so that those skilled in the art can understand and implement it accordingly; they are not intended to limit the scope of protection of the invention. All equivalent changes or modifications made according to the spirit of the present invention shall fall within the scope of protection of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810591858.8A CN109101966B (en) | 2018-06-08 | 2018-06-08 | Workpiece recognition, positioning and pose estimation system and method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810591858.8A CN109101966B (en) | 2018-06-08 | 2018-06-08 | Workpiece recognition, positioning and pose estimation system and method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109101966A true CN109101966A (en) | 2018-12-28 |
CN109101966B CN109101966B (en) | 2022-03-08 |
Family
ID=64796782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810591858.8A Active CN109101966B (en) | 2018-06-08 | 2018-06-08 | Workpiece recognition, positioning and pose estimation system and method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109101966B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858530A (en) * | 2019-01-14 | 2019-06-07 | 苏州长风航空电子有限公司 | One kind being based on compound pyramidal rolling target detection method |
CN109902629A (en) * | 2019-03-01 | 2019-06-18 | 成都康乔电子有限责任公司 | A Real-time Vehicle Object Detection Model in Complex Traffic Scenarios |
CN109948514A (en) * | 2019-03-15 | 2019-06-28 | 中国科学院宁波材料技术与工程研究所 | Fast workpiece identification and localization method based on single-target 3D reconstruction |
CN110223352A (en) * | 2019-06-14 | 2019-09-10 | 浙江明峰智能医疗科技有限公司 | A kind of medical image scanning automatic positioning method based on deep learning |
CN110338835A (en) * | 2019-07-02 | 2019-10-18 | 深圳安科高技术股份有限公司 | A kind of intelligent scanning stereoscopic monitoring method and system |
CN110826499A (en) * | 2019-11-08 | 2020-02-21 | 上海眼控科技股份有限公司 | Object space parameter detection method and device, electronic equipment and storage medium |
CN110948489A (en) * | 2019-12-04 | 2020-04-03 | 国电南瑞科技股份有限公司 | A method and system for limiting safe working space of a live working robot |
CN111667510A (en) * | 2020-06-17 | 2020-09-15 | 常州市中环互联网信息技术有限公司 | Rock climbing action evaluation system based on deep learning and attitude estimation |
CN111784767A (en) * | 2020-06-08 | 2020-10-16 | 珠海格力电器股份有限公司 | Method and device for determining target position |
CN112800856A (en) * | 2021-01-06 | 2021-05-14 | 南京通盛弘数据有限公司 | Livestock position and posture recognition method and device based on YOLOv3 |
CN113111712A (en) * | 2021-03-11 | 2021-07-13 | 稳健医疗用品股份有限公司 | AI identification positioning method, system and device for bagged product |
CN113102882A (en) * | 2021-06-16 | 2021-07-13 | 杭州景业智能科技股份有限公司 | Geometric error compensation model training method and geometric error compensation method |
CN113723217A (en) * | 2021-08-09 | 2021-11-30 | 南京邮电大学 | Object intelligent detection method and system based on yolo improvement |
CN114385322A (en) * | 2020-10-21 | 2022-04-22 | 沈阳中科数控技术股份有限公司 | Edge collaborative data distribution method applied to industrial Internet of things |
CN114708484A (en) * | 2022-03-14 | 2022-07-05 | 中铁电气化局集团有限公司 | Pattern analysis method suitable for identifying defects |
CN116468998A (en) * | 2022-09-09 | 2023-07-21 | 国网湖北省电力有限公司超高压公司 | Visual characteristic-based power transmission line small part and hanging point part detection method |
CN117368000A (en) * | 2023-10-13 | 2024-01-09 | 昆山美仑工业样机有限公司 | Static torsion test stand provided with self-adaptive clamping mechanism |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5459636A (en) * | 1994-01-14 | 1995-10-17 | Hughes Aircraft Company | Position and orientation estimation neural network system and method |
CN106683091A (en) * | 2017-01-06 | 2017-05-17 | 北京理工大学 | Target classification and attitude detection method based on depth convolution neural network |
CN107451568A (en) * | 2017-08-03 | 2017-12-08 | 重庆邮电大学 | Use the attitude detecting method and equipment of depth convolutional neural networks |
CN108121986A (en) * | 2017-12-29 | 2018-06-05 | 深圳云天励飞技术有限公司 | Object detection method and device, computer installation and computer readable storage medium |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5459636A (en) * | 1994-01-14 | 1995-10-17 | Hughes Aircraft Company | Position and orientation estimation neural network system and method |
CN106683091A (en) * | 2017-01-06 | 2017-05-17 | 北京理工大学 | Target classification and attitude detection method based on depth convolution neural network |
CN107451568A (en) * | 2017-08-03 | 2017-12-08 | 重庆邮电大学 | Use the attitude detecting method and equipment of depth convolutional neural networks |
CN108121986A (en) * | 2017-12-29 | 2018-06-05 | 深圳云天励飞技术有限公司 | Object detection method and device, computer installation and computer readable storage medium |
Non-Patent Citations (1)
Title |
---|
JOSEPH REDMON: "You Only Look Once: Unified, Real-Time Object Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858530A (en) * | 2019-01-14 | 2019-06-07 | 苏州长风航空电子有限公司 | One kind being based on compound pyramidal rolling target detection method |
CN109858530B (en) * | 2019-01-14 | 2022-06-28 | 苏州长风航空电子有限公司 | Composite pyramid-based rotating target detection method |
CN109902629A (en) * | 2019-03-01 | 2019-06-18 | 成都康乔电子有限责任公司 | A Real-time Vehicle Object Detection Model in Complex Traffic Scenarios |
CN109948514A (en) * | 2019-03-15 | 2019-06-28 | 中国科学院宁波材料技术与工程研究所 | Fast workpiece identification and localization method based on single-target 3D reconstruction |
CN110223352B (en) * | 2019-06-14 | 2021-07-02 | 浙江明峰智能医疗科技有限公司 | Medical image scanning automatic positioning method based on deep learning |
CN110223352A (en) * | 2019-06-14 | 2019-09-10 | 浙江明峰智能医疗科技有限公司 | A kind of medical image scanning automatic positioning method based on deep learning |
CN110338835A (en) * | 2019-07-02 | 2019-10-18 | 深圳安科高技术股份有限公司 | A kind of intelligent scanning stereoscopic monitoring method and system |
CN110826499A (en) * | 2019-11-08 | 2020-02-21 | 上海眼控科技股份有限公司 | Object space parameter detection method and device, electronic equipment and storage medium |
CN110948489A (en) * | 2019-12-04 | 2020-04-03 | 国电南瑞科技股份有限公司 | A method and system for limiting safe working space of a live working robot |
CN110948489B (en) * | 2019-12-04 | 2022-11-04 | 国电南瑞科技股份有限公司 | Method and system for limiting safe working space of live working robot |
CN111784767A (en) * | 2020-06-08 | 2020-10-16 | 珠海格力电器股份有限公司 | Method and device for determining target position |
CN111667510A (en) * | 2020-06-17 | 2020-09-15 | 常州市中环互联网信息技术有限公司 | Rock climbing action evaluation system based on deep learning and attitude estimation |
CN114385322A (en) * | 2020-10-21 | 2022-04-22 | 沈阳中科数控技术股份有限公司 | Edge collaborative data distribution method applied to industrial Internet of things |
CN112800856A (en) * | 2021-01-06 | 2021-05-14 | 南京通盛弘数据有限公司 | Livestock position and posture recognition method and device based on YOLOv3 |
CN113111712A (en) * | 2021-03-11 | 2021-07-13 | 稳健医疗用品股份有限公司 | AI identification positioning method, system and device for bagged product |
CN113102882A (en) * | 2021-06-16 | 2021-07-13 | 杭州景业智能科技股份有限公司 | Geometric error compensation model training method and geometric error compensation method |
CN113723217A (en) * | 2021-08-09 | 2021-11-30 | 南京邮电大学 | Object intelligent detection method and system based on yolo improvement |
CN113723217B (en) * | 2021-08-09 | 2025-01-14 | 南京邮电大学 | An improved object intelligent detection method and system based on Yolo |
CN114708484A (en) * | 2022-03-14 | 2022-07-05 | 中铁电气化局集团有限公司 | Pattern analysis method suitable for identifying defects |
CN116468998A (en) * | 2022-09-09 | 2023-07-21 | 国网湖北省电力有限公司超高压公司 | Visual characteristic-based power transmission line small part and hanging point part detection method |
CN117368000A (en) * | 2023-10-13 | 2024-01-09 | 昆山美仑工业样机有限公司 | Static torsion test stand provided with self-adaptive clamping mechanism |
CN117368000B (en) * | 2023-10-13 | 2024-05-07 | 昆山美仑工业样机有限公司 | Static torsion test stand provided with self-adaptive clamping mechanism |
Also Published As
Publication number | Publication date |
---|---|
CN109101966B (en) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109101966A (en) | Workpiece identification positioning and posture estimation system and method based on deep learning | |
CN108171748B (en) | Visual identification and positioning method for intelligent robot grabbing application | |
CN110322510B (en) | 6D pose estimation method using contour information | |
CN112297013B (en) | A robot intelligent grasping method based on digital twin and deep neural network | |
CN111260649B (en) | Close-range mechanical arm sensing and calibrating method | |
CN114011608B (en) | Spraying process optimization system based on digital twinning and spraying optimization method thereof | |
CN115816460B (en) | Mechanical arm grabbing method based on deep learning target detection and image segmentation | |
CN110969660B (en) | Robot feeding system based on three-dimensional vision and point cloud deep learning | |
CN110378325B (en) | Target pose identification method in robot grabbing process | |
CN115861999B (en) | A robot grasping detection method based on multimodal visual information fusion | |
CN113034575B (en) | Model construction method, pose estimation method and object picking device | |
Chang et al. | A lightweight appearance quality assessment system based on parallel deep learning for painted car body | |
CN115330734A (en) | Automatic robot repair welding system based on three-dimensional target detection and point cloud defect completion | |
CN110428464A (en) | Multi-class out-of-order workpiece robot based on deep learning grabs position and orientation estimation method | |
CN109948514A (en) | Fast workpiece identification and localization method based on single-target 3D reconstruction | |
CN111310637A (en) | A scale-invariant network-based detection method for robot object grasping | |
CN116665312A (en) | A Human-Machine Collaboration Method Based on Multi-scale Graph Convolutional Neural Network | |
Gonçalves et al. | Grasp planning with incomplete knowledge about the object to be grasped | |
CN118990489A (en) | Double-mechanical-arm cooperative carrying system based on deep reinforcement learning | |
Frank et al. | Stereo-vision for autonomous industrial inspection robots | |
Hosseini et al. | Multi-modal robust geometry primitive shape scene abstraction for grasp detection | |
CN114972948A (en) | Neural detection network-based identification and positioning method and system | |
Manawadu et al. | Object recognition and pose estimation from rgb-d data using active sensing | |
CN112634367A (en) | Anti-occlusion object pose estimation method based on deep neural network | |
CN118071828B (en) | Intelligent non-contact chip surface temperature measurement method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |