
CN111178296B - Multi-workpiece visual positioning and identifying method - Google Patents


Info

Publication number
CN111178296B
CN111178296B
Authority
CN
China
Prior art keywords
workpiece
coordinate system
robot
machine vision
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911417127.2A
Other languages
Chinese (zh)
Other versions
CN111178296A (en)
Inventor
郝富强
丁会霞
廖昌叶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kunpeng Intelligent Equipment Manufacture Co ltd
Original Assignee
Shenzhen Kunpeng Intelligent Equipment Manufacture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kunpeng Intelligent Equipment Manufacture Co ltd filed Critical Shenzhen Kunpeng Intelligent Equipment Manufacture Co ltd
Priority to CN201911417127.2A priority Critical patent/CN111178296B/en
Publication of CN111178296A publication Critical patent/CN111178296A/en
Application granted granted Critical
Publication of CN111178296B publication Critical patent/CN111178296B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712 - Fixed beam scanning
    • G06K7/10722 - Photodetector array or CCD scanning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Manipulator (AREA)
  • Numerical Control (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A multi-workpiece visual positioning and identifying method uses a high-precision industrial camera as the visual sensor to collect image information of the machine vision positioning modules, filters the image to suppress noise, extracts edge feature data, fits the edges with a fitting algorithm, and finally calculates the plane coordinate values of the workpiece with the Canny edge detection algorithm and establishes the workpiece coordinate system; it then determines the coordinate position and rotation angle of the workpiece to be welded, and uses a homogeneous matrix algorithm to unify the workpiece coordinate system and the robot coordinate system into the robot coordinate system, so as to guide the robot through an automatic welding process. The invention improves welding quality, raises production efficiency and reduces labor cost.

Description

Multi-workpiece visual positioning and identifying method
Technical Field
The invention relates to the technical field of welding and positioning of robots.
Background
In an age of rapid economic development, ship output keeps growing and the variety of workpieces keeps increasing, so the traditional approach of fixing each workpiece on a dedicated platform can no longer meet capacity requirements. Domestic automatic welding technology has already been applied in many fields, such as the automatic welding of automobile frames. At present, because the working environment of shipyards is harsh and the workpieces are large and varied, welding is mainly manual, with the drawbacks of low welding efficiency, high cost, long cycle times and relatively low quality. To solve this problem, the shipbuilding industry has introduced machine vision, but the harsh shipyard environment, the large workpiece sizes, and the open, unstable ambient lighting strongly interfere with machine vision positioning. Painting the platform background to increase the contrast between workpiece and platform can alleviate the problem, but frequent loading and unloading of workpieces scrapes off the background paint and forces repeated maintenance, which is costly, time-consuming and seriously affects production efficiency.
Disclosure of Invention
The invention aims to provide a multi-workpiece visual positioning and identifying method, which can quickly identify and position workpieces through machine vision and guide a robot to realize automatic welding.
The invention realizes this aim with a multi-workpiece visual positioning and identifying method, which comprises the following steps:
A. setting a large welding working platform, placing a workpiece on the welding platform, and respectively placing machine vision positioning modules at the outer edges of the workpiece;
B. the camera obtains the pixel equivalent by photographing a calibration caliper; the camera photographs the machine vision positioning modules to obtain their image information and analyzes it to obtain the coordinates of the point cloud on the edges; the camera scans the two-dimensional code to determine the model and batch information of the workpiece; noise points are removed from the reference visual image collected by the camera with a nonlinear median filtering algorithm;
C. converting the data of the camera coordinate system by using a homogeneous matrix algorithm, fitting the point cloud, determining the coordinates of the workpiece in the world coordinate system, calculating to obtain a rotation angle, and sending the rotation angle to a robot database to guide a robot to realize automatic welding of the workpiece;
the conversion relation of the two coordinate systems is expressed as the homogeneous matrix (i.e. the rotation Rz(A)·Ry(B)·Rx(C) together with the translation (X, Y, Z))

T = [ cA·cB   cA·sB·sC - sA·cC   cA·sB·cC + sA·sC   X ]
    [ sA·cB   sA·sB·sC + cA·cC   sA·sB·cC - cA·sC   Y ]
    [ -sB     cB·sC              cB·cC              Z ]
    [ 0       0                  0                  1 ]

wherein s represents sin and c represents cos;
fitting calculation is carried out on the point clouds of the two-dimensional codes on the same axis of the workpiece to obtain a fitted straight line, and the two fitted straight lines of the X axis and the Y axis are intersected to obtain the coordinates of the intersection point.
Furthermore, feature extraction at the pixel level adopts Canny operators and Log operators.
Still further, assuming that the points are (x1, y1), (x2, y2), ..., (xn, yn), the coefficients of the linear fit y = a + bx are
b = (n(x1y1 + x2y2 + ... + xnyn) - (x1 + ... + xn)(y1 + ... + yn)) / (n((x1)² + (x2)² + ... + (xn)²) - (x1 + ... + xn)²),
a = (y1 + ... + yn)/n - b(x1 + ... + xn)/n.
The invention improves welding quality, raises production efficiency and reduces labor cost, and can be popularized in shipyard welding in the future so that more shipyards benefit in the welding field.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the present invention;
FIG. 2 is a schematic view of the welding robot coordinates and ZYX Euler angles in accordance with a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a preferred embodiment of the present invention.
Detailed Description
The invention is further described below with reference to examples.
As shown in fig. 1, a method for visual positioning and identifying multiple workpieces includes the following steps:
A. a large welding working platform is arranged, the workpiece 1 is placed on the welding platform, and the machine vision positioning modules 2 are respectively placed on the outer edges of the workpiece 1, as shown in fig. 3. A large work platform is required to place workpieces of different sizes on the welding platform.
B. The camera 4 obtains the pixel equivalent by photographing a calibration caliper; the camera 4 photographs the machine vision positioning module to obtain its image information and analyzes it to obtain the coordinates of the point cloud on the edge; the camera scans the two-dimensional code 3 to determine the model and batch information of the workpiece. Calibrating the camera by photographing a caliper refers to the calibration of each camera coordinate system, which is the source of the pixel equivalent.
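The pixel-equivalent step above can be illustrated with a minimal sketch (the patent does not give an implementation; the function name and the sample values are hypothetical). The pixel equivalent is simply the known physical length of a calibration feature divided by its measured length in pixels:

```python
import math

def pixel_equivalent(p1, p2, known_length_mm):
    """Pixel equivalent (mm per pixel): the known physical length of a
    calibration feature divided by the pixel distance between its two
    endpoints as located in the image."""
    dist_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    if dist_px == 0:
        raise ValueError("calibration points coincide")
    return known_length_mm / dist_px

# Example: a 100 mm calibration feature spans 500 pixels in the image,
# so each pixel corresponds to 0.2 mm.
k = pixel_equivalent((100.0, 200.0), (600.0, 200.0), 100.0)
print(k)  # 0.2
```

With this factor, pixel coordinates measured on the positioning modules can be scaled into millimetres before the coordinate-system conversion of step C.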
The control mechanism 7 sends an instruction to the industrial computer 6, which starts the image acquisition card 5; the camera 4 scans the two-dimensional code 3 on the workpiece to acquire images, and the industrial computer 6 directly reads the information contained in the two-dimensional code, including the model and specification of the workpiece. At the same time, the coordinates of the point cloud on the edge line of the two-dimensional code are extracted by the vision software's algorithm.
The reference image collected by the camera sometimes contains high-frequency pixel points and noise caused by vibration, lighting, debris and the like. The ideal contour of the reference edge is a smooth curve, so a nonlinear median filtering algorithm is particularly suitable for removing noise points from the reference visual image. Pixel-level feature extraction is a mature technique; the Canny operator and the Log operator obtain features stably and quickly, and are suitable for static high-precision pixel feature extraction. Theoretically, the maximum edge-positioning error of these operators is 0.5 pixel, and the number of pixels between two feature points may have an error of 1 pixel.
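These two preprocessing steps can be sketched with a small numpy illustration. This is a simplified stand-in, not the patent's implementation: a 3x3 median filter represents the nonlinear median filtering, and a plain gradient-magnitude threshold stands in for the full Canny/Log operators (which in practice would come from a vision library):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: stack the nine shifted neighborhoods and take
    the per-pixel median; borders are handled by edge padding."""
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def gradient_edges(img, thresh):
    """Crude edge map: central-difference gradient magnitude threshold,
    a stand-in for Canny's smoothed gradient and hysteresis stages."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# A flat image with one bright impulse-noise pixel: the median filter
# removes the impulse, so no spurious edge survives.
img = np.zeros((7, 7))
img[3, 3] = 255.0
clean = median_filter3(img)
print(clean.max())                        # impulse suppressed
print(gradient_edges(clean, 10.0).any())  # no edges remain
```

The point of the median filter here is exactly what the text states: isolated high-frequency noise pixels are rejected without blurring the smooth reference edge the way a linear (mean) filter would.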
C. Converting the data of the camera coordinate system by using a homogeneous matrix algorithm, fitting the point cloud, determining the coordinates of the workpiece in the world coordinate system, calculating to obtain a rotation angle, and sending the rotation angle to a robot database to guide the robot to realize automatic welding of the workpiece.
Fitting calculation is carried out on the point clouds of the machine vision positioning modules on the same axis of the workpiece to obtain a fitted straight line, and the two fitted straight lines of the X axis and the Y axis are intersected to obtain the coordinates of the intersection point. Assuming that the points are (x1, y1), (x2, y2), ..., (xn, yn), the coefficients of the linear fit y = a + bx are
b = (n(x1y1 + x2y2 + ... + xnyn) - (x1 + ... + xn)(y1 + ... + yn)) / (n((x1)² + (x2)² + ... + (xn)²) - (x1 + ... + xn)²),
a = (y1 + ... + yn)/n - b(x1 + ... + xn)/n.
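The fit described above is the ordinary least-squares line, and the intersection step follows directly from it. A short sketch (the `intersect` helper and the sample point clouds are illustrative, not from the patent):

```python
def fit_line(pts):
    """Least-squares fit of y = a + b*x using the closed-form sums:
    b = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  a = Sy/n - b*Sx/n."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    sxx = sum(x * x for x, _ in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = sy / n - b * sx / n
    return a, b

def intersect(a1, b1, a2, b2):
    """Intersection of y = a1 + b1*x and y = a2 + b2*x."""
    if b1 == b2:
        raise ValueError("parallel lines")
    x = (a2 - a1) / (b1 - b2)
    return x, a1 + b1 * x

# Hypothetical point clouds along the two edge lines:
# y = 1 + 2x and y = 7 - x, which meet at (2, 5).
a1, b1 = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
a2, b2 = fit_line([(0, 7), (1, 6), (2, 5), (3, 4)])
print(intersect(a1, b1, a2, b2))  # (2.0, 5.0)
```

The intersection of the two fitted lines is the plane coordinate of the workpiece corner used to anchor the workpiece coordinate system.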
As shown in fig. 2(a), the pose of the robot coordinate system O'-x'y'z' in the coordinate system O-xyz is [X Y Z A B C]. It essentially represents the transformation from the coordinate system O-xyz to the robot coordinate system O'-x'y'z', the conversion sequence being translation followed by rotation. X, Y and Z are the position of the origin O' of the robot coordinate system in the coordinate system O-xyz, in mm. A, B and C express that the origin of the coordinate system O-xyz is translated to O', and the translated coordinate system is then rotated into O'-x'y'z' through a ZYX Euler transformation, in degrees.
As shown in fig. 2(b), A represents the rotation angle about the z axis, B represents the rotation angle about the y axis of the once-rotated coordinate system, and C represents the rotation angle about the x axis of the twice-rotated coordinate system.
The conversion relation of the two coordinate systems is expressed as the homogeneous matrix (i.e. the rotation Rz(A)·Ry(B)·Rx(C) together with the translation (X, Y, Z))

T = [ cA·cB   cA·sB·sC - sA·cC   cA·sB·cC + sA·sC   X ]
    [ sA·cB   sA·sB·sC + cA·cC   sA·sB·cC - cA·sC   Y ]
    [ -sB     cB·sC              cB·cC              Z ]
    [ 0       0                  0                  1 ]

wherein s represents sin and c represents cos.
If the matrix is read as multiplied from right to left, each rotation is about the relevant axis of the reference frame O: first about x by an angle C, then about y by an angle B, and finally about z by an angle A.
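This ZYX Euler convention (rotation Rz(A)·Ry(B)·Rx(C) together with the translation X, Y, Z) can be sketched as follows; the function name and the sample pose are illustrative only:

```python
import numpy as np

def homogeneous(X, Y, Z, A, B, C):
    """4x4 homogeneous transform: translation (X, Y, Z) in mm and
    rotation R = Rz(A) @ Ry(B) @ Rx(C), ZYX Euler angles in degrees."""
    A, B, C = np.radians([A, B, C])
    Rz = np.array([[np.cos(A), -np.sin(A), 0],
                   [np.sin(A),  np.cos(A), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(B), 0, np.sin(B)],
                   [0, 1, 0],
                   [-np.sin(B), 0, np.cos(B)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(C), -np.sin(C)],
                   [0, np.sin(C),  np.cos(C)]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [X, Y, Z]
    return T

# A hypothetical workpiece frame at (100, 50, 0) mm, rotated 90 degrees
# about z: the frame origin maps to its position in the base frame, and
# the frame's x direction maps onto the base frame's y direction.
T = homogeneous(100.0, 50.0, 0.0, 90.0, 0.0, 0.0)
origin_in_base = T @ np.array([0.0, 0.0, 0.0, 1.0])  # (100, 50, 0)
x_dir_in_base = T @ np.array([1.0, 0.0, 0.0, 1.0])   # (100, 51, 0)
```

Multiplying a workpiece-frame point by T in this way is exactly the unification step of the method: all workpiece coordinates are expressed in the robot coordinate system before being sent to the robot database.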
And sending the conversion calculation result to a robot database to guide the robot to realize automatic welding of the workpiece.
The method arranges machine vision positioning modules with attached two-dimensional codes on the outer edges of the workpiece, and positions the workpiece accurately through a multi-layer combination of a fitting algorithm, an edge detection algorithm and a homogeneous matrix algorithm.
The invention reduces the number of fixed working platforms, saves cost and improves production efficiency. It is mainly applied to the welding of large steel workpieces in shipbuilding, and replaces manual welding with a fully automatic welding mode by scanning and positioning workpiece coordinates to guide the robot.

Claims (1)

1. The multi-workpiece visual positioning and identifying method is characterized by comprising the following steps of:
A. setting a large welding working platform, placing a workpiece on the welding platform, and respectively placing machine vision positioning modules at the outer edges of the workpiece;
B. the camera obtains the pixel equivalent by photographing a calibration caliper; the camera photographs the machine vision positioning modules to obtain their image information and analyzes it to obtain the coordinates of the point cloud on the edges; the camera scans the two-dimensional code to determine the model and batch information of the workpiece; noise points are removed from the reference visual image collected by the camera with a nonlinear median filtering algorithm; pixel-level feature extraction adopts a Canny operator or a Log operator;
C. converting the data of the camera coordinate system by using a homogeneous matrix algorithm, fitting the point cloud, determining the coordinates of the workpiece in the world coordinate system, calculating to obtain a rotation angle, and sending the rotation angle to a robot database to guide a robot to realize automatic welding of the workpiece;
fitting calculation is carried out on the point clouds of the machine vision positioning modules on the same axis of the workpiece to obtain a fitted straight line, and the two fitted straight lines of the X axis and the Y axis are intersected to obtain the coordinates of the intersection point; assuming that the points are (x1, y1), (x2, y2), ..., (xn, yn), the coefficients of the linear fit y = a + bx are
b = (n(x1y1 + x2y2 + ... + xnyn) - (x1 + ... + xn)(y1 + ... + yn)) / (n((x1)² + (x2)² + ... + (xn)²) - (x1 + ... + xn)²),
a = (y1 + ... + yn)/n - b(x1 + ... + xn)/n;
the conversion relation of the two coordinate systems is expressed as the homogeneous matrix (i.e. the rotation Rz(A)·Ry(B)·Rx(C) together with the translation (X, Y, Z))

T = [ cA·cB   cA·sB·sC - sA·cC   cA·sB·cC + sA·sC   X ]
    [ sA·cB   sA·sB·sC + cA·cC   sA·sB·cC - cA·sC   Y ]
    [ -sB     cB·sC              cB·cC              Z ]
    [ 0       0                  0                  1 ]

wherein s represents sin and c represents cos; X, Y and Z are the position of the origin O' of the robot coordinate system O'-x'y'z' in the coordinate system O-xyz, in mm; A, B and C express that the origin of the coordinate system O-xyz is translated to O', and the translated coordinate system is then rotated into O'-x'y'z' through a ZYX Euler transformation, in degrees.
CN201911417127.2A 2019-12-31 2019-12-31 Multi-workpiece visual positioning and identifying method Active CN111178296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911417127.2A CN111178296B (en) 2019-12-31 2019-12-31 Multi-workpiece visual positioning and identifying method

Publications (2)

Publication Number Publication Date
CN111178296A CN111178296A (en) 2020-05-19
CN111178296B (en) 2024-03-01

Family

ID=70654316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911417127.2A Active CN111178296B (en) 2019-12-31 2019-12-31 Multi-workpiece visual positioning and identifying method

Country Status (1)

Country Link
CN (1) CN111178296B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111595266A (en) * 2020-06-02 2020-08-28 西安航天发动机有限公司 Spatial complex trend catheter visual identification method
CN113592955B (en) * 2021-07-27 2024-04-09 中国科学院西安光学精密机械研究所 Round workpiece plane coordinate high-precision positioning method based on machine vision
CN115540749A (en) * 2022-09-14 2022-12-30 泰州市创新电子有限公司 Three-dimensional vision measurement data processing method and device

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101949687A (en) * 2010-09-19 2011-01-19 天津大学 Detection method of automobile door based on vision measurement
CN107976147A (en) * 2017-12-11 2018-05-01 西安迈森威自动化科技有限公司 A kind of glass locating and detecting device based on machine vision
CN108550141A (en) * 2018-03-29 2018-09-18 上海大学 A kind of movement wagon box automatic identification and localization method based on deep vision information
CN109448054A (en) * 2018-09-17 2019-03-08 深圳大学 Target step-by-step positioning method, application, device and system based on visual fusion
CN110110760A (en) * 2019-04-17 2019-08-09 浙江工业大学 A kind of workpiece positioning and recognition methods based on machine vision
CN110524580A (en) * 2019-09-16 2019-12-03 西安中科光电精密工程有限公司 A kind of welding robot visual component and its measurement method
CN110599544A (en) * 2019-08-08 2019-12-20 佛山科学技术学院 Workpiece positioning method and device based on machine vision

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8126260B2 (en) * 2007-05-29 2012-02-28 Cognex Corporation System and method for locating a three-dimensional object using machine vision
JP5549129B2 (en) * 2009-07-06 2014-07-16 セイコーエプソン株式会社 Position control method, robot

Non-Patent Citations (2)

Title
Zhao Shanzheng. On-line glass positioning method based on machine vision. Master's Thesis Electronic Journal. 2013, full text. *
Hao Yongping et al. Pixel equivalent calibration method for vision measurement. Nanotechnology and Precision Engineering. 2014, 373-380. *

Also Published As

Publication number Publication date
CN111178296A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111178296B (en) Multi-workpiece visual positioning and identifying method
CN109612390B (en) Large-size workpiece automatic measuring system based on machine vision
CN101840736B (en) Device and method for mounting optical glass under vision guide
CN110717872B (en) Method and system for extracting characteristic points of V-shaped welding seam image under laser-assisted positioning
CN112529858A (en) Welding seam image processing method based on machine vision
WO2015120734A1 (en) Special testing device and method for correcting welding track based on machine vision
CN111645074A (en) Robot grabbing and positioning method
CN113146172A (en) Multi-vision-based detection and assembly system and method
CN109671059B (en) Battery box image processing method and system based on OpenCV
CN110480127A (en) A kind of seam tracking system and method based on structured light visual sensing
CN113843797B (en) Automatic disassembly method for part hexagonal bolt under non-structural environment based on single-binocular hybrid vision
CN110625644B (en) Workpiece grabbing method based on machine vision
CN108389184A (en) A kind of workpiece drilling number detection method based on machine vision
CN107160241A (en) A kind of vision positioning system and method based on Digit Control Machine Tool
CN110568866A (en) Three-dimensional curved surface vision guiding alignment system and alignment method
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN114926531A (en) Binocular vision based method and system for autonomously positioning welding line of workpiece under large visual field
CN115830018A (en) Carbon block detection method and system based on deep learning and binocular vision
CN111639538B (en) Casting positioning method based on vision
CN105068139B (en) A kind of characterization processes of piston cooling nozzle installment state
CN115283905A (en) Welding gun posture adjusting method of welding robot
CN113433129B (en) Six-axis robot deburring cutter detection mechanism and method thereof
CN107843602B (en) Image-based weld quality detection method
CN116740141A (en) Machine vision-based weld joint positioning system and method for small preceding assembly
CN116766201A (en) An industrial robot control system based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant