
CN109986238B - Robot linear flexible operation vision fuzzy profiling control method - Google Patents


Info

Publication number
CN109986238B
Authority
CN
China
Prior art keywords
area
current position
image
axis
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711477715.6A
Other languages
Chinese (zh)
Other versions
CN109986238A (en)
Inventor
吕洁印
周受钦
刘海林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen CIMC Intelligent Technology Co Ltd
Guangdong CIMC Intelligent Technology Co Ltd
Original Assignee
China International Marine Containers Group Co Ltd
Shenzhen CIMC Intelligent Technology Co Ltd
Dongguan CIMC Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China International Marine Containers Group Co Ltd, Shenzhen CIMC Intelligent Technology Co Ltd, Dongguan CIMC Intelligent Technology Co Ltd filed Critical China International Marine Containers Group Co Ltd
Priority to CN201711477715.6A priority Critical patent/CN109986238B/en
Publication of CN109986238A publication Critical patent/CN109986238A/en
Application granted granted Critical
Publication of CN109986238B publication Critical patent/CN109986238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B23: MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23K: SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K37/00: Auxiliary devices or processes, not specially adapted for a procedure covered by only one of the other main groups of this subclass
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1628: Programme controls characterised by the control loop
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Manipulator (AREA)

Abstract



The invention provides a vision fuzzy profiling control method for robot quasi-linear flexible operation. The control method includes: acquiring a current position image of the robot; judging, according to the current position image, the deviation value between the feature point of the current position image and a preset image feature point; judging the area to which the current position belongs according to the deviation value; and controlling the robot to move according to that area so as to realize alignment with the preset image feature point. When the robot moves along the working direction, the alignment step for the image feature point is performed on each working surface, thereby realizing the profiling operation. The preset image is divided into three speed control areas, and the robot is controlled to profile at one of three speeds according to the speed control area in which the current position lies, the three profiling speeds corresponding respectively to the three speed control areas. The control device and control method can improve work quality and production efficiency.


Description

Robot linear flexible operation vision fuzzy profiling control method
Technical Field
The invention relates to the technical field of intelligent manufacturing, and in particular to a vision fuzzy profiling control method for robot quasi-linear flexible operation.
Background
With the rapid development of global manufacturing, robots typified by industrial robots are being applied ever more widely; industrial robots can be seen in every link of equipment manufacturing, including cutting, welding, assembly, spraying and conveying. Sensor technology is advancing just as rapidly, with sensing technologies based on vision, hearing, pressure, gas and other principles under continuous development.
Integrating sensing technology into industrial robot applications, and visual perception technology in particular, has drawn strong attention from the industrial robot industry; it is usually embodied as visual control and visual servoing. Classified by control model, visual control divides into position-based visual servoing, image-based visual servoing and hybrid visual servoing. Image-based visual control operates the robot directly from captured image features: the image feature of the target is the controller's given quantity, the fed-back image feature serves as the feedback quantity, a motion adjustment strategy for the robot is designed from the deviation, and the motion is controlled according to that strategy, forming a closed-loop control system. Whether a scheme counts as image-based visual servoing or merely image-based visual control depends on whether the movement speed of the robot joints is controlled from the image-feature deviation. Image-based visual servo control has a joint speed loop and an image-feature closed loop; it offers good real-time performance and meets the requirement of camera-centered follow-up profiling control. Position-based visual servoing, in turn, divides into position-feedback and position-given visual servo control, depending on whether the position signal is used as a feedback quantity or a given quantity. From the perspective of visual control, position-given visual control is a visual open-loop method of the "looking then doing" type with low real-time requirements; controlling part of the degrees of freedom by image servoing while controlling the remaining degrees of freedom by position-given visual open-loop control has become a current research direction.
Quasi-linear operation is a common requirement in equipment manufacturing, and long-distance quasi-linear operation in particular is difficult and raises many problems. Developing and applying an eye-to-hand robot image vision control method for quasi-linear operation can accumulate experience for developing efficient automatic welding equipment; it can promote the national economy, help change the lagging technical level of the domestic equipment manufacturing industry, and provide automated, intelligent and flexible technical support for the leapfrog development of welding in manufacturing. The container industry is a typical case: a container is mainly welded and spliced from thin-plate metal structural components 1.5 to 2 mm thick, must withstand all-weather navigation during transport, and is handled with standard vehicles, devices and hoisting equipment during loading and unloading, so the product has to meet requirements on water tightness, strength and limited deformation. Welding the fold-line weld seam between the corrugated plate and the side beams (the top and bottom side beams of the container) is a key process in container production; because the container is assembled from thin structural components, the seam is prone to burn-through and deformation and has poor regularity, so the technical requirements on welding are high. Automating the welding of long thin-plate seams, particularly the fold-line seam between the side plate (corrugated plate) and the upper and lower side beams of the container, has long been a difficult problem for enterprises.
The corrugated plate of a container is stamped at a fixed corrugation angle (given as a figure in the original, not reproduced here) and presents a periodically repeating sheet structure of straight line, fixed-angle fold line, straight line, fixed-angle fold line. The workpieces to be welded are 1.6 to 2 mm thick, the weld seam is long, the assembly-gap variation is large (0 to 6 mm), and the welding speed is high (1.5 m/min for gas-shielded welding), forming the fold-line weld seam of corrugated plate and side beam. Containers from a production line are generally 12 m or 6 m long; even with several machines welding simultaneously, one machine welds a stretch of 3 to 5 m at a time, covering more than ten periods and dozens of bending points. Since the corrugated plate is blanked and stamped, the corrugated plates are butt-welded to one another, and the corrugated plate and bottom plate are fillet-welded, each link introduces some error interference, so the assembly-gap track fluctuates irregularly and the gap size varies within 0 to 6 mm; thermal effects during welding further deform the assembly gap, which therefore changes dynamically and inconsistently over the course of welding. Against a required welding speed of 1.3 m/min, and with interference from spattered high-temperature slag and strong arc light, this complicated environment makes weld-seam tracking and monitoring very difficult, and the accuracy and speed of seam identification directly affect weldment quality and production efficiency.
Therefore, there is a need for a vision fuzzy profiling control method for robot quasi-linear flexible operation that at least partially addresses the above problems.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In order to at least partially solve the above problems, according to one aspect of the present invention, there is provided a vision fuzzy profiling control method for robot quasi-linear flexible operation, characterized in that the control method comprises:
acquiring a current position image of the robot;
judging, according to the current position image, the deviation value between the feature point of the current position image and a preset image feature point;
judging the area to which the current position belongs according to the deviation value;
controlling the robot to move according to the area to which the current position belongs so as to realize alignment with the preset image feature point, wherein, when the robot moves along the operation direction, the alignment step for the image feature point is executed on each working surface, thereby realizing the profiling operation;
wherein the preset image is divided into three speed control areas, and the robot is controlled to profile at one of three profiling speeds according to the speed control area in which the area of the current position lies, the three profiling speeds corresponding respectively to the three speed control areas.
With this control method, part of the degrees of freedom are controlled by image servoing for visual acquisition, while the other degrees of freedom are controlled by a visual open-loop method; different areas are thus profiled at different speeds, ensuring the work quality and production efficiency of the industrial robot.
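The overall loop implied by these steps can be sketched compactly. The following Python sketch is illustrative only: get_feature, move_yz and advance_x are hypothetical stand-ins for the camera and motion-axis interfaces (none of these names come from the patent), and only the deviation/area/speed structure follows the method described above.

```python
# Sketch of the profiling method: at each working surface along the
# operation direction, align the image feature point to the preset
# feature point (the image center), then advance to the next surface.

def profile(get_feature, move_yz, advance_x, n_faces,
            uc, vc, classify_area, area_speed):
    """classify_area maps |du|, |dv| to 1 (ideal), 2 (fine) or 3 (coarse)."""
    for _ in range(n_faces):                   # one alignment per working surface
        while True:
            u, v = get_feature()               # feature point of the current position image
            du, dv = u - uc, v - vc            # deviation from the preset feature point
            area = classify_area(abs(du), abs(dv))
            if area == 1:                      # ideal area: treat as aligned in this sketch
                break
            move_yz(du, dv, area_speed[area])  # profile toward the center at F2 or F3
        advance_x()                            # move on along the operation direction
```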
Optionally, the preset image is further divided into nine direction control areas, and the movement direction of the robot's motion axes is controlled to align with the preset image feature point; specifically, the movement direction of the motion axes is controlled corresponding to the nine direction control areas, according to the direction control area in which the area of the current position lies.
Optionally, the feature point of the preset image is selected as the center point of the preset image.
Optionally, the three speed control regions are a coarse adjustment region, a fine adjustment region and a precise adjustment region, respectively; the profiling speed corresponding to the coarse adjustment region is a third speed F3, the profiling speed corresponding to the fine adjustment region is a second speed F2, and the profiling speed corresponding to the precise adjustment region is a first speed F1.
Optionally, the coarse adjustment region surrounds the fine adjustment region, the fine adjustment region surrounds the precise adjustment region around the feature point of the preset image, and the third speed F3 > the second speed F2 > the first speed F1.
Optionally, the method further includes a step of rejecting interference feature points:
defining jump-point thresholds Um and Vm for the feature points of the quasi-linear operation object acquired in two consecutive samples, where Um and Vm are ideal values;
comparing the image coordinates (U, V) of the current feature point on the screen with the coordinates (U1, V1) of the feature point of the quasi-linear operation object acquired last time;
and, if |U1-U| > Um or |V1-V| > Vm, judging the point to be a jump point and setting the current feature point to U = U1 or V = V1, thereby obtaining the feature-point image coordinates of the normally varying quasi-linear operation object.
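This rejection rule translates directly into code. A minimal Python sketch (the previous valid sample (U1, V1) and the thresholds Um, Vm are assumed to be maintained by the caller):

```python
# Jump-point rejection as stated above: if the new sample jumps more
# than Um in U or Vm in V relative to the previous sample, clamp that
# coordinate back to the previous valid value.

def reject_jump(u, v, u1, v1, um, vm):
    """Return the feature-point coordinates with jump points suppressed."""
    if abs(u1 - u) > um:
        u = u1                # U jumped: keep the last valid U
    if abs(v1 - v) > vm:
        v = v1                # V jumped: keep the last valid V
    return u, v
```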
Optionally, when |U-Uc| ≤ Uil and |V-Vc| ≤ Vil, the screen area is the ideal area range, and the profiling speed of the motion axis Y and/or Z within the ideal area range is controlled to be the first speed F1;
when Uil < |U-Uc| ≤ Uml and Vil < |V-Vc| ≤ Vml, the screen area is the first limit area range, and the profiling speed of the motion axis Y and/or Z within the first limit area range is controlled to be the second speed F2;
when Uml < |U-Uc| ≤ M and Vml < |V-Vc| ≤ N, the screen area is the second limit area range, and the profiling speed of the motion axis Y and/or Z within the second limit area range is controlled to be the third speed F3;
wherein the coordinates of the current position image are (U, V);
the preset image size is the size of a rectangular area formed by the coordinate points (M, N) and the coordinate axes;
the coordinates of the central point of the preset image are (Uc, Vc), wherein Uc = M/2, Vc = N/2;
Uil and Vil are, respectively, the ideal side lengths that define the ideal profiling area;
Uml and Vml are limit side lengths, with Uil < Uml < M and Vil < Vml < N.
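Under these definitions the speed selection reduces to two threshold tests. A sketch (the numeric values of F1, F2 and F3 are placeholders; points outside one band but inside the other are resolved by the cascading tests, an assumption this sketch makes for cases the inequalities above leave open):

```python
# Speed selection from the three screen areas defined above.
F1, F2, F3 = 0.5, 2.0, 5.0        # placeholder speeds, F3 > F2 > F1

def profiling_speed(u, v, uc, vc, uil, vil, uml, vml):
    du, dv = abs(u - uc), abs(v - vc)
    if du <= uil and dv <= vil:   # ideal area range
        return F1
    if du <= uml and dv <= vml:   # first limit area range
        return F2
    return F3                     # second limit area range
```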
Optionally, the coordinates of the current position image are (U, V), and the coordinates of the preset image center point are (Uc, Vc);
when U > Uc + Uil and V < Vc - Vil, the feature point of the current position image is located in area A;
when U > Uc + Uil and V > Vc + Vil, the feature point of the current position image is located in area B;
when U > Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area C;
when U < Uc - Uil and V < Vc - Vil, the feature point of the current position image is located in area D;
when U < Uc - Uil and V > Vc + Vil, the feature point of the current position image is located in area E;
when U < Uc - Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area F;
when Uc - Uil ≤ U ≤ Uc + Uil and V < Vc - Vil, the feature point of the current position image is located in area G;
when Uc - Uil ≤ U ≤ Uc + Uil and V > Vc + Vil, the feature point of the current position image is located in area H;
when Uc - Uil ≤ U ≤ Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area I;
where Uil and Vil are, respectively, the ideal side lengths that define the ideal profiling area.
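These nine cases partition the screen into a 3x3 grid around the center band [Uc - Uil, Uc + Uil] x [Vc - Vil, Vc + Vil]. A direct transcription in Python:

```python
# A-I area classification: a 3x3 partition of the screen around the
# center band, transcribed from the inequalities above.

def area(u, v, uc, vc, uil, vil):
    if u > uc + uil:                       # right column: A / B / C
        if v < vc - vil: return "A"
        if v > vc + vil: return "B"
        return "C"
    if u < uc - uil:                       # left column: D / E / F
        if v < vc - vil: return "D"
        if v > vc + vil: return "E"
        return "F"
    if v < vc - vil: return "G"            # center column: G / H / I
    if v > vc + vil: return "H"
    return "I"                             # ideal follow-up profiling area
```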
Optionally, when the feature point of the current position image is located in area A or area C, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area E or area F, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area B or area H, the motion axis Z is controlled to move toward the preset image center point;
and when the feature point of the current position image is located in area D or area G, the motion axis Z is controlled to move toward the preset image center point.
Optionally, when the feature point of the current position image is located in area I, let m = |U-Uc| and n = |V-Vc|; the Y axis is controlled to move in the positive or negative direction so as to reach a point (U2, V2); let m2 = |U2-Uc| and n2 = |V2-Vc|; when m2 + n2 ≤ m + n, the Y axis is controlled to keep moving in the original direction, and when m2 + n2 > m + n, the Y axis is controlled to move in the opposite direction.
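The area-I rule is a one-dimensional trial step with sign reversal; a minimal sketch (get_feature and step_y are hypothetical camera and axis interfaces, not from the patent):

```python
# Area-I centering per the rule above: take a trial step along Y, keep
# the direction if the L1 distance to the center did not grow,
# otherwise reverse it for the next step.

def center_in_area_I(get_feature, step_y, uc, vc, direction=+1):
    u, v = get_feature()
    m, n = abs(u - uc), abs(v - vc)
    step_y(direction)                       # trial move along Y
    u2, v2 = get_feature()
    m2, n2 = abs(u2 - uc), abs(v2 - vc)
    if m2 + n2 > m + n:                     # moved away from the center point
        direction = -direction              # reverse the Y direction
    return direction
```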
Drawings
The following drawings are included to provide a further understanding of the invention. They illustrate embodiments of the invention and, together with their description, serve to explain the principles and apparatus of the invention. In the drawings,
FIG. 1 is a schematic mechanical diagram according to the present disclosure;
FIG. 2 is a schematic view of image acquisition;
FIG. 3 is a schematic diagram of a screen speed partition; and
fig. 4 is a sectional view of a screen axis control method.
Description of the reference numerals
2 vision sensor
101 X-axis slider
102 Y-axis slider
103 Z-axis slider
201 surface laser generator
202 camera
203 quasi-linear operation object
301 acquired feature point
302 phase of the feature point in the camera after acquisition
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
In the following description, for purposes of explanation, specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the invention may be practiced without some of these specific details, which are familiar to those skilled in the art. Preferred embodiments are described in detail below, but the invention is capable of other embodiments as well and should not be construed as limited to the embodiments set forth herein.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. The terms "upper," "lower," "front," "rear," "left," "right" and the like are used herein for purposes of illustration only and are not limiting.
Ordinal words such as "first" and "second" are referred to herein merely as labels, and do not have any other meaning, such as a particular order, etc.
Specific embodiments of the present invention will now be described in more detail with reference to the accompanying drawings, which illustrate representative embodiments of the invention and do not limit the invention.
Exemplary embodiments
As shown in FIGS. 1 to 4, according to an exemplary embodiment of the present invention, a vision fuzzy profiling control method and control device for robot quasi-linear flexible operation are provided.
A vision fuzzy profiling control method for robot quasi-linear flexible operation comprises the following steps:
acquiring a current position image of the robot;
judging, according to the current position image, the deviation value between the feature point of the current position image and a preset image feature point;
judging the area to which the current position belongs according to the deviation value;
controlling the robot to move according to the area to which the current position belongs so as to realize alignment with the preset image feature point, wherein, when the robot moves along the working direction, the alignment step for the image feature point is executed on each working surface, thereby realizing the profiling operation,
wherein the preset image is divided into three speed control areas, and the robot is controlled to profile at one of three profiling speeds according to the speed control area in which the area of the current position lies, the three profiling speeds corresponding respectively to the three speed control areas.
Specifically, as shown in FIGS. 1 to 2, the X-axis slider 101 is movable along the X-axis direction; the X-axis direction is the moving direction of the robot during operation, the robot stops at a specific position along the X axis, and after stopping it performs the operation on the object at that position. The Y-axis slider 102 and the Z-axis slider 103 are movable along the Y axis and the Z axis, respectively: the Y-axis slider 102 is movably provided on the X-axis slider 101 along the Y axis, and the Z-axis slider 103 is movable along the Z axis. A camera 202 (image acquisition device) is connected to the Y-axis slider 102 and the Z-axis slider 103. The X, Y and Z axes form a Cartesian coordinate system. Optionally, the welding assembly of the robot is connected to the X, Y and Z axes, enabling control of the welding position. Optionally, the welding assembly is connected with the image acquisition device; when the coordinate center of the camera image coincides with the phase 302 of the feature point 301 of the quasi-linear operation object 203 in the camera 202, the current position is judged to be the correct welding position and the welding assembly can perform the welding operation directly. Alternatively, the image acquisition device is mounted directly on the robot working arm, and the welding assembly is also mounted on the robot working arm.
The transmission mechanism of the X-axis slider 101 comprises a motor, a speed reducer, a rack, a linear guide rail and a slider, and uses position information fed back by an encoder for closed-loop control, so that the slider on the X axis can be driven to, and accurately positioned at, any position in the X-axis direction.
The transmission mechanisms of the Y-axis slider 102 and the Z-axis slider 103 may use a speed reducer, a linear guide rail, a rack, a motor, a roller lead screw and a slider, likewise under closed-loop control using position information fed back by an encoder.
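A toy sketch of the encoder-based closed-loop positioning both paragraphs describe (proportional control only; the gain, tolerance and interfaces are illustrative assumptions, and a real drive would add integral and derivative terms plus velocity limits):

```python
# Closed-loop axis positioning from encoder feedback, as used on the
# X/Y/Z sliders: command the motor until the encoder settles on target.

def move_axis_to(read_encoder, set_velocity, target, kp=0.8, tol=0.01):
    while True:
        pos = read_encoder()          # position fed back by the encoder
        err = target - pos
        if abs(err) < tol:
            set_velocity(0.0)         # settled within tolerance
            return pos
        set_velocity(kp * err)        # proportional velocity command
```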
When the X-axis slider 101 moves along the X-axis direction, the Y-axis slider 102 and the Z-axis slider 103 can drive the camera 202 to dynamically acquire the feature points of the quasi-linear operation object 203 and drive the center of the camera image coordinate system to approach those feature points; for example, the center of the camera 202 image coordinate system is driven to dynamically approach the phase 302 (Oc) of the feature point 301 (O0) (FIG. 2) of the quasi-linear operation object 203 in the camera 202.
When the X-axis slider 101 drives the image acquisition device, such as the camera 202, along the direction of the quasi-linear operation object 203 (approximately the X-axis direction), the motion of the Y and Z axes can be dynamically controlled from the deviation, acquired in real time, between the phase 302 of the feature point 301 of the quasi-linear operation object 203 in the image coordinate system and the origin of that coordinate system, keeping the deviation as close to 0 as possible. As the camera 202 is carried by the XYZ three-axis sliders while moving in a given direction (positive or negative) along the X axis, its trajectory in three-dimensional space as a whole conforms to a quasi-linear vision fuzzy profiling of the feature-point trajectory of the quasi-linear operation object 203.
To realize eye-to-hand fuzzy profiling of the quasi-linear operation, covering both the centering operation and the following of the feature-point trajectory of the quasi-linear operation object, the motion control of the Y and Z axes must be considered when the X axis starts to move and throughout its motion along the X-axis direction. The basic principle of this fuzzy profiling is to take the coordinate value of the feature point of the quasi-linear operation object in the camera's output image screen as the input quantity, and to control the Y-axis and Z-axis motion so that the feature point acquired by the camera 202 stays at the center point of the camera output image or within a convergence area around that target position. Meanwhile, to balance the speed and stability of convergence toward the center point, three centering speeds are used: coarse, fine and precise.
Optionally, the step of acquiring the current position image of the robot includes: the surface laser generator 201 (FIG. 2) emits laser light, the laser is reflected by the quasi-linear operation object 203, and the reflected light is received by the camera 202 to form the current position image.
Turning now to FIG. 3, the three regions are a coarse adjustment region (region 3), a fine adjustment region (region 2) and a precise adjustment region (region 1), respectively; the profiling speed corresponding to the coarse adjustment region (region 3) is the third speed F3, the profiling speed corresponding to the fine adjustment region (region 2) is the second speed F2, and the profiling speed corresponding to the precise adjustment region (region 1) is the first speed F1.
Optionally, the coarse adjustment region surrounds the fine adjustment region, the fine adjustment region surrounds the precise adjustment region around the feature point of the preset image, and the third speed F3 > the second speed F2 > the first speed F1.
Specifically, for the centering profiling shown in FIG. 3:
defining: assuming that the camera output image screen size is (M, N), the screen center point coordinates are (Uc, Vc) = (M/2, N/2).
And defining the coordinates of the current feature point in the screen as (U, V), and the coordinates of the last sampling feature point in the screen as (U1, V1).
Define the ideal area range of the screen relative to the screen center-point coordinates as region 1: when (U, V) is within this range, i.e. |U-Uc| ≤ Uil and |V-Vc| ≤ Vil, the centering profiling speed of the Y and Z axes in region 1 is the first speed F1. Define the first screen-area limit range relative to the screen center-point coordinates as region 2: when (U, V) is within this range, i.e. Uil < |U-Uc| ≤ Uml and Vil < |V-Vc| ≤ Vml, the centering profiling speed of the Y and Z axes in region 2 is the second speed F2. Define the second screen-area limit range relative to the screen center-point coordinates as region 3: when (U, V) is within this range, i.e. Uml < |U-Uc| ≤ M and Vml < |V-Vc| ≤ N, the centering profiling speed of the Y and Z axes in region 3 is the third speed F3.
In addition, in industrial applications, because eye-to-hand robot image acquisition is very complicated and the environment is harsh, false feature points are inevitably acquired and displayed on the screen during operation. These false feature points, also called jump points, interfere with acquiring the feature trajectory of the quasi-linear operation object and with good profiling by the camera 202, so jump points are removed before the Y-axis slider 102 and the Z-axis slider 103 drive the camera 202 for center follow-up profiling control. Optionally, the method further includes the step of removing interference feature points:
the dead pixel distances of characteristic points of the similar linear operation object acquired twice are respectively Um and Vm, wherein Um and Vm are ideal values,
comparing the image coordinates (U, V) of the current feature point in the screen with the coordinates (U1, V1) of the feature point of the linear operation object acquired last time,
if the position is | U1-U | > Um or | V1-V | > Vm, the dead pixel is judged to be jumping, the current feature point is set to be U = U1 or V = V1, and the feature point image coordinates of the normally changed similar linear operation object 203 are obtained.
Further optionally, when |U-Uc| ≤ Uil and |V-Vc| ≤ Vil, the screen area is the ideal area range, and the profiling speed of the motion axis Y and/or Z within the ideal area range is controlled to be F1;
when Uil < |U-Uc| ≤ Uml and Vil < |V-Vc| ≤ Vml, the screen area is the first limit area range, and the profiling speed of the motion axis Y and/or Z within the first limit area range is controlled to be F2;
when Uml < |U-Uc| ≤ M and Vml < |V-Vc| ≤ N, the screen area is the second limit area range, and the profiling speed of the motion axis Y and/or Z within the second limit area range is controlled to be F3,
wherein the coordinates of the current position image are (U, V);
coordinates of a central point of a preset image are (Uc, Vc);
the preset image size is the size of a rectangular area formed by the coordinate points (M, N) and coordinate axes;
Uil and Vil are, respectively, the ideal side lengths that define the ideal profiling area;
Uml and Vml are limit side lengths, with Uil < Uml < M and Vil < Vml < N.
For example, define Um as the jump-point threshold between the feature points of the quasi-linear operation object acquired in two consecutive samples in the U direction, and Vm as the corresponding threshold in the V direction; compare the image coordinates (U, V) of the current feature point on the screen with the coordinates (U1, V1) of the feature point acquired last time; if |U1-U| > Um or |V1-V| > Vm, the point is judged to be a jump point, and U = U1 or V = V1 is used, so that the image coordinates of the normally varying feature point of the quasi-linear operation object are obtained.
With continued reference to FIG. 3, the screen is divided into three regions. Region 1 is the ideal region: when the feature point is in this region, the Y-axis slider 102 and/or Z-axis slider 103 are controlled to move at the first speed F1 to return the current feature point as close as possible to the screen center point (Uc, Vc); this process is called the precise centering operation. Region 2 is the first screen-area limit range: when the feature point is in this region, the Y axis and/or Z axis is adjusted to move at the second speed F2 so that the feature point approaches the screen center coordinates as closely as possible; the process of returning from region 2 to region 1 is called the fine centering operation. When the feature point is within region 3, pulling it back to region 2 by moving the Y axis and/or Z axis at the third speed F3 is called the coarse centering operation, as shown in FIG. 3. Whether moving from region 3 to region 2, from region 2 to region 1, or back toward the screen center coordinates (Uc, Vc) within region 1, the essence of the whole centering-profiling process is to make the image coordinates of the feature point converge on the screen center coordinates (Uc, Vc).
In addition, the Y-axis and Z-axis motion is controlled so that the image coordinates of the feature point converge on the screen center-point coordinates (Uc, Vc). To control this motion better, the image coordinate screen can be divided into areas A, B, C, D, E, F, G, H and I; the screen-axis control method is shown in FIG. 4:
definition Uil and Vil are the ideal side lengths, respectively, that define the ideal area of the profile. When U > Uc + Uil, and V < Vc-Vil, is shown in FIG. 4 in region A; when U > Uc + Uil, and V > Vc + Vil, shown in FIG. 4 in region B; when U is greater than Uc + Uil and Vc-Vil is less than or equal to V and less than or equal to Vc + Vil, the area C in FIG. 4 shows; when U < Uc-Uil, and V < Vc-Vil, are shown in region D in FIG. 4; when U < Uc-Uil, and V > Vc + Vil, shown in FIG. 4 in area E; when U < Uc-Uil and Vc-Vil is not less than V not more than Vc + Vil, shown in the area F in FIG. 4; when Uc-Uil is not less than U not more than Uc + Uil and V is more than Vc-Vil, the area G in FIG. 4 shows; when Uc-Uil is not less than U not more than Uc + Uil and V is more than Vc + Vil, it is shown in H area in FIG. 4; when U is more than or equal to Uc-Uil and less than or equal to Uc + Uil and V is more than or equal to Vc-Vil and less than or equal to Vc + Vil, the area I in FIG. 4 shows the ideal area of the follow-up profile modeling; the influence of the motion of the Y axis and/or the Z axis on the center of the linear operation object characteristic point in the corresponding area of the screen is obtained as the following relational table:
screen area and axis control method comparison table
Area of the screen A B C D E F G H I
Motion shaft Y Z Y Z Y Y Z Z Y
Direction of shaft motion 1 1 1 -1 -1 -1 -1 1 1 or-1
When the feature point is in area A, B, C, D, E, F, G or H, the corresponding motion axis is controlled to run in the direction and at the speed given above.
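As a lookup, the table reduces to a small dispatch map; a sketch (signs follow the table, with area I handled by the trial-step rule described below):

```python
# Area -> (motion axis, signed direction) dispatch from the table above.
AXIS_CONTROL = {
    "A": ("Y", +1), "B": ("Z", +1), "C": ("Y", +1),
    "D": ("Z", -1), "E": ("Y", -1), "F": ("Y", -1),
    "G": ("Z", -1), "H": ("Z", +1),
}

def axis_command(area):
    if area == "I":
        return ("Y", None)            # direction chosen by the trial step
    return AXIS_CONTROL[area]
```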
Specifically, optionally, when the feature point of the current position image is located in area A or area C, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area E or area F, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area B or area H, the motion axis Z is controlled to move toward the preset image center point;
and when the feature point of the current position image is located in area D or area G, the motion axis Z is controlled to move toward the preset image center point.
In area I, however, the area range is small and the variation of the feature point in the Z-axis direction within it is small enough to be neglected, so only the Y-axis motion needs to be controlled for the centering profiling operation in area I. For a point (U, V) in area I, let m = |U-Uc| and n = |V-Vc|; control the Y axis to move positively or negatively so as to reach a point (U2, V2); let m2 = |U2-Uc| and n2 = |V2-Vc|; when m2 + n2 ≤ m + n, control the Y axis to keep moving in the original direction, otherwise control it to move in the opposite direction. Of course, it is equally possible to control only the Z-axis motion for the centering profiling operation. With this method, exact centering is achieved only while the X axis is not moving; during profiling by the camera 202 along the feature line of the quasi-linear operation object 203, the feature line changes, so the acquired feature point of the quasi-linear operation object 203 is never fixed. During the profiling process, the feature point of the feature line in the camera's output image is therefore always in the process of converging from region 3 to region 2 to region 1.
The control mode shown in the screen area and axis control method comparison table is only one implementation of the invention; the control method can be varied flexibly, and whenever the feature point of the work object is in any of areas A to I, the robot can control the motion axis Y and/or Z to move so as to realize the control method.
The technical scheme of the invention brings the following beneficial effects:
During acquisition of the operation trajectory by the camera 202, the camera 202 is kept centered and the camera 202 mechanism follows the profile in the operation direction. 1) First, the Y and Z axes are moved so that the phase 302 of the feature point lies in the central area of the camera 202 image coordinate system, so that exact feature-point information can be measured accurately from the start of the operation. 2) While the camera 202 mechanism acquires the feature points of the quasi-linear operation object, the camera 202 must keep within a certain distance range of the quasi-linear operation object 203: on one hand, this prevents image-feature acquisition errors and damage to the camera 202 caused by contact between the camera 202 mechanism and the quasi-linear operation object 203; on the other hand, the operator can watch the feature point in real time on the image interface without it leaving the field of view.
The robot and its control method can be integrated into eye-to-hand robot operation control, and degrees of freedom governed by other control methods can be added on this basis to achieve integration. The method applies not only to the quasi-linear operation needs of containers but to quasi-linear operation requirements in any field, such as aviation, shipping, manufacturing, pharmaceuticals and chemicals.
The present invention has been illustrated by the above embodiments, but it should be understood that the above embodiments are for illustrative and descriptive purposes only and are not intended to limit the invention to the scope of the described embodiments. Furthermore, it will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that many variations and modifications may be made in accordance with the teachings of the present invention, which variations and modifications are within the scope of the present invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (10)

1. A vision fuzzy profiling control method for robot quasi-linear flexible operation, characterized in that the control method comprises:
acquiring a current position image of the robot;
judging, according to the current position image, the deviation value between the feature point of the current position image and a preset image feature point;
judging the area to which the current position belongs according to the deviation value;
controlling the robot to move according to the area to which the current position belongs so as to realize alignment with the preset image feature point, wherein, when the robot moves along the working direction, the alignment step for the image feature point is executed on each working surface, thereby realizing the profiling operation,
wherein the preset image is divided into three speed control areas, and the robot is controlled to profile at one of three profiling speeds according to the speed control area in which the area of the current position lies, the three profiling speeds corresponding respectively to the three speed control areas.
2. The control method according to claim 1, wherein the preset image is further divided into nine direction control areas, and the movement direction of the robot's motion axes is controlled to align with the preset image feature point; specifically, the movement direction of the motion axes is controlled corresponding to the nine direction control areas, according to the direction control area in which the area of the current position lies.
3. The control method according to claim 1, wherein the feature point of the preset image is selected as the center point of the preset image.
4. The control method according to claim 1, wherein the three speed control regions are a coarse adjustment region, a fine adjustment region and a precise adjustment region, respectively; the profiling speed corresponding to the coarse adjustment region is a third speed F3, the profiling speed corresponding to the fine adjustment region is a second speed F2, and the profiling speed corresponding to the precise adjustment region is a first speed F1.
5. The control method according to claim 4, characterized in that the coarse adjustment region surrounds the fine adjustment region, the fine adjustment region surrounds the precise adjustment region around the feature point of the preset image, and the third speed F3 > the second speed F2 > the first speed F1.
6. The control method according to any one of claims 1 to 4, further comprising a step of rejecting interference feature points:
defining jump-point thresholds Um and Vm for the feature points of the quasi-linear operation object acquired in two consecutive samples, where Um and Vm are ideal values,
comparing the image coordinates (U, V) of the current feature point on the screen with the coordinates (U1, V1) of the feature point of the quasi-linear operation object acquired last time,
and, if |U1-U| > Um or |V1-V| > Vm, judging the point to be a jump point and setting the current feature point to U = U1 or V = V1, thereby obtaining the feature-point image coordinates of the normally varying quasi-linear operation object.
7. The control method according to any one of claims 1 to 3,
when |U-Uc| ≤ Uil and |V-Vc| ≤ Vil, the screen area is the ideal area range, and the profiling speed of the motion axis Y and/or Z within the ideal area range is controlled to be a first speed F1,
when Uil < |U-Uc| ≤ Uml and Vil < |V-Vc| ≤ Vml, the screen area is the first limit area range, and the profiling speed of the motion axis Y and/or Z within the first limit area range is controlled to be a second speed F2,
when Uml < |U-Uc| ≤ M and Vml < |V-Vc| ≤ N, the screen area is the second limit area range, and the profiling speed of the motion axis Y and/or Z within the second limit area range is controlled to be a third speed F3,
wherein the coordinates of the current position image are (U, V);
the preset image size is the size of a rectangular area formed by the coordinate points (M, N) and the coordinate axes;
the coordinates of the central point of the preset image are (Uc, Vc), wherein Uc = M/2, Vc = N/2;
Uil and Vil are, respectively, the ideal side lengths that define the ideal profiling area;
Uml and Vml are limit side lengths, with Uil < Uml < M and Vil < Vml < N.
8. The control method according to any one of claims 1 to 4, wherein the coordinates of the current position image are (U, V), the coordinates of the preset image center point are (Uc, Vc),
when U > Uc + Uil and V < Vc - Vil, the feature point of the current position image is located in area A;
when U > Uc + Uil and V > Vc + Vil, the feature point of the current position image is located in area B;
when U > Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area C;
when U < Uc - Uil and V < Vc - Vil, the feature point of the current position image is located in area D;
when U < Uc - Uil and V > Vc + Vil, the feature point of the current position image is located in area E;
when U < Uc - Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area F;
when Uc - Uil ≤ U ≤ Uc + Uil and V < Vc - Vil, the feature point of the current position image is located in area G;
when Uc - Uil ≤ U ≤ Uc + Uil and V > Vc + Vil, the feature point of the current position image is located in area H;
when Uc - Uil ≤ U ≤ Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area I,
where Uil and Vil are, respectively, the ideal side lengths that define the ideal profiling area.
9. The control method according to claim 8,
when the feature point of the current position image is located in area A or area C, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area E or area F, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area B or area H, the motion axis Z is controlled to move toward the preset image center point;
and when the feature point of the current position image is located in area D or area G, the motion axis Z is controlled to move toward the preset image center point.
10. The control method according to claim 8, wherein when the feature point of the current position image is located in area I, let m = |U-Uc| and n = |V-Vc|; the Y axis is controlled to move in the positive or negative direction so as to reach a point (U2, V2); let m2 = |U2-Uc| and n2 = |V2-Vc|; when m2 + n2 ≤ m + n, the Y axis is controlled to move in the original direction, and when m2 + n2 > m + n, the Y axis is controlled to move in the opposite direction.
CN201711477715.6A 2017-12-29 2017-12-29 Robot linear flexible operation vision fuzzy profiling control method Active CN109986238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711477715.6A CN109986238B (en) 2017-12-29 2017-12-29 Robot linear flexible operation vision fuzzy profiling control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711477715.6A CN109986238B (en) 2017-12-29 2017-12-29 Robot linear flexible operation vision fuzzy profiling control method

Publications (2)

Publication Number Publication Date
CN109986238A CN109986238A (en) 2019-07-09
CN109986238B true CN109986238B (en) 2021-02-26

Family

ID=67108871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711477715.6A Active CN109986238B (en) 2017-12-29 2017-12-29 Robot linear flexible operation vision fuzzy profiling control method

Country Status (1)

Country Link
CN (1) CN109986238B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113002181B (en) * 2021-02-09 2022-05-27 深圳市丹芽科技有限公司 Method, device, equipment and storage medium for controlling ink box slot of nail art


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1676287A (en) * 2004-03-31 2005-10-05 发那科株式会社 Robot teaching apparatus
CN101486124A (en) * 2009-02-13 2009-07-22 南京工程学院 Multi-structured light binocular composite vision weld joint tracking method and device
CN102566574A (en) * 2012-01-20 2012-07-11 北人机器人系统(苏州)有限公司 Robot trajectory generation method and device based on laser sensing
CN103208186A (en) * 2013-03-19 2013-07-17 北京万集科技股份有限公司 Method and device for scanning vehicles in three-dimensional mode through laser
DE102014101568A1 (en) * 2014-02-07 2015-08-13 Blackbird Robotersysteme Gmbh Method and apparatus for laser welding or cutting with a dynamically adaptable analysis area
CN106325731A (en) * 2015-07-03 2017-01-11 Lg电子株式会社 Display device and method of controlling therefor

Also Published As

Publication number Publication date
CN109986238A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN116175035B (en) Intelligent welding method for steel structure high-altitude welding robot based on deep learning
CN112059363B (en) Unmanned wall climbing welding robot based on vision measurement and welding method thereof
CN111014879B (en) Automatic welding method for corrugated plate of robot based on laser weld seam tracking
KR100311663B1 (en) Apparatus and method for tracking the appearance of an object using a spare shaft
CN105562973B (en) A kind of laser identification axle robot space curve welding system of weld seam 8 and method
CN106238969B (en) Non-standard automatic welding processing system based on structured light vision
CN106392267B (en) A kind of real-time welding seam tracking method of six degree of freedom welding robot line laser
CN114769988B (en) Welding control method, system, welding equipment and storage medium
Baeten et al. Hybrid vision/force control at corners in planar robotic-contour following
JP7000361B2 (en) Follow-up robot and work robot system
CN205393782U (en) 8 robot space curve welding system of laser discernment welding seam
KR20160010868A (en) Automated machining head with vision and procedure
CN108607819A (en) Material sorting system and method
JPH0431836B2 (en)
CN104400217A (en) Full-automatic laser welding method and full-automatic laser welding device
CN109986255B (en) Hybrid vision servo parallel robot and operation method
WO2021235331A1 (en) Following robot
Huang et al. High-performance robotic contour tracking based on the dynamic compensation concept
CN116922415A (en) Robot system for welding steel structure
Sharma et al. Enhancing weld quality of novel robotic-arm arc welding: Vision-based monitoring, real-time control seam tracking
CN109986238B (en) Robot linear flexible operation vision fuzzy profiling control method
CN113909765B (en) Guiding welding system
JP2022530589A (en) Robot-mounted mobile devices, systems and machine tools
Yıldız et al. Development of Seam Tracking Sensor in ROS Environment
CN117182898A (en) Automatic path correction method for industrial robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518063 Shenzhen national engineering laboratory building b1001-b1004, No. 20, Gaoxin South seventh Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee after: SHENZHEN CIMC SECURITY AND SMART TECHNOLOGY Co.,Ltd.

Patentee after: SHENZHEN CIMC TECHNOLOGY Co.,Ltd.

Patentee after: Guangdong CIMC Intelligent Technology Co.,Ltd.

Patentee after: CHINA INTERNATIONAL MARINE CONTAINERS (GROUP) Ltd.

Address before: Room 102, block a, phase II, science and technology building, 1057 Nanhai Avenue, Shekou, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN CIMC SECURITY AND SMART TECHNOLOGY Co.,Ltd.

Patentee before: SHENZHEN CIMC TECHNOLOGY Co.,Ltd.

Patentee before: DONGGUAN CIMC INTELLIGENT TECHNOLOGY Co.,Ltd.

Patentee before: CHINA INTERNATIONAL MARINE CONTAINERS (GROUP) Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230417

Address after: 518063 Shenzhen national engineering laboratory building b1001-b1004, No. 20, Gaoxin South seventh Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee after: SHENZHEN CIMC SECURITY AND SMART TECHNOLOGY Co.,Ltd.

Patentee after: SHENZHEN CIMC TECHNOLOGY Co.,Ltd.

Patentee after: Guangdong CIMC Intelligent Technology Co.,Ltd.

Address before: 518063 Shenzhen national engineering laboratory building b1001-b1004, No. 20, Gaoxin South seventh Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee before: SHENZHEN CIMC SECURITY AND SMART TECHNOLOGY Co.,Ltd.

Patentee before: SHENZHEN CIMC TECHNOLOGY Co.,Ltd.

Patentee before: Guangdong CIMC Intelligent Technology Co.,Ltd.

Patentee before: CHINA INTERNATIONAL MARINE CONTAINERS (GROUP) Ltd.

PP01 Preservation of patent right

Effective date of registration: 20251205

Granted publication date: 20210226