Disclosure of Invention
This summary introduces concepts in a simplified form that are further described in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In order to at least partially solve the above problems, according to one aspect of the present invention, there is provided a vision-based fuzzy profiling control method for line-like flexible operation of a robot, characterized in that the control method comprises:
acquiring a current position image of the robot;
determining a deviation value between a feature point of the current position image and a feature point of a preset image according to the current position image;
determining the area to which the current position belongs according to the deviation value;
controlling the robot to move according to the area to which the current position belongs so as to align with the feature point of the preset image, and, while the robot moves along the operation direction, performing this image feature point alignment step on each working surface so as to realize the profiling operation;
wherein the preset image is divided into three speed control areas, and the robot is controlled to profile at three profiling speeds according to the speed control area to which the current position belongs, the three profiling speeds respectively corresponding to the three speed control areas.
According to this control method, some degrees of freedom are controlled by image servoing for visual acquisition, while the remaining degrees of freedom are controlled by a vision open-loop method, so that different areas are profiled at different speeds; both the working quality and the production efficiency of the industrial robot can thus be ensured.
Optionally, the preset image is further divided into nine direction control areas, and the movement direction of the motion axes of the robot is controlled so as to align with the preset image feature point; specifically, the movement direction of the motion axes is controlled according to the direction control area to which the current position belongs, the movement directions respectively corresponding to the nine direction control areas.
Optionally, the feature point of the preset image is selected as the center point of the preset image.
Optionally, the three speed control areas are a coarse adjustment area, a fine adjustment area and a precise adjustment area, respectively; the profiling speed corresponding to the coarse adjustment area is a third speed F3, the profiling speed corresponding to the fine adjustment area is a second speed F2, and the profiling speed corresponding to the precise adjustment area is a first speed F1.
Optionally, the coarse adjustment area surrounds the fine adjustment area, the fine adjustment area surrounds the precise adjustment area, the precise adjustment area surrounds the feature point of the preset image, and the third speed F3 > the second speed F2 > the first speed F1.
Optionally, the method further includes a step of removing interfering feature points:
defining jump thresholds Um and Vm for feature points of the line-like work object acquired in two consecutive samples, wherein Um and Vm are ideal values;
comparing the image coordinates (U, V) of the current feature point in the screen with the coordinates (U1, V1) of the feature point of the line-like work object acquired last time;
and if |U1-U| > Um or |V1-V| > Vm, judging that a jump to a bad point has occurred, setting the current feature point to U = U1 or V = V1, and thereby obtaining the feature point image coordinates of the normally varying line-like work object.
Optionally, when |U-Uc| ≤ Uil and |V-Vc| ≤ Vil, the screen area is the ideal area range, and the profiling speed of the motion axis Y and/or the motion axis Z within the ideal area range is controlled to be the first speed F1;
when Uil < |U-Uc| ≤ Uml and Vil < |V-Vc| ≤ Vml, the screen area is a first limit area range, and the profiling speed of the motion axis Y and/or the motion axis Z within the first limit area range is controlled to be the second speed F2;
when Uml < |U-Uc| ≤ M and Vml < |V-Vc| ≤ N, the screen area is a second limit area range, and the profiling speed of the motion axis Y and/or the motion axis Z within the second limit area range is controlled to be the third speed F3;
wherein the coordinates of the current position image are (U, V);
the preset image size is the size of the rectangular area formed by the coordinate point (M, N) and the coordinate axes;
the coordinates of the center point of the preset image are (Uc, Vc), wherein Uc = M/2 and Vc = N/2;
Uil and Vil are ideal side lengths that respectively define the ideal profiling area;
Uml and Vml are limit side lengths, and Uil < Uml < M, Vil < Vml < N. This speed selection is illustrated in the sketch below.
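For illustration only, here is a minimal Python sketch of the three-zone speed selection just defined. All names (profiling_speed, uil, vil, uml, vml, f1, f2, f3) are assumptions rather than terms from the source, and boundary cases left open by the literal pairwise conditions are resolved by cascading outward from region 1:

```python
# Hypothetical sketch of the three-zone speed selection described above.
# Parameter names are assumptions; zones are checked from the innermost out.

def profiling_speed(u, v, uc, vc, uil, vil, uml, vml, f1, f2, f3):
    """Return the profiling speed for a feature point at image coords (u, v)."""
    du, dv = abs(u - uc), abs(v - vc)
    if du <= uil and dv <= vil:      # region 1: ideal area -> slowest speed F1
        return f1
    if du <= uml and dv <= vml:      # region 2: first limit area -> speed F2
        return f2
    return f3                        # region 3: second limit area -> speed F3
```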
Optionally, the coordinates of the current position image are (U, V), and the coordinates of the preset image center point are (Uc, Vc);
when U > Uc + Uil and V < Vc - Vil, the feature point of the current position image is located in area A;
when U > Uc + Uil and V > Vc + Vil, the feature point of the current position image is located in area B;
when U > Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area C;
when U < Uc - Uil and V < Vc - Vil, the feature point of the current position image is located in area D;
when U < Uc - Uil and V > Vc + Vil, the feature point of the current position image is located in area E;
when U < Uc - Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area F;
when Uc - Uil ≤ U ≤ Uc + Uil and V < Vc - Vil, the feature point of the current position image is located in area G;
when Uc - Uil ≤ U ≤ Uc + Uil and V > Vc + Vil, the feature point of the current position image is located in area H;
when Uc - Uil ≤ U ≤ Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, the feature point of the current position image is located in area I;
wherein Uil and Vil are ideal side lengths that respectively define the ideal profiling area.
Optionally, when the feature point of the current position image is located in area A or area C, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area E or area F, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area B or area H, the motion axis Z is controlled to move toward the preset image center point;
and when the feature point of the current position image is located in area D or area G, the motion axis Z is controlled to move toward the preset image center point.
Optionally, when the feature point of the current position image is located in area I, let m = |U-Uc| and n = |V-Vc|, and control the Y axis to move in the positive or negative direction so as to reach a point (U2, V2); let m2 = |U2-Uc| and n2 = |V2-Vc|; when m2 + n2 ≤ m + n, control the Y axis to keep moving in the original direction, and when m2 + n2 > m + n, control the Y axis to move in the opposite direction.
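As an illustration of the area-I rule above, here is a minimal sketch under the assumption that read_feature() returns the current feature point image coordinates and move_y() commands a single Y-axis step; both helper names are hypothetical:

```python
# Hypothetical sketch of the area-I trial-move rule: make a trial Y step,
# then keep or reverse the direction depending on whether the total
# deviation |U-Uc| + |V-Vc| shrank or grew.

def center_in_area_i(read_feature, move_y, uc, vc, direction=+1):
    u, v = read_feature()
    m, n = abs(u - uc), abs(v - vc)
    move_y(direction)                   # trial move to reach a point (U2, V2)
    u2, v2 = read_feature()
    m2, n2 = abs(u2 - uc), abs(v2 - vc)
    if m2 + n2 <= m + n:
        move_y(direction)               # deviation did not grow: keep direction
    else:
        move_y(-direction)              # deviation grew: reverse the Y motion
```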
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
In the following description, for purposes of explanation, specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the practice of the invention is not limited to the specific details familiar to those of skill in the art. While preferred embodiments of the invention are described in detail below, the invention is capable of other embodiments and should not be construed as limited to the embodiments set forth herein.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. When the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "upper", "lower", "front", "rear", "left", "right" and the like are used herein for purposes of illustration only and are not limiting.
Ordinal words such as "first" and "second" are used herein merely as labels and do not imply any other meaning, such as a particular order.
Specific embodiments of the present invention will now be described in more detail with reference to the accompanying drawings, which illustrate representative embodiments of the invention and do not limit the invention.
Exemplary embodiments
As shown in figs. 1 to 4, according to an exemplary embodiment of the present invention, a vision-based fuzzy profiling control method and control device for line-like flexible operation of a robot are provided.
A vision-based fuzzy profiling control method for line-like flexible operation of a robot comprises the following steps:
acquiring a current position image of the robot;
determining a deviation value between a feature point of the current position image and a feature point of a preset image according to the current position image;
determining the area to which the current position belongs according to the deviation value;
controlling the robot to move according to the area to which the current position belongs so as to align with the feature point of the preset image, and, while the robot moves along the working direction, performing this image feature point alignment step on each working surface so as to realize the profiling operation,
wherein the preset image is divided into three speed control areas, and the robot is controlled to profile at three profiling speeds according to the speed control area to which the current position belongs, the three profiling speeds respectively corresponding to the three speed control areas.
Specifically, as shown in figs. 1 to 2, the X-axis slider 101 is movable along the X-axis direction, the X-axis direction being the moving direction of the robot during operation; the robot stops at a specific position in the X-axis direction and, after stopping, performs the operation on the object at that position. The Y-axis slider 102 and the Z-axis slider 103 are movable along the Y axis and the Z axis, respectively; further, the Y-axis slider 102 is movably provided on the X-axis slider 101 along the Y axis, and the Z-axis slider 103 is movable along the Z axis. A camera 202 (image acquisition device) is connected to the Y-axis slider 102 and the Z-axis slider 103. The X, Y and Z axes form a Cartesian coordinate system. Optionally, the welding assembly of the robot is connected to the X axis, Y axis and Z axis, thereby enabling control of the welding position. Optionally, the welding assembly is connected with the image acquisition device; when the coordinate center of the camera image coincides with the image 302 of the feature point 301 of the line-like work object 203 in the camera 202, the current position is determined to be the correct welding position, and the welding assembly can directly perform the welding operation. Alternatively, the image acquisition device is mounted directly on the robot work arm, and the welding assembly is also mounted on the robot work arm.
The transmission mechanism of the X-axis slider 101 comprises a motor, a speed reducer, a rack, a linear guide rail and a slider, with closed-loop control using position information fed back by an encoder, so that the X-axis slider 101 can be accurately positioned at any position in the X-axis direction.
The transmission mechanisms of the Y-axis slider 102 and the Z-axis slider 103 can employ a speed reducer, a linear guide rail, a rack, a motor, a roller lead screw and a slider, likewise with closed-loop control using position information fed back by an encoder.
When the X-axis slider 101 moves along the X-axis direction, the Y-axis slider 102 and the Z-axis slider 103 can drive the camera 202 to dynamically acquire the feature points of the line-like work object 203 and drive the center of the camera image coordinate system toward those feature points; for example, the center of the image coordinate system of the camera 202 is driven to dynamically approach the image 302 (Oc) of the feature point 301 (O0) of the line-like work object 203 in the camera 202 (fig. 2).
When the X-axis slider 101 drives the image acquisition device such as the camera 202 to move along the direction of the line-like work object 203 (approximately the X-axis direction), the Y-axis and Z-axis motion can be dynamically controlled through the deviation value, in the image coordinate system, between the image 302 of the feature point 301 of the line-like work object 203 acquired in real time and the origin of the image coordinate system, so that the deviation value stays as close to 0 as possible. Thus, while moving in a given direction (positive or negative) along the X axis, the camera 202 is driven by the XYZ three-axis sliders so that its motion trajectory in three-dimensional space conforms, as a whole, to a line-like visual fuzzy profile of the feature point trajectory of the line-like work object 203.
In order for the eye-in-hand robot to realize both the centering operation and the fuzzy profiling of the feature point trajectory of the line-like work object, the motion control of the Y axis and the Z axis must be considered both when the X axis starts to move and while it moves along the X-axis direction. The basic principle of the centering operation and the fuzzy profiling of the feature point trajectory of the line-like work object is that the image coordinates of the feature point in the output image of the camera 202 are used as the input quantity, and the Y-axis and Z-axis motion is controlled so that the feature point lies at the center point of the camera output image or within a convergence area around that center point; meanwhile, in order to trade off the speed and stability of convergence toward the center point, three centering speeds are used: coarse, fine and precise. A skeleton of this control loop is sketched below.
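The following is a high-level Python skeleton of the profiling loop just described, intended only as an illustration: the camera and robot objects and all injected helpers (classify_region, select_axis, speed_for, done) are hypothetical names, not part of the source.

```python
# Hypothetical skeleton of the profiling loop: the X axis advances along the
# work direction while the Y/Z axes center the feature point in the image.

def profiling_loop(camera, robot, classify_region, select_axis, speed_for, done):
    while not done():
        robot.step_x()                             # advance along the work direction
        u, v = camera.read_feature()               # feature point image coordinates
        region = classify_region(u, v)             # region 1/2/3 as in fig. 3
        axis, sign = select_axis(u, v)             # area A..I mapping as in fig. 4
        robot.move(axis, sign, speed_for(region))  # centering step at F1/F2/F3
```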
Optionally, the step of acquiring the current position image of the robot includes: the surface laser emitter 201 (fig. 2) emits laser light, the laser light is reflected by the line-like work object 203, and the reflected laser light is received by the camera 202 to form the current position image.
Turning now to fig. 3, the three areas are a coarse adjustment area (region 3), a fine adjustment area (region 2) and a precise adjustment area (region 1), respectively; the profiling speed corresponding to the coarse adjustment area (region 3) is the third speed F3, the profiling speed corresponding to the fine adjustment area (region 2) is the second speed F2, and the profiling speed corresponding to the precise adjustment area (region 1) is the first speed F1.
Optionally, the coarse adjustment area surrounds the fine adjustment area, the fine adjustment area surrounds the precise adjustment area, the precise adjustment area surrounds the feature point of the preset image, and the third speed F3 > the second speed F2 > the first speed F1.
In particular, for the centering profiling shown in fig. 3:
Definition: assuming the camera output image screen size is (M, N), the screen center point coordinates are (Uc, Vc) = (M/2, N/2).
Define the coordinates of the current feature point in the screen as (U, V), and the coordinates of the previously sampled feature point in the screen as (U1, V1).
Define the ideal area range of the screen relative to the screen center point coordinates as region 1: when (U, V) is within the ideal area range, i.e. |U-Uc| ≤ Uil and |V-Vc| ≤ Vil, the centering profiling speed of the Y axis and the Z axis is the first speed F1. Define the screen area limit range relative to the screen center point coordinates as region 2: when (U, V) is within the screen area limit range, i.e. Uil < |U-Uc| ≤ Uml and Vil < |V-Vc| ≤ Vml, the centering profiling speed of the Y axis and the Z axis is the second speed F2. Define the screen area outer limit range relative to the screen center point coordinates as region 3: when (U, V) is within this range, i.e. Uml < |U-Uc| ≤ M and Vml < |V-Vc| ≤ N, the centering profiling speed of the Y axis and the Z axis is the third speed F3.
In addition, in industrial applications, since the image acquisition process of the eye-in-hand robot is very complicated and the environment is harsh, false feature points are inevitably acquired and displayed on the screen during operation. These false feature points, also called jump points, interfere with acquisition of the feature trajectory of the line-like work object and with good profiling by the camera 202, so the jump points are removed before the Y-axis slider 102 and the Z-axis slider 103 drive the camera 202 to perform centering follow-up profiling control. Optionally, the method further includes the step of removing interfering feature points:
defining jump thresholds Um and Vm for feature points of the line-like work object acquired in two consecutive samples, wherein Um and Vm are ideal values,
comparing the image coordinates (U, V) of the current feature point in the screen with the coordinates (U1, V1) of the feature point of the line-like work object acquired last time,
and if |U1-U| > Um or |V1-V| > Vm, judging that a jump to a bad point has occurred, setting the current feature point to U = U1 or V = V1, and thereby obtaining the feature point image coordinates of the normally varying line-like work object 203.
Further optionally, when |U-Uc| ≤ Uil and |V-Vc| ≤ Vil, the screen area is the ideal area range, and the profiling speed of the motion axis Y and/or the motion axis Z within the ideal area range is controlled to be F1;
when Uil < |U-Uc| ≤ Uml and Vil < |V-Vc| ≤ Vml, the screen area is the first limit area range, and the profiling speed of the motion axis Y and/or the motion axis Z within the first limit area range is controlled to be F2;
when Uml < |U-Uc| ≤ M and Vml < |V-Vc| ≤ N, the screen area is the second limit area range, and the profiling speed of the motion axis Y and/or the motion axis Z within the second limit area range is controlled to be F3;
wherein the coordinates of the current position image are (U, V);
the coordinates of the center point of the preset image are (Uc, Vc);
the preset image size is the size of the rectangular area formed by the coordinate point (M, N) and the coordinate axes;
Uil and Vil are ideal side lengths that respectively define the ideal profiling area;
Uml and Vml are limit side lengths, and Uil < Uml < M, Vil < Vml < N.
For example, the jump threshold between feature points of the line-like work object acquired in two consecutive samples is defined as Um in the U direction and Vm in the V direction; the image coordinates (U, V) of the current feature point in the screen are compared with the coordinates (U1, V1) of the feature point of the line-like work object acquired last time; if |U1-U| > Um or |V1-V| > Vm, the point is judged to be a jump point, and by setting U = U1 or V = V1, the feature point image coordinates of the normally varying line-like work object are obtained. A sketch of this jump-point filter follows.
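For illustration, here is a minimal Python sketch of the jump-point (bad point) filter; the function and parameter names are assumptions, with um and vm as the jump thresholds in the U and V directions:

```python
# Hypothetical sketch of the jump-point filter: hold the previous coordinate
# whenever the new sample jumps past the threshold in that direction.

def filter_jump_point(u, v, u1, v1, um, vm):
    """(u1, v1) is the previous sample; (u, v) the current one."""
    if abs(u1 - u) > um:
        u = u1                     # U jump detected: keep previous U value
    if abs(v1 - v) > vm:
        v = v1                     # V jump detected: keep previous V value
    return u, v
```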
With continued reference to fig. 3, the screen is divided into three regions. Region 1 is the ideal region: when the feature point is in this region, the current feature point is brought back toward the screen center point (Uc, Vc) as far as possible by controlling the Y-axis slider 102 and/or the Z-axis slider 103 to move at the first speed F1, and this process is called the precise centering operation. Region 2 is the screen area limit range: when the feature point is in this region, the Y axis and/or the Z axis is adjusted to move at the second speed F2 for the centering operation, so that the feature point approaches the screen center coordinate point as much as possible; the process of returning from region 2 to region 1 is called the fine centering operation. When the feature point is within region 3, the operation of pulling it back to region 2 by adjusting the Y axis and/or the Z axis to move at the third speed F3 is called the coarse centering operation, as shown in fig. 3. As described above, whether moving from region 3 to region 2, from region 2 to region 1, or back to the screen center coordinates (Uc, Vc) within region 1, the essence of the entire centering profiling process is to make the image coordinates of the feature point in the screen converge on the screen center coordinates (Uc, Vc).
In addition, the Y-axis and Z-axis motion is controlled so that the image coordinates of the feature point converge on the screen center point coordinates (Uc, Vc). In order to better control this motion, the image coordinate screen can be divided into areas A, B, C, D, E, F, G, H and I; the screen axis control method is shown in fig. 4.
Define Uil and Vil as the ideal side lengths that respectively define the ideal profiling area. When U > Uc + Uil and V < Vc - Vil, the feature point is in area A of fig. 4; when U > Uc + Uil and V > Vc + Vil, in area B; when U > Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, in area C; when U < Uc - Uil and V < Vc - Vil, in area D; when U < Uc - Uil and V > Vc + Vil, in area E; when U < Uc - Uil and Vc - Vil ≤ V ≤ Vc + Vil, in area F; when Uc - Uil ≤ U ≤ Uc + Uil and V < Vc - Vil, in area G; when Uc - Uil ≤ U ≤ Uc + Uil and V > Vc + Vil, in area H; and when Uc - Uil ≤ U ≤ Uc + Uil and Vc - Vil ≤ V ≤ Vc + Vil, in area I, the ideal area for follow-up profiling. The influence of the Y-axis and/or Z-axis motion on centering the feature point of the line-like work object in the corresponding screen area is given in the following relational table:
screen area and axis control method comparison table
| Area of the screen
|
A
|
B
|
C
|
D
|
E
|
F
|
G
|
H
|
I
|
| Motion shaft
|
Y
|
Z
|
Y
|
Z
|
Y
|
Y
|
Z
|
Z
|
Y
|
| Direction of shaft motion
|
1
|
1
|
1
|
-1
|
-1
|
-1
|
-1
|
1
|
1 or-1 |
When the feature point is in area A, B, C, D, E, F, G or H, the corresponding motion axis is controlled to run in the motion direction and at the speed determined above.
Specifically, optionally, when the feature point of the current position image is located in area A or area C, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area E or area F, the motion axis Y is controlled to move toward the preset image center point;
when the feature point of the current position image is located in area B or area H, the motion axis Z is controlled to move toward the preset image center point;
and when the feature point of the current position image is located in area D or area G, the motion axis Z is controlled to move toward the preset image center point. A sketch of this area-to-axis mapping is given below.
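The following Python sketch combines the nine-area classification of fig. 4 with the axis/direction table above; the function name and the (+1, -1) encoding of the positive/negative motion direction are assumptions. Area I returns None for the direction because that direction is chosen by the trial-move rule described next:

```python
# Hypothetical sketch of the fig. 4 area classification and the
# corresponding (motion axis, direction) pairs from the comparison table.

def area_and_axis(u, v, uc, vc, uil, vil):
    if u > uc + uil:                             # right of the ideal band
        if v < vc - vil:   return "A", ("Y", +1)
        if v > vc + vil:   return "B", ("Z", +1)
        return "C", ("Y", +1)
    if u < uc - uil:                             # left of the ideal band
        if v < vc - vil:   return "D", ("Z", -1)
        if v > vc + vil:   return "E", ("Y", -1)
        return "F", ("Y", -1)
    if v < vc - vil:       return "G", ("Z", -1)  # within the U band, below
    if v > vc + vil:       return "H", ("Z", +1)  # within the U band, above
    return "I", ("Y", None)                       # direction set by trial move
```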
However, area I is small, and the variation of the feature point in the Z-axis direction within this area is small enough to be ignored, so only the Y-axis motion needs to be controlled in area I for the centering profiling operation. For a point (U, V) in area I, let m = |U-Uc| and n = |V-Vc|; control the Y axis to move in the positive or negative direction to reach a point (U2, V2); let m2 = |U2-Uc| and n2 = |V2-Vc|; when m2 + n2 ≤ m + n, control the Y axis to keep moving in the original direction, and otherwise control the Y axis to move in the opposite direction. Of course, it is also possible to control only the Z-axis motion for the centering profiling operation. According to this method, exact centering is achieved only while the X axis is not moving; during profiling of the camera 202 along the direction of the feature line of the line-like work object 203, the feature line changes, so the acquired feature point of the line-like work object 203 is never fixed. Consequently, during profiling, the feature point of the feature line of the line-like work object 203 in the camera output image is always in the process of converging from region 3 to region 2 to region 1.
The control mode shown in the screen area and axis control method comparison table is only one implementation of the control of the invention; the control method can be flexibly varied, and when the feature point of the work object is in any one of areas A to I, the robot can control the motion axis Y and/or the motion axis Z to move, thereby realizing the control method.
The technical solution of the invention brings the following beneficial effects.
During the process in which the camera 202 collects the operation trajectory, the camera 202 is kept centered and the camera 202 mechanism follows the profile in the operation direction. 1) First, the Y axis and the Z axis are moved so that the image 302 of the feature point lies in the central area of the image coordinate system of the camera 202, so that the operation can accurately measure the exact information of the feature point from the beginning. 2) When the camera 202 mechanism collects the feature points of the line-like work object, the camera 202 must keep within a certain distance range of the line-like work object 203; on the one hand, this prevents image feature acquisition errors and damage to the camera 202 caused by contact between the camera 202 mechanism and the line-like work object 203, and on the other hand, the operator can see the feature points in real time on the image interface without their leaving the field of view.
The robot and its control method can be integrated into eye-in-hand robot operation control, and degrees of freedom controlled by other control methods can be added on this basis, thereby realizing integration. The method is applicable not only to line-like operation requirements for containers, but also to line-like operation requirements in any field, such as aviation, shipping, manufacturing, pharmaceuticals and chemicals.
The present invention has been illustrated by the above embodiments, but it should be understood that the above embodiments are for illustrative and descriptive purposes only and are not intended to limit the scope of the invention to the described embodiments. Furthermore, it will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that many variations and modifications may be made in accordance with the teachings of the present invention, all of which fall within the scope of the present invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.