Disclosure of Invention
The present invention is directed to solving the above-mentioned problems, and provides an airport baggage handling system and a control method thereof.
To achieve this object, the invention adopts the following technical solution:
in one aspect, an airport luggage handling system includes a control unit, a detection unit in signal connection with the control unit, and a transport unit for transporting luggage;
the control unit comprises a control box, a mechanical arm, and a gripper which is fixedly mounted on the mechanical arm and controlled by the control box;
the detection unit comprises a grid laser, an industrial camera and a pressure film sensor; the industrial camera and the grid laser are both arranged on the top plate of the gripper, and a plurality of point light source lamps are arranged around the industrial camera; the pressure film sensor is fixedly arranged on the inner side wall of the gripper;
the transport unit comprises a conveyor belt and a transport trolley arranged on one side of the conveyor belt.
Furthermore, the grid laser, the industrial camera and the pressure film sensor are connected to a small embedded industrial personal computer, through which they are in communication connection with the control box.
Further, the point light source lamps are LED light sources.
In another aspect, a control method of the airport baggage handling system comprises the following steps:
S1, calibrating the industrial camera, including calibration of the camera's intrinsic parameters and of the hand-eye coordinate transformation relationship;
S2, the industrial camera collects image information of the luggage case on the conveyor belt and transmits it to the control unit for processing to obtain the position information of the luggage case in the image;
S3, controlling the gripper to move to the target position according to the position information of the luggage case, controlling the gripper to move along the moving track of the luggage case, and setting the vertical downward travel and the horizontal displacement of the gripper until the gripper reaches the two sides of the luggage case;
S4, grabbing the luggage case, adjusting the gripping force of the gripper in real time according to the weight of the luggage case and the pressure value fed back by the pressure film sensor;
S5, the control unit drives the mechanical arm to rotate, transports the luggage case above the transport trolley, continues to move the mechanical arm downwards, and controls the gripper to release the luggage case so that it falls onto the transport trolley;
and S6, the control unit controls the mechanical arm to return to the initial position.
Further, in step S2, in which the industrial camera acquires image information of the luggage on the conveyor belt and transmits it to the control unit for processing to obtain the position information of the luggage in the image, the processing includes:
marking and extracting the acquired image information data by adopting a silhouette processing method;
carrying out binarization processing on the extracted image information, and performing image segmentation on the binarized image to obtain image information containing the target area of the luggage case;
filtering the segmented image information;
and adopting skeletonization to identify the position information of the target area, and transmitting the position information to the control unit.
Further, in step S3, the calculation of the moving track of the luggage case includes:
the grid laser projects a group of parallel red grid laser lines onto the surface of the luggage case to be grabbed;
a horizontal straight-line equation and a vertical straight-line equation in the grid are calculated by the standard Hough transform, and the coordinates of the start and end points of the grid lines are then calculated from the intersections of the horizontal and vertical lines;
and straight lines are drawn on the binary image from the calculated start- and end-point coordinates, obtaining a result graph of the grid line detection.
Further, the start-point coordinates of the grid horizontal lines are calculated from the result graph of the grid line detection:
keeping the set vertical straight-line equation unchanged, all the horizontal straight-line equations are traversed in turn and the intersection of each horizontal line with the set vertical line is calculated, giving the start-point coordinates of the actual grid horizontal lines;
keeping the set horizontal straight-line equation unchanged, all the vertical straight-line equations are traversed in turn and the intersections of the vertical lines with the set horizontal line are calculated, giving the start-point coordinates of the actual grid vertical lines;
and the end-point coordinates of the grid lines are calculated from the result graph of the grid line detection in the same way:
keeping the last vertical straight-line equation unchanged, all the horizontal straight-line equations are traversed in turn and the intersection of each horizontal line with that vertical line is calculated, giving the end-point coordinates of the actual grid horizontal lines;
keeping the last horizontal straight-line equation unchanged, all the vertical straight-line equations are traversed in turn and the intersections are calculated, giving the end-point coordinates of the actual grid vertical lines.
Further, the industrial camera calibration in step S1 includes:
carrying out intrinsic parameter calibration and distortion correction of the industrial camera by adopting the Zhang Zhengyou calibration algorithm;
and performing hand-eye calibration by adopting a Tsai-Lenz calibration method.
Further, the method also includes calculating and correcting the volume of the luggage case:
when the grid laser irradiates the target luggage case, the industrial camera photographs it and identifies the laser grid in the photograph, taking the edge of the luggage case at its farthest bent grid line as the farthest edge;
calculating the area of the uppermost face of the luggage case out to the farthest edge, and estimating an initial volume of the luggage case from it;
and the calculated volume of the case is corrected by an edge-dilation filling, in which a gray border totalling 5 cm is filled outward at the farthest bent grid lines of the case.
The airport luggage case handling system and the control method thereof provided by the invention have the following beneficial effects:
the size and volume of the luggage case are estimated by combining the gripper at the end of the mechanical arm and the surrounding point light source lamps with object detection in machine vision, and the mechanical arm controls the gripper to grab the luggage case and place it on the transport trolley; the pressure film sensor provides real-time detection and feedback of the gripping force at the end of the mechanical arm, so that the luggage case is gripped without damage; the system thus adapts to luggage cases of different sizes and achieves force-controlled, damage-free grabbing and carrying.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are apparent that remain within the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
According to the first embodiment of the application, referring to FIG. 1 and FIG. 2, the handling system for the airport luggage case 5 in the present scheme comprises a control unit, a detection unit in signal connection with the control unit, and a transport unit for transporting the luggage case 5.
The control unit comprises a control box 4, a mechanical arm 3 and a gripper 10; the gripper 10 is mounted on the mechanical arm 3 and controlled by the control box 4, and the mechanical arm 3 has 6 degrees of freedom.
The control box 4 is connected with the 6-degree-of-freedom mechanical arm 3 through a communication line; the gripper 10 at the end of the mechanical arm 3 is connected with the control box 4 through a communication line; the grid laser 8, the industrial camera 7 and the pressure film sensor 9 are first connected to the small embedded industrial personal computer and then communicate by wire, through the industrial personal computer, with the control box 4 of the mechanical arm 3.
The industrial camera 7 locates and classifies the luggage cases 5 by category; the luggage cases 5 are sorted after classification and positioning, and the force measured by the pressure film sensor 9 on the end gripper 10 is fed back to the mechanical arm 3 so that the gripper 10 performs force-controlled sorting.
The control unit controls the mechanical arm 3 and the gripper 10 to grab the luggage case 5, place the grabbed luggage case 5 on the transport trolley 6, and finally return to the initial position to wait for the next grabbing task.
The detection unit comprises a grid laser 8, an industrial camera 7 and a pressure film sensor 9. The industrial camera 7 and the grid laser 8 are both arranged on the top plate 2 of the gripper 10; the grid laser 8 projects a group of parallel red grid laser lines onto the surface of the object to be gripped, from which the object is preliminarily assessed.
A plurality of point light source lamps are arranged around the industrial camera 7. They illuminate the shooting target so that its different parts have sufficient contrast; LED light sources are selected, which also makes up for the industrial camera 7's otherwise poor detection in dim environments.
The industrial camera 7 captures the object marked by the grid laser 8 and converts the visual image and the features of the target object into a series of data that can be processed computationally. The pressure film sensor 9 is fixedly arranged on the inner side wall of the gripper 10.
The transport unit comprises a conveyor belt 1 and a transport trolley 6 arranged on one side of the conveyor belt 1.
According to the second embodiment of the present application, a control method of the handling system for the airport luggage case 5 comprises the following steps:
step S1, calibrating the industrial camera 7, including calibrating parameters in the industrial camera 7 and calibrating a hand-eye coordinate transformation relationship, which specifically includes:
the high-precision hand-eye calibration is a key for ensuring the precision of the system, and in the embodiment, the calibration comprises two parts of camera internal parameter calibration and hand-eye coordinate conversion relation calibration, wherein the camera internal parameter calibration and distortion correction are realized by adopting an improved Zhang Zhengyou calibration algorithm.
The calibration of the hand-eye coordinate relationship of the mechanical arm 3 mainly consists of solving the hand-eye mapping model, i.e. the nonlinear mapping from the robot visual space to the robot working space, such as the transformation from the camera coordinate system Oc to the manipulator base coordinate system Or in the left diagram of FIG. 6. The right diagram of FIG. 6 shows the coordinate transformation relationships in the visual guidance process.
Referring to FIG. 7, in the present solution the Tsai-Lenz calibration method is adopted for the hand-eye calibration, giving the spatial pose between the robot base coordinate system and the camera.
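For illustration only, the following is a minimal Python sketch of the two calibration parts, assuming OpenCV's cv2.calibrateCamera (Zhang's method) and cv2.calibrateHandEye with the Tsai method; the checkerboard geometry and the pose lists are illustrative assumptions, not part of the claimed system.

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board_size=(9, 6), square_mm=25.0):
    """Zhang's method: intrinsics and distortion from checkerboard views."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_mm
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Returns the camera matrix K and the distortion coefficients.
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist

def calibrate_hand_eye(R_gripper2base, t_gripper2base,
                       R_target2cam, t_target2cam):
    """Tsai-Lenz hand-eye calibration: camera pose relative to the gripper."""
    return cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
```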
Step S2, the industrial camera 7 collects the image information of the luggage case 5 on the conveyor belt 1 and transmits it to the control unit for processing, so as to obtain the position information of the luggage case 5 in the image;
referring to fig. 3, the image acquisition includes:
marking and extracting the acquired image information data by adopting a silhouette processing method;
carrying out binarization processing on the extracted image information, and performing image segmentation on the binarized image to obtain image information containing the target area of the luggage case 5;
filtering the segmented image information;
adopting skeletonization to identify the position information of the target area, and transmitting the position information to the control unit;
The control unit issues commands to the control box 4 and controls the operation of the mechanical arm 3 and the gripper 10 according to the position information of the luggage case 5.
Step S3, the gripper 10 is controlled to move to the target position according to the position information of the luggage case 5 and to follow the moving track of the luggage case 5, with the vertical downward travel and the horizontal displacement of the gripper 10 set until the gripper 10 reaches the two sides of the luggage case 5. This specifically comprises:
the posture and the volume of the luggage case 5 are intelligently identified, a group of parallel grid red laser lines are projected on the surface of an object to be grabbed by the grid laser 8, and the coordinate position of the luggage case 5 in the grid is firstly calculated by combining point light source lamps with an industrial camera 7 and object detection in machine vision;
Analysis of the Hough-transform detection result shows that directly using the Hough transform to detect the grid lines has several problems: multiple straight lines overlap at the same position, the light is partially occluded, and the detected start and end points of the lines are not the actual ones. The overlap problem is solved by classifying and merging the Hough-transform detection results. Because non-parallel lines have intersection points, the coordinates of the actual start and end points of the grid lines can be obtained by computing the intersections of the horizontal and vertical lines, which also addresses the partial occlusion and the untrue start/end coordinates.
For the Hough transform itself, a comparison of the two Hough transform functions leads to selecting the standard Hough transform. The result of the standard Hough transform is a (θ, ρ) parameter pair in a polar coordinate system, where θ is the angle of the line's normal to the horizontal axis, which provides a convenient basis for classifying lines as horizontal or vertical. In addition, the standard Hough transform function has only one dynamically changing parameter, whose value is easy to set.
Referring to FIG. 4, the calculation uses the Hough-transform straight-line method:
point A (x)0,y0) The straight line passing through the point A satisfies the equation y ═ k x0+ b. (k is slope, b is intercept);
then the point A (x) is crossed in the XOY plane0,y0) The linear clusters of (a) may be represented by y ═ k × x0+ b, however, since the slope of the line perpendicular to the X-axis is infinite and cannot be represented, the special case can be solved by converting the rectangular coordinate system to the polar coordinate system;
in a polar coordinate system, the linear equation can be expressed as ρ ═ xcos θ + ysin θ (ρ represents the distance from the origin to the line), and the calculation method is shown in fig. 4.
Judging the line direction: by examining the parameter θ, a line with θ ∈ (π/4, 3π/4) is horizontal, and otherwise it is vertical; straight-line data with small differences are then sorted and merged, eliminating the overlap of multiple straight lines at the same position, and finally the coordinate information of the lines in the rectangular coordinate system is obtained through coordinate system conversion.
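A minimal sketch of the standard Hough detection with the θ-based horizontal/vertical classification just described, assuming OpenCV; the vote threshold and the ρ merging tolerance are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_grid_lines(binary, votes=120):
    """Standard Hough transform plus horizontal/vertical classification."""
    lines = cv2.HoughLines(binary, rho=1, theta=np.pi / 180, threshold=votes)
    if lines is None:
        return [], []
    horizontal, vertical = [], []
    for rho, theta in lines[:, 0, :]:
        # theta in (pi/4, 3*pi/4): the normal is near-vertical, so the line
        # itself is horizontal; otherwise treat the line as vertical.
        if np.pi / 4 < theta < 3 * np.pi / 4:
            horizontal.append((float(rho), float(theta)))
        else:
            vertical.append((float(rho), float(theta)))

    def merge_close(group, rho_tol=10.0):
        # Detections whose rho values differ only slightly are the same
        # physical grid line reported several times; keep one of each.
        merged = []
        for rho, theta in sorted(group):
            if merged and abs(rho - merged[-1][0]) < rho_tol:
                continue
            merged.append((rho, theta))
        return merged

    return merge_close(horizontal), merge_close(vertical)
```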
From these results the horizontal and vertical straight-line equations are calculated, and the start and end coordinates of the grid lines are then computed from the intersections of the horizontal and vertical lines; this overcomes both the non-actual start/end coordinates produced by the Hough transform and the occlusion of the light.
The straight lines are drawn on the binary image from the calculated start- and end-point coordinates, giving the result graph of grid line detection. Drawing on the binary image not only recovers the laser lines at their original positions but also preserves the laser regions deformed by obstacles, which solves the occlusion of the grid laser 8 lines.
In FIG. 5, the lines Lhs and Lhe are horizontal lines in the Hough-transform detection result of the binary image, where Lhs is the first horizontal line in the detection result and Lhe is the last. The lines Lvs and Lve are vertical lines in the detection result, where Lvs is the first vertical line and Lve is the last. The coordinate points (x1, y1), (x2, y2), (x3, y3) and (x4, y4) in FIG. 5 are the start and end coordinates of the actual grid laser 8 lines.
For the coordinate points (xv1, yv1) and (xv2, yv2), xv1 and xv2 are calculated by the Hough transform while yv1 and yv2 are 0. For (xv3, yv3) and (xv4, yv4), xv3 and xv4 are calculated by the Hough transform while yv3 and yv4 equal the image height. For (xh1, yh1) and (xh3, yh3), xh1 and xh3 are 0 while yh1 and yh3 are calculated by the Hough transform. For (xh2, yh2) and (xh4, yh4), xh2 and xh4 equal the image width while yh2 and yh4 are calculated by the Hough transform.
Then, following the calculation method for the start-point coordinates of the grid horizontal lines, the set vertical straight-line equation is kept unchanged, all the horizontal straight-line equations are traversed in turn, and the intersection of each horizontal line with the set vertical line is computed, finally giving the actual start-point coordinates of the grid horizontal lines, for example the points (x1, y1) and (x3, y3). Similarly, the set horizontal straight-line equation is kept unchanged, all the vertical straight-line equations are traversed in turn, and the intersections of the vertical lines with that horizontal line are computed, finally giving the start-point coordinates of the straight lines in the vertical direction of the actual grid, for example the points (x1, y1) and (x2, y2).
Then, following the calculation method for the end-point coordinates of the grid horizontal lines, the last vertical straight-line equation is kept unchanged, all the horizontal straight-line equations are traversed in turn, and the intersection of each horizontal line with that vertical line is computed, finally giving the end-point coordinates of the actual grid horizontal lines, for example the points (x2, y2) and (x4, y4). Similarly, the last horizontal straight-line equation is kept unchanged, all the vertical straight-line equations are traversed in turn, and the intersections are computed, finally giving the end-point coordinates of the straight lines in the vertical direction of the actual grid, for example the points (x3, y3) and (x4, y4).
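The start/end-point computation just described reduces to intersecting lines given in (ρ, θ) form; a sketch follows, where the convention that the first and last vertical (and horizontal) lines bound the grid is taken from FIG. 5, and the inputs are assumed sorted by ρ as produced by the detection sketch above.

```python
import numpy as np

def intersect(line_a, line_b):
    """Intersection of two lines in (rho, theta) form:
    rho = x*cos(theta) + y*sin(theta)."""
    (r1, t1), (r2, t2) = line_a, line_b
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return float(x), float(y)

def grid_endpoints(horizontal, vertical):
    """Start/end points per FIG. 5: every horizontal line is intersected
    with the first and last vertical lines, and vice versa."""
    v_first, v_last = vertical[0], vertical[-1]
    h_first, h_last = horizontal[0], horizontal[-1]
    h_segments = [(intersect(h, v_first), intersect(h, v_last))
                  for h in horizontal]
    v_segments = [(intersect(v, h_first), intersect(v, h_last))
                  for v in vertical]
    return h_segments, v_segments
```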
The size information measuring and positioning method based on the industrial camera 7 is as follows:
Referring to FIG. 8, a reference-light triggering method based on the active grid laser 8 is adopted together with the size information measuring and positioning method based on the industrial camera 7; this meets the high precision required of luggage size measurement while avoiding the time loss and computing-resource consumption of dense point-cloud operations, ensuring the real-time performance of the system. A schematic diagram of the size measurement based on the industrial camera 7 is shown in FIG. 7.
A ring of point light sources is arranged at the end of the mechanical arm 3 to work with the industrial camera 7. When the ambient light is dark, the point light sources provide sufficient light for the industrial camera 7; meanwhile, when the grid laser 8 irradiates the surface of the luggage case 5, the point light sources highlight the grid laser 8 lines well, providing lines clear enough for the industrial camera 7 to photograph, so that the size and volume of the luggage case 5 can subsequently be estimated.
For the calculation and correction of the volume of the luggage case 5: when the grid laser 8 irradiates the target luggage case 5, the industrial camera 7 photographs it and identifies the laser grid in the photograph, taking the edge of the luggage case 5 at its farthest bent grid line as the farthest edge.
In the first step, the preliminary area of the uppermost face of the luggage case 5 out to the farthest edge is calculated, and a preliminary volume of the luggage case 5 is then computed.
In the second step, an edge-expansion filling of the calculated volume of the luggage case 5 is performed on the basis of the first step, so that the gripper 10 at the end of the mechanical arm 3 can completely wrap around the edges of the luggage case 5 and both ends of the gripper 10 can pass the sides of the luggage case 5.
Edge filling is an optimization made for the gripper 10 when the industrial camera 7 performs target size detection: on the basis of the first step, a gray border totalling 5 cm is filled outward. This is a virtual modelling performed when estimating the size of the luggage case 5, so that the opening of the gripper 10 is greater than the lengths of the two actually longest sides of the detected posture of the luggage case 5. If the distance between the longest sides were taken directly as the opening of the gripper 10, the gripper 10 could be obstructed by its own thickness and fail to grip; expanding the size therefore adapts the gripping state well to practical situations, so that every luggage case 5 can be gripped.
Take a 20-inch luggage case 5 as an example:
Since the different sizes of luggage cases 5 are standardized, the grid laser 8 irradiates the top of the luggage case 5 and the industrial camera 7 detects and photographs it, estimating the area of the front face of the case. In FIG. 9, 34 x 50 cm represents the real area of the front face of the (20-inch) luggage case 5; the length and the width calculated by the camera are each expanded by 2.5 cm per side to reserve enough space for grabbing. The broken line (39 x 55 cm) in FIG. 9 is the optimization made for the gripper 10 when the industrial camera 7 performs target size detection; this edge filling does not exist physically but is a virtual modelling made when estimating the size of the luggage case 5, so that the opening of the gripper 10 exceeds the lengths of the two actually longest sides of the detected posture. Since the height of the luggage case 5 corresponds to its size class, the volume of the luggage case 5 can be estimated by multiplying the area of the front face by the height.
Step S4, the luggage case 5 is grabbed, and the gripping force of the gripper 10 is adjusted in real time according to the weight of the luggage case 5 and the pressure value fed back by the pressure film sensor 9; the pressure film sensor 9 detects and feeds back the force of the gripper 10 at the end of the mechanical arm 3 in real time, so that the luggage case 5 is grabbed without damage.
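A minimal sketch of this step-S4 force loop under stated assumptions: the grip force is raised until the film-sensor reading reaches a weight-derived target; read_pressure and set_grip_force are hypothetical placeholders for the real control-box interface, and the gain, cap and tolerance values are illustrative.

```python
import time

def grip_with_feedback(read_pressure, set_grip_force, weight_kg,
                       k=2.0, max_force=80.0, tol=0.5):
    """Tighten until the film-sensor reading reaches a weight-derived target."""
    target = min(k * weight_kg, max_force)   # heavier case -> firmer grip
    force = 0.0
    while True:
        measured = read_pressure()           # pressure film sensor feedback
        error = target - measured
        if abs(error) < tol:
            return force                     # grip is stable; start lifting
        # Proportional adjustment, clamped so the case is never crushed.
        force = min(max(force + 0.1 * error, 0.0), max_force)
        set_grip_force(force)
        time.sleep(0.01)                     # ~100 Hz control loop
```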
Step S5, the control unit drives the mechanical arm 3 to rotate and transports the luggage case 5 above the transport trolley 6; the mechanical arm 3 continues to move downwards, and the gripper 10 is controlled to release the luggage case 5 so that it lands on the transport trolley 6.
Step S6, the control unit controls the mechanical arm 3 to return to the initial position.
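Finally, a sketch tying steps S1-S6 together as one handling cycle; every subsystem handle (camera, arm, gripper, sensor, calib) is a hypothetical placeholder, and locate_luggage and grip_with_feedback refer to the sketches above.

```python
def handle_one_case(camera, arm, gripper, sensor, calib):
    frame = camera.capture()                    # S2: acquire an image
    position, _ = locate_luggage(frame)         # S2: locate the case
    target = calib.pixel_to_base(*position)     # S1 calibration applied
    arm.move_to(target)                         # S3: approach both sides
    weight = arm.estimate_payload()
    grip_with_feedback(sensor.read,             # S4: force-controlled grip
                       gripper.set_force, weight)
    arm.move_above_trolley()                    # S5: carry to the trolley
    arm.move_down()
    gripper.release()
    arm.move_home()                             # S6: return to start
```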
While the embodiments of the invention have been described in detail with reference to the accompanying drawings, the description is not intended to limit the scope of the invention. Various modifications and changes may be made by those skilled in the art, without inventive effort, within the scope of the appended claims.