CN118533182A - Visual intelligent navigation method and system for transfer robot - Google Patents
Visual intelligent navigation method and system for transfer robot
- Publication number
- CN118533182A (application CN202411000598.4A)
- Authority
- CN
- China
- Prior art keywords
- transfer robot
- industrial camera
- navigation
- pose
- gray value
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06395—Quality analysis or management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
- G06Q10/083—Shipping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Abstract
The invention discloses a visual intelligent navigation method and system for a transfer robot, relating to the technical field of robot navigation. Information on the navigation patterns laid within the visual range of the transfer robot is collected in real time and input into a convolutional neural network for feature extraction; at least one candidate frame of a different target category is generated at the center point of each cell into which the path feature map is divided, and the candidate frames of different target categories are classified according to color. The position information of all corner points within the visual range of the transfer robot is extracted by a corner detection algorithm, the pose of the industrial camera at the time of shooting is obtained by a pose estimation algorithm, and the real-time deflection angle of the transfer robot is calculated. By providing an image information processing module and a deflection direction calculation module, the floor area of the logistics center is used effectively, the handling efficiency of express items is improved, and the service quality of the e-commerce logistics industry is raised.
Description
Technical Field
The invention relates to the technical field of robot navigation, in particular to a visual intelligent navigation method and system for a transfer robot.
Background
The quantity of express items in a logistics center is enormous, the variety is wide, and the workload is heavy. Traditional manual handling has high labor cost and low efficiency and is prone to error, so it cannot meet the needs of the developing e-commerce express logistics industry. At present, many large logistics centers in China use mature automated transport systems such as cross-belt and slide-shoe sorters, but these systems have drawbacks such as a large site footprint and high maintenance cost, while logistics transfer robots newly developed abroad are progressing well. Because the relevant enterprises in China have long not applied express-handling robots, the existing level of such robots cannot meet the development of the e-commerce logistics industry; the large logistics centers of major domestic express enterprises and the postal service, for example, still handle express goods with independently developed cross-belt or slide-shoe automated transport systems. Express enterprises therefore urgently need an efficient, convenient and inexpensive intelligent express transfer robot to cope with the rapid development of the e-commerce logistics industry.
With the rapid development of the national economy, electronic commerce has risen rapidly. People's shopping habits have changed, the quality of life has improved, the quantity of express items in logistics centers keeps growing, and goods pile up more and more. How to improve the handling efficiency of express goods is a hard indicator of the service quality of the e-commerce logistics industry, and it is a problem that urgently needs to be solved in current development.
Disclosure of Invention
In order to solve the above technical problems, the present technical solution addresses the problems set out in the background art.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A visual intelligent navigation method for a transfer robot comprises the following steps:
Laying at least one navigation pattern containing small rectangles of different colors at equal intervals along the center of the travelling track of the transfer robot;
Collecting, in real time, the information of the navigation patterns laid within the visual range of the transfer robot through an industrial camera mounted at the center of the bottom of the transfer robot;
Inputting the navigation pattern information into a convolutional neural network for feature extraction to obtain a path feature map;
Generating, by using an anchor frame generation method, candidate frames of at least one different target category at the center points of the cells into which the path feature map is divided;
Classifying the candidate frames of different target categories according to color;
Extracting the position information of all corner points within the visual range of the transfer robot through a corner detection algorithm;
Obtaining the pose of the industrial camera at the time of shooting by using a pose estimation algorithm, based on the corner position information and the internal parameters of the industrial camera;
Calculating the real-time deflection angle of the transfer robot by using the destination position information, based on the pose of the industrial camera at the time of shooting;
And inputting the real-time deflection angle into the steering device of the transfer robot, the transfer robot performing the steering operation according to the real-time deflection angle.
Preferably, inputting the navigation pattern information into a convolutional neural network for feature extraction processing, and obtaining a path feature map specifically includes:
acquiring red, green and blue components of each pixel point of the navigation pattern information;
setting a convolution kernel matrix;
outputting each 3×3 block of pixel points in the navigation pattern information as a unit cell, and combining the red, green and blue components of the unit cells into an input matrix;
Calculating a matrix product of the input matrix and the convolution kernel matrix;
arranging all the matrix products in order and outputting them as the path feature map.
Preferably, the generating, by using an anchor frame generating method, the candidate frames of at least one different target category at the center points of the cells divided by the path feature diagram specifically includes:
acquiring red, green and blue components of each cell in the path feature map;
Substituting each cell red, green and blue components in the path feature diagram into a probability formula to obtain the superposition probability of each cell and each laid navigation pattern;
setting a coincidence probability threshold value;
judging whether the superposition probability of each cell and each laid navigation pattern is higher than a superposition probability threshold value, if so, outputting the cell as a candidate frame to be classified, and if not, not outputting;
The probability formula is as follows:
where P_ij is the overlap probability of the i-th cell and the j-th navigation pattern, N is the total number of cells, M is the total number of navigation patterns, R_i, G_i and B_i are the red, green and blue components of the i-th cell, and R_j, G_j and B_j are the red, green and blue components of the j-th navigation pattern.
Preferably, the extracting, by the corner detection algorithm, the position information of all the corners in the visual range of the handling robot specifically includes:
acquiring the human-eye sensitivity weighting coefficients for red, green and blue;
calculating the gray value of each pixel point in the visual range of the transfer robot according to a gray value formula, wherein the gray value range is [0, 255];
Setting a window movement offset;
Substituting the gray value of each pixel point in the visible range of the transfer robot and the window moving offset into a first gray value change degree function;
carrying out Taylor expansion on the first gray value change degree function at each pixel point to obtain a second gray value change degree function;
calculating an approximate hessian matrix based on the second gray value change degree function;
calculating two characteristic values of the approximate hessian matrix corresponding to each pixel point;
Judging whether the two characteristic values are larger than a set characteristic threshold value, if so, outputting the pixel point as a corner point, and if not, not outputting;
The gray value formula is as follows:
Gray(u, v) = w_R·R(u, v) + w_G·G(u, v) + w_B·B(u, v)
where Gray(u, v) is the gray value of the pixel point in the u-th row and v-th column, w_R, w_G and w_B are the human-eye sensitivity weighting coefficients for red, green and blue respectively, and R(u, v), G(u, v) and B(u, v) are the red, green and blue components of the pixel point in the u-th row and v-th column;
the first gray value variation degree function is as follows:
E(x, y) = Σ_{u=1..U} Σ_{v=1..V} [Gray(u + x, v + y) − Gray(u, v)]²
where E(x, y) is the gray value variation degree function, (x, y) is the window movement offset, U and V are the total numbers of rows and columns of pixel points, and u and v are the row and column indices of a pixel point;
the second gray value variation degree function is as follows:
E(x, y) = Σ_{u=1..U} Σ_{v=1..V} [x·X + y·Y + o(√(x² + y²))]² ≈ A·x² + 2C·x·y + B·y²
where X and Y are the first-order differentials of the gray value function, A, B and C are its second-order differential terms, and o(·) denotes an infinitesimal of higher order;
The first-order differentiation is as follows:
X = ∂Gray/∂u,  Y = ∂Gray/∂v
where ∂Gray/∂u and ∂Gray/∂v denote the partial derivatives of the gray value function formed by the gray values of all pixel points;
the second order differential is:
A = X²⊗w,  B = Y²⊗w,  C = (X·Y)⊗w
where w is a Gaussian smoothing filter and ⊗ denotes convolution;
the approximated hessian matrix is:
M = [A  C; C  B]
where M is the approximate Hessian matrix.
Preferably, the obtaining the pose of the industrial camera when shooting by using a pose estimation algorithm based on the angular point position information and the internal parameters of the industrial camera specifically includes:
acquiring coordinates of at least three angular points and the size of an included angle formed by any two angular points and an industrial camera point;
Calculating the distance between each corner point and the industrial camera;
constructing a sphere model by taking each angular point as a sphere center and the distance between each angular point and an industrial camera as a radius;
And outputting the coordinates of the intersection point of at least three sphere models as the pose information of the industrial camera.
Furthermore, a visual intelligent navigation system of a transfer robot is provided, which is used for realizing the visual intelligent navigation method of the transfer robot, and the visual intelligent navigation system comprises an image information processing module, wherein the image information processing module is used for paving at least one navigation pattern containing small rectangles with different colors at equal intervals in the center of a running track of the transfer robot; the method comprises the steps that through an industrial camera arranged in the middle of the bottom of a carrying robot, navigation pattern information paved in the visual range of the carrying robot is collected in real time; inputting the navigation pattern information into a convolutional neural network for feature extraction processing to obtain a path feature map; generating at least one candidate frame with different target categories at the center point of the cell divided by the path feature diagram by using an anchor frame generation method; classifying the candidate frames of different target categories according to colors; extracting all angular point position information in the visual range of the transfer robot through an angular point detection algorithm;
the deflection direction calculation module is used for obtaining the pose of the industrial camera during shooting by utilizing a pose estimation algorithm based on the angular point position information and the internal parameters of the industrial camera; calculating a real-time deflection angle of the transfer robot by utilizing destination position information based on the pose of the industrial camera during shooting; and inputting the real-time deflection angle into a steering device of the transfer robot, and performing steering operation by the transfer robot according to the real-time deflection angle.
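As a purely illustrative aid (the patent does not spell out the formula), the deflection-angle step of this module can be pictured as the signed difference between the robot's current heading and the bearing to the destination. The following minimal Python sketch assumes a planar pose (x, y, yaw) recovered from the camera pose; the function name and sign convention are likewise assumptions.

```python
import math

def deflection_angle(robot_x, robot_y, robot_yaw, dest_x, dest_y):
    """Hypothetical real-time deflection angle: the signed difference between
    the bearing from the robot to the destination and the robot's current yaw.
    All angles are in radians; yaw is measured in the same world frame as the
    destination coordinates (an assumption, not taken from the patent)."""
    bearing = math.atan2(dest_y - robot_y, dest_x - robot_x)
    angle = bearing - robot_yaw
    # Wrap to (-pi, pi] so the steering device receives the smallest turn.
    return math.atan2(math.sin(angle), math.cos(angle))

# Example: robot at the origin facing +x, destination up and to the right.
print(math.degrees(deflection_angle(0.0, 0.0, 0.0, 1.0, 1.0)))  # 45.0
```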
Optionally, the image information processing module specifically includes:
the navigation pattern paving unit is used for paving at least one navigation pattern containing small rectangles with different colors at equal intervals in the center of the travelling track of the carrying robot;
The pattern information acquisition unit is used for acquiring the paved navigation pattern information in the visual range of the transfer robot in real time through an industrial camera arranged in the middle of the bottom of the transfer robot;
The characteristic information extraction unit is used for inputting the navigation pattern information into the convolutional neural network to perform characteristic extraction processing to obtain a path characteristic diagram;
The candidate anchor frame generation unit is used for generating candidate frames of at least one different target category at the center point of the cells divided by the path characteristic diagram by using an anchor frame generation method;
the candidate anchor frame classification unit is used for classifying the candidate frames of different target categories according to colors;
The corner information extraction unit is used for extracting all corner position information in the visual range of the transfer robot through a corner detection algorithm.
Optionally, the deflection direction calculating module specifically includes:
the pose calculating unit is used for obtaining the pose of the industrial camera during shooting by utilizing a pose estimating algorithm based on the angular point position information and the internal parameters of the industrial camera;
a deflection angle calculation unit for calculating a real-time deflection angle of the transfer robot using destination position information based on the pose of the industrial camera at the time of photographing;
And the steering operation unit is used for inputting the real-time deflection angle into a steering device of the transfer robot, and the transfer robot performs steering operation according to the real-time deflection angle.
Compared with the prior art, the invention has the beneficial effects that:
By providing the image information processing module and the deflection direction calculation module, labor is saved, the floor area of the logistics center is utilized effectively, the handling efficiency of express items is improved, and the service quality of the e-commerce logistics industry is raised; a visual navigation and positioning strategy for the drive structure is proposed and its target detection algorithm is optimized, so that the transfer robot can meet the requirements of stability and intelligent, accurate navigation and can therefore be applied in logistics centers to improve their express handling efficiency.
Drawings
FIG. 1 is a flow chart of a visual intelligent navigation method of a transfer robot;
FIG. 2 is a flow chart of a method for inputting navigation pattern information into a convolutional neural network for feature extraction processing according to the present invention;
FIG. 3 is a flow chart of a method of generating at least one candidate box of different target classes at the center point of the cells divided by the path feature map using the anchor box generation method of the present invention;
Fig. 4 is a flowchart of a method for extracting all angular point position information in the visual range of a handling robot by an angular point detection algorithm according to the present invention;
fig. 5 is a flowchart of a method for obtaining the pose of an industrial camera during shooting by using a pose estimation algorithm based on the angular point position information and the internal parameters of the industrial camera.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention. The preferred embodiments in the following description are by way of example only and other obvious variations will occur to those skilled in the art.
Referring to fig. 1, a visual intelligent navigation method for a transfer robot includes:
Laying at least one navigation pattern containing small rectangles of different colors at equal intervals along the center of the travelling track of the transfer robot;
Collecting, in real time, the information of the navigation patterns laid within the visual range of the transfer robot through an industrial camera mounted at the center of the bottom of the transfer robot;
Inputting the navigation pattern information into a convolutional neural network for feature extraction to obtain a path feature map;
Generating, by using an anchor frame generation method, candidate frames of at least one different target category at the center points of the cells into which the path feature map is divided;
Classifying the candidate frames of different target categories according to color;
Extracting the position information of all corner points within the visual range of the transfer robot through a corner detection algorithm;
Obtaining the pose of the industrial camera at the time of shooting by using a pose estimation algorithm, based on the corner position information and the internal parameters of the industrial camera;
Calculating the real-time deflection angle of the transfer robot by using the destination position information, based on the pose of the industrial camera at the time of shooting;
And inputting the real-time deflection angle into the steering device of the transfer robot, the transfer robot performing the steering operation according to the real-time deflection angle.
Referring to fig. 2, inputting navigation pattern information into a convolutional neural network for feature extraction processing, and obtaining a path feature map specifically includes:
acquiring red, green and blue components of each pixel point of the navigation pattern information;
setting a convolution kernel matrix;
outputting each 3×3 block of pixel points in the navigation pattern information as a unit cell, and combining the red, green and blue components of the unit cells into an input matrix;
Calculating a matrix product of the input matrix and the convolution kernel matrix;
arranging all the matrix products in order and outputting them as the path feature map.
Convolutional neural networks are deep learning models that have achieved great success in the field of computer vision. Their design is inspired by the biological visual system and aims to imitate the way human vision processes images; over the last few years they have made significant progress in image recognition, object detection, image generation and many other fields, and have become an important component of computer vision and deep learning research.
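To make the cell-wise convolution described above concrete, the following is a minimal sketch, assuming a single 3×3 kernel applied per colour channel to non-overlapping 3×3 cells; the kernel values, the stride of 3, and the element-wise multiply-and-sum reading of the "matrix product" are illustrative assumptions rather than details given by the patent.

```python
import numpy as np

def path_feature_map(image, kernel):
    """image: H x W x 3 array of red, green and blue components.
    kernel:   3 x 3 convolution kernel matrix.
    Each non-overlapping 3x3 unit cell is multiplied element-wise by the
    kernel and summed per channel; the results are arranged in order."""
    h, w, _ = image.shape
    out = np.zeros((h // 3, w // 3, 3))
    for i in range(0, h - h % 3, 3):
        for j in range(0, w - w % 3, 3):
            cell = image[i:i + 3, j:j + 3, :]   # one 3x3 unit cell
            for c in range(3):                  # R, G, B separately
                out[i // 3, j // 3, c] = np.sum(cell[:, :, c] * kernel)
    return out

example = path_feature_map(np.random.rand(9, 9, 3), np.ones((3, 3)) / 9.0)
print(example.shape)  # (3, 3, 3)
```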
Referring to fig. 3, generating, by using an anchor frame generation method, candidate frames of at least one different target class at cell center points of the path feature map division specifically includes:
acquiring red, green and blue components of each cell in the path feature map;
Substituting each cell red, green and blue components in the path feature diagram into a probability formula to obtain the superposition probability of each cell and each laid navigation pattern;
setting a coincidence probability threshold value;
judging whether the superposition probability of each cell and each laid navigation pattern is higher than a superposition probability threshold value, if so, outputting the cell as a candidate frame to be classified, and if not, not outputting;
The probability formula is as follows:
where P_ij is the overlap probability of the i-th cell and the j-th navigation pattern, N is the total number of cells, M is the total number of navigation patterns, R_i, G_i and B_i are the red, green and blue components of the i-th cell, and R_j, G_j and B_j are the red, green and blue components of the j-th navigation pattern.
An object detection algorithm typically samples a large number of regions in the input image, determines whether each region contains an object of interest, and adjusts the region boundaries so as to predict the true bounding box of the object more accurately; anchor boxes are the dominant sampling method.
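A minimal sketch of the candidate-frame selection step follows. Because the patent's probability formula is only referenced above, the colour_overlap_probability used here is a stand-in (one minus a normalised RGB distance) and the threshold value is an assumption; only the select-if-above-threshold logic mirrors the described step.

```python
import math

def colour_overlap_probability(cell_rgb, pattern_rgb):
    """Stand-in for the patent's probability formula: 1 minus the normalised
    Euclidean distance between the two RGB triples (assumption)."""
    dist = math.dist(cell_rgb, pattern_rgb)
    return 1.0 - dist / math.dist((0, 0, 0), (255, 255, 255))

def candidate_cells(cells, patterns, threshold=0.9):
    """cells: list of (cell_id, (R, G, B)); patterns: list of (R, G, B).
    A cell becomes a candidate frame if its overlap probability with any laid
    navigation pattern exceeds the coincidence probability threshold."""
    candidates = []
    for cell_id, rgb in cells:
        best = max(colour_overlap_probability(rgb, p) for p in patterns)
        if best > threshold:
            candidates.append((cell_id, best))
    return candidates

print(candidate_cells([(0, (250, 10, 10)), (1, (120, 120, 120))],
                      [(255, 0, 0), (0, 0, 255)]))
```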
Referring to fig. 4, extracting, by using a corner detection algorithm, all corner position information in a visual range of the handling robot specifically includes:
acquiring the human-eye sensitivity weighting coefficients for red, green and blue;
calculating the gray value of each pixel point in the visual range of the transfer robot according to a gray value formula, wherein the gray value range is [0, 255];
Setting a window movement offset;
Substituting the gray value of each pixel point in the visible range of the transfer robot and the window moving offset into a first gray value change degree function;
carrying out Taylor expansion on the first gray value change degree function at each pixel point to obtain a second gray value change degree function;
calculating an approximate hessian matrix based on the second gray value change degree function;
calculating two characteristic values of the approximate hessian matrix corresponding to each pixel point;
Judging whether the two characteristic values are larger than a set characteristic threshold value, if so, outputting the pixel point as a corner point, and if not, not outputting;
The gray value formula is as follows:
Gray(u, v) = w_R·R(u, v) + w_G·G(u, v) + w_B·B(u, v)
where Gray(u, v) is the gray value of the pixel point in the u-th row and v-th column, w_R, w_G and w_B are the human-eye sensitivity weighting coefficients for red, green and blue respectively, and R(u, v), G(u, v) and B(u, v) are the red, green and blue components of the pixel point in the u-th row and v-th column;
the first gray value variation degree function is as follows:
E(x, y) = Σ_{u=1..U} Σ_{v=1..V} [Gray(u + x, v + y) − Gray(u, v)]²
where E(x, y) is the gray value variation degree function, (x, y) is the window movement offset, U and V are the total numbers of rows and columns of pixel points, and u and v are the row and column indices of a pixel point;
the second gray value variation degree function is as follows:
E(x, y) = Σ_{u=1..U} Σ_{v=1..V} [x·X + y·Y + o(√(x² + y²))]² ≈ A·x² + 2C·x·y + B·y²
where X and Y are the first-order differentials of the gray value function, A, B and C are its second-order differential terms, and o(·) denotes an infinitesimal of higher order;
The first-order differentiation is as follows:
X = ∂Gray/∂u,  Y = ∂Gray/∂v
where ∂Gray/∂u and ∂Gray/∂v denote the partial derivatives of the gray value function formed by the gray values of all pixel points;
the second order differential is:
A = X²⊗w,  B = Y²⊗w,  C = (X·Y)⊗w
where w is a Gaussian smoothing filter and ⊗ denotes convolution;
the approximated hessian matrix is:
M = [A  C; C  B]
where M is the approximate Hessian matrix.
Corner points are usually the intersection points of image contours. For the same scene, corner points generally remain stable even when the viewing angle changes; the pixels in the neighbourhood of a corner show large changes in gradient direction or gradient magnitude, and after the image is differentiated, the extreme points usually correspond to the positions of corners.
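The corner-detection steps above follow the classical Harris-style procedure: convert to gray values, take first-order differentials, build the Gaussian-smoothed second-order terms A, B and C, and test the two eigenvalues of the approximate Hessian. The sketch below implements that reading with NumPy/SciPy; the weighting coefficients, the Gaussian sigma, the eigenvalue threshold and the [0, 1] image range (rather than the patent's [0, 255]) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_corners(rgb, weights=(0.299, 0.587, 0.114), sigma=1.0, thresh=1e-2):
    """rgb: H x W x 3 array with values in [0, 1]. Returns a boolean corner map.
    Gray value: weighted sum of R, G, B with human-eye sensitivity weights."""
    gray = rgb @ np.asarray(weights)
    # First-order differentials X, Y of the gray value function.
    Y, X = np.gradient(gray)                 # rows (v direction), columns (u)
    # Second-order terms A, B, C: squared/mixed gradients smoothed by a Gaussian.
    A = gaussian_filter(X * X, sigma)
    B = gaussian_filter(Y * Y, sigma)
    C = gaussian_filter(X * Y, sigma)
    # Eigenvalues of the 2x2 approximate Hessian [[A, C], [C, B]] per pixel.
    tr, det = A + B, A * B - C * C
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    # A pixel is a corner when both eigenvalues exceed the feature threshold.
    return (lam1 > thresh) & (lam2 > thresh)

corners = detect_corners(np.random.rand(32, 32, 3))
print(int(corners.sum()), "candidate corner pixels")
```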
Referring to fig. 5, obtaining the pose of the industrial camera during shooting by using a pose estimation algorithm based on the angular point position information and the internal parameters of the industrial camera specifically includes:
acquiring coordinates of at least three angular points and the size of an included angle formed by any two angular points and an industrial camera point;
Calculating the distance between each corner point and the industrial camera;
constructing a sphere model by taking each angular point as a sphere center and the distance between each angular point and an industrial camera as a radius;
And outputting the coordinates of the intersection point of at least three sphere models as the pose information of the industrial camera.
The position and attitude of an object are uniquely determined with respect to a reference coordinate system. When the attitude of the same object is described in different reference coordinate systems, its representation changes, so coordinate transformations are used to relate the descriptions of the object in the different frames; the coordinate transformations between coordinate systems include translation, rotation, and their composition (compound transformation).
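The sphere-model construction described above amounts to trilateration: once the distance from the camera to each of three corner points is known, the camera centre lies at an intersection of the three spheres. The sketch below shows only that intersection step (the recovery of the distances from the inter-corner angles and the camera intrinsics is omitted), and the corner coordinates in the example are made up.

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Intersection points of three spheres (centres p1..p3, radii r1..r3).
    Returns the two candidate camera positions; in practice one is rejected,
    e.g. by requiring the camera to lie above the floor plane (assumption)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z2 = r1**2 - x**2 - y**2
    if z2 < 0:
        raise ValueError("spheres do not intersect")
    z = np.sqrt(z2)
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# Example: three corner points on the floor, camera 0.3 m above (0.1, 0.1, 0).
cam = np.array([0.1, 0.1, 0.3])
pts = [np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.0, 0.0]), np.array([0.0, 0.2, 0.0])]
radii = [np.linalg.norm(cam - p) for p in pts]
print(trilaterate(*pts, *radii))  # one solution is approximately (0.1, 0.1, 0.3)
```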
Furthermore, based on the same inventive concept as the above-mentioned intelligent navigation method for the vision of the handling robot, the present disclosure further provides an intelligent navigation system for the vision of the handling robot, which includes:
The image information processing module is used for paving at least one navigation pattern containing small rectangles with different colors at equal intervals in the center of the travelling track of the carrying robot; the method comprises the steps that through an industrial camera arranged in the middle of the bottom of a carrying robot, navigation pattern information paved in the visual range of the carrying robot is collected in real time; inputting the navigation pattern information into a convolutional neural network for feature extraction processing to obtain a path feature map; generating at least one candidate frame with different target categories at the center point of the cell divided by the path feature diagram by using an anchor frame generation method; classifying the candidate frames of different target categories according to colors; extracting all angular point position information in the visual range of the transfer robot through an angular point detection algorithm;
the deflection direction calculation module is used for obtaining the pose of the industrial camera during shooting by utilizing a pose estimation algorithm based on the angular point position information and the internal parameters of the industrial camera; calculating a real-time deflection angle of the transfer robot by utilizing destination position information based on the pose of the industrial camera during shooting; and inputting the real-time deflection angle into a steering device of the transfer robot, and performing steering operation by the transfer robot according to the real-time deflection angle.
The image information processing module specifically comprises:
the navigation pattern paving unit is used for paving at least one navigation pattern containing small rectangles with different colors at equal intervals in the center of the travelling track of the carrying robot;
The pattern information acquisition unit is used for acquiring the paved navigation pattern information in the visual range of the transfer robot in real time through an industrial camera arranged in the middle of the bottom of the transfer robot;
The characteristic information extraction unit is used for inputting the navigation pattern information into the convolutional neural network to perform characteristic extraction processing to obtain a path characteristic diagram;
The candidate anchor frame generation unit is used for generating candidate frames of at least one different target category at the center point of the cells divided by the path characteristic diagram by using an anchor frame generation method;
the candidate anchor frame classification unit is used for classifying the candidate frames of different target categories according to colors;
The corner information extraction unit is used for extracting all corner position information in the visual range of the transfer robot through a corner detection algorithm.
The deflection direction calculation module specifically includes:
the pose calculating unit is used for obtaining the pose of the industrial camera during shooting by utilizing a pose estimating algorithm based on the angular point position information and the internal parameters of the industrial camera;
a deflection angle calculation unit for calculating a real-time deflection angle of the transfer robot using destination position information based on the pose of the industrial camera at the time of photographing;
And the steering operation unit is used for inputting the real-time deflection angle into a steering device of the transfer robot, and the transfer robot performs steering operation according to the real-time deflection angle.
Still further, the present disclosure also provides a computer readable storage medium, on which a computer readable program is stored, and when the computer readable program is called, the above-mentioned method for visual intelligent navigation of a handling robot is executed.
It is understood that the storage medium may be a magnetic medium, e.g., a floppy disk, hard disk or magnetic tape; an optical medium such as a DVD; or a semiconductor medium such as a solid state drive (Solid State Disk, SSD), etc.
In summary, the invention has the following advantages: by providing the image information processing module and the deflection direction calculation module, labor is saved, the floor area of the logistics center is utilized effectively, the handling efficiency of express items is improved, and the service quality of the e-commerce logistics industry is raised; a visual navigation and positioning strategy for the drive structure is proposed and its target detection algorithm is optimized, so that the transfer robot can meet the requirements of stability and intelligent, accurate navigation and can therefore be applied in logistics centers to improve their express handling efficiency.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made therein without departing from the spirit and scope of the invention, which is defined by the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (8)
1. A visual intelligent navigation method for a transfer robot, characterized by comprising the following steps:
Laying at least one navigation pattern containing small rectangles of different colors at equal intervals along the center of the travelling track of the transfer robot;
Collecting, in real time, the information of the navigation patterns laid within the visual range of the transfer robot through an industrial camera mounted at the center of the bottom of the transfer robot;
Inputting the navigation pattern information into a convolutional neural network for feature extraction to obtain a path feature map;
Generating, by using an anchor frame generation method, candidate frames of at least one different target category at the center points of the cells into which the path feature map is divided;
Classifying the candidate frames of different target categories according to color;
Extracting the position information of all corner points within the visual range of the transfer robot through a corner detection algorithm;
Obtaining the pose of the industrial camera at the time of shooting by using a pose estimation algorithm, based on the corner position information and the internal parameters of the industrial camera;
Calculating the real-time deflection angle of the transfer robot by using the destination position information, based on the pose of the industrial camera at the time of shooting;
And inputting the real-time deflection angle into the steering device of the transfer robot, the transfer robot performing the steering operation according to the real-time deflection angle.
2. The visual intelligent navigation method for a transfer robot according to claim 1, wherein inputting the navigation pattern information into a convolutional neural network for feature extraction to obtain the path feature map specifically comprises:
acquiring red, green and blue components of each pixel point of the navigation pattern information;
setting a convolution kernel matrix;
outputting each 3×3 block of pixel points in the navigation pattern information as a unit cell, and combining the red, green and blue components of the unit cells into an input matrix;
Calculating a matrix product of the input matrix and the convolution kernel matrix;
arranging all the matrix products in order and outputting them as the path feature map.
3. The visual intelligent navigation method for a transfer robot according to claim 2, wherein generating, by the anchor frame generation method, candidate frames of at least one different target category at the center points of the cells into which the path feature map is divided specifically comprises:
acquiring red, green and blue components of each cell in the path feature map;
Substituting each cell red, green and blue components in the path feature diagram into a probability formula to obtain the superposition probability of each cell and each laid navigation pattern;
setting a coincidence probability threshold value;
judging whether the superposition probability of each cell and each laid navigation pattern is higher than a superposition probability threshold value, if so, outputting the cell as a candidate frame to be classified, and if not, not outputting;
The probability formula is as follows:
where P_ij is the overlap probability of the i-th cell and the j-th navigation pattern, N is the total number of cells, M is the total number of navigation patterns, R_i, G_i and B_i are the red, green and blue components of the i-th cell, and R_j, G_j and B_j are the red, green and blue components of the j-th navigation pattern.
4. The visual intelligent navigation method for a transfer robot according to claim 3, wherein extracting the position information of all corner points within the visual range of the transfer robot by the corner detection algorithm specifically comprises:
acquiring the human-eye sensitivity weighting coefficients for red, green and blue;
calculating the gray value of each pixel point in the visual range of the transfer robot according to a gray value formula, wherein the gray value range is [0, 255];
Setting a window movement offset;
Substituting the gray value of each pixel point in the visible range of the transfer robot and the window moving offset into a first gray value change degree function;
carrying out Taylor expansion on the first gray value change degree function at each pixel point to obtain a second gray value change degree function;
calculating an approximate hessian matrix based on the second gray value change degree function;
calculating two characteristic values of the approximate hessian matrix corresponding to each pixel point;
Judging whether the two characteristic values are larger than a set characteristic threshold value, if so, outputting the pixel point as a corner point, and if not, not outputting;
The gray value formula is as follows:
Gray(u, v) = w_R·R(u, v) + w_G·G(u, v) + w_B·B(u, v)
where Gray(u, v) is the gray value of the pixel point in the u-th row and v-th column, w_R, w_G and w_B are the human-eye sensitivity weighting coefficients for red, green and blue respectively, and R(u, v), G(u, v) and B(u, v) are the red, green and blue components of the pixel point in the u-th row and v-th column;
the first gray value variation degree function is as follows:
E(x, y) = Σ_{u=1..U} Σ_{v=1..V} [Gray(u + x, v + y) − Gray(u, v)]²
where E(x, y) is the gray value variation degree function, (x, y) is the window movement offset, U and V are the total numbers of rows and columns of pixel points, and u and v are the row and column indices of a pixel point;
the second gray value variation degree function is as follows:
E(x, y) = Σ_{u=1..U} Σ_{v=1..V} [x·X + y·Y + o(√(x² + y²))]² ≈ A·x² + 2C·x·y + B·y²
where X and Y are the first-order differentials of the gray value function, A, B and C are its second-order differential terms, and o(·) denotes an infinitesimal of higher order;
The first-order differentiation is as follows:
X = ∂Gray/∂u,  Y = ∂Gray/∂v
where ∂Gray/∂u and ∂Gray/∂v denote the partial derivatives of the gray value function formed by the gray values of all pixel points;
the second order differential is:
A = X²⊗w,  B = Y²⊗w,  C = (X·Y)⊗w
where w is a Gaussian smoothing filter and ⊗ denotes convolution;
the approximated hessian matrix is:
M = [A  C; C  B]
where M is the approximate Hessian matrix.
5. The visual intelligent navigation method for a transfer robot according to claim 4, wherein obtaining the pose of the industrial camera at the time of shooting by the pose estimation algorithm based on the corner position information and the internal parameters of the industrial camera specifically comprises:
acquiring coordinates of at least three angular points and the size of an included angle formed by any two angular points and an industrial camera point;
Calculating the distance between each corner point and the industrial camera;
constructing a sphere model by taking each angular point as a sphere center and the distance between each angular point and an industrial camera as a radius;
And outputting the coordinates of the intersection point of at least three sphere models as the pose information of the industrial camera.
6. A transfer robot vision intelligent navigation system for implementing a transfer robot vision intelligent navigation method according to any one of claims 1 to 5, comprising:
The image information processing module is used for paving at least one navigation pattern containing small rectangles with different colors at equal intervals in the center of the travelling track of the carrying robot; the method comprises the steps that through an industrial camera arranged in the middle of the bottom of a carrying robot, navigation pattern information paved in the visual range of the carrying robot is collected in real time; inputting the navigation pattern information into a convolutional neural network for feature extraction processing to obtain a path feature map; generating at least one candidate frame with different target categories at the center point of the cell divided by the path feature diagram by using an anchor frame generation method; classifying the candidate frames of different target categories according to colors; extracting all angular point position information in the visual range of the transfer robot through an angular point detection algorithm;
the deflection direction calculation module is used for obtaining the pose of the industrial camera during shooting by utilizing a pose estimation algorithm based on the angular point position information and the internal parameters of the industrial camera; calculating a real-time deflection angle of the transfer robot by utilizing destination position information based on the pose of the industrial camera during shooting; and inputting the real-time deflection angle into a steering device of the transfer robot, and performing steering operation by the transfer robot according to the real-time deflection angle.
7. The visual intelligent navigation system for a transfer robot according to claim 6, wherein the image information processing module specifically comprises:
the navigation pattern paving unit is used for paving at least one navigation pattern containing small rectangles with different colors at equal intervals in the center of the travelling track of the carrying robot;
The pattern information acquisition unit is used for acquiring the paved navigation pattern information in the visual range of the transfer robot in real time through an industrial camera arranged in the middle of the bottom of the transfer robot;
The characteristic information extraction unit is used for inputting the navigation pattern information into the convolutional neural network to perform characteristic extraction processing to obtain a path characteristic diagram;
The candidate anchor frame generation unit is used for generating candidate frames of at least one different target category at the center point of the cells divided by the path characteristic diagram by using an anchor frame generation method;
the candidate anchor frame classification unit is used for classifying the candidate frames of different target categories according to colors;
The corner information extraction unit is used for extracting all corner position information in the visual range of the transfer robot through a corner detection algorithm.
8. The vision intelligent navigation system of a transfer robot according to claim 7, wherein the deflection direction calculation module specifically comprises:
the pose calculating unit is used for obtaining the pose of the industrial camera during shooting by utilizing a pose estimating algorithm based on the angular point position information and the internal parameters of the industrial camera;
a deflection angle calculation unit for calculating a real-time deflection angle of the transfer robot using destination position information based on the pose of the industrial camera at the time of photographing;
And the steering operation unit is used for inputting the real-time deflection angle into a steering device of the transfer robot, and the transfer robot performs steering operation according to the real-time deflection angle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411000598.4A CN118533182B (en) | 2024-07-25 | 2024-07-25 | Visual intelligent navigation method and system for transfer robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411000598.4A CN118533182B (en) | 2024-07-25 | 2024-07-25 | Visual intelligent navigation method and system for transfer robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118533182A (en) | 2024-08-23
CN118533182B CN118533182B (en) | 2024-09-17 |
Family
ID=92389907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411000598.4A (Active) | Visual intelligent navigation method and system for transfer robot | 2024-07-25 | 2024-07-25 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118533182B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1106972A (en) * | 1964-01-30 | 1968-03-20 | Mullard Ltd | Improvements in or relating to character recognition systems |
CN102789234A (en) * | 2012-08-14 | 2012-11-21 | 广东科学中心 | Robot navigation method and system based on color-coded identification |
WO2015024407A1 (en) * | 2013-08-19 | 2015-02-26 | State Grid Corporation of China (国家电网公司) | Power robot based binocular vision navigation system and method |
US20180172451A1 (en) * | 2015-08-14 | 2018-06-21 | Beijing Evolver Robotics Co., Ltd | Method and system for mobile robot to self-establish map indoors |
US20210302585A1 (en) * | 2018-08-17 | 2021-09-30 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Smart navigation method and system based on topological map |
CN111768453A (en) * | 2020-07-17 | 2020-10-13 | 哈尔滨工业大学 | Navigation and positioning device and method in spacecraft cluster ground simulation system |
JP2023072993A (en) * | 2021-11-15 | 2023-05-25 | トヨタ自動車株式会社 | Autonomous mobile robot system, docking path calculation program and docking path calculation method for the same |
CN114067210A (en) * | 2021-11-18 | 2022-02-18 | 南京工业职业技术大学 | Mobile robot intelligent grabbing method based on monocular vision guidance |
CN117707067A (en) * | 2023-12-12 | 2024-03-15 | 无锡旺高新能源科技有限公司 | A smart AGV car |
Non-Patent Citations (1)
Title |
---|
LI Shengjie; XU Shaohua: "Research on motion trajectory recognition and analysis of a shopping robot based on RGB-D visual positioning", Modern Manufacturing Technology and Equipment, No. 01, 15 January 2020 (2020-01-15) *
Also Published As
Publication number | Publication date |
---|---|
CN118533182B (en) | 2024-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108171748B (en) | Visual identification and positioning method for intelligent robot grabbing application | |
CN111665842B (en) | Indoor SLAM mapping method and system based on semantic information fusion | |
CN113538503B (en) | A solar panel defect detection method based on infrared images | |
CN110210398A (en) | A kind of three-dimensional point cloud semantic segmentation mask method | |
CN113096085A (en) | Container surface damage detection method based on two-stage convolutional neural network | |
CN108345912A (en) | Commodity rapid settlement system based on RGBD information and deep learning | |
CN108280397A (en) | Human body image hair detection method based on depth convolutional neural networks | |
CN115330734A (en) | Automatic robot repair welding system based on three-dimensional target detection and point cloud defect completion | |
CN112926694A (en) | Method for automatically identifying pigs in image based on improved neural network | |
CN110852186A (en) | Visual identification and picking sequence planning method for citrus on tree and simulation system thereof | |
CN106780564A (en) | A kind of anti-interference contour tracing method based on Model Prior | |
CN118377295A (en) | A logistics system path planning method and system based on visual recognition | |
CN115546202B (en) | Tray detection and positioning method for unmanned forklift | |
Zhang et al. | A fast detection and grasping method for mobile manipulator based on improved faster R-CNN | |
CN111897333A (en) | Robot walking path planning method | |
Zhang et al. | Damaged apple detection with a hybrid YOLOv3 algorithm | |
CN114548868B (en) | Machine vision-based warehouse stacking object inventory count method and device | |
CN118533182B (en) | Visual intelligent navigation method and system for transfer robot | |
CN119048487A (en) | Industrial product defect detection method based on improvement FASTER RCNN | |
Li et al. | A systematic strategy of pallet identification and picking based on deep learning techniques | |
CN114419096A (en) | Multi-target tracking method for aerial video based on trapezoid frame | |
CN112907666A (en) | Tray pose estimation method, system and device based on RGB-D | |
Zheng et al. | Robot target location based on the difference in monocular vision projection | |
Wang et al. | Design of a logistics warehouse robot positioning and recognition model based on improved EKF and calibration algorithm | |
Shi et al. | A fast workpiece detection method based on multi-feature fused SSD |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |