CN115035157B - AGV motion control method, device and medium based on visual tracking - Google Patents
- Publication number: CN115035157B (application CN202210608667.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/60—Electric or hybrid propulsion means for production processes
Abstract
The invention relates to the technical field of AGVs, and in particular to an AGV motion control method, device and medium based on visual tracking. The method comprises the following steps: collecting a video stream around the AGV while the AGV tracks a target; extracting the most recently acquired consecutive adjacent frame images from the video stream, and determining the motion speed of the tracking target at the next moment based on the positions of the tracking target in those consecutive adjacent frames; and controlling the AGV to travel at the motion speed of the tracking target at the next moment. By predicting the motion speed in advance, the invention realizes real-time following and meets the real-time requirements of practical applications.
Description
Technical Field
The invention relates to the technical field of AGVs, in particular to an AGV motion control method, device and medium based on visual tracking.
Background
Based on ROS (Robot Operating System), an AGV can simply and efficiently estimate the distance and angle of a target from acquired images and maintain a safe following distance, which has wide application value in office, patrol, reception, industrial logistics and other scenarios.
In the prior art, visual positioning and tracking is affected by external conditions, leading to problems such as low target positioning accuracy, large time delay and poor following performance.
Therefore, in order to meet the real-time requirement of the AGV in practical application, it is necessary to improve the existing visual tracking mode of the AGV so as to improve the real-time performance of the AGV.
Disclosure of Invention
The invention aims to provide an AGV motion control method, device and medium based on visual tracking, so as to solve one or more technical problems in the prior art, or at least to provide a beneficial alternative.
In order to achieve the above object, the present invention provides the following technical solutions:
In a first aspect, an embodiment of the present invention provides a method for controlling motion of an AGV based on visual tracking, where the method includes the following steps:
collecting video streams around the AGV in the process of tracking targets by the AGV;
Extracting the latest acquired continuous adjacent frame images from the video stream, and determining the motion speed of the tracking target at the next moment based on the corresponding position of the tracking target in the continuous adjacent frames;
and controlling the AGV to travel at the motion speed of the tracking target at the next moment.
Further, the extracting the last acquired continuous adjacent frame image from the video stream, and determining the motion speed of the tracking target at the next moment based on the position of the tracking target corresponding to the continuous adjacent frame, includes:
Extracting 3 frames of continuous images acquired latest from the video stream;
respectively carrying out connected domain identification on each frame of image, and determining a plurality of connected domains in each frame of image;
For each frame of image, generating a prediction frame of the image according to a trained target detection model, determining a connected domain matched with the prediction frame of the image, and determining n key points in an image area corresponding to the connected domain; wherein n is a natural number; the key points comprise corner points and edge points;
Determining the motion speed of the tracking target at the current moment according to the acquisition time interval of the previous two frames of images and the position deviation of n key points in the previous two frames of images;
Predicting the predicted position of the corresponding key point of the tracking target in the third frame image according to the motion speed of the tracking target;
Determining the speed deviation of the tracking target according to the predicted positions of the corresponding key points of the tracking target in the third frame image and their actual positions in the third frame image, and adjusting the motion speed of the tracking target at the current moment according to the speed deviation to obtain the motion speed of the tracking target at the next moment.
Further, the determining a connected domain matching with the prediction frame of the image, and determining n key points in the image area corresponding to the connected domain includes:
determining a minimum circumscribed frame of a connected domain matched with a predicted frame of the image;
determining whether the size deviation of the minimum circumscribed frame and the prediction frame is within a set size threshold range;
determining a connected domain corresponding to a minimum external frame with the size deviation within a set size threshold range;
Traversing each pixel point in the connected domain, determining a gradient value of each pixel point, taking n/2 pixel points with the largest gradient value as corner points, and taking the pixel point with the largest gradient value between adjacent corner points as an edge point to obtain n/2 edge points;
and forming a set by n/2 corner points and n/2 edge points to obtain n key points in total.
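The size screening in the steps above can be sketched as a simple per-side comparison between the minimum bounding box of a connected domain and the detector's prediction box. This is an illustrative sketch; the function name and the (width, height) tuple representation are assumptions, and the 4/6 and 7/6 bounds come from the embodiment described later in the specification.

```python
def size_within_threshold(bbox_wh, pred_wh, low=4/6, high=7/6):
    """Return True when every side length of the minimum bounding box
    (bbox_wh = (width, height)) lies within [low, high] times the
    corresponding side of the prediction box (pred_wh)."""
    return all(low * p <= b <= high * p for b, p in zip(bbox_wh, pred_wh))
```

A connected domain whose minimum bounding box fails this check is rejected as a match for the prediction box.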
Further, the determining the motion speed of the tracking target at the current moment according to the acquisition time interval of the first two frames of images and the position deviation of n key points in the first two frames of images includes:
respectively determining the minimum external frames of n key points in the first two frames of images;
if the area of the overlapping part of the minimum circumscribed frames of the n key points in the first two frames of images is larger than 0.6 times the area of the minimum circumscribed frame, calculating the vector difference of the centers of the 2 minimum circumscribed rectangles corresponding to the first two frames of images, and taking the vector difference as the displacement of the moving target;
And taking the ratio of the displacement of the moving target and the acquisition time interval of the first two frames of images as the moving speed of the tracking target at the current moment.
Further, determining a speed deviation of the tracking target according to a predicted position of the corresponding key point of the tracking target in the third frame image and a position of the corresponding key point of the tracking target in the third frame image, and adjusting a motion speed of the tracking target at a current moment according to the speed deviation to obtain a motion speed of the tracking target at a next moment, including:
Calculating displacement deviation of a predicted position of a corresponding key point of the tracking target in the third frame image and a position of the corresponding key point of the tracking target in the third frame image;
Taking the ratio of the displacement deviation to the acquisition time interval of the first frame image and the third frame image as a speed deviation;
and performing vector calculation on the speed deviation and the motion speed of the tracking target at the current moment to obtain the motion speed of the tracking target at the next moment.
In a second aspect, an embodiment of the present invention further provides an AGV motion control device based on visual tracking, where the device includes:
At least one processor;
At least one memory for storing at least one program;
The at least one program, when executed by the at least one processor, causes the at least one processor to implement the vision tracking based AGV motion control method as described in the first aspect.
In a third aspect, an embodiment of the present invention further provides a computer readable storage medium, where a vision tracking-based AGV motion control program is stored, where the vision tracking-based AGV motion control program, when executed by a processor, implements the steps of the vision tracking-based AGV motion control method according to the first aspect.
The beneficial effects of the invention are as follows: the invention discloses an AGV motion control method, device and medium based on visual tracking. Aiming at problems such as low target tracking accuracy, large time delay and poor following performance caused by external conditions during AGV motion, the method determines the motion speed of the tracking target at the next moment based on the positions of the tracking target in the most recently acquired consecutive adjacent frames. By predicting the motion speed in advance, real-time following is realized, meeting the real-time requirements of practical applications.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an AGV motion control method based on visual tracking in an embodiment of the invention;
FIG. 2 is a schematic diagram of an AGV motion control device based on visual tracking in accordance with an embodiment of the present invention.
Detailed Description
The conception, specific structure, and technical effects produced by the present application will be clearly and completely described below with reference to the embodiments and the drawings to fully understand the objects, aspects, and effects of the present application. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
Referring to fig. 1, fig. 1 is a schematic flow chart of an AGV motion control method based on visual tracking, which includes the following steps:
Step S100, collecting video streams around the AGV in the process of tracking targets by the AGV;
step S200, extracting the latest acquired continuous adjacent frame images from the video stream, and determining the motion speed of the tracking target at the next moment based on the position of the tracking target corresponding to the continuous adjacent frames;
and step S300, controlling the AGV to travel at the motion speed of the tracking target at the next moment.
Specifically, while the AGV tracks a target, the AGV is controlled to acquire images, and the image acquired at the current moment is obtained; the image area where the tracking target is located is determined in the image; by comparing consecutive adjacent frame images, the positions of the tracking target in those frames are calculated, so that the motion speed of the tracking target at the next moment (a vector comprising both magnitude and direction) is determined in real time; the AGV is then controlled to follow the tracking target at the calculated motion speed, keeping the target within the AGV's image acquisition range at all times. The invention tracks the target automatically, with high positioning accuracy and no need to manually remote-control the AGV to follow a moving target.
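The speed estimation and prediction underlying steps S100 to S300 can be sketched as follows. This is an illustrative sketch only: the function names and the representation of a target position as an (x, y) pixel coordinate are assumptions, not taken from the patent.

```python
def estimate_speed(p_prev, p_curr, dt):
    """Velocity vector (pixels per second) from the target's positions
    in two successive frames separated by dt seconds."""
    return ((p_curr[0] - p_prev[0]) / dt, (p_curr[1] - p_prev[1]) / dt)

def predict_position(p_curr, v, dt):
    """Position predicted dt seconds ahead at constant velocity v."""
    return (p_curr[0] + v[0] * dt, p_curr[1] + v[1] * dt)
```

For example, a target moving from (0, 0) to (2, 4) over 0.5 s has velocity (4, 8) pixels/s, and its position another 0.5 s later is predicted as (4, 8).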
As a further improvement of the foregoing embodiment, in step S200, the extracting, from the video stream, the last acquired continuous adjacent frame image, and determining, based on the position of the tracking target corresponding to the continuous adjacent frame, the movement speed of the tracking target at the next moment includes:
step S210, extracting 3 frames of continuous images which are acquired recently from the video stream;
in some embodiments, successive images of 3 frames are marked sequentially in chronological order by numbering, respectively denoted as a first frame image, a second frame image, and a third frame image.
Step S220, respectively carrying out connected domain identification on each frame of image, and determining a plurality of connected domains in each frame of image;
It should be noted that each frame of image is input into a trained object detection model for identification, the connected domain corresponding to the tracking target in the image is determined, and n key points are determined in the image region corresponding to the connected domain, where n is a natural number; the key points comprise corner points and edge points.
Step S230, for each frame of image, generating a prediction frame of the image according to a trained target detection model, determining a connected domain matched with the prediction frame of the image, and determining n key points in an image area corresponding to the connected domain; wherein n is a natural number; the key points comprise corner points and edge points;
it should be noted that, in this embodiment, the network models that may be used to train the target detection model include, but are not limited to, ResNet and VGG networks. The target detection model identifies the tracking target together with its position and scale, and the connected domain containing that position is taken as the connected domain corresponding to the tracking target.
It can be understood that the n key points should not be overly concentrated, but should be distributed as uniformly as possible so as to reflect the position and scale of the tracking target;
The n key points comprise m key points in the depth image and (n-m) key points in the color image synchronized with the depth image;
Step S240, determining the motion speed of the tracking target at the current moment according to the acquisition time interval of the previous two frames of images and the position deviation of n key points in the previous two frames of images;
step S250, predicting the predicted position of the corresponding key point of the tracking target in the third frame image according to the motion speed of the tracking target;
Specifically, multiplying the time interval of the second frame image and the third frame image by the movement speed of the tracking target to obtain the displacement of the tracking target; and adding the positions of the n key points in the second frame image and the displacement vectors of the tracking target to obtain the predicted positions of the corresponding key points of the tracking target in the third frame image.
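Step S250 can be sketched as a per-keypoint vector addition, as the paragraph above describes. The function name and list-of-tuples representation are illustrative assumptions.

```python
def predict_keypoints(kps_frame2, speed, dt23):
    """Predict each key point's position in the third frame by adding the
    displacement speed * dt23 (dt23 = interval between frames 2 and 3)
    to its position in the second frame."""
    dx, dy = speed[0] * dt23, speed[1] * dt23
    return [(x + dx, y + dy) for (x, y) in kps_frame2]
```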
Step S260, determining the speed deviation of the tracking target according to the predicted positions of the corresponding key points of the tracking target in the third frame image and their actual positions in the third frame image, and adjusting the motion speed of the tracking target at the current moment according to the speed deviation to obtain the motion speed of the tracking target at the next moment.
As a further improvement of the above embodiment, in step S230, the determining a connected domain matching the prediction frame of the image, determining n key points in an image area corresponding to the connected domain includes:
step S231, determining the minimum circumscribed frame of the connected domain matched with the predicted frame of the image;
Step S232, determining whether the size deviation of the minimum external frame and the prediction frame is within a set size threshold range;
specifically, the side length of each side of the minimum circumscribed frame cannot be smaller than 4/6 of the corresponding side of the prediction frame, nor larger than 7/6 of it;
Step S233, determining a connected domain corresponding to a smallest external frame with the size deviation within a set size threshold range;
step S234, traversing each pixel point in the connected domain, determining a gradient value of each pixel point, taking n/2 pixel points with the largest gradient value as corner points, and taking the pixel point with the largest gradient value between adjacent corner points as an edge point to obtain n/2 edge points;
and S235, forming a set of n/2 corner points and n/2 edge points to obtain n key points.
Specifically, the gradient value of each pixel point may be determined using a GCM, DOG or ANDD method. The value of n is determined according to the shape and size of the tracking target, preferably so that the contour of the tracking target can be accurately described; in one embodiment, n ranges from 10 to 100.
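The corner-point selection of step S234 can be sketched with a simple finite-difference gradient, assuming the connected domain is given as a 2-D grayscale array. This replaces the GCM/DOG/ANDD operators named above with NumPy's `numpy.gradient` purely for illustration; the function name and array representation are assumptions.

```python
import numpy as np

def select_corner_points(region, n):
    """Pick the n/2 pixels of `region` (a 2-D grayscale array for the
    connected domain) with the largest gradient magnitude as corner points.
    Returns a list of (row, col) index pairs."""
    gy, gx = np.gradient(region.astype(float))     # finite-difference gradients
    mag = np.hypot(gx, gy)                          # gradient magnitude per pixel
    flat = np.argsort(mag, axis=None)[::-1][: n // 2]  # n/2 largest magnitudes
    return [tuple(idx) for idx in np.array(np.unravel_index(flat, region.shape)).T]
```

The edge points of step S234 (largest-gradient pixels between adjacent corners) would then be selected along the contour between consecutive corner points.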
As a further improvement of the foregoing embodiment, in step S240, the determining the moving speed of the tracking target at the current moment according to the acquisition time interval of the first two frames of images and the position deviations of the n key points in the first two frames of images includes:
step S241, determining the minimum external frames of n key points in the previous two frames of images respectively;
Step S242, if the overlapping area of the smallest circumscribed frame of n key points in the first two frames of images is larger than 0.6 times of the smallest circumscribed frame, calculating the vector difference of the centers of the 2 smallest circumscribed rectangles corresponding to the first two frames of images, and using the vector difference as the displacement of the moving target;
Step S243, the ratio of the displacement of the moving object and the acquisition time interval of the previous two frames of images is used as the moving speed of the tracking object at the current moment.
It should be noted that the unit of the motion speed is pixels per sampling period, and the speed has two components, representing the horizontal and vertical directions in the image respectively.
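Steps S241 to S243 can be sketched as follows, with boxes represented as (x, y, width, height) tuples. The representation, the function name, and the interpretation of "0.6 times of the minimum circumscribed frame" as 0.6 times the area of the smaller of the two boxes are assumptions for illustration.

```python
def motion_speed(box1, box2, dt):
    """Speed of the tracked target from the centres of the minimum bounding
    boxes (x, y, w, h) of the key points in the first two frames, accepted
    only when the boxes overlap by more than 0.6 of the smaller box's area."""
    inter_w = min(box1[0] + box1[2], box2[0] + box2[2]) - max(box1[0], box2[0])
    inter_h = min(box1[1] + box1[3], box2[1] + box2[3]) - max(box1[1], box2[1])
    inter = max(inter_w, 0) * max(inter_h, 0)
    smaller = min(box1[2] * box1[3], box2[2] * box2[3])
    if inter <= 0.6 * smaller:
        return None  # overlap too small: frame-to-frame association rejected
    c1 = (box1[0] + box1[2] / 2, box1[1] + box1[3] / 2)
    c2 = (box2[0] + box2[2] / 2, box2[1] + box2[3] / 2)
    # displacement of the box centre divided by the acquisition interval
    return ((c2[0] - c1[0]) / dt, (c2[1] - c1[1]) / dt)
```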
As a further improvement of the foregoing embodiment, in step S260, determining a speed deviation of the tracking target according to a predicted position of the corresponding key point of the tracking target in the third frame image and a position of the corresponding key point of the tracking target in the third frame image, and adjusting a movement speed of the tracking target at a current time according to the speed deviation, to obtain a movement speed of the tracking target at a next time, where the step includes:
step S261, calculating displacement deviation of the predicted position of the corresponding key point of the tracking target in the third frame image and the position of the corresponding key point of the tracking target in the third frame image;
Step S262, taking the ratio of the displacement deviation to the acquisition time interval of the first frame image and the third frame image as a speed deviation;
In this step, the speed deviation is not determined from the displacement deviation and its directly corresponding time interval; instead, the interval is deliberately lengthened to the total duration of the 3 frames of images, which smooths changes in the motion speed and prevents subsequent abrupt movements of the AGV.
And step S263, vector calculation is carried out on the speed deviation and the movement speed of the tracking target at the current moment to obtain the movement speed of the tracking target at the next moment.
Specifically, the speed deviation is added with a motion speed vector of the tracking target at the current moment to obtain the motion speed of the tracking target at the next moment.
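Steps S261 to S263 can be sketched as follows. The patent does not specify how the per-keypoint displacement deviations are aggregated, so averaging them is an assumption, as are the function name and tuple representations.

```python
def corrected_speed(v_curr, pred_kps, obs_kps, dt13):
    """Adjust the current speed v_curr by the speed deviation: the mean
    displacement deviation between predicted and observed key points in the
    third frame, divided by the frame-1-to-frame-3 interval dt13."""
    n = len(pred_kps)
    dx = sum(o[0] - p[0] for p, o in zip(pred_kps, obs_kps)) / n  # mean deviation (assumed)
    dy = sum(o[1] - p[1] for p, o in zip(pred_kps, obs_kps)) / n
    # vector addition of current speed and speed deviation (step S263)
    return (v_curr[0] + dx / dt13, v_curr[1] + dy / dt13)
```

Using the longer interval dt13 rather than a single-frame interval damps the correction, matching the smoothing rationale given above.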
In correspondence with the method of fig. 1, and with reference to fig. 2, one embodiment of the present application also provides a vision tracking based AGV motion control device 10, the device 10 including a memory 11, a processor 12, and a computer program stored on the memory 11 and executable on the processor 12.
The processor 12 and the memory 11 may be connected by a bus or other means.
The non-transitory software programs and instructions required to implement the vision tracking based AGV motion control method of the above embodiments are stored in the memory 11 and when executed by the processor 12, perform the vision tracking based AGV motion control method of the above embodiments.
Corresponding to the method of fig. 1, an embodiment of the present application further provides a computer readable storage medium having stored thereon a vision tracking based AGV motion control program that when executed by a processor implements the steps of the vision tracking based AGV motion control method described in any of the embodiments above.
The content in the method embodiment is applicable to the embodiment of the device, and the functions specifically realized by the embodiment of the device are the same as those of the method embodiment, and the obtained beneficial effects are the same as those of the method embodiment.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the vision-tracking-based AGV motion control device and uses various interfaces and lines to connect the parts of the overall device.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the vision-tracking-based AGV motion control device by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other solid-state storage device.
While the present application has been described in considerable detail with respect to several embodiments, it is not intended to be limited to any such detail or to those embodiments; rather, the appended claims are to be interpreted broadly in light of the prior art so as to effectively cover the intended scope of the application. Furthermore, insubstantial modifications of the application not presently foreseen by the inventors may nonetheless represent equivalents thereof.
Claims (6)
1. An AGV motion control method based on visual tracking is characterized by comprising the following steps:
collecting video streams around the AGV in the process of tracking targets by the AGV;
Extracting the latest acquired continuous adjacent frame images from the video stream, and determining the motion speed of the tracking target at the next moment based on the corresponding position of the tracking target in the continuous adjacent frames;
Controlling the AGV to travel at the motion speed of the tracking target at the next moment;
extracting the latest acquired continuous adjacent frame images from the video stream, and determining the motion speed of the tracking target at the next moment based on the corresponding position of the tracking target in the continuous adjacent frames, wherein the method comprises the following steps:
Extracting 3 frames of continuous images acquired latest from the video stream;
respectively carrying out connected domain identification on each frame of image, and determining a plurality of connected domains in each frame of image;
For each frame of image, generating a prediction frame of the image according to a trained target detection model, determining a connected domain matched with the prediction frame of the image, and determining n key points in an image area corresponding to the connected domain; wherein n is a natural number; the key points comprise corner points and edge points;
Determining the motion speed of the tracking target at the current moment according to the acquisition time interval of the previous two frames of images and the position deviation of n key points in the previous two frames of images;
Predicting the predicted position of the corresponding key point of the tracking target in the third frame image according to the motion speed of the tracking target;
Determining the speed deviation of the tracking target according to the predicted position of the corresponding key point of the tracking target in the third frame image and the position of the corresponding key point of the tracking target in the third frame image, and adjusting the moving speed of the tracking target at the current moment according to the speed deviation to obtain the moving speed of the tracking target at the next moment.
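Outside the claim language, the estimate-predict-correct computation recited in claim 1 can be illustrated with a minimal NumPy sketch. The function name `next_velocity` and the representation of matched key points as `(n, 2)` coordinate arrays are assumptions made for illustration; they are not part of the disclosure.

```python
import numpy as np

def next_velocity(pts1, pts2, pts3, t1, t2, t3):
    """Sketch of the three-frame velocity estimate: pts1..pts3 are matched
    key-point arrays of shape (n, 2) from frames captured at t1 < t2 < t3."""
    # Current speed: mean key-point displacement over the first interval.
    v_current = (pts2 - pts1).mean(axis=0) / (t2 - t1)
    # Predicted key-point positions in the third frame at the current speed.
    pts3_pred = pts2 + v_current * (t3 - t2)
    # Speed deviation: prediction error over the frame-1 to frame-3 interval.
    v_dev = (pts3 - pts3_pred).mean(axis=0) / (t3 - t1)
    # Corrected speed for the next moment (vector sum, as in claim 4).
    return v_current + v_dev
```

Under constant motion the prediction error, and hence the speed deviation, is zero, so the corrected speed simply equals the current estimate.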
2. The AGV motion control method based on visual tracking according to claim 1, wherein determining the connected domain matching the prediction frame of the image and determining n key points in the image area corresponding to the connected domain comprises:
determining the minimum circumscribed frame of each connected domain matched against the prediction frame of the image;
determining whether the size deviation between the minimum circumscribed frame and the prediction frame is within a set size threshold range;
selecting the connected domain whose minimum circumscribed frame has a size deviation within the set size threshold range;
traversing each pixel point in the connected domain, determining the gradient value of each pixel point, taking the n/2 pixel points with the largest gradient values as corner points, and taking the pixel point with the largest gradient value between each pair of adjacent corner points as an edge point, obtaining n/2 edge points;
combining the n/2 corner points and the n/2 edge points into a set, obtaining n key points in total.
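For illustration only, the key-point selection of claim 2 might be sketched as below. The name `select_keypoints`, the boolean `mask` representing the connected domain, and the sampling of the straight segment between adjacent corners are assumptions, since the claim does not fix how "between adjacent corner points" is traversed.

```python
import numpy as np

def select_keypoints(gray, mask, n):
    """Sketch: pick n key points inside a connected domain (boolean mask)
    of a grayscale image, n even: n/2 corner points by largest gradient
    magnitude plus n/2 edge points between adjacent corners."""
    # Gradient magnitude at every pixel (np.gradient returns d/dy, d/dx).
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gx, gy)
    grad[~mask] = -np.inf              # consider only pixels inside the domain
    ys, xs = np.nonzero(mask)
    order = np.argsort(grad[ys, xs])[::-1]
    # Corner points: the n/2 in-domain pixels with the largest gradient.
    corners = list(zip(ys[order[:n // 2]].tolist(), xs[order[:n // 2]].tolist()))
    # Edge points: strongest-gradient pixel sampled on the straight segment
    # between each pair of adjacent corner points (an assumed stand-in for
    # the claimed "between adjacent corner points" rule).
    edges = []
    for (y0, x0), (y1, x1) in zip(corners, corners[1:] + corners[:1]):
        line = [(int(round(y0 + (y1 - y0) * t)), int(round(x0 + (x1 - x0) * t)))
                for t in np.linspace(0.25, 0.75, 5)]
        edges.append(max(line, key=lambda p: grad[p]))
    return corners + edges             # n key points in total
```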
3. The AGV motion control method based on visual tracking according to claim 1, wherein determining the movement speed of the tracking target at the current moment according to the acquisition time interval of the first two frame images and the position deviations of the n key points between the first two frame images comprises:
respectively determining the minimum circumscribed frames of the n key points in the first two frame images;
if the area of the overlapping part of the two minimum circumscribed frames is larger than 0.6 times the area of the minimum circumscribed frame, calculating the vector difference between the centers of the 2 minimum circumscribed rectangles of the first two frame images, and taking the vector difference as the displacement of the moving target;
taking the ratio of the displacement of the moving target to the acquisition time interval of the first two frame images as the movement speed of the tracking target at the current moment.
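A hedged sketch of the displacement step in claim 3 follows. The helper name `target_displacement`, the `(n, 2)` key-point arrays, and the reading of "0.6 times" as 0.6 of the smaller box's area are assumptions for illustration.

```python
import numpy as np

def target_displacement(pts_a, pts_b, overlap_thresh=0.6):
    """Sketch: displacement of the target between two frames as the vector
    difference of the centers of the minimum bounding boxes of the key
    points, accepted only if the boxes overlap sufficiently."""
    lo_a, hi_a = pts_a.min(axis=0), pts_a.max(axis=0)
    lo_b, hi_b = pts_b.min(axis=0), pts_b.max(axis=0)
    # Overlap area of the two axis-aligned minimum bounding boxes.
    inter = np.clip(np.minimum(hi_a, hi_b) - np.maximum(lo_a, lo_b), 0, None)
    inter_area = inter.prod()
    min_area = min((hi_a - lo_a).prod(), (hi_b - lo_b).prod())
    if inter_area <= overlap_thresh * min_area:
        return None                    # boxes barely overlap: match rejected
    # Vector difference of the two box centers = displacement.
    return (lo_b + hi_b) / 2 - (lo_a + hi_a) / 2
```

Dividing the returned displacement by the acquisition time interval of the two frames then gives the current movement speed, as the claim recites.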
4. The AGV motion control method based on visual tracking according to claim 1, wherein determining the speed deviation of the tracking target from the predicted positions of the corresponding key points in the third frame image and the actual positions of those key points in the third frame image, and adjusting the movement speed of the tracking target at the current moment by the speed deviation to obtain the movement speed at the next moment, comprises:
calculating the displacement deviation between the predicted positions of the corresponding key points of the tracking target in the third frame image and their actual positions in the third frame image;
taking the ratio of the displacement deviation to the acquisition time interval between the first frame image and the third frame image as the speed deviation;
performing vector calculation on the speed deviation and the movement speed of the tracking target at the current moment to obtain the movement speed of the tracking target at the next moment.
5. An AGV motion control device based on visual tracking, characterized in that the device comprises:
at least one processor; and
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the AGV motion control method based on visual tracking according to any one of claims 1 to 4.
6. A computer-readable storage medium, characterized in that an AGV motion control program based on visual tracking is stored on the computer-readable storage medium, and the program, when executed by a processor, implements the steps of the AGV motion control method based on visual tracking according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210608667.4A CN115035157B (en) | 2022-05-31 | 2022-05-31 | AGV motion control method, device and medium based on visual tracking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115035157A CN115035157A (en) | 2022-09-09 |
CN115035157B true CN115035157B (en) | 2024-07-12 |
Family
ID=83122851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210608667.4A Active CN115035157B (en) | 2022-05-31 | 2022-05-31 | AGV motion control method, device and medium based on visual tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115035157B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105578034A (en) * | 2015-12-10 | 2016-05-11 | 深圳市道通智能航空技术有限公司 | Control method, control device and system for carrying out tracking shooting for object |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108010067B (en) * | 2017-12-25 | 2018-12-07 | 北京航空航天大学 | A kind of visual target tracking method based on combination determination strategy |
CN110334635B (en) * | 2019-06-28 | 2021-08-31 | Oppo广东移动通信有限公司 | Subject tracking method, apparatus, electronic device, and computer-readable storage medium |
CN112233177B (en) * | 2020-10-10 | 2021-07-30 | 中国安全生产科学研究院 | A method and system for estimating position and attitude of unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||