CN119085695B - Obstacle map marking method and system combined with unmanned vehicle
- Publication number: CN119085695B (application CN202411578428.4A)
- Authority
- CN
- China
- Prior art keywords
- dynamic
- unmanned vehicle
- obstacle
- static
- target
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/343—Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3446—Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3667—Display of a road map
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01P—MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
- G01P3/00—Measuring linear or angular speed; Measuring differences of linear or angular speeds
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/66—Tracking systems using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a method and a system for marking an obstacle map in combination with an unmanned vehicle, relating to the technical field of map marking. The method comprises: determining the movement direction and movement path of a mobile unmanned vehicle according to its current position information and preset target position information; determining the position information of static obstacles through a laser radar scanning device mounted on the mobile unmanned vehicle; detecting dynamic obstacle targets from dynamic point cloud data through a pre-built target detection model; tracking the dynamic obstacles with a target tracking algorithm to obtain their motion information; planning in real time, through an improved artificial potential field method, a local path in the movement direction of the mobile unmanned vehicle that avoids both the static and the dynamic obstacles; and generating motion control instructions for the mobile unmanned vehicle according to the local path.
Description
Technical Field
The invention relates to map marking technology, and in particular to an obstacle map marking method and system combined with an unmanned vehicle.
Background
With the rapid development of artificial intelligence and autonomous driving technology, unmanned vehicles have become one of the most active directions of current research. Unmanned vehicles can navigate autonomously in complex road environments, so their safety and reliability have attracted wide attention. To realize autonomous decision-making and path planning, an unmanned vehicle must sense its surroundings in real time through on-board sensors, acquire information such as obstacle positions and speeds, and construct a dynamic environment map.
In the prior art, unmanned vehicles mainly sense the surrounding environment through laser radar, millimeter-wave radar, vision sensors and other devices to detect static and dynamic obstacles. These sensors scan the surrounding area at a certain frequency to acquire information such as the distance and angle of obstacles. However, owing to the characteristics and mounting positions of the different sensors, the raw data they acquire often contain noise, outliers and inconsistencies, which makes the data difficult to use directly for constructing an accurate environment map.
Disclosure of Invention
The embodiment of the invention provides a method and a system for marking an obstacle map by combining an unmanned vehicle, which can solve the problems in the prior art.
In a first aspect of an embodiment of the present invention,
Provided is an obstacle map marking method combined with an unmanned vehicle, comprising:
The method comprises the steps of determining the movement direction and the movement path of a mobile unmanned vehicle according to the current position information of the mobile unmanned vehicle and the preset target position information, and acquiring point cloud data reflecting static obstacle position information and dynamic obstacle instantaneous position information in an indoor environment by scanning the surrounding environment of the mobile unmanned vehicle in real time through a laser radar scanning device arranged on the mobile unmanned vehicle;
Clustering the point cloud data through a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles from the point cloud data, and determining the position information of the static obstacles according to the clustered static point cloud data;
Performing dynamic obstacle target detection on the dynamic point cloud data through a pre-constructed target detection model, performing target tracking on the dynamic obstacles according to a target tracking algorithm to obtain motion information of the dynamic obstacles, and, based on the position information of the static obstacles and the motion information of the dynamic obstacles, planning in real time, through an improved artificial potential field method, a local path in the movement direction of the mobile unmanned vehicle that avoids the static obstacles and the dynamic obstacles;
And generating a motion control instruction of the mobile unmanned vehicle according to the local path, wherein the motion control instruction comprises a speed control instruction and a steering control instruction of the mobile unmanned vehicle, and the mobile unmanned vehicle is controlled to move along the local path until it reaches the preset target position.
In an alternative embodiment of the present invention,
Performing dynamic obstacle target detection on the dynamic point cloud data through a pre-constructed target detection model comprises the following steps:
processing the dynamic point cloud data based on the pre-constructed target detection model, wherein the target detection model extracts multi-scale point cloud features of the dynamic point cloud data through a PointNet++ backbone network;
cascading a multi-scale interactive attention module after the PointNet++ backbone network, wherein the multi-scale interactive attention module comprises a plurality of parallel attention units that respectively process point cloud features of different scales; by mapping the point cloud features at each scale into a low-dimensional embedding space, the correlations of the point cloud features between different positions are computed to obtain attention maps, and a fusion module of the target detection model adaptively aggregates the attention maps of different scales to obtain multi-scale interactive features;
The target detection model comprises three parallel feature pyramid detection heads, respectively responsible for detecting targets of a first size, a second size and a third size; each feature pyramid detection head comprises a center-point prediction branch and a target-size prediction branch, the center-point prediction branch generates, through convolution and upsampling layers, a center-point heatmap whose resolution is consistent with the multi-scale interactive features, and the target-size prediction branch estimates, by regression, the size of the target three-dimensional bounding box corresponding to each center-point heatmap;
And post-processing the outputs of the three detection heads through a non-maximum suppression module to obtain a multi-scale target detection result, outputting at least one of the category, position, size and orientation information of the dynamic obstacles in the dynamic point cloud data.
In an alternative embodiment of the present invention,
Performing target tracking on the dynamic obstacles according to a target tracking algorithm to obtain the motion information of the dynamic obstacles comprises the following steps:
Inputting the multi-scale target detection result into a multi-target tracking framework based on Kalman filtering, wherein the multi-target tracking framework adds an acceleration component to the target state space and sets, in the target state space, a state transition matrix and a process noise covariance matrix that contain prior statistical information on target position, velocity and acceleration and are used to describe the nonlinear motion characteristics of the target;
Performing data association between the multi-scale target detection result at the current moment and the previously obtained tracks using a two-stage cascade matching strategy, wherein the two-stage cascade matching strategy comprises a preliminary screening based on appearance-feature cosine similarity and a fine screening based on the IoU geometric metric; a candidate matching-pair set is obtained through the preliminary screening, and the final matching result is obtained through the fine screening on that basis;
Based on the final matching result and the target's prior motion model, estimating the state parameters of each dynamic obstacle, including position, velocity and acceleration, through the prediction and update steps of an extended Kalman filter, and generating a continuous, smooth tracking track.
In an alternative embodiment of the present invention,
After obtaining the final matching result, the method further comprises performing motion compensation for the mobile unmanned vehicle:
fusing the measurement information of the on-board IMU and the wheel-speed sensor of the mobile unmanned vehicle to estimate the ego-vehicle motion state, recursively calculating the transformation between the vehicle coordinate system and the world coordinate system according to the ego-vehicle motion state, using this transformation to convert the prediction results and observation results of the mobile unmanned vehicle's tracker into the world coordinate system, and thereby realizing ego-motion compensation.
In an alternative embodiment of the present invention,
Based on the position information of the static obstacle and the motion information of the dynamic obstacle, planning a local path avoiding the static obstacle and the dynamic obstacle in real time in the motion direction of the mobile unmanned vehicle by an improved artificial potential field method comprises the following steps:
Establishing a static Gaussian potential field function for the position information of the static obstacles, and a dynamic Gaussian potential field function for the motion information of the dynamic obstacles, wherein the peak positions of both functions are determined by the obstacle positions; the peak of the dynamic Gaussian potential field function is offset along the velocity direction of the dynamic obstacle, the offset being proportional to the product of the dynamic obstacle's moving speed and a preset sliding time window;
weighting and superposing the dynamic Gaussian potential field functions of all dynamic obstacles, and combining them with the static Gaussian potential field function to obtain a composite Gaussian potential field function describing the environment of the mobile unmanned vehicle, wherein the weight coefficient of each obstacle in the composite Gaussian potential field function is adaptively adjusted according to the obstacle's type, size and distance from the mobile unmanned vehicle;
Determining the gradient of the composite Gaussian potential field function at the current position of the mobile unmanned vehicle, the gradient having a positive and a negative direction, and taking the negative gradient direction as the target movement direction of the mobile unmanned vehicle, which guides the vehicle away from static obstacles and dynamic obstacles;
And taking the negative gradient direction as the preferred direction, generating a local path using a parameterized curve-fitting algorithm combined with the kinematic constraints of the mobile unmanned vehicle and the obstacle boundary constraints.
In an alternative embodiment of the present invention,
The composite Gaussian potential field function is as follows:

$U(\mathbf{p}, t) = U_s(\mathbf{p}) + \sum_{i=1}^{n} w_i \, U_{d,i}(\mathbf{p}, t)$

The static Gaussian potential field function is as follows:

$U_s(\mathbf{p}) = A_s \exp\left(-\tfrac{1}{2}(\mathbf{p} - \mathbf{p}_s)^{\top} \Sigma_s^{-1} (\mathbf{p} - \mathbf{p}_s)\right)$

The dynamic Gaussian potential field function is as follows:

$U_{d,i}(\mathbf{p}, t) = A_d \exp\left(-\tfrac{1}{2}\big(\mathbf{p} - \mathbf{p}_{d,i}(t) - \mathbf{v}_{d,i}\,\Delta t\big)^{\top} \Sigma_d^{-1} \big(\mathbf{p} - \mathbf{p}_{d,i}(t) - \mathbf{v}_{d,i}\,\Delta t\big)\right)$

wherein U(p, t) represents the composite Gaussian potential field function at time t; U_s(p) represents the static Gaussian potential field function; U_{d,i}(p, t) represents the dynamic Gaussian potential field function of the i-th dynamic obstacle at time t; p represents the position vector of the mobile unmanned vehicle in the two-dimensional plane; p_s represents the position vector of a static obstacle; p_{d,i}(t) represents the position vector of the i-th dynamic obstacle and v_{d,i} its velocity, so that the peak offset v_{d,i} Δt is proportional to the product of the obstacle's moving speed and the preset sliding time window Δt; A_s and A_d represent the magnitudes of the static and dynamic Gaussian potential field functions, respectively; Σ_s and Σ_d represent their covariance matrices; w_i represents the weight coefficient corresponding to the i-th dynamic Gaussian potential field function; and n represents the number of dynamic obstacles.
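As a concrete illustration of these formulas, the following sketch evaluates the composite field numerically and returns the negative-gradient direction; the amplitudes, covariances and the sliding window are placeholder assumptions, and the gradient is taken numerically rather than in closed form:

```python
import numpy as np

def gaussian(p, center, A, cov):
    """Gaussian potential of amplitude A and covariance cov, peaked at center."""
    d = p - center
    return A * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def composite_field(p, static_obs, dynamic_obs, window=0.5):
    """U(p): static fields plus weighted dynamic fields with velocity-shifted peaks."""
    u = sum(gaussian(p, ps, A, cov) for ps, A, cov in static_obs)
    for pd, vd, A, cov, w in dynamic_obs:
        u += w * gaussian(p, pd + vd * window, A, cov)  # peak offset = v_d * window
    return u

def preferred_direction(p, static_obs, dynamic_obs, eps=1e-4):
    """Negative numerical gradient of the composite field: the target motion direction."""
    g = np.zeros(2)
    for k in range(2):
        e = np.zeros(2)
        e[k] = eps
        g[k] = (composite_field(p + e, static_obs, dynamic_obs)
                - composite_field(p - e, static_obs, dynamic_obs)) / (2 * eps)
    return -g

# Example: one static obstacle at (2, 0) and one dynamic obstacle at (0, 2)
# moving in +x; the returned vector points away from both.
static_obs = [(np.array([2.0, 0.0]), 1.0, np.eye(2))]
dynamic_obs = [(np.array([0.0, 2.0]), np.array([1.0, 0.0]), 1.0, np.eye(2), 1.0)]
direction = preferred_direction(np.zeros(2), static_obs, dynamic_obs)
```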
In a second aspect of an embodiment of the present application,
Providing an obstacle map marking system in combination with an unmanned vehicle, comprising:
the system comprises a first unit, a second unit and a third unit, wherein the first unit is used for determining the movement direction and movement path of the mobile unmanned vehicle according to the current position information of the mobile unmanned vehicle and the preset target position information, and for acquiring, through a laser radar scanning device mounted on the mobile unmanned vehicle, point cloud data reflecting the position information of static obstacles and the instantaneous position information of dynamic obstacles in the indoor environment;
The second unit is used for clustering the point cloud data through a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles from the point cloud data, and determining the position information of the static obstacles according to the clustered static point cloud data;
The third unit is used for planning, based on the position information of the static obstacles and the motion information of the dynamic obstacles, a local path that avoids the static and dynamic obstacles in real time in the movement direction of the mobile unmanned vehicle through an improved artificial potential field method, and for generating a motion control instruction of the mobile unmanned vehicle according to the local path, wherein the motion control instruction comprises a speed control instruction and a steering control instruction, and the mobile unmanned vehicle is controlled to move along the local path until it reaches the preset target position.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
A processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
By acquiring the current position of the mobile unmanned vehicle and the preset target position information, the movement direction and path of the mobile unmanned vehicle are determined, providing a basis for subsequent local path planning and supporting autonomous navigation and decision-making. The laser radar scanning device on the mobile unmanned vehicle scans the surrounding environment in real time to obtain comprehensive and accurate point cloud data reflecting the position information of static and dynamic obstacles in the indoor environment, providing data support for perceiving a complex dynamic environment. Processing the point cloud data with the cluster segmentation algorithm effectively distinguishes static obstacles from dynamic obstacles, extracts key environmental information, reduces data redundancy and computational complexity, and improves the efficiency and accuracy of environment perception.
For the dynamic obstacles, identification, positioning and tracking are realized through the target detection model and the target tracking algorithm, and the motion information of the dynamic obstacles, including position and speed, is obtained, providing an important basis for the obstacle avoidance decisions of the unmanned vehicle. Based on the position information of static obstacles and the motion information of dynamic obstacles, the local path is planned in real time through the improved artificial potential field method; the generated path effectively avoids both static and dynamic obstacles, ensuring the safety and continuity of the unmanned vehicle's motion and improving its autonomous navigation capability in complex dynamic environments.
According to the planned local path, speed and steering control instructions for the unmanned vehicle are generated, realizing motion control so that the unmanned vehicle travels stably and accurately along the planned path and finally reaches the preset target position, completing the autonomous navigation task. The overall technical scheme thus realizes a complete closed loop from environment perception through path planning to motion control; the modules are tightly integrated and mutually coordinated, improving the autonomous navigation performance and safety of the unmanned vehicle in complex dynamic environments.
Drawings
FIG. 1 is a flow chart of an obstacle map marking method combined with an unmanned vehicle according to an embodiment of the invention;
Fig. 2 is a schematic structural diagram of an obstacle map marking system combined with an unmanned vehicle according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 is a flow chart of a method for marking an obstacle map combined with an unmanned vehicle according to an embodiment of the invention, as shown in fig. 1, the method includes:
S101, determining a movement direction and a movement path of the mobile unmanned vehicle according to the current position information of the mobile unmanned vehicle and preset target position information, and scanning the surrounding environment of the mobile unmanned vehicle in real time through a laser radar scanning device mounted on it to obtain point cloud data reflecting the position information of static obstacles and the instantaneous position information of dynamic obstacles in the indoor environment;
Illustratively, the current position coordinates of the unmanned vehicle in the global coordinate system are obtained in real time through a GPS positioning module mounted on the mobile unmanned vehicle. The target position coordinates, typically given by the user or an upper-level decision module, are read from a preset navigation task. The current position coordinates and target position coordinates are passed to the subsequent path planning module as the start and end points of path planning.
According to the current position and the target position, the movement direction angle of the unmanned vehicle is calculated; taking the current position as the starting point and the target position as the end point, an initial movement path is planned in combination with the movement direction angle. This path may be a straight-line path, or a smooth curved path that takes the kinematic constraints of the unmanned vehicle into account. The planned movement path serves as a reference for subsequent local path planning and is adjusted and optimized in real time according to environment information.
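A minimal sketch of this step (the planar coordinate convention and the sampling step are assumptions, not specified by the patent):

```python
import math

def heading_angle(cur, goal):
    """Movement direction angle (rad) from the current position to the target,
    measured in the world frame."""
    return math.atan2(goal[1] - cur[1], goal[0] - cur[0])

def straight_path(cur, goal, step=0.5):
    """Initial straight-line reference path, sampled roughly every `step` meters."""
    dist = math.hypot(goal[0] - cur[0], goal[1] - cur[1])
    n = max(1, int(dist / step))
    return [(cur[0] + (goal[0] - cur[0]) * k / n,
             cur[1] + (goal[1] - cur[1]) * k / n) for k in range(n + 1)]
```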
One or more laser radar scanning devices, such as single-line laser radar, multi-line laser radar or three-dimensional laser radar, are arranged on the mobile unmanned vehicle. The lidar emits laser pulses at a frequency (e.g., 10 Hz) to the surrounding environment and receives the returned laser echo signals. And calculating the three-dimensional coordinates of each laser point according to the information such as the emission angle, the return time and the like of the laser pulse to form one frame of point cloud data. Each point in the point cloud data represents an observation point in the environment, reflecting the position information of the object at that point.
Denoising the acquired original point cloud data, removing abnormal points and outliers, and improving data quality. Common denoising methods include statistical filtering, radius filtering, and the like. And the point cloud data is downsampled, so that the data volume is reduced, and the processing efficiency is improved. Common downsampling methods are voxel grid downsampling, random downsampling, etc. And converting the point cloud data into a local coordinate system of the mobile unmanned vehicle, and establishing the local coordinate system by taking the current position of the unmanned vehicle as an origin and the movement direction as the positive direction of the x-axis. Under a local coordinate system, the point cloud data reflects the instantaneous position information of static obstacles and dynamic obstacles in the surrounding environment of the unmanned vehicle.
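One plausible implementation of this preprocessing uses the open-source Open3D library; the voxel size and outlier-removal parameters below are assumed values, not taken from the patent:

```python
import open3d as o3d

def preprocess_cloud(pcd: "o3d.geometry.PointCloud") -> "o3d.geometry.PointCloud":
    # Statistical outlier removal: drop points whose mean neighbor distance
    # deviates strongly from the cloud-wide average (assumed parameters).
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Voxel-grid downsampling: keep one representative point per 10 cm voxel.
    return pcd.voxel_down_sample(voxel_size=0.10)
```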
The point cloud data are divided into several independent point cloud clusters using algorithms such as normal-vector segmentation and Euclidean distance clustering, each cluster representing a potential obstacle. Features of each point cloud cluster are extracted, and its geometric center, bounding box, principal direction and other characteristics are calculated for subsequent obstacle recognition and tracking. Static and dynamic obstacles are distinguished according to the characteristics of the clusters: the position of a static obstacle is fixed, while the position of a dynamic obstacle changes over time. The clusters of static obstacles are merged to form a static environment map, while the clusters of dynamic obstacles are processed separately to extract their motion information.
The position information of the static obstacles is determined from the clustered point cloud data: for each static obstacle point cloud cluster, its geometric center coordinates are calculated as the position information of that static obstacle. The position information of all static obstacles is combined into a static environment map for subsequent path planning and obstacle avoidance decisions. The static environment map can be represented by different data structures, such as an occupancy grid map or a topological map, chosen according to the specific application requirements.
And (3) performing target detection on each dynamic obstacle point cloud cluster by using a pre-trained target detection model (such as SSD, YOLO and the like), and identifying different types of dynamic obstacles. And extracting the position, the size, the orientation and other information of each dynamic obstacle according to the target detection result, and constructing a motion state vector of the dynamic obstacle. The motion state of each dynamic obstacle is tracked and predicted by using a multi-target tracking algorithm (such as Kalman filtering, particle filtering and the like), and the motion trail of each dynamic obstacle in a future period of time is estimated. And taking the information such as the real-time position, speed, acceleration and the like of the dynamic obstacle as the motion information thereof, and using the information for subsequent path planning and obstacle avoidance decision.
In summary, the mobile unmanned vehicle acquires its current position information and the preset target position information in real time to determine the initial movement direction and path, acquires point cloud data of the surrounding environment through the laser radar scanning device, and segments, clusters and identifies the point cloud data to extract the position information of static obstacles and the motion information of dynamic obstacles, providing environment-perception data support for subsequent path planning and obstacle avoidance decisions.
S102, clustering the point cloud data through a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles from the point cloud data, and determining the position information of the static obstacles according to the clustered static point cloud data;
Illustratively, a suitable cluster segmentation algorithm, such as DBSCAN, K-means or spectral clustering, is selected according to the characteristics of the point cloud data and the clustering objective. Parameters of the clustering algorithm, such as the clustering distance threshold and the minimum cluster point count, are set; these parameters determine the granularity and effect of clustering. The preprocessed point cloud data are input into the clustering algorithm, which divides them into a number of independent point cloud clusters according to the distance or similarity measure between points. Each point cloud cluster represents a potential obstacle or object, and the points within a cluster have similar features or locations.
The clustered point cloud clusters are analyzed, and static obstacles are distinguished from dynamic obstacles according to the characteristics of the clusters: the cluster of a static obstacle remains fixed across consecutive frames of point cloud data, while the cluster of a dynamic obstacle changes position between frames. By tracking the position changes of a cluster across consecutive frames, its velocity and trajectory are calculated to judge whether it is a static or dynamic obstacle. The clusters belonging to static obstacles are extracted to form a static point cloud data set, and the clusters belonging to dynamic obstacles are extracted to form a dynamic point cloud data set, as illustrated in the sketch below.
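A compact sketch of the clustering and the static/dynamic separation, here using scikit-learn's DBSCAN; the eps, min_samples and speed-threshold values are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_frame(points, eps=0.5, min_samples=10):
    """Split one frame of (N, 3) lidar points into clusters; DBSCAN label -1 is noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k] for k in sorted(set(labels)) if k != -1]

def is_dynamic(center_prev, center_cur, dt, v_thresh=0.2):
    """A matched cluster counts as dynamic if its centroid moved faster than
    v_thresh m/s between two consecutive frames."""
    disp = np.linalg.norm(np.asarray(center_cur) - np.asarray(center_prev))
    return disp / dt > v_thresh
```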
For each static point cloud cluster, the geometric center coordinates are calculated as the position information of the static obstacle; the geometric center can be obtained by averaging the coordinates of all points in the cluster. The position information of all static obstacles is combined into a static environment map for subsequent path planning and obstacle avoidance decisions. The static environment map can be represented by different data structures, such as an occupancy grid map or a topological map, chosen according to the specific application requirements.
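The centroid computation and occupancy-grid marking could look like the following sketch; the grid resolution, extent and vehicle-centered origin are assumptions:

```python
import numpy as np

def mark_static_obstacles(static_clusters, resolution=0.1, size=200):
    """Compute cluster centroids and mark them in a vehicle-centered occupancy grid."""
    grid = np.zeros((size, size), dtype=np.uint8)
    centers = []
    for pts in static_clusters:            # pts: (N, 3) points of one static cluster
        c = pts.mean(axis=0)               # geometric center = coordinate average
        centers.append(c)
        i = int(c[0] / resolution) + size // 2
        j = int(c[1] / resolution) + size // 2
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = 1                 # occupied cell
    return centers, grid
```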
For each dynamic point cloud cluster, motion parameters such as speed and acceleration are estimated by tracking the position change of the dynamic point cloud cluster in continuous frames. The commonly used motion estimation methods include Kalman filtering, particle filtering and the like, and the states of the dynamic obstacles are recursively estimated and updated by establishing a motion model of the dynamic obstacles. And predicting the motion track and position distribution of the dynamic obstacle in a future period of time according to the estimated motion parameters, and providing a reference for subsequent path planning and obstacle avoidance decision. And taking the information such as real-time position, speed, acceleration and the like of the dynamic obstacle as the motion information thereof, and forming complete environment perception data together with a static environment map.
Through the above steps, the mobile unmanned vehicle clusters the point cloud data with the cluster segmentation algorithm, extracts static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles, determines the position information of the static obstacles from the static point cloud data to construct a static environment map, and performs motion estimation and prediction on the dynamic point cloud data to obtain the motion information of the dynamic obstacles.
In an alternative embodiment of the present invention,
Performing dynamic obstacle target detection on the dynamic point cloud data through a pre-constructed target detection model comprises the following steps:
processing the dynamic point cloud data based on the pre-constructed target detection model, wherein the target detection model extracts multi-scale point cloud features of the dynamic point cloud data through a PointNet++ backbone network;
cascading a multi-scale interactive attention module after the PointNet++ backbone network, wherein the multi-scale interactive attention module comprises a plurality of parallel attention units that respectively process point cloud features of different scales; by mapping the point cloud features at each scale into a low-dimensional embedding space, the correlations of the point cloud features between different positions are computed to obtain attention maps, and a fusion module of the target detection model adaptively aggregates the attention maps of different scales to obtain multi-scale interactive features;
The target detection model comprises three parallel feature pyramid detection heads, respectively responsible for detecting targets of a first size, a second size and a third size; each feature pyramid detection head comprises a center-point prediction branch and a target-size prediction branch, the center-point prediction branch generates, through convolution and upsampling layers, a center-point heatmap whose resolution is consistent with the multi-scale interactive features, and the target-size prediction branch estimates, by regression, the size of the target three-dimensional bounding box corresponding to each center-point heatmap;
And post-processing the outputs of the three detection heads through a non-maximum suppression module to obtain a multi-scale target detection result, outputting at least one of the category, position, size and orientation information of the dynamic obstacles in the dynamic point cloud data.
Illustratively, pointNet ++ is chosen as the backbone network of the object detection model for extracting multi-scale features of the dynamic point cloud data. The PointNet ++ network generates point cloud features of multiple scales by downsampling and feature extraction layer by layer, and the features of each scale represent point cloud representations of different levels of abstraction. The PointNet ++ network input is dynamic point cloud data, and the output is multi-scale point cloud characteristics, so that rich characteristic representations are provided for subsequent target detection tasks.
After PointNet ++ backbone network, multi-scale interactive attention modules are cascaded for enhancing interactions and fusion between different scale features. The multi-scale interactive attention module comprises a plurality of parallel attention units, and each attention unit corresponds to a scale point cloud characteristic. For each scale of point cloud features, the point cloud features are mapped to a low-dimensional embedding space through an attention unit, and feature embedding representation under the scale is obtained. And calculating the correlation between feature embedding of different positions, generating attention force diagram, and capturing the dependency relationship of the point cloud features in space. And through a fusion module of the target detection model, attention force diagrams of different scales are adaptively aggregated, multi-scale interactive characteristics are obtained, and effective fusion of the characteristics of different scales is realized.
The target detection model comprises three parallel characteristic pyramid detection heads which are respectively responsible for detecting targets with different sizes. The first detection head is responsible for detecting small-size targets, the second detection head is responsible for detecting medium-size targets, and the third detection head is responsible for detecting large-size targets. Each feature pyramid detection head includes two branches, a center point predicted branch and a target size predicted branch. The central point prediction branch gradually restores the multi-scale interactive characteristic to the same resolution as the input point cloud through a convolution and up-sampling layer, and generates a central point thermodynamic diagram corresponding to the multi-scale interactive characteristic. And estimating the size of the target three-dimensional boundary frame corresponding to each center point thermodynamic diagram position by a regression mode, wherein the target size prediction branch comprises length, width, height and other parameters.
And inputting the dynamic point cloud data into a constructed target detection model, and obtaining detection results of three scales through processing of PointNet ++ backbone network, a multi-scale interactive attention module and a feature pyramid detection head. For each scale of detection results, a set of candidate target detection frames is generated by predicting the bounding box sizes of the branch estimates from the center point thermodynamic diagram generated by the center point predicted branch and the target sizes. And combining the three scale candidate detection frames, and removing the repeated and redundant detection frames through a Non-maximum suppression (Non-Maximum Suppression, NMS) algorithm to obtain a final multi-scale target detection result. The target detection result includes information such as the type (e.g., pedestrian, vehicle, etc.), position (center coordinates of the three-dimensional bounding box), size (length, width, height of the three-dimensional bounding box), and orientation (direction of the three-dimensional bounding box) of the detected dynamic obstacle.
Through the steps, the mobile unmanned vehicle can process dynamic point cloud data by utilizing a pre-constructed target detection model, multi-scale point cloud features are extracted through a PointNet ++ backbone network, and interaction and fusion of the features are enhanced by using a multi-scale interactive attention module. Then, targets with different sizes are detected through three parallel characteristic pyramid detection heads, post-processing is carried out through non-maximum suppression, and finally information such as the category, the position, the size and the orientation of dynamic obstacles is output, so that important environment perception data are provided for obstacle avoidance and navigation of the unmanned vehicle.
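The merging and de-duplication step can be illustrated with a simplified NMS over bird's-eye-view boxes; this sketch uses axis-aligned boxes and ignores the box orientation handled by the full model:

```python
import numpy as np

def iou_bev(a, b):
    """IoU of two axis-aligned BEV boxes given as (cx, cy, length, width)."""
    ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax2, ay2 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx2, by2 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes from the merged candidates of the three heads,
    suppressing any box that overlaps a kept one by more than `thresh`."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        mask = np.array([iou_bev(boxes[i], boxes[j]) < thresh for j in rest], dtype=bool)
        order = rest[mask]
    return keep
```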
In an alternative embodiment of the present invention,
Performing target tracking on the dynamic obstacles according to a target tracking algorithm to obtain the motion information of the dynamic obstacles comprises the following steps:
Inputting the multi-scale target detection result into a multi-target tracking framework based on Kalman filtering, wherein the multi-target tracking framework adds an acceleration component to the target state space and sets, in the target state space, a state transition matrix and a process noise covariance matrix that contain prior statistical information on target position, velocity and acceleration and are used to describe the nonlinear motion characteristics of the target;
Performing data association between the multi-scale target detection result at the current moment and the previously obtained tracks using a two-stage cascade matching strategy, wherein the two-stage cascade matching strategy comprises a preliminary screening based on appearance-feature cosine similarity and a fine screening based on the IoU geometric metric; a candidate matching-pair set is obtained through the preliminary screening, and the final matching result is obtained through the fine screening on that basis;
Based on the final matching result and the target's prior motion model, estimating the state parameters of each dynamic obstacle, including position, velocity and acceleration, through the prediction and update steps of an extended Kalman filter, and generating a continuous, smooth tracking track.
Illustratively, a multi-target tracking framework based on Kalman filtering is constructed. The target state space is designed to include the position, velocity and acceleration components of the target, describing the motion state of each dynamic obstacle. A state transition matrix is defined to describe how the target state evolves between consecutive time steps, taking the dynamic model of target position, velocity and acceleration into account. A process noise covariance matrix is set to represent the uncertainty and random disturbances in the target's motion, and is used to tune the prediction and update steps of the Kalman filter. Prior statistical information on target position, velocity and acceleration is introduced into the state transition matrix and the process noise covariance matrix to better describe the nonlinear motion characteristics of the target.
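For the constant-acceleration state space described above, the state transition matrix and a process noise covariance might be built as follows; the jerk-noise density is an assumed stand-in for the patent's prior statistics:

```python
import numpy as np

def ca_kalman_matrices(dt, sigma_jerk=1.0):
    """State transition F and process noise Q for a constant-acceleration model.

    Per-axis state is [p, v, a]; sigma_jerk is an assumed jerk-noise density.
    """
    F1 = np.array([[1.0, dt, 0.5 * dt * dt],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
    G1 = np.array([[dt ** 3 / 6.0], [dt ** 2 / 2.0], [dt]])
    Q1 = (G1 @ G1.T) * sigma_jerk ** 2
    # Stack two planar axes into a 6-D state [px, vx, ax, py, vy, ay].
    return np.kron(np.eye(2), F1), np.kron(np.eye(2), Q1)
```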
And acquiring a multi-scale target detection result at the current moment, wherein the multi-scale target detection result comprises information such as the category, the position, the size, the orientation and the like of the target. And acquiring pre-existing tracking tracks from the tracking result of the previous moment, wherein each tracking track corresponds to one dynamic obstacle. And adopting a two-stage cascade matching strategy to carry out data correlation on the detection result at the current moment and the pre-existing tracking track.
In the first stage (preliminary screening), the appearance-feature cosine similarity between each detection result and each tracking track is calculated, and the candidate matching pairs whose similarity exceeds a threshold are kept to form the candidate matching-pair set.
In the second stage (fine screening), for each pair in the candidate matching-pair set, the intersection-over-union (IoU) between the detection result and the tracking track is calculated, and the pair with the largest IoU is selected as the final matching result.
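A sketch of this two-stage cascade matching; the cosine and IoU thresholds are assumptions, and the fine screening is solved here as a Hungarian assignment over the prescreened pairs:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_xyxy(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def cascade_match(det_feats, trk_feats, det_boxes, trk_boxes,
                  cos_thresh=0.6, iou_thresh=0.3):
    """Stage 1: cosine-similarity prescreen on appearance embeddings.
    Stage 2: IoU fine screen, solved as an assignment problem."""
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    t = trk_feats / np.linalg.norm(trk_feats, axis=1, keepdims=True)
    candidate = (d @ t.T) > cos_thresh          # candidate matching-pair set
    BIG = 1e6                                   # cost for non-candidate pairs
    cost = np.full(candidate.shape, BIG)
    for i, j in zip(*np.nonzero(candidate)):
        iou = iou_xyxy(det_boxes[i], trk_boxes[j])
        if iou > iou_thresh:
            cost[i, j] = 1.0 - iou
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] < BIG]
```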
For each finally matched detection-track pair, the target state is estimated using an extended Kalman filter: the target state at the current moment is predicted from the state at the previous moment and the state transition matrix, yielding the predicted position, velocity and acceleration.
In the update step of the extended Kalman filter, the detection result at the current moment is taken as the observation; combined with the predicted target state and the observation noise covariance matrix, the target state is updated through the Kalman gain to obtain the updated position, velocity and acceleration estimates.
Unmatched detection results are treated as new targets: a new tracking track is initialized for each, with an initial state estimate. For unmatched tracking tracks, the target may be occluded or may have left the field of view, and a preset track-management strategy determines whether to keep or terminate the track. From the updated state estimates, a tracking track is generated for each dynamic obstacle, giving a continuous and smooth representation of target motion. For each tracked dynamic obstacle, the position, velocity and acceleration information in its state estimate is extracted, encoded and serialized in a suitable data format (e.g., JSON or XML) for transmission and processing, and sent to the other modules of the unmanned vehicle, such as path planning and decision control, to support obstacle avoidance and navigation decisions.
Through the above steps, the mobile unmanned vehicle tracks the multi-scale target detection results and estimates their states with the Kalman-filtering-based multi-target tracking framework. By introducing target position, velocity and acceleration information and adding prior statistics to the state transition matrix and process noise covariance matrix, the nonlinear motion characteristics of targets are better described. Data association uses the two-stage cascade matching strategy, combining appearance-feature similarity and geometric metrics to reliably match detections to tracks. Finally, the state parameters of each dynamic obstacle are estimated through the prediction and update steps of the extended Kalman filter, generating continuous, smooth tracking tracks whose motion information is provided to the other modules of the unmanned vehicle to support obstacle avoidance and navigation.
In an alternative embodiment of the present invention,
After the final matching result is obtained, the method further comprises performing ego-motion compensation for the mobile unmanned vehicle:
fusing the measurement information of the on-board IMU and the wheel speed sensor of the mobile unmanned vehicle to estimate the ego-vehicle motion state; recursively calculating, from this motion state, the transformation between the vehicle coordinate system and the world coordinate system; and using this transformation to convert the tracker's predictions and observations into a unified world coordinate system, thereby compensating for the vehicle's own motion.
Illustratively, the measurement information of the on-board inertial measurement unit (IMU) of the mobile unmanned vehicle is acquired, including angular velocity and acceleration data, together with the measurement information of the wheel speed sensor, including wheel rotation speed and vehicle speed data. The IMU and wheel-speed measurements are time-synchronized and aligned to a common coordinate frame to ensure consistency in time and space. A Kalman filter or another fusion algorithm then fuses the two sources to estimate the ego-vehicle motion state of the mobile unmanned vehicle, including position, velocity and attitude.
The vehicle coordinate system is defined, usually with the geometric center of the mobile unmanned vehicle as the origin, the forward direction as the x-axis, the left side as the y-axis and the vertical upward direction as the z-axis. A world coordinate system is defined, typically a fixed global reference frame such as an East-North-Up (ENU) frame or a global geodetic frame (e.g. WGS 84). From the estimated ego-motion state, the transformation of the vehicle coordinate system relative to the world coordinate system, comprising a translation vector and a rotation matrix, is calculated recursively. This can be done by dead reckoning, updating the vehicle-to-world transformation from the increments of the ego position, velocity and attitude.
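A simplified planar dead-reckoning recursion, together with the vehicle-to-world point transform used in the next two paragraphs, could be sketched as follows; the fused forward speed and yaw rate are assumed to come from the IMU/wheel-speed fusion described above.

```python
import numpy as np

def dead_reckon(x, y, yaw, v, yaw_rate, dt):
    # Recursively propagate the ego pose in the world frame from the fused
    # forward speed (wheel odometry) and yaw rate (IMU gyroscope).
    x_new = x + v * np.cos(yaw) * dt
    y_new = y + v * np.sin(yaw) * dt
    return x_new, y_new, yaw + yaw_rate * dt

def vehicle_to_world(pose, p_vehicle):
    # Map a point from the vehicle frame to the world frame: p_w = R p_v + t.
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return R @ np.asarray(p_vehicle, dtype=float) + np.array([x, y])
```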
The predictions of the multi-target tracker in the vehicle coordinate system are obtained, including the predicted position, velocity and acceleration of each target. Using the vehicle-to-world transformation, these predictions are mapped into the world frame by multiplying each position vector by the rotation matrix and adding the translation vector. The transformed predictions represent the predicted state of each target in the world coordinate system, including position, velocity and acceleration.
The multi-scale target detection result at the current moment serves as the tracker's observation, containing the detected position, size and orientation of each target. The same vehicle-to-world transformation maps these observations from the vehicle frame into the world frame, again by rotating each position vector and adding the translation vector. The transformed observations represent the detected state of each target in the world coordinate system.
The transformed predictions and observations are fed into the multi-target tracker, which performs target tracking and state estimation in the world coordinate system. A data association algorithm (such as the Hungarian algorithm or JPDA) matches the observations to the existing tracking tracks. For successfully matched pairs, a Kalman filter or another estimator updates the target state, yielding position, velocity and acceleration in the world frame. An unmatched tracking track is treated as a disappeared or occluded target, and the retention policy decides whether to keep or terminate it.
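As one possible realization of the association step, the Hungarian algorithm is available through `scipy.optimize.linear_sum_assignment`; the cost definition and the gating threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost):
    # cost[i, j]: matching cost between track i and detection j
    # (e.g. 1 - IoU, or a Mahalanobis / appearance distance).
    rows, cols = linear_sum_assignment(cost)
    # Gate out assignments whose cost is too high to be a plausible match.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 0.7]
```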
Through the above steps, the mobile unmanned vehicle compensates for its own motion, processing the tracker's predictions and observations in a unified world coordinate system. First, the ego-motion state is estimated by fusing the IMU and wheel-speed measurements, and the vehicle-to-world transformation is calculated recursively. Then this transformation maps the tracker's predictions and observations from the vehicle coordinate system to the world coordinate system. Finally, target tracking and state estimation in the world frame yield the position, velocity and acceleration of each dynamic obstacle, providing more accurate and consistent perception information for the decision control of the unmanned vehicle.
S103, based on the position information of the static obstacles and the motion information of the dynamic obstacles, planning in real time, through an improved artificial potential field method, a local path that avoids the static obstacles and the dynamic obstacles in the motion direction of the mobile unmanned vehicle, and generating motion control instructions of the mobile unmanned vehicle according to the local path, wherein the motion control instructions comprise speed control instructions and steering control instructions, and controlling the mobile unmanned vehicle to move along the local path until the preset target position is reached.
In an alternative embodiment of the present invention,
Based on the position information of the static obstacle and the motion information of the dynamic obstacle, planning a local path avoiding the static obstacle and the dynamic obstacle in real time in the motion direction of the mobile unmanned vehicle by an improved artificial potential field method comprises the following steps:
Establishing a static Gaussian potential field function for the position information of the static obstacles and a dynamic Gaussian potential field function for the motion information of the dynamic obstacles, wherein the peak position of each function is determined by the position of the corresponding obstacle, the peak of the dynamic Gaussian potential field function is offset along the velocity direction of the dynamic obstacle, and the magnitude of this offset is proportional to the product of the obstacle's speed and a preset sliding time window;
superposing the dynamic Gaussian potential field functions of all dynamic obstacles with weights and combining them with the static Gaussian potential field function to obtain a composite Gaussian potential field function describing the environment of the mobile unmanned vehicle, the weight coefficient of each obstacle in the composite function being adaptively adjusted according to its type, its size and its distance from the mobile unmanned vehicle;
Determining the gradient of the composite Gaussian potential field function at the current position of the mobile unmanned vehicle, the gradient having a positive direction and a negative direction, and taking the negative gradient direction as the target motion direction, which guides the mobile unmanned vehicle away from both static and dynamic obstacles;
Taking the negative gradient direction as the preferred direction, and generating a local path with a parameterized curve fitting algorithm, subject to the kinematic constraints of the mobile unmanned vehicle and the obstacle boundary constraints.
Illustratively, the position information of static obstacles in the environment is acquired, including the coordinates, size and shape of each obstacle. For each static obstacle a Gaussian potential field function is constructed whose peak position coincides with the obstacle position. The function may take the form of a two- or three-dimensional Gaussian distribution, where the peak marks the obstacle center and the standard deviation or covariance matrix is set according to the obstacle's size and shape. The static Gaussian potential field thus represents the spatial distribution and influence range of the static obstacles; the potential is highest at the peak, marking the region least favorable for the mobile unmanned vehicle to traverse.
The motion information of dynamic obstacles in the environment, including position, velocity and acceleration, is acquired. For each dynamic obstacle a Gaussian potential field function is constructed whose peak initially coincides with the obstacle position and is then offset along the obstacle's velocity direction, with offset = speed × preset sliding time window. The dynamic Gaussian potential field has the same form as the static one, but its peak moves with the obstacle, indicating the predicted position and influence range of the dynamic obstacle over a short future horizon.
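The static and dynamic Gaussian fields can be sketched as follows; for brevity an isotropic standard deviation stands in for the full covariance matrix, and the amplitudes and the sliding time window `T` are assumed values.

```python
import numpy as np

def static_potential(p, p_s, A_s=1.0, sigma=1.0):
    # Peak coincides with the static obstacle position; sigma follows obstacle size.
    d = np.asarray(p, dtype=float) - np.asarray(p_s, dtype=float)
    return A_s * np.exp(-(d @ d) / (2.0 * sigma**2))

def dynamic_potential(p, p_d, v_d, T=0.5, A_d=1.0, sigma=1.0):
    # Peak shifted along the obstacle velocity: offset = velocity x time window T.
    peak = np.asarray(p_d, dtype=float) + np.asarray(v_d, dtype=float) * T
    d = np.asarray(p, dtype=float) - peak
    return A_d * np.exp(-(d @ d) / (2.0 * sigma**2))
```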
The dynamic Gaussian potential field functions of all dynamic obstacles are superposed with weights into a combined dynamic field, which is added to the static Gaussian potential field function to yield the composite Gaussian potential field function describing the environment of the mobile unmanned vehicle. Each obstacle is assigned a weight coefficient that is adaptively adjusted according to its type (pedestrian, vehicle, etc.), its size and its distance to the mobile unmanned vehicle: type and size determine the potential threat an obstacle poses, and the closer the obstacle, the larger its weight and its influence on the vehicle.
The gradient of the composite Gaussian potential field function is calculated at the current position of the mobile unmanned vehicle. The positive gradient points in the direction of fastest increase of the potential, i.e. toward obstacles; the negative gradient points in the direction of fastest decrease, i.e. away from them. The negative gradient direction is therefore taken as the target motion direction, steering the mobile unmanned vehicle away from static and dynamic obstacles and toward safe regions of low potential.
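Building on the potential functions sketched above, the composite field and its negative-gradient direction can be approximated numerically; the central-difference step `eps` is an assumed value.

```python
import numpy as np

def composite_potential(p, statics, dynamics, weights, T=0.5):
    # statics: list of (p_s, sigma); dynamics: list of (p_d, v_d, sigma).
    u = sum(static_potential(p, p_s, sigma=s) for p_s, s in statics)
    u += sum(w * dynamic_potential(p, p_d, v_d, T=T, sigma=s)
             for w, (p_d, v_d, s) in zip(weights, dynamics))
    return u

def target_direction(p, statics, dynamics, weights, eps=1e-4):
    # Negative gradient of the composite field via central finite differences.
    p = np.asarray(p, dtype=float)
    grad = np.zeros(2)
    for k in range(2):
        dp = np.zeros(2)
        dp[k] = eps
        grad[k] = (composite_potential(p + dp, statics, dynamics, weights)
                   - composite_potential(p - dp, statics, dynamics, weights)) / (2 * eps)
    g = -grad
    return g / (np.linalg.norm(g) + 1e-12)  # unit vector pointing away from obstacles
```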
With the negative gradient direction of the composite field as the preferred direction, a local path is generated under the kinematic constraints of the mobile unmanned vehicle (maximum speed, acceleration, steering angle, etc.) and the obstacle boundary constraints (e.g. keeping a safe distance from each obstacle). The local path may be generated with a parameterized curve fitting algorithm, such as a spline or Bezier curve, by optimizing the control point locations so that the curve satisfies the constraints while following the preferred direction as closely as possible. The generated local path provides a smooth, safe obstacle-avoidance trajectory, guiding the mobile unmanned vehicle between static and dynamic obstacles while adjusting its heading and speed in time.
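As a minimal illustration of the curve-fitting step, the sketch below samples a quadratic Bezier curve aligned with the preferred direction; a complete implementation would additionally optimize the control points against the kinematic and obstacle boundary constraints, and the step length here is an assumed value.

```python
import numpy as np

def bezier_path(p0, direction, step=2.0, n_points=20):
    # Quadratic Bezier: start at the vehicle position p0 and pull the path
    # along the preferred (negative-gradient) direction toward a local goal.
    p0 = np.asarray(p0, dtype=float)
    d = np.asarray(direction, dtype=float)
    p1 = p0 + step * d          # control point biases the curve's heading
    p2 = p0 + 2.0 * step * d    # local goal point
    ts = np.linspace(0.0, 1.0, n_points)
    return np.array([(1 - t)**2 * p0 + 2 * (1 - t) * t * p1 + t**2 * p2
                     for t in ts])
```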
Through the above steps, the mobile unmanned vehicle can use the improved artificial potential field method to plan, in real time, a local path that avoids static and dynamic obstacles. First, a static Gaussian potential field function is established from the position information of the static obstacles and a dynamic Gaussian potential field function from the motion information of the dynamic obstacles, and their weighted superposition yields the composite Gaussian potential field function. Then the negative gradient direction of the composite field at the current position of the mobile unmanned vehicle is determined and used as the target motion direction. Finally, with the negative gradient direction as the preferred direction and under the kinematic and obstacle boundary constraints, a smooth and safe local path is generated with a parameterized curve fitting algorithm, guiding the mobile unmanned vehicle to avoid obstacles and navigate in a complex environment.
In an alternative embodiment of the present invention,
The composite Gaussian potential field function is as follows:

$$U(p,t)=U_s(p)+\sum_{i=1}^{n} w_i\,U_{d,i}(p,t)$$

The static Gaussian potential field function is as follows:

$$U_s(p)=A_s\exp\!\left(-\frac{1}{2}\,(p-p_s)^{\top}\Sigma_s^{-1}(p-p_s)\right)$$

The dynamic Gaussian potential field function is as follows:

$$U_d(p,t)=A_d\exp\!\left(-\frac{1}{2}\,\bigl(p-p_d(t)-v_d T\bigr)^{\top}\Sigma_d^{-1}\bigl(p-p_d(t)-v_d T\bigr)\right)$$

wherein $U(p,t)$ represents the composite Gaussian potential field function at time $t$, $U_s(p)$ represents the static Gaussian potential field function, $U_d(p,t)$ represents the dynamic Gaussian potential field function at time $t$, $p$ represents the position vector of the mobile unmanned vehicle in the two-dimensional plane, $p_s$ represents the position vector of a static obstacle, $p_d(t)$ represents the position vector of a dynamic obstacle, $v_d$ represents the velocity vector of the dynamic obstacle and $T$ the preset sliding time window (so that $v_d T$ is the peak offset described above), $A_s$ and $A_d$ represent the amplitudes of the static and dynamic Gaussian potential field functions respectively, $\Sigma_s$ and $\Sigma_d$ represent their covariance matrices respectively, $w_i$ represents the weight coefficient corresponding to the $i$-th dynamic Gaussian potential field function, and $n$ represents the number of dynamic obstacles.
Fig. 2 is a schematic structural diagram of an obstacle map marking system combined with an unmanned vehicle according to an embodiment of the present invention, as shown in fig. 2, the system includes:
the system comprises a first unit, a second unit and a third unit, wherein the first unit is used for determining the motion direction and the motion path of the mobile unmanned vehicle according to the current position information of the mobile unmanned vehicle and the preset target position information;
the second unit is used for clustering the point cloud data through a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles, and determining the position information of the static obstacles according to the clustered static point cloud data;
the third unit is used for planning in real time, based on the position information of the static obstacles and the motion information of the dynamic obstacles, a local path avoiding the static obstacles and the dynamic obstacles in the motion direction of the mobile unmanned vehicle through an improved artificial potential field method, and for generating motion control instructions of the mobile unmanned vehicle according to the local path, the motion control instructions comprising speed control instructions and steering control instructions, so as to control the mobile unmanned vehicle to move along the local path until the preset target position is reached.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
A processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present invention.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411578428.4A CN119085695B (en) | 2024-11-07 | 2024-11-07 | Obstacle map marking method and system combined with unmanned vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119085695A CN119085695A (en) | 2024-12-06 |
CN119085695B true CN119085695B (en) | 2025-01-21 |
Family
ID=93664173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411578428.4A (Active) CN119085695B | Obstacle map marking method and system combined with unmanned vehicle | 2024-11-07 | 2024-11-07
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119085695B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119556303A (en) * | 2025-01-21 | 2025-03-04 | 北京飞安航空科技有限公司 | Road obstacle perception system for unmanned vehicles based on lidar |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115683145A (en) * | 2022-11-03 | 2023-02-03 | 北京踏歌智行科技有限公司 | Automatic driving safety obstacle avoidance method based on track prediction |
CN115861968A (en) * | 2022-12-13 | 2023-03-28 | 徐工集团工程机械股份有限公司建设机械分公司 | Dynamic obstacle removing method based on real-time point cloud data |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10665115B2 (en) * | 2016-01-05 | 2020-05-26 | California Institute Of Technology | Controlling unmanned aerial vehicles to avoid obstacle collision |
WO2023183633A1 (en) * | 2022-03-25 | 2023-09-28 | Innovusion, Inc. | Methods and systems fault detection in lidar |
CN118429377A (en) * | 2024-05-07 | 2024-08-02 | 广州文远知行科技有限公司 | Thermodynamic diagram-based vehicle track determining method and device, vehicle and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |