
CN119085695B - Obstacle map marking method and system combined with unmanned vehicle - Google Patents

Obstacle map marking method and system combined with unmanned vehicle

Info

Publication number
CN119085695B
CN119085695B
Authority
CN
China
Prior art keywords
dynamic
unmanned vehicle
obstacle
static
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411578428.4A
Other languages
Chinese (zh)
Other versions
CN119085695A
Inventor
Yu Fei (于飞)
Liu Yan (刘言)
Wang Pingfan (汪平凡)
Yang Wanpeng (杨万鹏)
Chen Jiaxin (陈嘉鑫)
Guo Qinqin (郭琴琴)
Guo Yuanming (郭元明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xingwang Marine Electric Technology Co., Ltd.
Original Assignee
Beijing Xingwang Marine Electric Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xingwang Marine Electric Technology Co., Ltd.
Priority to CN202411578428.4A
Publication of CN119085695A
Application granted
Publication of CN119085695B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C 21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/36 Input/output arrangements for on-board computers
    • G01C 21/3667 Display of a road map
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01P MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 3/00 Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/42 Simultaneous measurement of distance and other co-ordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/66 Tracking systems using electromagnetic waves other than radio waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V 10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Electromagnetism (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a method and a system for marking an obstacle map in combination with an unmanned vehicle, relating to the technical field of map marking. The method comprises: determining the movement direction and movement path of a mobile unmanned vehicle according to its current position information and preset target position information; determining the position information of static obstacles by means of a laser radar scanning device installed on the mobile unmanned vehicle; detecting dynamic obstacle targets from dynamic point cloud data through a pre-constructed target detection model; tracking the dynamic obstacles with a target tracking algorithm to obtain their motion information; planning, in real time and in the movement direction of the mobile unmanned vehicle, a local path that avoids both static and dynamic obstacles through an improved artificial potential field method; and generating motion control instructions for the mobile unmanned vehicle according to the local path.

Description

Obstacle map marking method and system combined with unmanned vehicle
Technical Field
The invention relates to map marking technology, and in particular to an obstacle map marking method and system combined with an unmanned vehicle.
Background
With the rapid development of artificial intelligence and autonomous driving technology, unmanned vehicles have become one of the hot research directions. Unmanned vehicles can navigate autonomously in complex road environments, and their safety and reliability have attracted wide attention. To realize autonomous decision-making and path planning, an unmanned vehicle must sense its surroundings in real time through on-board sensors, acquire information such as the position and speed of obstacles, and construct a dynamic environment map.
In the prior art, unmanned vehicles mainly sense the surrounding environment through lidar, millimeter-wave radar, vision sensors and other equipment to detect static and dynamic obstacles. These sensors scan the surrounding area at a certain frequency to acquire information such as the distance and angle of obstacles. However, owing to the nature and mounting location of the different sensors, the raw data they acquire often contain noise, outliers and inconsistencies, which makes the data difficult to use directly for constructing an accurate environment map.
Disclosure of Invention
The embodiment of the invention provides a method and a system for marking an obstacle map by combining an unmanned vehicle, which can solve the problems in the prior art.
In a first aspect of an embodiment of the present invention,
Provided is an obstacle map marking method combined with an unmanned vehicle, comprising:
The method comprises: determining the movement direction and the movement path of a mobile unmanned vehicle according to the current position information of the mobile unmanned vehicle and preset target position information; and scanning the surrounding environment of the mobile unmanned vehicle in real time through a laser radar scanning device arranged on the mobile unmanned vehicle, acquiring point cloud data reflecting the position information of static obstacles and the instantaneous position information of dynamic obstacles in an indoor environment;
Clustering the point cloud data through a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles from the point cloud data, and determining the position information of the static obstacles according to the clustered static point cloud data;
Performing dynamic obstacle target detection on the dynamic point cloud data through a pre-constructed target detection model, and performing target tracking on the dynamic obstacles according to a target tracking algorithm to obtain the motion information of the dynamic obstacles; based on the position information of the static obstacles and the motion information of the dynamic obstacles, planning in real time, through an improved artificial potential field method, a local path avoiding the static obstacles and the dynamic obstacles in the movement direction of the mobile unmanned vehicle;
And generating a motion control instruction of the mobile unmanned vehicle according to the local path, wherein the motion control instruction comprises a speed control instruction and a steering control instruction of the mobile unmanned vehicle, and the mobile unmanned vehicle is controlled to move along the local path until it reaches the preset target position.
In an alternative embodiment of the present invention,
The dynamic obstacle target detection performed on the dynamic point cloud data through a pre-constructed target detection model comprises the following steps:
processing the dynamic point cloud data with the pre-constructed target detection model, wherein the model extracts multi-scale point cloud features of the dynamic point cloud data through a PointNet++ backbone network;
cascading a multi-scale interactive attention module after the PointNet++ backbone network, wherein the module comprises several parallel attention units that respectively process point cloud features of different scales; the point cloud features at each scale are mapped into a low-dimensional embedding space, the correlations of the features between different positions are computed to obtain attention maps, and a fusion module of the target detection model adaptively aggregates the attention maps of the different scales to obtain multi-scale interactive features;
the target detection model comprises three parallel feature pyramid detection heads, responsible for detecting first-size, second-size and third-size targets respectively; each detection head comprises a center-point prediction branch and a target-size prediction branch, the center-point prediction branch generates, through convolution and up-sampling layers, a center-point heatmap whose resolution is consistent with the multi-scale interactive features, and the target-size prediction branch estimates, by regression, the size of the three-dimensional target bounding box corresponding to each center-point heatmap;
post-processing the outputs of the three detection heads through a non-maximum suppression module to obtain a multi-scale target detection result, and outputting at least one of the category, position, size and orientation of each dynamic obstacle in the dynamic point cloud data.
In an alternative embodiment of the present invention,
Performing target tracking on the dynamic obstacles according to a target tracking algorithm to obtain the motion information of the dynamic obstacles comprises the following steps:
inputting the multi-scale target detection result into a multi-target tracking framework based on Kalman filtering, wherein the framework adds an acceleration component to the target state space and sets, in the target state space, a state transition matrix and a process-noise covariance matrix; these matrices contain prior statistical information about the target's position, velocity and acceleration and are used to describe the target's nonlinear motion characteristics;
performing data association between the multi-scale target detection result at the current moment and previously obtained tracks using a two-stage cascade matching strategy, which comprises preliminary screening based on the cosine similarity of appearance features and fine screening based on the IoU geometric measure; the preliminary screening yields a candidate matching-pair set, from which the fine screening produces the final matching result;
based on the final matching result and the prior target motion model, estimating the state parameters of each dynamic obstacle, including position, velocity and acceleration, through the prediction and update steps of an extended Kalman filter, and generating a continuous, smooth tracking track.
In an alternative embodiment of the present invention,
After the final matching result is obtained, the method further comprises performing ego-motion compensation for the mobile unmanned vehicle:
fusing the measurements of the vehicle-mounted IMU and the wheel-speed odometer of the mobile unmanned vehicle to estimate the ego-vehicle motion state; recursively computing, from the ego-vehicle motion state, the transformation between the vehicle coordinate system and the world coordinate system; and applying this transformation to the prediction and observation results of the tracker of the mobile unmanned vehicle so that both are unified in the world coordinate system, thereby realizing ego-motion compensation.
In an alternative embodiment of the present invention,
Based on the position information of the static obstacles and the motion information of the dynamic obstacles, planning in real time, through an improved artificial potential field method, a local path avoiding the static obstacles and the dynamic obstacles in the movement direction of the mobile unmanned vehicle comprises the following steps:
establishing a static Gaussian potential field function for the position information of the static obstacles and a dynamic Gaussian potential field function for the motion information of the dynamic obstacles, wherein the peak positions of both functions are determined by the obstacle positions; the peak of the dynamic Gaussian potential field function is offset along the velocity direction of the dynamic obstacle, the offset being proportional to the product of the obstacle's moving speed and a preset sliding time window;
weighting and superposing the dynamic Gaussian potential field functions of all dynamic obstacles and combining them with the static Gaussian potential field function to obtain a composite Gaussian potential field function describing the environment of the mobile unmanned vehicle, wherein the weight coefficient of each obstacle in the composite function is adaptively adjusted according to the obstacle's type, size and distance from the vehicle;
determining the gradient of the composite Gaussian potential field function at the current position of the mobile unmanned vehicle, and taking the negative gradient direction as the target movement direction of the vehicle, which guides it away from both static and dynamic obstacles;
taking the negative gradient direction as the preferred direction, generating the local path with a parameterized curve-fitting algorithm, subject to the kinematic constraints of the mobile unmanned vehicle and the obstacle boundary constraints.
In an alternative embodiment of the present invention,
The composite Gaussian potential field function is as follows:

U(p, t) = U_s(p) + \sum_{i=1}^{n} w_i \, U_{d,i}(p, t)

the static Gaussian potential field function is as follows:

U_s(p) = A_s \exp\left( -\tfrac{1}{2} (p - p_s)^{\top} \Sigma_s^{-1} (p - p_s) \right)

the dynamic Gaussian potential field function is as follows:

U_d(p, t) = A_d \exp\left( -\tfrac{1}{2} \left( p - p_d(t) - v_d \tau \right)^{\top} \Sigma_d^{-1} \left( p - p_d(t) - v_d \tau \right) \right)

wherein U(p, t) denotes the composite Gaussian potential field function at time t; U_s(p) the static Gaussian potential field function; U_d(p, t) the dynamic Gaussian potential field function at time t; p the position vector of the mobile unmanned vehicle in the two-dimensional plane; p_s the position vector of a static obstacle; p_d(t) the position vector of a dynamic obstacle; v_d the velocity of the dynamic obstacle and τ the preset sliding time window, whose product gives the peak offset; A_s and A_d the amplitudes of the static and dynamic Gaussian potential field functions respectively; Σ_s and Σ_d their covariance matrices respectively; w_i the weight coefficient of the i-th dynamic Gaussian potential field function; and n the number of dynamic obstacles.
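To make the formulas concrete, the following sketch evaluates the composite potential and its negative gradient numerically; the dict-based obstacle containers, the default sliding window tau and the finite-difference gradient are illustrative choices, not taken from the patent.

```python
import numpy as np

def composite_potential(p, statics, dynamics, tau=1.0):
    """Evaluate the composite Gaussian potential U at 2-D position p,
    following the reconstructed formulas above: one Gaussian per static
    obstacle plus weighted dynamic Gaussians whose peaks are shifted
    by v_d * tau along the obstacle's velocity."""
    def gauss(mu, A, Sigma):
        d = p - mu
        return A * np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d)

    U = sum(gauss(s["pos"], s["A"], s["Sigma"]) for s in statics)
    for ob in dynamics:
        peak = ob["pos"] + ob["vel"] * tau          # velocity-shifted peak
        U += ob["w"] * gauss(peak, ob["A"], ob["Sigma"])
    return U

def descent_direction(p, statics, dynamics, h=1e-4):
    """Numerical negative gradient of U: the preferred motion direction
    that steers the vehicle away from both obstacle types."""
    g = np.array([(composite_potential(p + h * e, statics, dynamics)
                   - composite_potential(p - h * e, statics, dynamics)) / (2 * h)
                  for e in np.eye(2)])
    n = np.linalg.norm(g)
    return -g / n if n > 1e-9 else np.zeros(2)
```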
In a second aspect of an embodiment of the present application,
Providing an obstacle map marking system in combination with an unmanned vehicle, comprising:
the system comprises a first unit, a second unit and a third unit, wherein the first unit is used for determining the movement direction and movement path of the mobile unmanned vehicle according to its current position information and preset target position information, and for acquiring, by scanning the surrounding environment of the mobile unmanned vehicle in real time through a laser radar scanning device arranged on the vehicle, point cloud data reflecting the position information of static obstacles and the instantaneous position information of dynamic obstacles in an indoor environment;
The second unit is used for clustering the point cloud data through a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles from the point cloud data, and determining the position information of the static obstacles according to the clustered static point cloud data;
The third unit is used for performing dynamic obstacle target detection on the dynamic point cloud data through the pre-constructed target detection model and tracking the dynamic obstacles according to the target tracking algorithm to obtain their motion information; and, based on the position information of the static obstacles and the motion information of the dynamic obstacles, for planning in real time, through the improved artificial potential field method, a local path avoiding the static and dynamic obstacles in the movement direction of the mobile unmanned vehicle, and generating a motion control instruction of the mobile unmanned vehicle according to the local path, wherein the motion control instruction comprises a speed control instruction and a steering control instruction, and the mobile unmanned vehicle is controlled to move along the local path until it reaches the preset target position.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
A processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
The movement direction and path of the mobile unmanned vehicle are determined from its current position and the preset target position information, providing a basis for subsequent local path planning and supporting the vehicle's autonomous navigation and decision-making. The laser radar scanning device on the mobile unmanned vehicle scans the surrounding environment in real time and obtains comprehensive, accurate point cloud data reflecting the positions of static and dynamic obstacles in the indoor environment, giving the vehicle the data it needs to perceive a complex dynamic environment. Processing the point cloud data with the clustering segmentation algorithm effectively distinguishes static from dynamic obstacles, extracts the key environmental information, reduces data redundancy and computational complexity, and improves the efficiency and accuracy of environment perception.
For dynamic obstacles, the target detection model and the target tracking algorithm realize their identification, positioning and tracking, yielding their motion information, including position and velocity, and providing an important basis for the vehicle's dynamic obstacle avoidance decisions. Based on the position information of the static obstacles and the motion information of the dynamic obstacles, the local path is planned in real time through the improved artificial potential field method; the generated path effectively avoids both types of obstacle, ensuring safe, continuous motion and improving the vehicle's autonomous navigation in complex dynamic environments.
According to the planned local path, speed and steering control instructions are generated, so that the unmanned vehicle runs stably and accurately along the planned path until it reaches the preset target position and completes the autonomous navigation task. The overall scheme thus forms a complete closed loop from environment perception through path planning to motion control, with all modules tightly integrated and mutually coordinated, improving the vehicle's autonomous navigation performance and safety in complex dynamic environments.
Drawings
FIG. 1 is a flow chart of an obstacle map marking method combined with an unmanned vehicle according to an embodiment of the invention;
Fig. 2 is a schematic structural diagram of an obstacle map marking system combined with an unmanned vehicle according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 is a flow chart of a method for marking an obstacle map combined with an unmanned vehicle according to an embodiment of the invention, as shown in fig. 1, the method includes:
S101, determining the movement direction and movement path of the mobile unmanned vehicle according to its current position information and preset target position information; and scanning the surrounding environment of the mobile unmanned vehicle in real time through a laser radar scanning device arranged on the vehicle, obtaining point cloud data that reflect the position information of static obstacles and the instantaneous position information of dynamic obstacles in the indoor environment;
Illustratively, the current position coordinates of the unmanned vehicle in the global coordinate system are obtained in real time through a GPS positioning module mounted on the mobile unmanned vehicle. The coordinates of the target location, typically given by the user or an upper-layer decision module, are read from a preset navigation task. The current and target position coordinates are passed to the subsequent path planning module as the start and end points of path planning.
According to the current position and the target position, the movement direction angle of the unmanned vehicle is calculated; taking the current position as the starting point and the target position as the end point, an initial movement path is planned in combination with the movement direction angle. The path may be a straight line, or a smooth curve that accounts for the kinematic constraints of the unmanned vehicle. The planned path serves as the reference for subsequent local path planning and is adjusted and optimized in real time according to environmental information.
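As an illustration of this step, the sketch below computes the movement direction angle and samples a straight-line initial path; the function name, coordinate convention and sampling step are illustrative assumptions, not taken from the patent.

```python
import math

def heading_and_straight_path(current, target, step=0.5):
    """Heading angle from the current position to the target, plus a
    straight-line reference path sampled every `step` meters."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    heading = math.atan2(dy, dx)            # movement direction angle (rad)
    dist = math.hypot(dx, dy)
    n = max(int(dist / step), 1)
    path = [(current[0] + dx * i / n, current[1] + dy * i / n)
            for i in range(n + 1)]
    return heading, path

heading, path = heading_and_straight_path((0.0, 0.0), (10.0, 5.0))
```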
One or more laser radar scanning devices, such as a single-line, multi-line or three-dimensional lidar, are arranged on the mobile unmanned vehicle. The lidar emits laser pulses into the surrounding environment at a fixed frequency (e.g., 10 Hz) and receives the returned laser echo signals. From the emission angle, round-trip time and other information of each pulse, the three-dimensional coordinates of each laser point are calculated, forming one frame of point cloud data. Each point in the point cloud represents an observation point in the environment and reflects the position of the object at that point.
The acquired raw point cloud data are denoised to remove abnormal points and outliers and improve data quality; common denoising methods include statistical filtering and radius filtering. The point cloud is then downsampled to reduce the data volume and improve processing efficiency; common methods are voxel-grid downsampling and random downsampling. Finally, the point cloud is converted into the local coordinate system of the mobile unmanned vehicle, established with the vehicle's current position as the origin and its movement direction as the positive x-axis. In this local coordinate system, the point cloud reflects the instantaneous positions of static and dynamic obstacles around the vehicle.
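A minimal preprocessing sketch for one lidar frame is given below, assuming the Open3D library for the statistical-outlier and voxel-grid operations named above; the thresholds and the planar (x, y, yaw) pose representation are illustrative assumptions.

```python
import numpy as np
import open3d as o3d  # assumed dependency; the patent names no library

def preprocess_scan(points_xyz, pose_xy_yaw, voxel=0.1):
    """Denoise and downsample one lidar frame, then express it in the
    vehicle-local frame (origin at the vehicle, x-axis along heading)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    # statistical outlier removal -- one of the denoising options named above
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # voxel-grid downsampling to cut the data volume
    pcd = pcd.voxel_down_sample(voxel_size=voxel)
    # world -> vehicle-local rigid transform built from the pose (x, y, yaw)
    x, y, yaw = pose_xy_yaw
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, s], [-s, c]]                 # transpose of R(yaw)
    T[:2, 3] = -T[:2, :2] @ np.array([x, y])      # -R^T t
    pcd.transform(T)
    return np.asarray(pcd.points)
```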
The point cloud data are divided into a number of independent point cloud clusters using algorithms such as normal-vector segmentation and Euclidean distance clustering, each cluster representing a potential obstacle. Features of each cluster, such as its geometric center, bounding box and principal direction, are extracted for subsequent obstacle recognition and tracking. Static and dynamic obstacles are then distinguished according to the cluster features: a static obstacle's position is fixed, while a dynamic obstacle's position changes over time. The clusters of static obstacles are merged into a static environment map, and the clusters of dynamic obstacles are processed separately to extract their motion information.
The position information of the static obstacles is determined from the clustered point cloud data: for each static obstacle cluster, its geometric center coordinates are computed as the position of that obstacle. The positions of all static obstacles are combined into a static environment map for subsequent path planning and obstacle avoidance decisions. The static environment map can be represented by different data structures, such as an occupancy grid map or a topological map, chosen to suit the specific application.
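The centroid computation and occupancy-grid marking described above might look as follows; the grid extent, resolution, origin and the cluster container are assumptions for illustration.

```python
import numpy as np

def mark_static_map(clusters, resolution=0.1, size=(200, 200),
                    origin=(-10.0, -10.0)):
    """Compute each static cluster's geometric center (mean of its point
    coordinates) and mark its points as occupied cells in a 2-D grid."""
    grid = np.zeros(size, dtype=np.uint8)
    centers = []
    for pts in clusters:                      # pts: (N, 2) array per cluster
        centers.append(pts.mean(axis=0))      # geometric center
        ij = ((pts - origin) / resolution).astype(int)
        ok = ((ij[:, 0] >= 0) & (ij[:, 0] < size[0]) &
              (ij[:, 1] >= 0) & (ij[:, 1] < size[1]))
        grid[ij[ok, 0], ij[ok, 1]] = 1        # occupied cell
    return np.array(centers), grid
```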
Target detection is performed on each dynamic obstacle point cloud cluster using a pre-trained target detection model (such as SSD or YOLO) to identify different types of dynamic obstacles. From the detection results, the position, size, orientation and other information of each dynamic obstacle are extracted to build its motion state vector. A multi-target tracking algorithm (such as Kalman filtering or particle filtering) tracks and predicts the motion state of each dynamic obstacle and estimates its trajectory over a short future horizon. The real-time position, velocity and acceleration of each dynamic obstacle are taken as its motion information for subsequent path planning and obstacle avoidance decisions.
In summary, the mobile unmanned vehicle acquires its current position and the preset target position in real time to determine an initial movement direction and path, obtains point cloud data of the surrounding environment with the laser radar scanning device, segments, clusters and recognizes the point cloud data, and extracts the position information of static obstacles and the motion information of dynamic obstacles, providing environment perception data for subsequent path planning and obstacle avoidance decisions.
S102, clustering the point cloud data through a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles from the point cloud data, and determining the position information of the static obstacles according to the clustered static point cloud data;
Illustratively, a suitable clustering segmentation algorithm, such as DBSCAN, K-means or spectral clustering, is selected according to the characteristics of the point cloud data and the clustering objective. The parameters of the algorithm, such as the clustering distance threshold and the minimum cluster size, are set; these determine the granularity and quality of the clustering. The preprocessed point cloud data are fed into the algorithm, which divides them into independent point cloud clusters according to the distance or similarity between points. Each cluster represents a potential obstacle or object, and the points within a cluster share similar features or locations.
The clustered point cloud clusters are then analysed to distinguish static from dynamic obstacles: the cluster of a static obstacle keeps a fixed position across consecutive frames, while the cluster of a dynamic obstacle changes position between frames. By tracking the position changes of each cluster over consecutive frames, its velocity and trajectory are estimated and the cluster is judged static or dynamic. The clusters belonging to static obstacles are extracted to form the static point cloud data set, and those belonging to dynamic obstacles form the dynamic point cloud data set.
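A sketch of this clustering and static/dynamic separation, using scikit-learn's DBSCAN and a simple center-displacement test against the previous frame; all thresholds are illustrative, not taken from the patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_static_dynamic(frame_t, centers_prev,
                         eps=0.5, min_pts=10, move_thresh=0.2):
    """Cluster one frame with DBSCAN, then label each cluster static or
    dynamic by the displacement of its center relative to the nearest
    cluster center of the previous frame."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(frame_t)
    static, dynamic = [], []
    for k in set(labels) - {-1}:              # label -1 marks noise points
        pts = frame_t[labels == k]
        c = pts.mean(axis=0)
        if len(centers_prev):
            d = np.min(np.linalg.norm(np.asarray(centers_prev) - c, axis=1))
        else:
            d = np.inf                        # no history: dynamic candidate
        (static if d < move_thresh else dynamic).append(pts)
    return static, dynamic
```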
For each static point cloud cluster, calculating the geometric center coordinates of the static point cloud clusters as the position information of the static obstacle. The geometric center coordinates can be obtained by averaging the coordinates of all points in the point cloud cluster, and the position information of all static obstacles is combined into a static environment map for subsequent path planning and obstacle avoidance decision. The static environment map can be represented by different data structures, such as occupied grid map, topological map and the like, and a proper map representation method is selected according to the specific application requirements.
For each dynamic point cloud cluster, motion parameters such as speed and acceleration are estimated by tracking the position change of the dynamic point cloud cluster in continuous frames. The commonly used motion estimation methods include Kalman filtering, particle filtering and the like, and the states of the dynamic obstacles are recursively estimated and updated by establishing a motion model of the dynamic obstacles. And predicting the motion track and position distribution of the dynamic obstacle in a future period of time according to the estimated motion parameters, and providing a reference for subsequent path planning and obstacle avoidance decision. And taking the information such as real-time position, speed, acceleration and the like of the dynamic obstacle as the motion information thereof, and forming complete environment perception data together with a static environment map.
Through the above steps, the mobile unmanned vehicle clusters the point cloud data with the clustering segmentation algorithm, extracts the static point cloud data reflecting static obstacles and the dynamic point cloud data reflecting dynamic obstacles, determines the positions of the static obstacles from the static point cloud data to construct the static environment map, and performs motion estimation and prediction on the dynamic point cloud data to obtain the motion information of the dynamic obstacles.
In an alternative embodiment of the present invention,
The dynamic obstacle target detection performed on the dynamic point cloud data through a pre-constructed target detection model comprises the following steps:
processing the dynamic point cloud data with the pre-constructed target detection model, wherein the model extracts multi-scale point cloud features of the dynamic point cloud data through a PointNet++ backbone network;
cascading a multi-scale interactive attention module after the PointNet++ backbone network, wherein the module comprises several parallel attention units that respectively process point cloud features of different scales; the point cloud features at each scale are mapped into a low-dimensional embedding space, the correlations of the features between different positions are computed to obtain attention maps, and a fusion module of the target detection model adaptively aggregates the attention maps of the different scales to obtain multi-scale interactive features;
the target detection model comprises three parallel feature pyramid detection heads, responsible for detecting first-size, second-size and third-size targets respectively; each detection head comprises a center-point prediction branch and a target-size prediction branch, the center-point prediction branch generates, through convolution and up-sampling layers, a center-point heatmap whose resolution is consistent with the multi-scale interactive features, and the target-size prediction branch estimates, by regression, the size of the three-dimensional target bounding box corresponding to each center-point heatmap;
post-processing the outputs of the three detection heads through a non-maximum suppression module to obtain a multi-scale target detection result, and outputting at least one of the category, position, size and orientation of each dynamic obstacle in the dynamic point cloud data.
Illustratively, pointNet ++ is chosen as the backbone network of the object detection model for extracting multi-scale features of the dynamic point cloud data. The PointNet ++ network generates point cloud features of multiple scales by downsampling and feature extraction layer by layer, and the features of each scale represent point cloud representations of different levels of abstraction. The PointNet ++ network input is dynamic point cloud data, and the output is multi-scale point cloud characteristics, so that rich characteristic representations are provided for subsequent target detection tasks.
After PointNet ++ backbone network, multi-scale interactive attention modules are cascaded for enhancing interactions and fusion between different scale features. The multi-scale interactive attention module comprises a plurality of parallel attention units, and each attention unit corresponds to a scale point cloud characteristic. For each scale of point cloud features, the point cloud features are mapped to a low-dimensional embedding space through an attention unit, and feature embedding representation under the scale is obtained. And calculating the correlation between feature embedding of different positions, generating attention force diagram, and capturing the dependency relationship of the point cloud features in space. And through a fusion module of the target detection model, attention force diagrams of different scales are adaptively aggregated, multi-scale interactive characteristics are obtained, and effective fusion of the characteristics of different scales is realized.
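A minimal PyTorch sketch of one possible reading of this module follows: per-scale attention units that embed features into a low-dimensional space and compute pairwise correlations, plus a learned softmax fusion over scales. Layer sizes, the scaled dot-product form and the fusion scheme are assumptions, not the patent's exact architecture.

```python
import torch
import torch.nn as nn

class ScaleAttentionUnit(nn.Module):
    """One attention unit: project point features of a single scale to a
    low-dimensional embedding, form an attention map from pairwise
    correlations, and re-weight the features with it."""
    def __init__(self, dim, embed_dim=64):
        super().__init__()
        self.q = nn.Linear(dim, embed_dim)
        self.k = nn.Linear(dim, embed_dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, feats):                 # feats: (B, N, dim)
        q, k = self.q(feats), self.k(feats)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ self.v(feats)           # (B, N, dim)

class MultiScaleInteractiveAttention(nn.Module):
    """Parallel attention units per scale plus a learned fusion that
    adaptively aggregates the per-scale outputs."""
    def __init__(self, dims):
        super().__init__()
        self.units = nn.ModuleList(ScaleAttentionUnit(d) for d in dims)
        self.fuse_w = nn.Parameter(torch.ones(len(dims)))

    def forward(self, feats_per_scale):
        # for brevity this sketch assumes equal N and dim across scales
        outs = [u(f) for u, f in zip(self.units, feats_per_scale)]
        w = torch.softmax(self.fuse_w, dim=0)
        return sum(w[i] * o for i, o in enumerate(outs))
```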
The target detection model contains three parallel feature pyramid detection heads, responsible for detecting targets of different sizes: the first head detects small targets, the second medium targets and the third large targets. Each head includes two branches, a center-point prediction branch and a target-size prediction branch. The center-point prediction branch gradually restores the multi-scale interactive features, through convolution and up-sampling layers, to the same resolution as the input point cloud and generates the corresponding center-point heatmap. The target-size prediction branch estimates, by regression, the size of the three-dimensional bounding box at each center-point heatmap position, including its length, width and height.
The dynamic point cloud data are fed into the constructed target detection model, and detection results at three scales are obtained through the PointNet++ backbone, the multi-scale interactive attention module and the feature pyramid detection heads. For each scale, a set of candidate target detection boxes is generated from the center-point heatmap produced by the center-point prediction branch and the bounding-box sizes estimated by the target-size prediction branch. The candidate boxes of the three scales are merged, and duplicate and redundant boxes are removed by a non-maximum suppression (NMS) algorithm to obtain the final multi-scale target detection result. The result includes the category (e.g., pedestrian, vehicle), position (center coordinates of the three-dimensional bounding box), size (length, width and height of the box) and orientation (direction of the box) of each detected dynamic obstacle.
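The NMS post-processing over the merged candidates of the three heads can be sketched as follows; axis-aligned bird's-eye-view boxes are used here as a simplification of the oriented 3-D boxes a full system would handle, and the threshold is an illustrative value.

```python
import numpy as np

def nms_bev(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over axis-aligned BEV boxes
    (x1, y1, x2, y2): keep the highest-scoring box, drop overlapping
    candidates, repeat. Returns the indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(i)
        if len(order) == 1:
            break
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thresh]        # suppress near-duplicates
    return keep
```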
Through the above steps, the mobile unmanned vehicle processes the dynamic point cloud data with the pre-constructed target detection model: multi-scale point cloud features are extracted by the PointNet++ backbone network, their interaction and fusion are strengthened by the multi-scale interactive attention module, targets of different sizes are detected by the three parallel feature pyramid detection heads, the outputs are post-processed by non-maximum suppression, and finally the category, position, size and orientation of the dynamic obstacles are output, providing important environment perception data for the vehicle's obstacle avoidance and navigation.
In an alternative embodiment of the present invention,
Performing target tracking on the dynamic obstacles according to a target tracking algorithm to obtain the motion information of the dynamic obstacles comprises the following steps:
inputting the multi-scale target detection result into a multi-target tracking framework based on Kalman filtering, wherein the framework adds an acceleration component to the target state space and sets, in the target state space, a state transition matrix and a process-noise covariance matrix; these matrices contain prior statistical information about the target's position, velocity and acceleration and are used to describe the target's nonlinear motion characteristics;
performing data association between the multi-scale target detection result at the current moment and previously obtained tracks using a two-stage cascade matching strategy, which comprises preliminary screening based on the cosine similarity of appearance features and fine screening based on the IoU geometric measure; the preliminary screening yields a candidate matching-pair set, from which the fine screening produces the final matching result;
based on the final matching result and the prior target motion model, estimating the state parameters of each dynamic obstacle, including position, velocity and acceleration, through the prediction and update steps of an extended Kalman filter, and generating a continuous, smooth tracking track.
Illustratively, a multi-target tracking framework based on Kalman filtering is constructed. The target state space is designed to include the position, velocity and acceleration components of the target so as to describe the motion state of a dynamic obstacle. A state transition matrix is defined to describe how the target state evolves between consecutive time steps, taking the dynamics of position, velocity and acceleration into account. A process-noise covariance matrix is set to represent the uncertainty and random disturbances in the target's motion and to regulate the predict and update steps of the Kalman filter. Prior statistics of target position, velocity and acceleration are introduced into the state transition matrix and the process-noise covariance matrix to better describe the target's nonlinear motion characteristics.
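A sketch of the constant-acceleration state model described above, with the standard Kalman predict/update steps; the noise values are illustrative stand-ins for the prior statistics mentioned in the text, and the observation model assumes position-only detections.

```python
import numpy as np

def make_ca_model(dt, q_var=1.0):
    """State transition F and process noise Q for the 2-D
    constant-acceleration state [x, y, vx, vy, ax, ay]."""
    F = np.eye(6)
    F[0, 2] = F[1, 3] = dt
    F[0, 4] = F[1, 5] = 0.5 * dt ** 2
    F[2, 4] = F[3, 5] = dt
    Q = q_var * np.diag([dt**4 / 4, dt**4 / 4, dt**2, dt**2, 1.0, 1.0])
    return F, Q

def kf_predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    return x, (np.eye(len(x)) - K @ H) @ P

# position-only observation model: z = [x, y]
H = np.hstack([np.eye(2), np.zeros((2, 4))])
R = 0.1 * np.eye(2)
```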
The multi-scale target detection result at the current moment is acquired, including the category, position, size and orientation of each target, and the pre-existing tracking tracks are taken from the tracking result of the previous moment, each track corresponding to one dynamic obstacle. A two-stage cascade matching strategy then associates the current detections with the pre-existing tracks.
In the first stage (preliminary screening), the appearance-feature cosine similarity between each detection result and each tracking track is computed, and the candidate pairs whose similarity exceeds a threshold are retained to form the candidate matching-pair set.
In the second stage (fine screening), for each pair in the candidate matching-pair set, the intersection-over-union (IoU) between the detection result and the tracking track is computed, and the pair with the largest IoU is selected as the final matching result.
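The two-stage cascade matching can be sketched as follows, assuming SciPy's Hungarian solver for the final assignment; the thresholds and the axis-aligned IoU helper are illustrative, not taken from the patent.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cascade_match(det_app, trk_app, det_boxes, trk_boxes,
                  cos_thresh=0.4, iou_thresh=0.3):
    """Stage 1: prescreen detection/track pairs by appearance cosine
    similarity. Stage 2: resolve the survivors by IoU cost with the
    Hungarian algorithm. Returns matched (detection, track) index pairs."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(x2 - x1, 0) * max(y2 - y1, 0)
        ua = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / (ua + 1e-9)

    # stage 1: cosine similarity of L2-normalized appearance embeddings
    da = det_app / (np.linalg.norm(det_app, axis=1, keepdims=True) + 1e-9)
    ta = trk_app / (np.linalg.norm(trk_app, axis=1, keepdims=True) + 1e-9)
    allowed = (da @ ta.T) > cos_thresh        # candidate matching-pair set

    # stage 2: IoU cost on allowed pairs, Hungarian assignment
    cost = np.full(allowed.shape, 1e6)
    for i, j in zip(*np.nonzero(allowed)):
        v = iou(det_boxes[i], trk_boxes[j])
        if v > iou_thresh:
            cost[i, j] = 1.0 - v
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]
```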
For each finally matched detection-track pair, the target state is estimated with an extended Kalman filter: the target state at the current moment is predicted from the state at the previous moment and the state transition matrix, giving the predicted position, velocity and acceleration.
In the update step of the extended Kalman filter, the detection result at the current moment is taken as the observation; combining the predicted target state with the observation-noise covariance matrix, the target state is updated through the Kalman gain to obtain refined estimates of position, velocity and acceleration.
Unmatched detection results are treated as new targets: a new tracking track is initialized for each, with an initial state estimate. Unmatched tracking tracks are assumed to correspond to targets that are occluded or have left the field of view, and a track-management strategy decides whether to keep or terminate them. From the updated state estimates, a tracking track is generated for each dynamic obstacle, giving a continuous, smooth representation of its motion. For each tracked dynamic obstacle, the position, velocity and acceleration in its state estimate are extracted, encoded and serialized in a suitable data format (such as JSON or XML) for transmission and processing, and sent to other modules of the unmanned vehicle, such as path planning and decision control, to support obstacle avoidance and navigation decisions.
Through the above steps, the mobile unmanned vehicle tracks the multi-scale target detection results and estimates their states with the Kalman-filter-based multi-target tracking framework. Introducing the position, velocity and acceleration of the target, together with prior statistics in the state transition matrix and the process-noise covariance matrix, better describes the target's nonlinear motion characteristics. Data association uses the two-stage cascade matching strategy, combining appearance-feature similarity with a geometric measure to match detections to tracks reliably. Finally, the prediction and update steps of the extended Kalman filter estimate the state parameters of each dynamic obstacle and generate continuous, smooth tracking tracks, whose motion information is provided to other modules of the unmanned vehicle to support obstacle avoidance and navigation.
In an alternative embodiment of the present invention,
After the final matching result is obtained, the method further comprises performing ego-motion compensation for the mobile unmanned vehicle:
fusing the measurements of the vehicle-mounted IMU and the wheel-speed odometer of the mobile unmanned vehicle to estimate the ego-vehicle motion state; recursively computing, from the ego-vehicle motion state, the transformation between the vehicle coordinate system and the world coordinate system; and applying this transformation to the prediction and observation results of the tracker of the mobile unmanned vehicle so that both are unified in the world coordinate system, thereby realizing ego-motion compensation.
Illustratively, the measurements of the on-board inertial measurement unit (IMU) of the mobile unmanned vehicle are acquired, including angular velocity and acceleration data, together with the measurements of the wheel speed meter, including wheel rotation speed and vehicle speed. The IMU and wheel speed measurements are time-synchronized and aligned to a common coordinate system to ensure consistency in time and space, and are then fused with a Kalman filter or another fusion algorithm to estimate the ego-motion state of the mobile unmanned vehicle, including position, velocity and attitude.
The vehicle coordinate system is defined, usually with the geometric center of the mobile unmanned vehicle as the origin, the forward direction as the x-axis, the left side as the y-axis and the vertical upward direction as the z-axis. The world coordinate system is a fixed global reference frame, such as an East-North-Up (ENU) frame or a global geographic frame (e.g. WGS84). From the estimated ego-motion state, the transformation of the vehicle coordinate system relative to the world coordinate system, comprising a translation vector and a rotation matrix, is computed recursively; for example, dead reckoning can update the transformation from the changes in the ego position, velocity and attitude.
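A minimal dead-reckoning recursion on the 2D plane might look like this; the unicycle-style update (forward speed from the wheel odometry, yaw rate from the IMU gyroscope) and the 3x3 homogeneous transform are illustrative assumptions of the sketch.

```python
import numpy as np

def dead_reckon(x, y, yaw, v, omega, dt):
    """One dead-reckoning step: v is wheel-odometry speed, omega the IMU yaw rate."""
    yaw += omega * dt
    x += v * np.cos(yaw) * dt
    y += v * np.sin(yaw) * dt
    return x, y, yaw

def vehicle_to_world(x, y, yaw):
    """Homogeneous 2D transform from the vehicle frame to the world frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])
```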
The prediction results of the multi-target tracker in the vehicle coordinate system are obtained, including the predicted position, velocity and acceleration of each target. Using the transformation between the vehicle and world coordinate systems, these predictions are transformed into the world coordinate system: the position vector is multiplied by the rotation matrix and the translation vector is added. The transformed predictions represent the predicted state of each target in the world coordinate system, including position, velocity and acceleration.
The multi-scale target detection results at the current moment are acquired as the tracker's observations, including the detected position, size and orientation of each target. They are transformed from the vehicle coordinate system to the world coordinate system in the same way, by rotating the position vector and adding the translation vector. The transformed observations represent the detected state of each target in the world coordinate system, including position, size and orientation.
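Applying that transform to a tracked state could be sketched as below. Note that positions both rotate and translate, while velocities only rotate and then have the ego velocity added; the function and argument names are assumptions of the sketch.

```python
import numpy as np

def track_to_world(p_veh, v_veh, ego_xy, ego_yaw, ego_vel_world):
    """Transform a target's position/velocity from the vehicle frame to the world frame."""
    c, s = np.cos(ego_yaw), np.sin(ego_yaw)
    R = np.array([[c, -s], [s, c]])  # rotation vehicle -> world
    p_world = R @ np.asarray(p_veh) + np.asarray(ego_xy)
    v_world = R @ np.asarray(v_veh) + np.asarray(ego_vel_world)
    return p_world, v_world
```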
The transformed predictions and observations are input to the multi-target tracker, which performs target tracking and state estimation in the world coordinate system. A data association algorithm (such as the Hungarian algorithm or JPDA) matches the observations against the existing tracking tracks. For each successfully matched observation-track pair, a Kalman filter or another estimator updates the target's state estimate, yielding its position, velocity and acceleration in the world coordinate system. An unmatched tracking track is regarded as a disappeared or occluded target, and a retention strategy decides whether it is kept or terminated.
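The association step can be illustrated with SciPy's Hungarian solver; the cost definition (for example 1 - IoU) and the gating threshold are assumptions made for the sketch, not values fixed by the disclosure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost, gate=0.7):
    """Match tracks (rows) to detections (columns) by minimum total cost.

    cost -- (n_tracks, n_detections) NumPy matrix, e.g. 1 - IoU; pairs whose
    cost exceeds the gate are rejected and reported as unmatched.
    """
    cost = np.asarray(cost, dtype=float)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    unmatched_tracks = sorted(set(range(cost.shape[0])) - {r for r, _ in matches})
    unmatched_dets = sorted(set(range(cost.shape[1])) - {c for _, c in matches})
    return matches, unmatched_tracks, unmatched_dets
```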
Through the above steps, the mobile unmanned vehicle realizes ego-motion compensation, with the tracker's predictions and observations processed in a unified world coordinate system. The ego-motion state is first estimated by fusing the on-board IMU and wheel speed measurements, and the transformation between the vehicle and world coordinate systems is computed recursively. The tracker's predictions and observations are then transformed into the world coordinate system, where target tracking and state estimation yield the position, velocity and acceleration of each dynamic obstacle, providing more accurate and consistent perception for the decision control of the unmanned vehicle.
S103, based on the position information of the static obstacles and the motion information of the dynamic obstacles, planning in real time, by an improved artificial potential field method, a local path in the motion direction of the mobile unmanned vehicle that avoids the static and dynamic obstacles; generating motion control instructions for the mobile unmanned vehicle according to the local path, the motion control instructions including speed control instructions and steering control instructions; and controlling the mobile unmanned vehicle to move along the local path until it reaches the preset target position.
In an alternative embodiment of the present invention,
Based on the position information of the static obstacles and the motion information of the dynamic obstacles, planning in real time, by the improved artificial potential field method, a local path in the motion direction of the mobile unmanned vehicle that avoids the static and dynamic obstacles comprises the following steps:
Establishing a static Gaussian potential field function for the position information of the static obstacles, and a dynamic Gaussian potential field function for the motion information of the dynamic obstacles, wherein the peak positions of both functions are determined by the obstacle positions; the dynamic peak position of the dynamic Gaussian potential field function is offset along the velocity direction of the dynamic obstacle, and the offset is proportional to the product of the obstacle's moving speed and a preset sliding time window;
Performing a weighted superposition of the dynamic Gaussian potential field functions of all dynamic obstacles and combining it with the static Gaussian potential field function to obtain a composite Gaussian potential field function describing the environment of the mobile unmanned vehicle, the weight coefficient of each obstacle in the composite function being adaptively adjusted according to its type, its size and its distance from the mobile unmanned vehicle;
Determining the function gradient of the composite Gaussian potential field function at the current position of the mobile unmanned vehicle, the gradient comprising a positive gradient direction and a negative gradient direction, and taking the negative gradient direction as the target motion direction of the mobile unmanned vehicle, which guides the vehicle away from the static and dynamic obstacles;
Taking the negative gradient direction as the preferred direction and generating a local path with a parameterized curve fitting algorithm, subject to the kinematic constraints of the mobile unmanned vehicle and the obstacle boundary constraints.
Illustratively, the position information of the static obstacles in the environment is acquired, including their coordinates, size and shape. For each static obstacle a Gaussian potential field function is constructed whose peak coincides with the obstacle's position. The function may take the form of a two-dimensional or three-dimensional Gaussian distribution, with the peak representing the obstacle center and the standard deviation or covariance matrix set according to the obstacle's size and shape. The static Gaussian potential field function thus represents the spatial distribution and influence range of the static obstacles; the potential is highest at the peak, marking the region least favorable for the mobile unmanned vehicle to traverse.
The motion information of the dynamic obstacles in the environment is acquired, including their position, velocity and acceleration. For each dynamic obstacle a Gaussian potential field function is constructed whose peak initially coincides with the obstacle's position; the peak is then shifted along the obstacle's velocity direction, with offset = speed × preset sliding time window. The dynamic Gaussian potential field function has the same form as the static one, but its peak moves with the obstacle, indicating the obstacle's predicted position and influence range over a short future horizon, as in the sketch below.
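The velocity-shifted peak follows directly from the description above; the 2D NumPy formulation and argument names are illustrative assumptions.

```python
import numpy as np

def dynamic_potential(p, obs_pos, obs_vel, A_d, cov, T_w):
    """Dynamic Gaussian potential whose peak is shifted by velocity * sliding window T_w."""
    peak = np.asarray(obs_pos, float) + np.asarray(obs_vel, float) * T_w
    d = np.asarray(p, float) - peak
    return A_d * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
```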
The dynamic Gaussian potential field functions of all dynamic obstacles are weighted and superposed into a combined dynamic potential field, which is then superposed with the static Gaussian potential field function to obtain the composite Gaussian potential field function describing the environment of the mobile unmanned vehicle. In the composite function each obstacle is assigned a weight coefficient that is adaptively adjusted according to the obstacle's type (e.g. pedestrian or vehicle), its size and its distance from the mobile unmanned vehicle: the type and size determine the potential threat the obstacle poses, and the closer the obstacle, the larger its weight and the stronger its influence, as in the sketch below.
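A sketch of the weighted superposition, reusing dynamic_potential from the previous sketch. The distance-based weight heuristic is an assumption for illustration; the disclosure only states that the weight adapts to type, size and distance without fixing a formula.

```python
import numpy as np

def static_potential(p, obs_pos, A_s, cov):
    """Static Gaussian potential centred on the obstacle position."""
    d = np.asarray(p, float) - np.asarray(obs_pos, float)
    return A_s * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def adaptive_weight(base_w, dist, d_ref=5.0):
    """Illustrative weight heuristic: influence grows as the obstacle gets closer."""
    return base_w * d_ref / max(dist, 1e-3)

def composite_potential(p, statics, dynamics, T_w):
    """statics: list of (pos, A_s, cov); dynamics: list of (pos, vel, A_d, cov, base_w)."""
    U = sum(static_potential(p, pos, A, cov) for pos, A, cov in statics)
    for pos, vel, A, cov, base_w in dynamics:
        w = adaptive_weight(base_w, np.linalg.norm(np.asarray(p) - np.asarray(pos)))
        U += w * dynamic_potential(p, pos, vel, A, cov, T_w)
    return U
```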
The gradient of the composite Gaussian potential field function is computed at the current position of the mobile unmanned vehicle. It comprises a positive gradient direction, pointing where the potential increases fastest (toward obstacles), and a negative gradient direction, pointing where the potential decreases fastest (away from obstacles). The negative gradient direction is taken as the target motion direction, guiding the mobile unmanned vehicle away from static and dynamic obstacles and toward safe regions of low potential.
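When an analytic gradient is inconvenient, the steering direction can be obtained numerically; the central-difference scheme here is an illustrative choice, not the mandated method.

```python
import numpy as np

def negative_gradient(potential, p, eps=1e-4):
    """Unit vector pointing down the potential (away from obstacles) at position p.

    potential -- callable mapping a position vector to a scalar potential value.
    """
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = eps
        g[i] = (potential(p + step) - potential(p - step)) / (2 * eps)
    norm = np.linalg.norm(g)
    return -g / norm if norm > 1e-12 else np.zeros_like(p)
```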
With the negative gradient direction as the preferred direction, a local path is generated under the kinematic constraints of the mobile unmanned vehicle (such as maximum speed, acceleration and steering angle) and the obstacle boundary constraints (such as keeping a safe distance from obstacles). The path can be produced with a parameterized curve fitting algorithm, such as spline or Bezier curves, optimizing the control points so that the resulting curve satisfies the constraints while aligning with the preferred direction as much as possible. The generated local path provides a smooth, safe obstacle-avoidance trajectory that guides the mobile unmanned vehicle between static and dynamic obstacles, adjusting its heading and speed in time.
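One way to realize the parameterized curve fitting is a quadratic Bezier whose first control point lies along the preferred direction; the step length, the sample count and the deferral of constraint checking to the sampled points are assumptions of this sketch.

```python
import numpy as np

def bezier_local_path(p0, preferred_dir, goal, step=2.0, n=20):
    """Quadratic Bezier leaving p0 along the preferred (negative-gradient) direction.

    Kinematic and obstacle-clearance constraints would be checked on the
    returned sample points, re-optimizing the control point if violated.
    """
    p0 = np.asarray(p0, float)
    goal = np.asarray(goal, float)
    ctrl = p0 + step * np.asarray(preferred_dir, float)  # control point
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * ctrl + t ** 2 * goal
```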
Through the above steps, the mobile unmanned vehicle plans, with the improved artificial potential field method, a local path that avoids static and dynamic obstacles in real time. A static Gaussian potential field function is first established from the static obstacle positions and a dynamic Gaussian potential field function from the dynamic obstacle motion, and their weighted superposition yields the composite Gaussian potential field function. The negative gradient direction of the composite function at the vehicle's current position is then taken as the target motion direction. Finally, with the negative gradient direction as the preferred direction and under the kinematic and obstacle boundary constraints, a parameterized curve fitting algorithm generates a smooth, safe local path that guides the mobile unmanned vehicle through obstacle avoidance and navigation in a complex environment.
In an alternative embodiment of the present invention,
The composite Gaussian potential field function is as follows:

U(p, t) = U_s(p) + \sum_{i=1}^{n} w_i \, U_{d,i}(p, t)

The static Gaussian potential field function is as follows:

U_s(p) = A_s \exp\left(-\tfrac{1}{2}(p - p_s)^{T} \Sigma_s^{-1} (p - p_s)\right)

The dynamic Gaussian potential field function is as follows:

U_{d,i}(p, t) = A_d \exp\left(-\tfrac{1}{2}\big(p - p_{d,i}(t) - v_{d,i} T_w\big)^{T} \Sigma_d^{-1} \big(p - p_{d,i}(t) - v_{d,i} T_w\big)\right)

wherein U(p, t) denotes the composite Gaussian potential field function at time t; U_s(p) the static Gaussian potential field function; U_{d,i}(p, t) the dynamic Gaussian potential field function of the i-th dynamic obstacle at time t; p the position vector of the mobile unmanned vehicle in the two-dimensional plane; p_s the position vector of a static obstacle; p_{d,i}(t) the position vector of the i-th dynamic obstacle; v_{d,i} its velocity vector and T_w the preset sliding time window, so that v_{d,i} T_w is the peak offset; A_s and A_d the amplitudes of the static and dynamic Gaussian potential field functions, respectively; \Sigma_s and \Sigma_d their covariance matrices; w_i the weight coefficient corresponding to the i-th dynamic Gaussian potential field function; and n the number of dynamic obstacles.
Fig. 2 is a schematic structural diagram of an obstacle map marking system combined with an unmanned vehicle according to an embodiment of the present invention. As shown in Fig. 2, the system includes:
a first unit, configured to determine the motion direction and motion path of the mobile unmanned vehicle according to the current location information of the mobile unmanned vehicle and preset target location information, and to scan the surroundings of the mobile unmanned vehicle in real time through a lidar scanning device installed on the vehicle, obtaining point cloud data that reflects the position information of static obstacles and the instantaneous position information of dynamic obstacles in the indoor environment;
a second unit, configured to cluster the point cloud data by a clustering segmentation algorithm, extract the static point cloud data reflecting static obstacles and the dynamic point cloud data reflecting dynamic obstacles, and determine the position information of the static obstacles from the clustered static point cloud data; and further configured to perform dynamic obstacle target detection on the dynamic point cloud data through a pre-built target detection model and to track the dynamic obstacles with a target tracking algorithm, obtaining their motion information;
a third unit, configured to plan in real time, by the improved artificial potential field method and based on the position information of the static obstacles and the motion information of the dynamic obstacles, a local path in the motion direction of the mobile unmanned vehicle that avoids the static and dynamic obstacles; to generate motion control instructions for the mobile unmanned vehicle according to the local path, including speed control instructions and steering control instructions; and to control the mobile unmanned vehicle to move along the local path until it reaches the preset target position.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
A processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
It should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to these embodiments, those skilled in the art will understand that the described technical solutions may be modified, or some or all of their technical features equivalently replaced, without departing from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. An obstacle map marking method combined with an unmanned vehicle, characterized by comprising:

determining the motion direction and motion path of the mobile unmanned vehicle according to the current location information of the mobile unmanned vehicle and preset target location information; scanning the surroundings of the mobile unmanned vehicle in real time through a lidar scanning device installed on the mobile unmanned vehicle, to obtain point cloud data reflecting the position information of static obstacles and the instantaneous position information of dynamic obstacles in the indoor environment;

clustering the point cloud data by a clustering segmentation algorithm, extracting static point cloud data reflecting static obstacles and dynamic point cloud data reflecting dynamic obstacles, and determining the position information of the static obstacles according to the clustered static point cloud data; performing dynamic obstacle target detection on the dynamic point cloud data through a pre-built target detection model, and tracking the dynamic obstacles according to a target tracking algorithm to obtain the motion information of the dynamic obstacles;

based on the position information of the static obstacles and the motion information of the dynamic obstacles, planning in real time, by an improved artificial potential field method, a local path in the motion direction of the mobile unmanned vehicle that avoids the static obstacles and the dynamic obstacles; generating motion control instructions for the mobile unmanned vehicle according to the local path, the motion control instructions including speed control instructions and steering control instructions, and controlling the mobile unmanned vehicle to move along the local path until it reaches the preset target position;

wherein planning the local path that avoids the static and dynamic obstacles in real time, by the improved artificial potential field method, comprises:

establishing a static Gaussian potential field function for the position information of the static obstacles and a dynamic Gaussian potential field function for the motion information of the dynamic obstacles, wherein the peak positions of the static and dynamic Gaussian potential field functions are determined by the obstacle positions, the dynamic peak position of the dynamic Gaussian potential field function is offset along the velocity direction of the dynamic obstacle, and the offset of the dynamic peak position is proportional to the product of the moving speed of the dynamic obstacle and a preset sliding time window;

performing a weighted superposition of the dynamic Gaussian potential field functions of all dynamic obstacles and combining it with the static Gaussian potential field function to obtain a composite Gaussian potential field function describing the environment of the mobile unmanned vehicle, the weight coefficient of each obstacle in the composite Gaussian potential field function being adaptively adjusted according to its type, size and distance from the mobile unmanned vehicle;

determining the function gradient of the composite Gaussian potential field function according to the current position of the mobile unmanned vehicle, wherein the function gradient includes a positive gradient direction and a negative gradient direction, the negative gradient direction being taken as the target motion direction of the mobile unmanned vehicle, which guides the mobile unmanned vehicle away from static and dynamic obstacles;

taking the negative gradient direction as the preferred direction and generating a local path with a parameterized curve fitting algorithm, in combination with the kinematic constraints of the mobile unmanned vehicle and the obstacle boundary constraints;

the composite Gaussian potential field function being:

U(p, t) = U_s(p) + \sum_{i=1}^{n} w_i \, U_{d,i}(p, t)

the static Gaussian potential field function being:

U_s(p) = A_s \exp\left(-\tfrac{1}{2}(p - p_s)^{T} \Sigma_s^{-1} (p - p_s)\right)

the dynamic Gaussian potential field function being:

U_{d,i}(p, t) = A_d \exp\left(-\tfrac{1}{2}\big(p - p_{d,i}(t) - v_{d,i} T_w\big)^{T} \Sigma_d^{-1} \big(p - p_{d,i}(t) - v_{d,i} T_w\big)\right)

wherein U(p, t) denotes the composite Gaussian potential field function at time t, U_s(p) the static Gaussian potential field function, U_{d,i}(p, t) the dynamic Gaussian potential field function of the i-th dynamic obstacle at time t, p the position vector of the mobile unmanned vehicle in the two-dimensional plane, p_s the position vector of a static obstacle, p_{d,i}(t) the position vector of the i-th dynamic obstacle, v_{d,i} its velocity vector, T_w the preset sliding time window, A_s and A_d the amplitudes of the static and dynamic Gaussian potential field functions, \Sigma_s and \Sigma_d their covariance matrices, w_i the weight coefficient corresponding to the i-th dynamic Gaussian potential field function, and n the number of dynamic obstacles.

2. The method according to claim 1, characterized in that performing dynamic obstacle target detection on the dynamic point cloud data through the pre-built target detection model comprises:

processing the dynamic point cloud data based on the pre-built target detection model, wherein the target detection model extracts multi-scale point cloud features of the dynamic point cloud data through a PointNet++ backbone network;

cascading, after the PointNet++ backbone network, a multi-scale interactive attention module containing multiple parallel attention units that respectively process point cloud features of different scales, mapping the point cloud features at each scale into a low-dimensional embedding space, computing the correlations between the point cloud features at different positions to obtain attention maps, and adaptively aggregating the attention maps of different scales through a fusion module of the target detection model to obtain multi-scale interactive features;

wherein the target detection model includes three parallel feature pyramid detection heads responsible for detecting targets of a first size, a second size and a third size, respectively, each feature pyramid detection head including a center point prediction branch and a target size prediction branch, the center point prediction branch generating, through convolution and upsampling layers, a center point heat map consistent with the resolution of the multi-scale interactive features, and the target size prediction branch estimating by regression the three-dimensional bounding box size of the target corresponding to each center point heat map, the first size, the second size and the third size increasing successively;

post-processing the outputs of the three detection heads through a non-maximum suppression module to obtain multi-scale target detection results, and outputting at least one of the category, position, size and orientation information of the dynamic obstacles in the dynamic point cloud data.

3. The method according to claim 2, characterized in that tracking the dynamic obstacles according to the target tracking algorithm and obtaining the motion information of the dynamic obstacles comprises:

inputting the multi-scale target detection results into a Kalman-filter-based multi-target tracking framework, the framework adding an acceleration component to the target state space and setting therein a state transition matrix and a process noise covariance matrix that contain prior statistical information of the target position, velocity and acceleration, used to characterize the nonlinear motion characteristics of the targets;

using a two-stage cascade matching strategy to associate the multi-scale target detection results at the current moment with previously acquired tracking tracks, the strategy comprising a preliminary screening based on the cosine similarity of appearance features and a fine screening based on an IoU geometric metric, the preliminary screening yielding a set of candidate matching pairs on which the fine screening is performed to obtain the final matching result;

based on the final matching result and the prior motion model of the targets, estimating, through the prediction and update process of an extended Kalman filter, the state parameters of each dynamic obstacle, including position, velocity and acceleration, and generating continuous, smooth tracking tracks.

4. The method according to claim 3, characterized in that, after the final matching result is obtained, the method further comprises performing motion compensation for the mobile unmanned vehicle: fusing the measurements of the on-board IMU and the wheel speed meter of the mobile unmanned vehicle to estimate the ego-motion state, recursively computing the transformation between the vehicle coordinate system and the world coordinate system from the ego-motion state, transforming the prediction results and observation results of the tracker of the mobile unmanned vehicle with this transformation, and unifying the prediction results and observation results in the world coordinate system to realize ego-motion compensation.

5. An obstacle map marking system combined with an unmanned vehicle, for implementing the method according to any one of claims 1 to 4, characterized by comprising:

a first unit, configured to determine the motion direction and motion path of the mobile unmanned vehicle according to the current location information of the mobile unmanned vehicle and preset target location information, and to scan the surroundings of the mobile unmanned vehicle in real time through a lidar scanning device installed on the mobile unmanned vehicle, obtaining point cloud data reflecting the position information of static obstacles and the instantaneous position information of dynamic obstacles in the indoor environment;

a second unit, configured to cluster the point cloud data by a clustering segmentation algorithm, extract the static point cloud data reflecting static obstacles and the dynamic point cloud data reflecting dynamic obstacles, determine the position information of the static obstacles according to the clustered static point cloud data, perform dynamic obstacle target detection on the dynamic point cloud data through a pre-built target detection model, and track the dynamic obstacles according to a target tracking algorithm to obtain the motion information of the dynamic obstacles;

a third unit, configured to plan in real time, by an improved artificial potential field method and based on the position information of the static obstacles and the motion information of the dynamic obstacles, a local path in the motion direction of the mobile unmanned vehicle that avoids the static obstacles and the dynamic obstacles, generate motion control instructions for the mobile unmanned vehicle according to the local path, the motion control instructions including speed control instructions and steering control instructions, and control the mobile unmanned vehicle to move along the local path until it reaches the preset target position.

6. An electronic device, characterized by comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to execute the method according to any one of claims 1 to 4.

7. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 4.
CN202411578428.4A 2024-11-07 2024-11-07 Obstacle map marking method and system combined with unmanned vehicle Active CN119085695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411578428.4A CN119085695B (en) 2024-11-07 2024-11-07 Obstacle map marking method and system combined with unmanned vehicle

Publications (2)

Publication Number Publication Date
CN119085695A CN119085695A (en) 2024-12-06
CN119085695B true CN119085695B (en) 2025-01-21

Family ID: 93664173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411578428.4A Active CN119085695B (en) 2024-11-07 2024-11-07 Obstacle map marking method and system combined with unmanned vehicle

Country Status (1)

Country Link
CN (1) CN119085695B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119556303A (en) * 2025-01-21 2025-03-04 北京飞安航空科技有限公司 Road obstacle perception system for unmanned vehicles based on lidar

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115683145A (en) * 2022-11-03 2023-02-03 北京踏歌智行科技有限公司 Automatic driving safety obstacle avoidance method based on track prediction
CN115861968A (en) * 2022-12-13 2023-03-28 徐工集团工程机械股份有限公司建设机械分公司 Dynamic obstacle removing method based on real-time point cloud data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10665115B2 (en) * 2016-01-05 2020-05-26 California Institute Of Technology Controlling unmanned aerial vehicles to avoid obstacle collision
WO2023183633A1 (en) * 2022-03-25 2023-09-28 Innovusion, Inc. Methods and systems fault detection in lidar
CN118429377A (en) * 2024-05-07 2024-08-02 广州文远知行科技有限公司 Thermodynamic diagram-based vehicle track determining method and device, vehicle and medium

Also Published As

Publication number Publication date
CN119085695A (en) 2024-12-06

Similar Documents

Publication Publication Date Title
US12223674B2 (en) Methods and systems for joint pose and shape estimation of objects from sensor data
US20240338567A1 (en) Multi-Task Multi-Sensor Fusion for Three-Dimensional Object Detection
EP3745158B1 (en) Methods and systems for computer-based determining of presence of dynamic objects
US11593950B2 (en) System and method for movement detection
KR102379295B1 (en) RGB point cloud-based map generation system for autonomous vehicles
KR102376709B1 (en) Point cloud registration system for autonomous vehicles
KR102334641B1 (en) Map Partitioning System for Autonomous Vehicles
KR102319065B1 (en) Real-time map generation system for autonomous vehicles
US11960290B2 (en) Systems and methods for end-to-end trajectory prediction using radar, LIDAR, and maps
US10229510B2 (en) Systems and methods to track vehicles proximate perceived by an autonomous vehicle
Scherer et al. River mapping from a flying robot: state estimation, river detection, and obstacle mapping
WO2020243162A1 (en) Methods and systems for trajectory forecasting with recurrent neural networks using inertial behavioral rollout
EP4078535A1 (en) Methods and systems for constructing map data using poisson surface reconstruction
CN111771141A (en) LIDAR positioning in autonomous vehicles using 3D CNN networks for solution inference
CN111788571A (en) vehicle tracking
JP2019527832A (en) System and method for accurate localization and mapping
US12026894B2 (en) System for predicting near future location of object
CN109564285A (en) Method and system for detecting ground marks in a traffic environment of a mobile unit
CN114120075B (en) Three-dimensional target detection method integrating monocular camera and laser radar
CN119085695B (en) Obstacle map marking method and system combined with unmanned vehicle
Liu et al. Precise positioning and prediction system for autonomous driving based on generative artificial intelligence
CN115485698A (en) Space-time interaction network
EP4148599A1 (en) Systems and methods for providing and using confidence estimations for semantic labeling
CN115451948A (en) A positioning odometer method and system for agricultural unmanned vehicles based on multi-sensor fusion
CN113741550B (en) Mobile robot following method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant