
CN118259611B - Low-speed automatic driving vehicle control method, device, equipment and medium - Google Patents

Low-speed automatic driving vehicle control method, device, equipment and medium

Info

Publication number
CN118259611B
CN118259611B
Authority
CN
China
Prior art keywords
low-speed automatic driving
path
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410355965.6A
Other languages
Chinese (zh)
Other versions
CN118259611A (en)
Inventor
周郭许
周贵林
容伟力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202410355965.6A priority Critical patent/CN118259611B/en
Publication of CN118259611A publication Critical patent/CN118259611A/en
Application granted granted Critical
Publication of CN118259611B publication Critical patent/CN118259611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 Input/output
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/25 Pc structure of the system
    • G05B2219/25257 Microcontroller
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a control method, device, equipment and medium for a low-speed automatic driving vehicle. The method includes: updating occupancy grid map data according to the dynamic target position information, the drivable road surface area and the vehicle position information; determining path points for navigating the low-speed automatic driving vehicle from the updated occupancy grid map data based on the A* algorithm; determining the driving path of the vehicle from those path points based on the TEB algorithm; and, in response to a control instruction, driving the vehicle along the driving path to complete the control of the low-speed automatic driving vehicle. By acquiring road condition image frames and occupancy grid map data, the application builds an environment map that fully accounts for the obstacles in the environment, keeping the driving path free of obstacles and thereby preventing the collisions and accidents that obstacles on the path would cause during transportation.

Description

Low-speed automatic driving trolley control method, device, equipment and medium
Technical Field
The application relates to the field of automatic driving, in particular to a low-speed automatic driving trolley control method, a corresponding device, electronic equipment and a computer readable storage medium.
Background
In the prior art, the development and deployment of autopilot systems are typically carried out on high-performance development boards. Such boards, however, need to be embedded in the vehicle, and different vehicles require different designs, so the solution is neither portable nor universal. In addition, maintaining high computing performance makes the power consumption of these boards high, which in turn affects the driving range of the vehicle.
In conventional sensor fusion schemes, various sensors are mounted at different positions on the vehicle to acquire the environmental information processed by the autopilot algorithms. This again means that different vehicles require different designs, without portability or universality. Moreover, because the sensors are physically scattered, the acquired information has to be additionally aligned in time, which increases the information processing time.
In summary, autopilot systems in the prior art are usually carried on high-performance development boards with high power consumption, and because the sensors are scattered, the acquired information requires additional time alignment, which lengthens information processing.
Disclosure of Invention
The present application has been made to solve the above problems, and its object is to provide a low-speed automatic driving trolley control method, a corresponding device, an electronic device, and a computer-readable storage medium.
To achieve the above object, the application adopts the following technical solutions:
A control method of a low-speed automatic driving trolley according to one of the objects of the present application includes:
acquiring road condition image frames from all angles of the low-speed automatic driving trolley together with occupancy grid map data, and calculating the confidence of each dynamic target detection box between the current road condition image frame and the previous one by using an intersection-over-union (IoU) matching algorithm based on the position of the moving object and a re-identification matching algorithm based on appearance features, so as to match and track the dynamic target position information across the current and previous road condition image frames;
inferring surface normal information from the road condition image frame, extracting feature maps from the RGB image and the inferred surface normal information with an RGB encoder and a surface normal encoder respectively, and determining the drivable road surface area from the fused RGB and surface normal information with a pre-trained semantic road segmentation model;
determining wheel odometer data and GPS data, and fusing the wheel odometer data with the GPS data based on a preset extended Kalman filtering algorithm to determine the vehicle position information of the low-speed automatic driving trolley;
updating the occupancy grid map data according to the dynamic target position information, the drivable road surface area and the vehicle position information, determining path points for navigating the low-speed automatic driving trolley from the updated occupancy grid map data based on the A* algorithm, and determining the driving path of the low-speed automatic driving trolley from those path points based on the TEB algorithm;
and responding to a control instruction of the low-speed automatic driving trolley by driving the low-speed automatic driving trolley along the driving path, so as to complete the control of the low-speed automatic driving trolley.
Optionally, the step of calculating the confidence of each dynamic target detection box between the current road condition image frame and the previous one by using an IoU matching algorithm based on the position of the moving object and a re-identification matching algorithm based on appearance features, so as to match and track the dynamic target position information across the current and previous road condition image frames, includes:
Constructing a three-dimensional model corresponding to each dynamic target to obtain the motion state information of each dynamic target relative to the low-speed automatic driving trolley;
Determining the observed value corresponding to each dynamic target obtained by a monocular camera, estimating the state information predicted value corresponding to each dynamic target in the current road condition image frame according to the motion state information of the low-speed automatic driving trolley in the preset number of road condition image frames by adopting a Kalman filtering algorithm, and updating the optimal estimated value of the current road condition image frame based on the observed value corresponding to each dynamic target and the state information predicted value corresponding to each dynamic target to serve as the latest state information of each dynamic target;
and, based on the latest state information of each dynamic target, weighting the confidences obtained from the IoU matching algorithm based on the position of the moving object and from the re-identification matching algorithm based on appearance features, so as to perform target matching and tracking between the current road condition image frame and the previous one.
Optionally, the step of inferring surface normal information from the road condition image frame, extracting feature maps from the RGB image and the inferred surface normal information with an RGB encoder and a surface normal encoder, and determining the drivable road surface area from the fused RGB and surface normal information with a pre-trained semantic road segmentation model, includes the following steps:
inferring surface normal information from the depth map corresponding to the road condition image frame, obtaining the three-dimensional information of each pixel from its camera coordinate system (CCS) coordinates, computing horizontal and vertical gradients with the Sobel operator to determine the x-direction and y-direction components of the surface normal, and using a k-d tree to select the 8 nearest neighboring points for estimating the z-direction component of the normal vector;
extracting feature maps from the RGB image and the surface normal information with an RGB encoder and a surface normal encoder respectively, fusing the extracted feature maps hierarchically by element-wise summation, and fusing them again in the decoder through densely connected skip connections, so as to determine the fused RGB and surface normal information;
and determining the drivable road surface area from the fused RGB and surface normal information with a pre-trained semantic road segmentation model.
Optionally, the step of fusing the wheel odometer data with the GPS data based on a preset extended Kalman filtering algorithm to determine the vehicle position information of the low-speed automatic driving trolley includes the following steps:
(1) predicting the prior estimate X_k^- of the next cycle according to the formula X_k^- = A·X_{k-1} + B·u_{k-1}, where A is the state transition matrix and X_{k-1} is the estimate from the previous cycle; the initial prior state value X_k^- is the value provided by the sensor;
(2) updating the prior covariance matrix according to the formula P_k^- = A·P_{k-1}·A^T + Q, where Q is the covariance matrix of the system process noise and P_{k-1} is the posterior covariance matrix of the previous cycle;
(3) obtaining the Kalman gain K_k of each cycle according to the formula K_k = P_k^-·H^T·(H·P_k^-·H^T + R)^{-1}, where H is the observation matrix, which converts the estimate into data of the same type as the observation, and R is the covariance of the measurement noise;
(4) calculating the posterior estimate X_k according to the formula X_k = X_k^- + K_k·(Z_k - H·X_k^-) from the prior estimate X_k^-, the Kalman gain K_k, the observation Z_k and the observation matrix H of the current cycle, where the observation is the sensor information acquired in each cycle;
(5) updating the new posterior covariance matrix P_k according to the formula P_k = (I - K_k·H)·P_k^-;
and iterating steps (1) to (5) cyclically to obtain the fused wheel odometer and GPS data.
Optionally, the step of determining, based on the A* algorithm, path points for navigating the low-speed automatic driving trolley from the updated occupancy grid map data includes:
taking the updated occupancy grid map data as the map data for A* global path planning, taking the current position of the low-speed automatic driving trolley as the path start point and the position designated by the GPS coordinates as the end point, the occupancy grid map data including the current position and end position of the low-speed automatic driving trolley, the road network and the obstacle information;
defining the heuristic function with the Manhattan distance, which measures the sum of the horizontal and vertical distances from the current node to the target node;
initializing the data structures: creating an open list to store the nodes to be explored and a closed list to store the nodes already explored, adding the path start point to the open list, and setting its estimated cost value to 0;
in each iteration, selecting the node with the smallest cost value from the open list and moving it into the closed list to mark it as explored, computing the cost values of its neighboring nodes and, if they are not in the closed list, adding them to the open list and updating their parent-node information to record the best path reaching each node;
each time a node is added to the open list, checking whether the target node has been reached; if so, a path has been found and the search can end;
after the target node is found, backtracking through the parent-node information from the target node to the start point to obtain the complete path;
optimizing the found path by smoothing it with a Bezier curve to reduce sharp turns and acute angles, removing redundant nodes by merging nodes to shorten the path, and outputting a series of path points for navigating the low-speed automatic driving trolley.
Optionally, the step of determining the driving path of the low-speed automatic driving trolley according to the path point of the low-speed automatic driving trolley based on the TEB algorithm includes:
Discretizing the path points for navigating the low speed autonomous vehicle into a series of points and then connecting the points to form a preliminary elastic band;
Adjusting the preliminary elastic band according to a path optimization target based on a preset path optimization algorithm to determine an optimized elastic band, wherein the path optimization target comprises one or more of minimizing the length of the elastic band, minimizing the steering angle of the low-speed automatic driving trolley and maximizing the distance between the elastic band and an obstacle;
And taking the optimized elastic belt as a local path of the low-speed automatic driving trolley, and driving the low-speed automatic driving trolley to run according to the local path.
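While the embodiment relies on the TEB planner for the local path, the following Python sketch only illustrates the elastic-band idea behind the optimization targets listed above, and is not the patent's implementation: interior waypoints are iteratively pulled toward the midpoint of their neighbors (shortening the band and softening turns) and pushed away from obstacles that come within a clearance distance. The function name relax_elastic_band and all parameters are illustrative assumptions.

import numpy as np

def relax_elastic_band(waypoints, obstacles, clearance=1.0,
                       k_smooth=0.5, k_obs=0.3, iterations=100):
    """Toy elastic-band relaxation over 2D waypoints.

    Each interior point is pulled toward the midpoint of its neighbors
    (shortens the band, reduces steering) and pushed away from any
    obstacle closer than `clearance`.
    """
    band = np.asarray(waypoints, dtype=float).copy()
    obstacles = np.asarray(obstacles, dtype=float)
    for _ in range(iterations):
        for i in range(1, len(band) - 1):          # endpoints stay fixed
            midpoint = 0.5 * (band[i - 1] + band[i + 1])
            band[i] += k_smooth * (midpoint - band[i])
            if len(obstacles):
                d = np.linalg.norm(obstacles - band[i], axis=1)
                j = np.argmin(d)
                if 1e-9 < d[j] < clearance:        # too close: push away
                    band[i] += k_obs * (band[i] - obstacles[j]) / d[j]
    return band

if __name__ == "__main__":
    path = [(0, 0), (1, 0.8), (2, -0.5), (3, 0.6), (4, 0)]
    obs = [(2.0, 0.0)]
    print(relax_elastic_band(path, obs).round(2))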
Optionally, the low-speed automatic driving trolley comprises one or more of an unmanned express delivery vehicle and an intelligent transport vehicle.
A low-speed automatic driving cart control device according to another object of the present application includes:
The dynamic target tracking module is configured to acquire road condition image frames from all angles of the low-speed automatic driving trolley together with occupancy grid map data, and to calculate the confidence of each dynamic target detection box between the current road condition image frame and the previous one by using an IoU matching algorithm based on the position of the moving object and a re-identification matching algorithm based on appearance features, so as to match and track the dynamic target position information across the current and previous road condition image frames;
the drivable pavement determining module is configured to infer surface normal information from the road condition image frames, extract feature mapping from the RGB image and the inferred surface normal information respectively by adopting an RGB encoder and a surface normal encoder, and determine a drivable pavement area according to the fused RGB and surface normal information by adopting a pre-trained semantic road segmentation model;
the vehicle position determining module is used for determining wheel type odometer data and GPS data, and fusing the wheel type odometer data and the GPS data based on a preset extended Kalman filtering algorithm to determine vehicle position information of the low-speed automatic driving trolley;
A travel path determining module configured to update the occupancy grid map data according to the dynamic target position information, the drivable road surface area and the vehicle position information, determine path points for navigating the low-speed automatic driving trolley from the updated occupancy grid map data based on the A* algorithm, and determine the travel path of the low-speed automatic driving trolley from those path points based on the TEB algorithm;
And the automatic driving control module is used for responding to the control instruction of the low-speed automatic driving trolley and driving the low-speed automatic driving trolley to run according to the running path so as to complete the control of the low-speed automatic driving trolley.
An electronic device adapted to another object of the application comprises a central processor and a memory, the central processor being configured to invoke and run a computer program stored in the memory so as to perform the steps of the low-speed automatic driving trolley control method of the application.
A computer-readable storage medium adapted to another object of the present application stores, in the form of computer-readable instructions, a computer program implementing the low-speed automatic driving trolley control method; when the program is invoked by a computer, it performs the steps comprised in the method.
Compared with the prior art, and in view of the problems that prior-art autopilot systems are usually carried on high-performance development boards with high power consumption, and that the scattered arrangement of the sensors requires additional time alignment of the acquired information and thus lengthens information processing, the application provides the following beneficial effects:
First, portability and universality are greatly improved. The hardware architecture is based on a mainstream smartphone, the vehicle body needs no additional retrofitting, and a user can give the vehicle an automatic driving function simply by connecting the smartphone to the vehicle.
Second, the application does not affect the driving-range experience of the vehicle: the vehicle does not need to be modified and no additional sensors need to be installed. Only part of the sensor equipment of the smartphone is used, the collected information is processed by the low-power processor of the smartphone, and the system can run normally for a long time on the smartphone's own battery, without imposing any extra load on the vehicle's power supply.
Third, the application greatly reduces the cost to the user. The user does not have to bear vehicle retrofitting costs such as the currently mainstream lidar, millimeter-wave radar and GPS sensors; the complete functionality of the application can be experienced with an everyday smartphone and an ordinary vehicle.
Furthermore, the application can build an environment map by acquiring road condition image frames and occupancy grid map data. This fully accounts for the obstacles in the environment and keeps the driving path free of obstacles, so that collisions and accidents during transportation, and the resulting loss of transport efficiency, are avoided. Ensuring an obstacle-free driving path prevents accident damage to people and property and avoids the time wasted by accidents, so that transport efficiency is improved while safety is guaranteed.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is an exemplary network architecture employed by the low speed autopilot control method of the present application;
FIG. 2 is a flow chart of a method for controlling a low-speed autonomous driving cart according to an embodiment of the present application;
FIG. 3 is a diagram illustrating the overall configuration of a Bluetooth pairing of a smart phone to a designated vehicle in an embodiment of the present application;
FIG. 4 is a schematic block diagram of a low-speed autopilot control apparatus in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, "client," "terminal device," and "terminal device" are understood by those skilled in the art to include both devices that include only wireless signal receivers without transmitting capabilities and devices that include receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such devices may include cellular or other communication devices such as Personal computers, tablet computers, cellular or other communication devices having a single-wire or multi-wire display or no multi-wire display, PCS (Personal Communications Service, personal communication system) which may combine voice, data processing, facsimile and/or data communication capabilities, PDA (Personal DIGITAL ASSISTANT ) which may include a radio frequency receiver, pager, internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System ) receiver, conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, "client," "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion, at any other location(s) on earth and/or in space. As used herein, a "client," "terminal device," or "terminal device" may also be a communication terminal, an internet terminal, or a music/video playing terminal, for example, may be a PDA, a MID (Mobile INTERNET DEVICE ), and/or a Mobile phone with a music/video playing function, or may also be a device such as a smart tv, a set top box, or the like.
The application refers to hardware such as a server, a client, a service node, and the like, which essentially is an electronic device with personal computer and other functions, and is a hardware device with necessary components disclosed by von neumann principles such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, and the like, wherein a computer program is stored in the memory, and the central processing unit calls the program stored in the memory to run, executes instructions in the program, and interacts with the input and output devices, thereby completing specific functions.
It should be noted that the concept of the present application, called "server", is equally applicable to the case of server clusters. The servers should be logically partitioned, physically separate from each other but interface-callable, or integrated into a physical computer or group of computers, according to network deployment principles understood by those skilled in the art. Those skilled in the art will appreciate this variation and should not be construed as limiting the implementation of the network deployment approach of the present application.
One or more technical features of the present application, unless specified in the clear, may be deployed either on a server for implementation and the client remotely invokes an online service interface provided by the acquisition server for implementation of the access, or may be deployed and run directly on the client for implementation of the access.
The neural network model cited or possibly cited in the application can be deployed on a remote server and can be used for implementing remote call on a client, or can be deployed on a client with sufficient equipment capability for direct call, unless specified by plaintext, and in some embodiments, when the neural network model runs on the client, the corresponding intelligence can be obtained through migration learning so as to reduce the requirement on the running resources of the hardware of the client and avoid excessively occupying the running resources of the hardware of the client.
The various data related to the present application, unless specified in the plain text, may be stored either remotely in a server or in a local terminal device, as long as it is suitable for being invoked by the technical solution of the present application.
It will be appreciated by those skilled in the art that the various methods of the application, although described based on the same concepts as one another in common, may be performed independently of one another unless otherwise indicated. Similarly, for the various embodiments disclosed herein, all concepts described herein are presented based on the same general inventive concept, and thus, concepts described herein with respect to the same general inventive concept, and concepts that are merely convenient and appropriately modified, although different, should be interpreted as equivalents.
The various embodiments of the present application to be disclosed herein, unless the plain text indicates a mutually exclusive relationship with each other, the technical features related to the various embodiments may be cross-combined to flexibly construct a new embodiment, so long as the combination does not depart from the inventive spirit of the present application and can satisfy the needs in the art or solve the deficiencies in the prior art. This variant will be known to the person skilled in the art.
The computing resources of smartphones have increased significantly over the past few years, which has enabled them to carry more and more complex applications, including deep learning applications. Deep learning is a machine learning technique that requires substantial computational resources for model training and inference. With the continual growth and optimization of smartphone computing resources, smartphones are now capable of carrying deep learning applications.
Along with the continuous development of technology, the functions and performances of the smart phone are greatly improved. Among these, the application of multiple sensors is one of the keys that smart phones can achieve more functionality and provide a better user experience. The multisensors of a smart phone include a variety of different types of sensors, such as vision sensors, lidar, GPS, etc. The sensors can sense the environment and the state around the mobile phone, collect various data, analyze and process the data through algorithms and data processing technology, and provide more accurate and reliable information for the mobile phone.
Visual perception plays a key role in autopilot, and by acquiring rich color and texture information, an autopilot vehicle can quickly and accurately identify and locate objects on a road, such as vehicles, pedestrians, road markings, and the like. Meanwhile, the vision sensor can establish an environment model around the vehicle to help the vehicle to conduct path planning and obstacle avoidance. In addition, the visual perception can enable the vehicle to understand dynamic changes in traffic scenes, and tasks such as obstacle avoidance, road recognition and the like are performed. In order to improve accuracy and reliability of visual perception, new techniques such as deep learning algorithms are used therein to process complex image information and extract useful features.
High-precision fusion positioning is a solution for improving positioning precision by combining different sensors and positioning technologies. The system can fuse data from different sources, including GPS, inertial Measurement Unit (IMU), wheel speed sensor, laser radar (LiDAR), image sensor, etc., to provide accurate, reliable and comprehensive position information, and help the vehicle to realize safe and efficient automatic driving in various scenes.
The core system of the low-speed unmanned vehicle comprises an automatic driving system, a sensor system, a navigation system and the like. The automatic driving system can control the running speed, direction, braking, etc. of the vehicle according to the position and surrounding information of the vehicle. The sensor system comprises various sensors, such as a laser radar, a camera, ultrasonic waves and the like, and is used for sensing the surrounding environment and providing functions of positioning, navigation, obstacle avoidance and the like of the vehicle. The navigation system can provide route planning and navigation for the vehicle, so that the vehicle can be ensured to run autonomously according to a preset route.
Referring to fig. 1, the low-speed automatic driving trolley control method of the present application may be implemented by a low-speed automatic driving trolley control system running on a mobile terminal: the front camera, rear camera and GPS of the smartphone, together with the wheel odometer of the vehicle, collect data simultaneously and send the acquired data to the central processor of the smartphone for processing.
The road condition image frames from the front, rear and other angles are fed into two neural networks to obtain, respectively, three-dimensional tracking information of the obstacles around the vehicle (such as people and vehicles) and information about the road surfaces the vehicle can safely traverse (normal lanes, park roads, indoor floors and the like).
The GPS positioning information and the vehicle motion information provided by the wheel odometer are fused through Kalman filtering to obtain real-time motion and positioning information of the vehicle. All of the obtained information is then used together for trajectory planning, providing route planning and navigation for the vehicle so that it can drive autonomously along a preset route and safely avoid the obstacles it encounters.
The whole low-speed unmanned vehicle system is well packaged and can readily be adapted to mainstream smartphones; in the end, a user only needs to connect the phone to the unmanned vehicle to give it an automatic driving function.
With reference to the above exemplary scenario and referring to fig. 2, in one embodiment, the low-speed autopilot control method of the present application includes:
Step S10, acquiring road condition image frames from all angles of the low-speed automatic driving trolley together with occupancy grid map data, and calculating the confidence of each dynamic target detection box between the current road condition image frame and the previous one by using an intersection-over-union (IoU) matching algorithm based on the position of the moving object and a re-identification matching algorithm based on appearance features, so as to match and track the dynamic target position information across the current and previous road condition image frames;
The mobile terminal, which may be a smartphone, a computer or the like, acquires road condition image frames from all angles of the low-speed automatic driving trolley together with occupancy grid map data. The road condition image frames include image frames of static obstacles, dynamic targets, road surfaces and so on. An IoU matching algorithm based on the position of the moving object and a re-identification matching algorithm based on appearance features are used to calculate the confidence of each dynamic target detection box between the current road condition image frame and the previous one, so as to match and track the dynamic target position information across the two frames. The low-speed automatic driving trolley includes one or more of an unmanned express delivery vehicle and an intelligent transport vehicle.
Specifically, the step of using the IoU matching algorithm based on the position of the moving object and the re-identification matching algorithm based on appearance features to calculate the confidence of each dynamic target detection box between the current road condition image frame and the previous one, so as to match and track the dynamic target position information across the two frames, includes the following steps:
step S101, constructing a three-dimensional model corresponding to each dynamic target, and obtaining the motion state information of each dynamic target relative to the low-speed automatic driving trolley;
Step S103, determining the observed value corresponding to each dynamic target obtained by a monocular camera, estimating the state information predicted value corresponding to each dynamic target in the current road condition image frame according to the motion state information of the low-speed automatic driving trolley in the preset number of road condition image frames by adopting a Kalman filtering algorithm, and updating the optimal estimated value of the current road condition image frame based on the observed value corresponding to each dynamic target and the state information predicted value corresponding to each dynamic target to serve as the latest state information of each dynamic target;
Step S105, based on the latest state information of each dynamic target, weighting the confidences obtained from the IoU matching algorithm based on the position of the moving object and from the re-identification matching algorithm based on appearance features, so as to perform target matching and tracking between the current road condition image frame and the previous one.
More specifically, a pre-trained target detection model performs target detection on dynamic targets that are common on real roads, such as pedestrians and cars; Kalman filtering is used to make the trajectory detection more robust; the moving targets in the two-dimensional image are projected into three-dimensional space; and target matching between the current road condition image frame and the previous one is performed using the confidence obtained by weighting the intersection-over-union (IoU) matching algorithm based on the position of the moving object and the re-identification matching algorithm based on appearance features. This approach reduces false detections and missed detections and effectively tracks objects undergoing nonlinear motion in three dimensions. The specific steps are as follows:
(1) With five three-dimensional box detection heads and five auxiliary training heads, the monocular camera also gains the ability to recover a three-dimensional model through auxiliary learning. The three-dimensional box detection heads mainly predict the size of the three-dimensional box, the rough depth of the center point of the two-dimensional box and the coordinate offset of the center, while the auxiliary heads mainly quantify the errors of the center point of the two-dimensional box and of the 8 vertices.
(2) A three-dimensional model of each dynamic target is established: using the regressed information from step (1), the dynamic targets are reconstructed in three dimensions to obtain the motion state of each dynamic target relative to the trolley. Establishing the three-dimensional model comprises the following steps:
The camera is mounted on the vehicle and moves with it. First, a camera coordinate system is established with the camera optical center as its origin; following the right-hand rule, the X axis of the camera coordinate system is defined to be parallel to the horizontal plane, perpendicular to the direction of travel of the trolley and positive to the right, the Y axis is parallel to the horizontal plane and positive in the direction of travel, and the Z axis is perpendicular to the horizontal plane. In this way the three-dimensional coordinates of each target relative to the camera can be obtained, and hence the motion trajectory of each target, i.e. its trajectory relative to the moving camera. Second, the upper-left corner of the image acquired by the camera is defined as the origin of the pixel u-v coordinate system, with the u axis pointing horizontally to the right and the v axis pointing vertically downward; an image x-y coordinate system is defined with its origin at the principal point (u0, v0) and its x and y axes parallel to the u and v axes respectively, and the camera intrinsics are known. Together these are used to convert the pixel coordinates of a target into camera coordinates. The transformation between the u-v coordinate system and the x-y coordinate system satisfies formula (1).
According to the properties of central projection, the coordinates of the pixel points satisfy an equal-proportion relation, so the camera coordinate system and the x-y coordinate system satisfy formula (2).
Combining formulas (1) and (2) gives the conversion relation between pixel coordinates and camera coordinates, formula (3).
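Formulas (1) to (3) are given in the original filing as images and are not reproduced here. The Python sketch below only illustrates the standard pinhole-model conversion between pixel coordinates and camera coordinates that such formulas typically encode; the intrinsics fx, fy, u0, v0 are assumed values, and the usual optical-axis-forward convention is used, which differs from the vehicle-oriented axes defined above.

import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, u0, v0):
    """Back-project a pixel (u, v) with known depth along the optical
    axis into camera-frame coordinates using the pinhole model."""
    x = (u - u0) * depth / fx
    y = (v - v0) * depth / fy
    return np.array([x, y, depth])

def camera_to_pixel(point_cam, fx, fy, u0, v0):
    """Project a camera-frame 3D point back onto the image plane."""
    x, y, z = point_cam
    return np.array([fx * x / z + u0, fy * y / z + v0])

if __name__ == "__main__":
    K = dict(fx=700.0, fy=700.0, u0=320.0, v0=240.0)   # illustrative intrinsics
    p = pixel_to_camera(400, 260, depth=5.0, **K)
    print(p, camera_to_pixel(p, **K))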
(3) Kalman filtering is used to estimate the predicted state information of each target in the current road condition image frame from the state information of the targets in frames 1 to t-1; the observation of each target is obtained by the monocular camera, and the optimal estimate for the current road condition image frame is updated from the observation and the predicted state information of each dynamic target, serving as the latest state information of each dynamic target.
(4) With the latest state information of the targets in the current road condition image frame obtained in step (3), the targets of the current and previous road condition image frames are matched through joint data association of the IoU similarity and Re-ID re-identification, achieving matching and tracking.
The IoU similarity is computed by projecting the cuboids of the targets in the previous and current road condition image frames onto the x-y plane and calculating the overlapping area of the two detection boxes in that plane.
The overlapping area is then multiplied by the overlapping height to obtain the overlapping volume, from which the three-dimensional IoU similarity is calculated in the same way and denoted d^(1)(i, j).
A deep appearance feature descriptor is designed for each target: a 128-dimensional unit feature vector produced by a Re-ID network trained offline on a pedestrian re-identification dataset. Each tracker keeps the last 100 appearance feature descriptors R_i that were successfully associated with its detection boxes and computes the minimum cosine distance between them and the descriptor r_j of detection box j:
d^(2)(i, j) = min{ 1 - r_j^T r_k^(i) | r_k^(i) ∈ R_i }.
A matching association is then established by combining the two metrics with a weighted sum, where the hyperparameter λ controls the influence of each metric on the matching:
c_{i,j} = λ·d^(1)(i, j) + (1 - λ)·d^(2)(i, j). (6)
In this way, based on the latest state information of each dynamic target, the confidences obtained from the position-based IoU matching and the appearance-based re-identification matching are weighted to perform target matching and tracking between the current road condition image frame and the previous one, as sketched below.
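The following Python sketch is one plausible reading of the weighted association in formula (6), not the patent's implementation: it builds a cost matrix from an axis-aligned 3D IoU term (used as 1 - IoU so that both terms are distances) and the minimum cosine appearance distance, then solves the assignment with SciPy's Hungarian solver. All function and field names are illustrative assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_3d(box_a, box_b):
    """Axis-aligned 3D IoU. Boxes are (cx, cy, cz, w, l, h)."""
    def overlap(c1, s1, c2, s2):
        lo = max(c1 - s1 / 2, c2 - s2 / 2)
        hi = min(c1 + s1 / 2, c2 + s2 / 2)
        return max(0.0, hi - lo)
    inter = (overlap(box_a[0], box_a[3], box_b[0], box_b[3]) *
             overlap(box_a[1], box_a[4], box_b[1], box_b[4]) *
             overlap(box_a[2], box_a[5], box_b[2], box_b[5]))
    vol_a = box_a[3] * box_a[4] * box_a[5]
    vol_b = box_b[3] * box_b[4] * box_b[5]
    return inter / (vol_a + vol_b - inter + 1e-9)

def appearance_distance(track_features, det_feature):
    """Minimum cosine distance between a detection descriptor and the
    track's stored unit-norm appearance descriptors."""
    return float(np.min(1.0 - track_features @ det_feature))

def associate(tracks, detections, lam=0.6):
    """Weighted cost c = lam*(1 - IoU) + (1 - lam)*d_app, solved with the
    Hungarian algorithm."""
    cost = np.zeros((len(tracks), len(detections)))
    for i, trk in enumerate(tracks):
        for j, det in enumerate(detections):
            d1 = 1.0 - iou_3d(trk["box"], det["box"])
            d2 = appearance_distance(trk["features"], det["feature"])
            cost[i, j] = lam * d1 + (1 - lam) * d2
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = rng.normal(size=128)
    f /= np.linalg.norm(f)
    trk = {"box": (0, 0, 0, 2, 4, 1.5), "features": f[None, :]}
    det = {"box": (0.2, 0.1, 0, 2, 4, 1.5), "feature": f}
    print(associate([trk], [det]))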
Step S20, inferring surface normal information from the road condition image frame, extracting feature maps from the RGB image and the inferred surface normal information with an RGB encoder and a surface normal encoder, and determining the drivable road surface area from the fused RGB and surface normal information with a pre-trained semantic road segmentation model;
After matching and tracking the dynamic target position information of the current road condition image frame and the previous road condition image frame, deducing surface normal information from the road condition image frame, respectively extracting feature mapping from the RGB image and the deduced surface normal information by adopting an RGB encoder and a surface normal encoder, and determining a travelable road surface area according to the fused RGB and surface normal information by adopting a pre-trained semantic road segmentation model;
Specifically, the method includes the steps of deducing surface normal information from the road condition image frame, extracting feature mapping from the RGB image and the deduced surface normal information by adopting an RGB encoder and a surface normal encoder, and determining a drivable road surface area according to the fused RGB and surface normal information by adopting a pre-trained semantic road segmentation model, wherein the method comprises the following steps:
Step S201, deducing surface normal information from a depth map corresponding to the road condition image frame, acquiring three-dimensional information of each pixel according to CCS coordinates, calculating horizontal gradients and vertical gradients based on Sobel operators to determine an x-direction component and a y-direction component of the surface normal information, and selecting 8 adjacent points to calculate a normal vector component in the estimated z-direction by adopting a K-dimensional space data structure;
Step S203, extracting feature maps from the RGB image and the surface normal information with an RGB encoder and a surface normal encoder respectively, fusing the extracted feature maps hierarchically by element-wise summation, and fusing them again in the decoder through densely connected skip connections, so as to determine the fused RGB and surface normal information;
and step S205, determining a drivable pavement area according to the fused RGB and surface normal information by adopting a pre-trained semantic road segmentation model.
Drivable road region detection fuses surface normal information in a CNN: surface normal information is inferred from dense disparity/depth images, and RGB and surface normal information are merged for semantic road segmentation using a deep learning model. The specific steps are as follows.
Surface normal information is inferred from the depth/disparity image. For the depth map corresponding to the road condition image frame, the three-dimensional information of each element (pixel) is obtained from its camera coordinate system (CCS) coordinates, and the horizontal and vertical gradients are computed with the Sobel operator to obtain the x-direction and y-direction components of the surface normal. Meanwhile, a K-dimensional space data structure (k-d tree) is used to select the 8 neighboring points for estimating the normal vector component in the z direction, giving the following expressions:
n_x = -d·f_x·g_x,  n_y = -d·f_y·g_y, (7)
RGB and surface normal information are then fused to realize drivable-area detection: an RGB encoder and a surface normal encoder extract feature maps from the RGB image and the surface normal information respectively, the extracted feature maps are fused hierarchically by element-wise summation and fused again in the decoder through densely connected skip connections to determine the fused RGB and surface normal information, and a pre-trained semantic road segmentation model determines the drivable road surface area from the fused RGB and surface normal information.
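The Python sketch below illustrates one common way to obtain per-pixel surface normals from a depth map along the lines described above: pixels are back-projected into camera coordinates and Sobel filters provide the horizontal and vertical tangent vectors whose cross product gives the normal. It does not reproduce formula (7) or the k-d-tree z-component estimation of the embodiment; the intrinsics and image size are assumed.

import numpy as np
from scipy.ndimage import sobel

def normals_from_depth(depth, fx, fy, u0, v0):
    """Estimate per-pixel surface normals from a depth map.

    Pixels are back-projected to camera coordinates; Sobel filters then
    give the horizontal and vertical 3D tangent vectors, whose cross
    product is the (unnormalized) surface normal.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - u0) * depth / fx
    y = (v - v0) * depth / fy
    pts = np.stack([x, y, depth], axis=-1)            # H x W x 3 camera coords

    du = np.stack([sobel(pts[..., c], axis=1) for c in range(3)], axis=-1)
    dv = np.stack([sobel(pts[..., c], axis=0) for c in range(3)], axis=-1)
    n = np.cross(du, dv)                              # tangent cross product
    n /= (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9)
    return n

if __name__ == "__main__":
    rows = np.arange(120, dtype=float)[:, None]
    z = 3.0 + 0.01 * np.repeat(rows, 160, axis=1)     # plane tilted along the v axis
    n = normals_from_depth(z, fx=300.0, fy=300.0, u0=80.0, v0=60.0)
    print(n[60, 80].round(3))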
Step S30, determining wheel type odometer data and GPS data, and fusing the wheel type odometer data and the GPS data based on a preset extended Kalman filtering algorithm to determine vehicle position information of the low-speed automatic driving trolley;
After a pre-trained semantic road segmentation model is adopted to determine a drivable road surface area according to the fused RGB and surface normal information, wheel type odometer data and GPS data are determined, and the wheel type odometer data and the GPS data are fused based on a preset extended Kalman filtering algorithm so as to determine vehicle position information of a low-speed automatic driving trolley;
Specifically, the step of fusing the wheel odometer data with GPS data based on a preset extended kalman filter algorithm to determine vehicle position information of the low-speed autonomous driving trolley includes:
The GPS signal is composed of a carrier wave, a ranging code and a navigation data code. The carrier is two carrier frequencies of the L-band, and the ranging code includes a C/a code and a P code, which are used to measure the pseudo-range error of the satellite to the observation point. The navigation data code comprises satellite ephemeris, satellite clock correction parameters, ionosphere correction parameters and other information, and is used for calculating the position of a space satellite and correcting the pseudo-range error from the satellite to an observation point. These signals work together to enable the GPS to accurately locate the position of the observation point.
From the front-wheel steering angle θ and the rear-wheel linear velocity v fed back by the Ackermann chassis, the two-dimensional coordinates (x, y) and the heading angle φ of the wheel odometry are obtained through the simplified "bicycle model" of the Ackermann kinematic model. The turning radius R is calculated as
R = L / tan(θ),
where R is the turning radius and L is the wheelbase between the front and rear wheels.
The angular velocity ω is calculated as
ω = v / R,
where ω is the angular velocity.
The heading angle φ is calculated as
φ = φ' + ω·Δt,
where φ' is the heading angle obtained in the previous heading-angle calculation and Δt is the interval between two data samples.
The two-dimensional velocity components v_x and v_y are calculated as
v_x = v·cos(φ), v_y = v·sin(φ),
where v_x is the component of the linear velocity along the x axis and v_y is the component along the y axis.
The two-dimensional coordinates (x, y) are calculated as
x = x' + v_x·Δt, y = y' + v_y·Δt,
where x' and y' are the coordinate values along the x and y axes obtained in the previous two-dimensional coordinate calculation.
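A minimal Python sketch of one wheel-odometry update under the bicycle-model relations above, assuming the standard R = L/tan(θ) turning radius and a straight-line fallback for near-zero steering angles; the function name and parameter values are illustrative only.

import math

def bicycle_odometry_step(x, y, phi, v, theta, wheelbase, dt):
    """One wheel-odometry update of the simplified Ackermann bicycle model.

    v: rear-wheel linear velocity, theta: front-wheel steering angle,
    phi: current heading angle, dt: sampling interval.
    """
    if abs(theta) > 1e-6:
        turn_radius = wheelbase / math.tan(theta)   # R = L / tan(theta)
        omega = v / turn_radius                     # yaw rate
    else:
        omega = 0.0                                 # driving straight
    phi_new = phi + omega * dt
    vx = v * math.cos(phi_new)
    vy = v * math.sin(phi_new)
    return x + vx * dt, y + vy * dt, phi_new

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    for _ in range(50):                             # 5 s at 10 Hz, gentle left turn
        pose = bicycle_odometry_step(*pose, v=1.0, theta=0.1, wheelbase=1.2, dt=0.1)
    print(tuple(round(p, 3) for p in pose))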
Kalman filtering (KALMAN FILTER) is a highly efficient recursive filter (autoregressive filter) that can estimate the state of a dynamic system from a series of incomplete and noisy measurements. The Kalman filtering will take into account the joint distribution of each measurement quantity at different times, and will then generate an estimate of the unknown variable based on the values of each measurement quantity, and will therefore be more accurate than an estimate based on a single measurement quantity alone. The algorithm program of the Kalman filtering has two steps, namely prediction estimation and update correction:
In the estimating step, kalman filtering produces an estimate of the current state, which also includes uncertainty;
in the update step, as soon as the next measurement is observed (which must contain some degree of error, including random noise), the estimate is corrected by weighted averaging, and the higher the certainty, the higher the measurement weight.
The currently used sensors mainly comprise a wheel type odometer and a GPS positioning module, and because the frequency of the wheel type odometer is higher, the attitude solution of the wheel type odometer is often used as the prediction of track increment, and the steps of fusing the wheel type odometer data and the GPS data based on a preset extended Kalman filtering algorithm are as follows:
(1) The prior estimate X_k^- of the next cycle is predicted according to the formula X_k^- = A·X_{k-1} + B·u_{k-1}, where A is the state transition matrix and X_{k-1} is the estimate from the previous cycle; the initial prior state value X_k^- is the value provided by the sensor.
(2) The prior covariance matrix is updated according to the formula P_k^- = A·P_{k-1}·A^T + Q, where Q is the covariance matrix of the system process noise and P_{k-1} is the posterior covariance matrix of the previous cycle.
(3) The Kalman gain K_k of each cycle is obtained according to the formula K_k = P_k^-·H^T·(H·P_k^-·H^T + R)^{-1}, where H is the observation matrix, which converts the estimate into data of the same type as the observation, and R is the covariance of the measurement noise.
(4) The posterior estimate X_k is calculated according to the formula X_k = X_k^- + K_k·(Z_k - H·X_k^-) from the prior estimate X_k^-, the Kalman gain K_k, the observation Z_k and the observation matrix H of the current cycle; the observation here is the sensor information acquired in each cycle.
(5) The new posterior covariance matrix P_k is updated according to the formula P_k = (I - K_k·H)·P_k^-.
Steps (1) and (2) constitute the prior prediction and steps (3) to (5) the posterior correction; iterating steps (1) to (5) cyclically yields the fused wheel odometer and GPS data.
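The loop of steps (1) to (5) can be illustrated with the following minimal Python sketch, which fuses wheel-odometry increments (prediction) with GPS position fixes (correction). For brevity it uses a linear two-dimensional position state with identity A, B and H matrices rather than the full extended Kalman filter of the embodiment; all class and parameter names are assumptions.

import numpy as np

class OdometryGpsFusion:
    """Minimal linear Kalman filter loop following steps (1)-(5):
    wheel-odometry increments drive the prediction, GPS fixes the update.
    The state is the planar position [x, y]."""

    def __init__(self, x0, process_var=0.05, gps_var=2.0):
        self.X = np.asarray(x0, dtype=float)        # initial value from the sensor
        self.P = np.eye(2)                          # posterior covariance
        self.A = np.eye(2)                          # state transition
        self.B = np.eye(2)                          # odometry increment enters directly
        self.H = np.eye(2)                          # GPS observes position directly
        self.Q = process_var * np.eye(2)            # process-noise covariance
        self.R = gps_var * np.eye(2)                # measurement-noise covariance

    def step(self, odom_delta, gps_xy):
        # (1) prior estimate  X_k^- = A X_{k-1} + B u_{k-1}
        X_prior = self.A @ self.X + self.B @ np.asarray(odom_delta)
        # (2) prior covariance  P_k^- = A P_{k-1} A^T + Q
        P_prior = self.A @ self.P @ self.A.T + self.Q
        # (3) Kalman gain  K_k = P_k^- H^T (H P_k^- H^T + R)^-1
        S = self.H @ P_prior @ self.H.T + self.R
        K = P_prior @ self.H.T @ np.linalg.inv(S)
        # (4) posterior estimate  X_k = X_k^- + K_k (Z_k - H X_k^-)
        self.X = X_prior + K @ (np.asarray(gps_xy) - self.H @ X_prior)
        # (5) posterior covariance  P_k = (I - K_k H) P_k^-
        self.P = (np.eye(2) - K @ self.H) @ P_prior
        return self.X

if __name__ == "__main__":
    kf = OdometryGpsFusion(x0=[0.0, 0.0])
    print(kf.step(odom_delta=[0.5, 0.0], gps_xy=[0.6, 0.1]).round(3))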
Step S40, updating the occupancy grid map data according to the dynamic target position information, the drivable road surface area and the vehicle position information, determining path points for navigating the low-speed automatic driving trolley from the updated occupancy grid map data based on the A* algorithm, and determining the driving path of the low-speed automatic driving trolley from those path points based on the TEB algorithm;
After the vehicle position information of the low-speed automatic driving trolley has been determined, the occupancy grid map data are updated according to the dynamic target position information, the drivable road surface area and the vehicle position information; path points for navigating the low-speed automatic driving trolley are determined from the updated occupancy grid map data based on the A* algorithm; and the driving path of the low-speed automatic driving trolley is determined from those path points based on the TEB algorithm.
In some embodiments, a custom gridOS class is used to manage the map-related variables and functions of the system in a unified way. Its member objects mainly comprise an occupancy grid map variable of type nav_msgs::OccupancyGridPtr, the quaternion and translation vector representing the pose transformation of the camera (vehicle body), the current position of the camera (vehicle body), and the message variables storing the subscribed ROS topic information; its member functions mainly comprise the constructor, the destructor, a map update function, a camera (vehicle body) pose update function and the functions that process the subscribed ROS topic messages. After the main function of the map system starts running, a map system object of the gridOS class is first constructed to handle the various tasks of the map system. Each pointer in the member variables is initialized in the constructor, and the default values of the map variable, such as resolution, width and height, and initial offset and rotation, are set. In the main function the ZED2 camera is started and its hardware parameters are set; the left and right RGB images and the depth map of the camera are obtained and published as ROS topics for 3D target detection, tracking and trajectory prediction, drivable area and lane line detection, and road edge detection. At the same time, the pose transformation information of the camera's visual odometry is obtained and published as a ROS topic for camera (vehicle body) positioning information fusion. In addition, the obstacle point cloud generated by the camera is obtained, voxel-filtered, and only the points within a given range are kept for updating the obstacle information on the map. ROS subscribers and callback functions are set up to obtain, from the corresponding topics, the 3D target tracking and prediction information, the drivable area and lane line information, the road edge information and the fused camera (vehicle body) pose transformation information, and to store them in the corresponding member variables of the map system object.
After the messages of each topic are received, they are processed in the corresponding member functions of the map system object. The coordinates of the 3D targets, the predicted trajectory information, the lane-line information, the road-edge information and the obstacle point cloud are uniformly converted into the vehicle coordinate system. A small window is constructed on the map with the camera position as the origin; all points in the vehicle coordinate system are divided according to the map resolution and projected onto the corresponding window area, and the local state of the map is updated in real time according to the window information. The cost values of the grids corresponding to the 3D targets, the lane-line information, the road-edge information and the obstacle point cloud are set to 100; the cost values of the grids corresponding to the predicted trajectories of the 3D targets are set to numbers between 0 and 100; and the cost values of the grids at the remaining positions are set to 0, indicating that those positions on the map are free of obstacles (unoccupied). After the projection is finished, the grids containing obstacles are inflated, that is, the grids near them are also set as occupied, so as to avoid safety accidents caused by the vehicle driving too close to obstacles along the planned path. Finally, the processed map variable is published through a ROS topic, and other nodes subscribe to this topic to obtain the map information and plan paths.
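As an illustration of the projection and inflation logic just described, the following minimal sketch marks obstacle points given in the vehicle coordinate system on a local grid window centred on the vehicle and inflates the cells around them; the GridMap class, its window size, resolution and inflate_radius are illustrative assumptions, and the ROS publishing and subscription machinery is omitted.

```python
import numpy as np

class GridMap:
    """Minimal local occupancy-grid window centred on the camera/vehicle."""
    def __init__(self, width=200, height=200, resolution=0.05):
        self.resolution = resolution                           # metres per cell
        self.grid = np.zeros((height, width), dtype=np.int8)   # 0 = free (unoccupied)

    def update(self, points_vehicle, cost=100, inflate_radius=2):
        """Project (x, y) points in the vehicle frame into the window, mark their
        cells with the given cost and inflate a square neighbourhood around them."""
        h, w = self.grid.shape
        cx, cy = w // 2, h // 2                                # vehicle at the window centre
        for x, y in points_vehicle:
            col = int(round(x / self.resolution)) + cx
            row = int(round(y / self.resolution)) + cy
            if 0 <= row < h and 0 <= col < w:
                r0, r1 = max(0, row - inflate_radius), min(h, row + inflate_radius + 1)
                c0, c1 = max(0, col - inflate_radius), min(w, col + inflate_radius + 1)
                self.grid[r0:r1, c0:c1] = np.maximum(self.grid[r0:r1, c0:c1], cost)

# Obstacle point clouds, lane lines and road edges would be written with cost 100;
# predicted trajectories could reuse update() with a cost between 0 and 100.
m = GridMap()
m.update([(1.0, 0.5), (1.05, 0.5)], cost=100)
```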
Further, the step of determining path points for navigating the low-speed automatic driving trolley according to the updated occupied grid map data based on the A* algorithm comprises the following steps (a minimal code sketch is given after the step list):
Step S401, taking the updated occupied grid map data as the map data for the global path planning of the A* algorithm, taking the current position of the low-speed automatic driving trolley as the path starting point, and taking the position designated by GPS coordinates as the end point, wherein the occupied grid map data comprise the current position and end position of the low-speed automatic driving trolley, the road network and the obstacle information;
Step S402, adopting a heuristic function defined by the Manhattan distance, which measures the sum of the distances from the current node to the target node in the horizontal and vertical directions;
Step S403, initializing the data structures: creating an open list to store the nodes to be explored and a closed list to store the nodes that have been explored, adding the path starting point to the open list, and setting its estimated cost value to 0;
Step S404, in each iteration, selecting the node with the minimum cost value from the open list, moving the selected node into the closed list to mark it as explored, calculating the cost values of the selected node's adjacent nodes and adding them to the open list if they are not in the closed list, and updating the parent node information to record the best path reaching each node;
Step S405, each time a node is added to the open list, checking whether the target node has been found; if the target node is found, a path has been found and the search may be ended;
Step S406, after the target node is found, tracing back the parent node information from the target node all the way to the starting point to obtain the complete path;
Step S407, optimizing the searched path: smoothing the path with a Bezier curve to reduce sharp turns and acute angles, removing redundant nodes by merging nodes to reduce the path length, and outputting a series of path points for navigating the low-speed automatic driving trolley.
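A minimal sketch of an A* search over an occupancy grid with the Manhattan heuristic, mirroring steps S401 to S406, is given below; the grid representation, the 4-connected neighbourhood, the unit step cost and the obstacle-cost threshold of 100 are illustrative assumptions, and the Bezier smoothing of step S407 is sketched separately later in this description.

```python
import heapq

def astar(grid, start, goal, obstacle_cost=100):
    """A* on a 2D occupancy grid (list of lists of cost values); start and goal
    are (row, col) cells. Returns the path as a list of cells, or [] if none."""
    def h(n):                                    # Manhattan heuristic
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_list = [(h(start), 0, start)]           # entries are (f, g, node)
    parent, g_best, closed = {start: None}, {start: 0}, set()
    while open_list:
        f, g, node = heapq.heappop(open_list)    # node with the smallest f value
        if node in closed:
            continue
        if node == goal:                         # target found: backtrack parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])):
                continue
            if grid[nr][nc] >= obstacle_cost or (nr, nc) in closed:
                continue
            ng = g + 1                           # unit cost per grid step
            if ng < g_best.get((nr, nc), float("inf")):
                g_best[(nr, nc)] = ng
                parent[(nr, nc)] = node
                heapq.heappush(open_list, (ng + h((nr, nc)), ng, (nr, nc)))
    return []

grid = [[0, 0, 0], [100, 100, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```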
Further, the step of determining the driving path of the low-speed automatic driving trolley according to the path point of the low-speed automatic driving trolley based on a TEB algorithm comprises the following steps:
Step S4001, discretizing the path points for navigating the low-speed automatic driving trolley into a series of points, and then connecting the points to form a primary elastic belt;
Step S4002, adjusting the preliminary elastic band according to a path optimization objective based on a preset path optimization algorithm to determine an optimized elastic band, wherein the path optimization objective includes one or more of minimizing the length of the elastic band, minimizing the steering angle of the low-speed autopilot cart, and maximizing the distance between the elastic band and an obstacle;
Step S4003, taking the optimized elastic band as the local path of the low-speed automatic driving trolley, and driving the low-speed automatic driving trolley to run according to the local path.
Global path planning for the unmanned vehicle is a complex process that needs to comprehensively consider the map data, the path planning algorithm and the vehicle dynamics. The path planning algorithm is selected with the following factors in mind: ensuring that the path does not intersect obstacles, avoiding collisions, shortening the path length as far as possible to reduce the distance travelled, and optimizing the path to minimize the energy consumption of the vehicle.
Considering the above factors comprehensively, the A* algorithm is selected for the path search. The following are the steps of performing global path planning with the A* algorithm:
A. Preparing the map and environment data:
The obtained occupied grid map is used as the map data for path planning, which include the current position and end position of the trolley, the road network, the obstacle information and the like. The current position of the trolley is set as the path starting point, and the position specified by GPS coordinates is set as the end point; the user clicks on the map or enters the latitude and longitude coordinates of the target location. The path is then searched for on the acquired occupied grid map.
B. Defining a heuristic function:
A heuristic function (estimation function) h(n) is defined to estimate the cost from node n to the target node. The Manhattan distance is used as the heuristic, i.e. h(n) is the sum of the horizontal and vertical distances from the current node to the target node, h(n) = |x_n - x_goal| + |y_n - y_goal|.
C. Initializing the data structures and adding the start point to the open list:
An open list is created to store the nodes to be explored, and a closed list is created to store the nodes that have been explored. The starting point is added to the open list and its estimated cost (f value) is set to 0.
D. Loop search:
In each iteration, the node with the smallest f value is selected from the open list and moved to the closed list, indicating that it has been explored. The f values of the selected node's neighbors are calculated, and those neighbors are added to the open list if they are not already in the closed list. The parent node information is updated to record the best path to reach these nodes.
E. Checking the target node:
Each time a node is added to the open list, it is checked whether the target node has been found. If the target node is found, the path has been found and the search may be ended.
F. Path backtracking:
After the target node is found, the parent node information is traced back from the target node all the way to the starting point to obtain the complete path.
G. Path optimization:
The searched path is optimized: the path is smoothed with a Bezier curve to reduce sharp turns and acute angles, which improves path quality and reduces energy consumption, and redundant nodes are removed by merging nodes to reduce the path length (a sketch of the Bezier smoothing is given after these steps).
H. Output path:
Finally, a series of path points is output that can be used to navigate the autonomous car.
I. Real-time updating:
Paths are searched cyclically in the occupancy grid map that is updated in real time. While the trolley is moving, it is judged whether the next 20 nodes ahead of the trolley's current position are still passable; if they are, the previously planned path continues to be used, otherwise the path is searched again, which reduces the path planning time.
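As referenced in step G, the Bezier smoothing of the searched path can be sketched as follows; treating the waypoints as control points of a single Bezier curve evaluated with de Casteljau's algorithm is an illustrative simplification, since a practical implementation would usually smooth the path piecewise over short groups of waypoints.

```python
import numpy as np

def bezier_smooth(waypoints, samples=50):
    """Smooth a polyline by evaluating the Bezier curve whose control points
    are the waypoints themselves (repeated linear interpolation, de Casteljau)."""
    pts = np.asarray(waypoints, dtype=float)
    smoothed = []
    for t in np.linspace(0.0, 1.0, samples):
        p = pts.copy()
        while len(p) > 1:                         # collapse control points pairwise
            p = (1.0 - t) * p[:-1] + t * p[1:]
        smoothed.append(tuple(p[0]))
    return smoothed

# A sharp right-angle corner in the A* path becomes a gentle curve.
print(bezier_smooth([(0, 0), (0, 5), (5, 5)], samples=5))
```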
The local path planning is the link between the global path and the actual chassis control: it acquires the path point information published by the global path planning and, after processing, outputs speed and angular speed information to the chassis. Its role is to generate a specific, feasible, obstacle-avoiding path that follows the global path points, and to avoid obstacles in real time when the global path cannot be followed. The local path planning algorithm adopted by the application is the TEB (Timed Elastic Band) algorithm. TEB is a method for local path planning of the low-speed autopilot. Its main idea is to regard the path of the low-speed autopilot cart as an elastic band and to adjust this band through an optimization algorithm so that it avoids obstacles and stays as close as possible to the global path.
The algorithm comprises the following steps:
A. Initialization: first, the global path is discretized into a series of points, which are then connected to form a preliminary elastic band.
B. Optimization: the elastic band is then adjusted by an optimization algorithm. The optimization objectives include minimizing the length of the elastic band, minimizing the steering angle of the low-speed autopilot cart, and maximizing the distance of the elastic band from obstacles.
C. Finally, the optimized elastic band is taken as the local path of the low-speed automatic driving trolley, and the movement of the trolley is controlled according to this path.
In some embodiments, some important functions of the TEB algorithm are as follows (a combined cost sketch is given after this list):
1. Time elastic band:
l(t) ≤ x(t) ≤ u(t), x(t) ∈ R^n,
wherein t represents time, x(t) represents the state of the low-speed automatic driving trolley at time t, R^n represents an n-dimensional Euclidean space, and l(t) and u(t) represent the lower and upper bounds, respectively, of the time elastic band at time t. The time elastic band is used to describe the dynamic constraints of the low-speed automatic driving trolley during motion and to ensure that its motion state stays within a certain range.
2. Obstacle cost function:
C_obs(x) = Σ_{i=1}^{n_obs} ρ(d_i(x)),
where x represents the state of the low-speed autonomous driving cart, n_obs represents the number of obstacles, d_i(x) represents the distance between the low-speed autonomous driving cart and the i-th obstacle, and ρ(·) represents the obstacle cost function. The obstacle cost function is used to describe the obstacle-avoidance constraint between the low-speed automatic driving trolley and the obstacles and to ensure that the motion state of the trolley does not lead to a collision with an obstacle.
3. Path cost function:
J_path = ∫_{t_0}^{t_f} [ α·C_obs(x(t)) + β·‖u(t)‖² ] dt,
where x represents the state of the low-speed autonomous driving trolley, α and β represent the weights of the obstacle cost function and the control cost function, respectively, u(t) represents the control input of the low-speed autonomous driving trolley at time t, and t_0 and t_f represent the start time and end time of the path, respectively.
4. Steering angle:
The steering of the low-speed autopilot should be as smooth as possible, which can be achieved by minimizing the steering angle between adjacent path segments, i.e. minimizing Σ_i angle(x_{i+1} - x_i, x_{i+2} - x_{i+1}), where angle(a, b) is the angle between vectors a and b.
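To show how the terms above interact, the sketch below evaluates a simplified elastic-band objective combining a length term, an obstacle term and a steering-angle term; the hinge-style obstacle penalty, the weights and the name band_cost are assumptions for illustration only, not the full timed-elastic-band formulation used by the planner.

```python
import numpy as np

def band_cost(band, obstacles, safe_dist=1.0, w_len=1.0, w_obs=10.0, w_turn=0.5):
    """Simplified elastic-band objective: prefer a short band that stays far
    from obstacles and has small steering angles between consecutive segments."""
    band = np.asarray(band, dtype=float)
    segs = np.diff(band, axis=0)
    length = np.sum(np.linalg.norm(segs, axis=1))

    obs_cost = 0.0
    for o in np.asarray(obstacles, dtype=float):
        d = np.min(np.linalg.norm(band - o, axis=1))
        obs_cost += max(0.0, safe_dist - d) ** 2            # penalize only when too close

    turn_cost = 0.0
    for a, b in zip(segs[:-1], segs[1:]):
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        turn_cost += np.arccos(np.clip(cosang, -1.0, 1.0))  # angle between segments

    return w_len * length + w_obs * obs_cost + w_turn * turn_cost

# The local planner would perturb the interior band points to reduce this cost.
print(band_cost([(0, 0), (1, 0.2), (2, 0)], obstacles=[(1.0, 0.6)]))
```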
And S50, responding to a control instruction of the low-speed automatic driving trolley, and driving the low-speed automatic driving trolley to run according to the running path so as to complete the control of the low-speed automatic driving trolley.
After determining the running path of the low-speed automatic driving trolley according to the path point of the low-speed automatic driving trolley based on a TEB algorithm, the mobile terminal can respond to the control instruction of the low-speed automatic driving trolley to drive the low-speed automatic driving trolley to run according to the running path so as to complete the control of the low-speed automatic driving trolley.
In some embodiments, referring to fig. 3, a smart phone may be used: custom software downloaded on the phone controls the phone to pair with a specified vehicle through Bluetooth, and Bluetooth transmits a radio wave signal in the 2.4 GHz band whose communication protocol, modulation mode, transmission rate, security mechanism and so on are all compatible with the vehicle hardware. When the vehicle's radio wave receiver receives the signal, it demodulates this special radio wave signal to obtain the original digital signal, verifies it, performs the digital signature operation, and after verification formats the digital data into CAN information. Finally, the CAN messages, in the corresponding message format, are transmitted over the CAN bus of the vehicle and can be read by the various Electronic Control Units (ECUs) within the vehicle to perform the requested operation. The vehicle-mounted system is also provided with a radio wave emitting module, so the process is reversible and vehicle state information is transferred back to the visualization window of the custom software on the mobile phone.
(1) Bluetooth transmitting terminal of mobile phone
A Bluetooth wireless communication protocol is selected. Bluetooth is a wireless technology standard for exchanging data over short distances; it works by establishing a secure, low-power, short-range connection in the 2.4 GHz ISM band. Gaussian Frequency Shift Keying (GFSK) is selected as the modulation mode. Bluetooth's GFSK modulation is a frequency modulation (FM) scheme in which binary data are represented by small frequency deviations.
The basic formula for GFSK modulation can be expressed as:
f(t)=f_c+Δf·g(t),
Where f(t) is the instantaneous frequency, f_c is the carrier frequency, and Δf is the frequency deviation, which determines the difference between the frequencies of logical 1 and 0. g(t) is the baseband data signal processed by a Gaussian filter; the baseband signal refers to the unmodulated original data signal.
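To illustrate the formula, the sketch below maps bits to plus or minus one, shapes them with a Gaussian pulse to obtain g(t), and integrates the instantaneous frequency f_c + Δf·g(t) into a phase; the sample rate, carrier frequency, frequency deviation and bandwidth-time product are arbitrary toy values, not Bluetooth's actual radio parameters.

```python
import numpy as np

def gfsk_modulate(bits, fs=1e6, sym_rate=1e5, fc=2.0e5, dev=5e4, bt=0.5):
    """Toy GFSK modulator: f(t) = fc + dev * g(t), where g(t) is a Gaussian-filtered
    NRZ bit stream. All parameters here are illustrative, not Bluetooth's."""
    sps = int(fs / sym_rate)                                 # samples per symbol
    nrz = np.repeat(2 * np.asarray(bits) - 1, sps).astype(float)

    # Gaussian pulse-shaping filter with bandwidth-time product bt
    t = np.arange(-2 * sps, 2 * sps + 1) / sps
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)
    gauss = np.exp(-t ** 2 / (2 * sigma ** 2))
    g = np.convolve(nrz, gauss / gauss.sum(), mode="same")   # baseband signal g(t)

    inst_freq = fc + dev * g                                 # f(t) = f_c + Δf·g(t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs            # integrate frequency into phase
    return np.cos(phase)

signal = gfsk_modulate([1, 0, 1, 1, 0])
```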
(2) Vehicle system receiving terminal
When the vehicle radio wave signal receiver receives GFSK modulated signals from a particular cell phone, the receiver will perform the following steps to demodulate:
A. Signal reception: the received radio frequency signal is first amplified.
B. Frequency down-conversion: the frequency of the signal is reduced to a lower intermediate frequency (IF) by a mixer.
C. Filtering: noise and unwanted signals are filtered out using a bandpass filter.
D. Frequency demodulation: the signal is passed through a frequency demodulator, typically a frequency discriminator, which converts the frequency change back into a voltage change corresponding to the original digital data.
E. Digitization: the analog signal is converted to a digital signal by an analog-to-digital converter (ADC).
The resulting digital signal will typically contain some data for verification, such as a digital signature. The receiver checks these data to confirm that the signal has not been tampered with and comes from a legitimate source.
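The verification step is not detailed here; purely as a stand-in illustration, the sketch below checks message integrity with an HMAC over the received payload. The function name, shared key and tag layout are hypothetical, and a real deployment might instead verify an asymmetric digital signature.

```python
import hashlib
import hmac

def verify_frame(payload: bytes, tag: bytes, shared_key: bytes) -> bool:
    """Recompute an HMAC over the payload and compare it with the received tag
    in constant time; only verified payloads are passed on for CAN formatting."""
    expected = hmac.new(shared_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"paired-session-key"                 # hypothetical key agreed during pairing
payload = b"\x01\x02\x03"
tag = hmac.new(key, payload, hashlib.sha256).digest()
print(verify_frame(payload, tag, key))      # True
```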
The process of converting the demodulated digital signal into CAN information typically involves the following steps (a parsing sketch is given after the list):
A. Data stream parsing: the demodulated original data stream is parsed into a message frame structure.
B. Message frame identification: the different parts of the message frame, such as the frame ID, control field, data field and CRC, are identified according to the CAN protocol specification.
C. Data extraction: the data field, which contains the actual information, is extracted from the message frame.
D. Error checking and handling: a CRC or other error detection code in the frame is used to detect errors in the data.
E. Signal mapping: the extracted data are mapped into the corresponding CAN message format for transmission onto the CAN bus, the message format corresponding to a series of vehicle control commands.
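A minimal sketch of steps A to E is shown below: it walks the demodulated byte stream, splits it into frames, checks a simple checksum and yields the frame ID and data field. The assumed frame layout of one ID byte, one length byte, the data bytes and one XOR checksum byte is purely illustrative and does not correspond to the actual CAN bit-level format or the vehicle's proprietary command mapping.

```python
def parse_frames(stream: bytes):
    """Parse a byte stream of [id, length, data..., checksum] frames and
    yield (frame_id, data) for every frame whose checksum verifies."""
    i = 0
    while i + 2 <= len(stream):
        frame_id, length = stream[i], stream[i + 1]
        end = i + 2 + length + 1
        if end > len(stream):
            break                                   # incomplete trailing frame
        data, checksum = stream[i + 2:end - 1], stream[end - 1]
        calc = 0
        for b in (frame_id, length, *data):         # simple XOR checksum as a stand-in for CRC
            calc ^= b
        if calc == checksum:
            yield frame_id, bytes(data)             # would then be mapped to a CAN message
        i = end

raw = bytes([0x10, 0x02, 0xAA, 0xBB, 0x10 ^ 0x02 ^ 0xAA ^ 0xBB])
print(list(parse_frames(raw)))                      # [(16, b'\xaa\xbb')]
```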
As can be seen from the foregoing embodiments, the present application addresses the problems in the prior art that the development and application of autopilot systems are usually carried on high-performance development boards with high power consumption, and that, because the sensors are distributed, the acquired information needs additional time alignment, which increases the information processing time. Compared with the prior art, the present application has, but is not limited to, the following advantages:
Firstly, portability and universality are greatly improved: a hardware architecture based on a mainstream smart phone is used, the vehicle body does not need additional refitting, and the user only needs to connect the smart phone to the vehicle for the vehicle to obtain the automatic driving function;
Secondly, the application does not affect the endurance experience of the vehicle and does not require modifying the vehicle or adding additional sensors. Only part of the sensor equipment of the smart phone is used, the collected information is processed by the low-power processor of the smart phone, and the system can keep operating normally for a long time on the phone's own battery, without placing any extra load on the vehicle's power supply.
Thirdly, the application greatly reduces the cost to the user: the user does not need to bear the cost of refitting the vehicle with the currently mainstream sensors such as lidar, millimeter-wave radar and GPS, and can experience the complete functionality of the application with only an everyday smart phone and an ordinary vehicle.
Furthermore, the application can form an environment map from the acquired road condition image frames and the occupied grid map data, fully considering the various obstacles in the environment and preventing obstacles from appearing on the driving path that would cause the vehicle to collide during transportation, produce accidents and thereby reduce the transportation efficiency. Ensuring that the driving path is free of obstacles avoids accident damage from collisions with people and objects as well as the time wasted by accidents. Therefore, on the premise of ensuring safety, the transportation efficiency of the vehicle is improved.
Referring to fig. 4, a low-speed automatic driving trolley control device according to one of the objects of the present application includes a dynamic target tracking module 1100, a drivable road surface determining module 1200, a vehicle position determining module 1300, a driving path determining module 1400 and an automatic driving control module 1500. The dynamic target tracking module 1100 is configured to acquire road condition image frames of all angles of the low-speed automatic driving trolley and occupied grid map data, and to calculate and determine the confidence of each dynamic target detection frame between the current road condition image frame and the previous road condition image frame by adopting an intersection ratio matching algorithm based on the position of the moving object and a re-identification matching algorithm based on appearance features, so as to match and track the dynamic target position information of the current and previous road condition image frames. The drivable road surface determining module 1200 is configured to infer surface normal information from the road condition image frames, extract feature maps from the RGB image and the inferred surface normal information using an RGB encoder and a surface normal encoder respectively, and determine the drivable road surface area from the fused RGB and surface normal information using a pre-trained semantic road segmentation model. The vehicle position determining module 1300 is configured to determine wheel odometer data and GPS data and fuse them based on a preset extended Kalman filter algorithm to determine the vehicle position information of the low-speed automatic driving trolley. The driving path determining module 1400 is configured to update the occupied grid map data according to the dynamic target position information, the drivable road surface area and the vehicle position information, determine path points for navigating the low-speed automatic driving trolley from the updated occupied grid map data based on the A* algorithm, and determine the driving path of the low-speed automatic driving trolley from those path points based on the TEB algorithm. The automatic driving control module 1500 is configured to respond to a control instruction of the low-speed automatic driving trolley and drive the trolley to run according to the driving path, so as to complete the control of the low-speed automatic driving trolley.
On the basis of any embodiment of the present application, referring to fig. 5, another embodiment of the present application further provides an electronic device, which may be implemented by a computer device; fig. 5 schematically shows the internal structure of the computer device. The computer device includes a processor, a computer readable storage medium, a memory, and a network interface connected by a system bus. The computer readable storage medium of the computer device stores an operating system, a database and computer readable instructions, the database can store a control information sequence, and when the computer readable instructions are executed by the processor, the processor can realize a low-speed automatic driving trolley control method. The processor of the computer device provides the computing and control capabilities that support the operation of the entire computer device. The memory of the computer device may store computer readable instructions that, when executed by the processor, cause the processor to perform the low-speed autonomous vehicle control method of the present application. The network interface of the computer device is used for communicating with a connected terminal. It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and does not limit the computer device to which the present inventive arrangements may be applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
The processor in this embodiment is configured to execute the specific functions of each module and its sub-modules in fig. 4, and the memory stores the program codes and various data required for executing the above modules or sub-modules. The network interface is used for data transmission with the user terminal or the server. The memory in the present embodiment stores the program codes and data required for executing all modules/sub-modules in the low-speed automated guided vehicle control apparatus of the present application, and the server can call these program codes and data to execute the functions of all sub-modules.
The present application also provides a storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the method for controlling a low speed autonomous driving cart according to any of the embodiments of the present application.
The application also provides a computer program product comprising computer programs/instructions which when executed by one or more processors implement the steps of the low speed autopilot control method of any one of the embodiments of the application.
Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments of the present application may be implemented by a computer program for instructing relevant hardware, where the computer program may be stored on a computer readable storage medium, where the program, when executed, may include processes implementing the embodiments of the methods described above. The storage medium may be a computer readable storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
The foregoing is only a partial embodiment of the present application, and it should be noted that it will be apparent to those skilled in the art that modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.
In summary, the application can form an environment map from the acquired road condition image frames and the occupied grid map data, fully considering the various obstacles in the environment and preventing obstacles from appearing on the driving path that would cause the vehicle to collide during transportation and produce accidents.

Claims (9)

1. A low-speed automated driving cart control method, comprising:
Acquiring road condition image frames of all angles of a low-speed automatic driving trolley and occupied grid map data, calculating and determining the confidence coefficient of each dynamic target detection frame between the current road condition image frame and the previous road condition image frame by adopting an intersection ratio matching algorithm based on the position of a moving object and a re-identification matching algorithm based on appearance characteristics so as to match and track the dynamic target position information of the current road condition image frame and the previous road condition image frame, wherein the method comprises the following steps:
Constructing a three-dimensional model corresponding to each dynamic target to obtain the motion state information of each dynamic target relative to the low-speed automatic driving trolley;
Determining the observed value corresponding to each dynamic target obtained by a monocular camera, estimating the state information predicted value corresponding to each dynamic target in the current road condition image frame according to the motion state information of the low-speed automatic driving trolley in the preset number of road condition image frames by adopting a Kalman filtering algorithm, and updating the optimal estimated value of the current road condition image frame based on the observed value corresponding to each dynamic target and the state information predicted value corresponding to each dynamic target to serve as the latest state information of each dynamic target;
matching and weighting the obtained confidence coefficient by adopting an intersection ratio matching algorithm based on the position of the moving object and a re-identification matching algorithm based on the appearance characteristics based on the latest state information of each dynamic object so as to carry out object matching tracking between the current road condition image frame and the previous road condition image frame;
Deducing surface normal information from the road condition image frame, extracting feature mapping from the RGB image and the deduced surface normal information by adopting an RGB encoder and a surface normal encoder respectively, and determining a drivable road surface area according to the fused RGB and surface normal information by adopting a pre-trained semantic road segmentation model;
determining wheel type odometer data and GPS data, and fusing the wheel type odometer data and the GPS data based on a preset extended Kalman filtering algorithm to determine vehicle position information of a low-speed automatic driving trolley;
Updating the occupied grid map data according to the dynamic target position information, the drivable pavement area and the vehicle position information, determining a path point for navigating the low-speed automatic driving trolley according to the updated occupied grid map data based on the A* algorithm, and determining a driving path of the low-speed automatic driving trolley according to the path point of the low-speed automatic driving trolley based on a TEB algorithm;
and responding to a control instruction of the low-speed automatic driving trolley, and driving the low-speed automatic driving trolley to run according to the running path so as to complete the control of the low-speed automatic driving trolley.
2. The method of claim 1, wherein the step of estimating surface normal information from the road condition image frames, extracting feature maps from the RGB image and the estimated surface normal information using an RGB encoder and a surface normal encoder, respectively, and determining a travelable road surface area from the fused RGB and surface normal information using a pre-trained semantic road segmentation model comprises:
Deducing surface normal information from a depth map corresponding to the road condition image frame, acquiring three-dimensional information of each pixel according to CCS coordinates, calculating horizontal gradients and vertical gradients based on a Sobel operator to determine an x-direction component and a y-direction component of the surface normal information, and selecting 8 adjacent points to calculate a normal vector component in the estimated z-direction by adopting a K-dimensional space data structure;
Extracting feature maps from the RGB image and the surface normal information respectively by adopting an RGB encoder and a surface normal encoder, carrying out element summation hierarchical fusion on the extracted feature maps, and carrying out re-fusion in a decoder through densely connected jump connection so as to determine fused RGB and surface normal information;
And determining a drivable pavement area according to the fused RGB and surface normal information by adopting a pre-trained semantic road segmentation model.
3. The low-speed autonomous driving car control method according to claim 1, wherein the step of fusing the wheel odometer data with GPS data based on a preset extended kalman filter algorithm to determine vehicle position information of the low-speed autonomous driving car comprises:
The step of fusing the wheel type odometer data and the GPS data based on a preset extended Kalman filtering algorithm comprises the following steps:
According to the formula X_k⁻ = A·X_(k-1) + B·u_(k-1), the prior estimate X_k⁻ of the next cycle is predicted from the state transition matrix A and the estimated value X_(k-1) of the previous cycle, and the initial prior state value X_k⁻ is the value provided by the sensor;
Updating the prior covariance matrix according to the formula P_k⁻ = A·P_(k-1)·Aᵀ + Q, wherein Q is the covariance matrix of the systematic process error, and P_(k-1) is the posterior covariance matrix of the previous cycle;
According to the formula K_k = P_k⁻·Hᵀ·(H·P_k⁻·Hᵀ + R)⁻¹, obtaining the Kalman gain value K_k of each cycle, wherein H is the observation matrix, which is responsible for converting an estimated value into data of the same type as the observed value, and R is the covariance of the measurement noise;
According to the formula X_k = X_k⁻ + K_k·(Z_k - H·X_k⁻), a posterior estimate X_k is calculated from the prior estimate X_k⁻, the Kalman gain K_k, the observation Z_k and the observation matrix H of the current cycle, wherein the observation is the sensor information acquired in each cycle;
Updating the new round of posterior covariance matrix P_k according to the formula P_k = (I - K_k·H)·P_k⁻;
And circularly iterating the steps to obtain the data information of the wheel type odometer data and the GPS data after fusion.
4. The low-speed autonomous driving cart control method according to claim 1, wherein the step of determining a waypoint for navigating the low-speed autonomous driving cart from updated occupancy grid map data based on the A* algorithm comprises:
taking the updated occupied grid map data as the map data of the A* algorithm's global path planning, taking the current position of the low-speed automatic driving trolley as the path starting point, and taking the position designated by GPS coordinates as the end point, wherein the occupied grid map data comprise the current position and end position of the low-speed automatic driving trolley, the road network and the obstacle information;
using a heuristic function defined by the Manhattan distance to measure the sum of the distances from the current node to the target node in the horizontal and vertical directions;
initializing the data structures: creating an open list to store the nodes to be explored and a closed list to store the explored nodes, adding the path starting point to the open list, and setting the estimated cost value of the path starting point to 0;
in each iteration, selecting the node with the minimum cost value from the open list, moving the selected node into the closed list to mark it as explored, calculating the cost values of the selected node's adjacent nodes and adding them to the open list if they are not in the closed list, and updating the parent node information to record the best path reaching each node;
checking whether the target node has been found each time a node is added to the open list; if the target node is found, a path has been found and the search may be ended;
after the target node is found, tracing back the parent node information from the target node all the way to the starting point to obtain the complete path;
optimizing the searched path: smoothing the path with a Bezier curve to reduce sharp turns and acute angles, removing redundant nodes by merging nodes to reduce the path length, and outputting a series of path points for navigating the low-speed automatic driving trolley.
5. The low-speed automatic driving cart control method according to claim 1, characterized in that the step of determining a travel path of the low-speed automatic driving cart from a path point of the low-speed automatic driving cart based on a TEB algorithm includes:
Discretizing the path points for navigating the low speed autonomous vehicle into a series of points and then connecting the points to form a preliminary elastic band;
Adjusting the preliminary elastic band according to a path optimization target based on a preset path optimization algorithm to determine an optimized elastic band, wherein the path optimization target comprises one or more of minimizing the length of the elastic band, minimizing the steering angle of the low-speed automatic driving trolley and maximizing the distance between the elastic band and an obstacle;
And taking the optimized elastic belt as a local path of the low-speed automatic driving trolley, and driving the low-speed automatic driving trolley to run according to the local path.
6. The low-speed automated guided vehicle control method of any one of claims 1 to 5, wherein the low-speed automated guided vehicle comprises one or more of an unmanned express vehicle, an intelligent carrier vehicle, or any combination thereof.
7. A low-speed automated driving cart control device, comprising:
The dynamic target tracking module is used for acquiring road condition image frames of all angles of the low-speed automatic driving trolley and occupied grid map data, calculating and determining the confidence coefficient of each dynamic target detection frame between the current road condition image frame and the previous road condition image frame by adopting an intersection ratio matching algorithm based on the position of a moving object and a re-identification matching algorithm based on appearance characteristics so as to match and track the dynamic target position information of the current road condition image frame and the previous road condition image frame;
the drivable pavement determining module is configured to infer surface normal information from the road condition image frames, extract feature mapping from the RGB image and the inferred surface normal information respectively by adopting an RGB encoder and a surface normal encoder, and determine a drivable pavement area according to the fused RGB and surface normal information by adopting a pre-trained semantic road segmentation model;
the vehicle position determining module is used for determining wheel type odometer data and GPS data, and fusing the wheel type odometer data and the GPS data based on a preset extended Kalman filtering algorithm to determine vehicle position information of the low-speed automatic driving trolley;
A travel path determining module configured to update the occupied grid map data according to the dynamic target position information, the drivable road surface area, and the vehicle position information, determine a path point for navigating the low-speed autonomous driving trolley according to the updated occupied grid map data based on the A* algorithm, and determine a travel path of the low-speed autonomous driving trolley according to the path point of the low-speed autonomous driving trolley based on a TEB algorithm;
And the automatic driving control module is used for responding to the control instruction of the low-speed automatic driving trolley and driving the low-speed automatic driving trolley to run according to the running path so as to complete the control of the low-speed automatic driving trolley.
8. An electronic device comprising a central processor and a memory, characterized in that the central processor is arranged to invoke a computer program stored in the memory for performing the steps of the method according to any of claims 1 to 6.
9. A computer-readable storage medium, characterized in that it stores in the form of computer-readable instructions a computer program implemented according to the method of any one of claims 1 to 6, which, when invoked by a computer, performs the steps comprised by the corresponding method.