
CN120270232B - Obstacle detection method, device, equipment and medium based on path constraint - Google Patents

Obstacle detection method, device, equipment and medium based on path constraint

Info

Publication number
CN120270232B
CN120270232B (application CN202510776547.9A)
Authority
CN
China
Prior art keywords
obstacle
obstacles
tracked
trajectory
path constraint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510776547.9A
Other languages
Chinese (zh)
Other versions
CN120270232A (en)
Inventor
谢旭龙
周光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DeepRoute AI Ltd filed Critical DeepRoute AI Ltd
Priority to CN202510776547.9A priority Critical patent/CN120270232B/en
Publication of CN120270232A publication Critical patent/CN120270232A/en
Application granted granted Critical
Publication of CN120270232B publication Critical patent/CN120270232B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06Automatic manoeuvring for parking
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • B60W60/0011Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0043Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera
    • B60W2552/00Input parameters relating to infrastructure
    • B60W2552/50Barriers

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a path-constraint-based obstacle detection method, apparatus, device, and medium, relating to the field of autonomous driving. The method comprises: generating a track area on a parking trajectory based on the position and direction of the trajectory; splicing multiple consecutive track areas into a path constraint area for the parking trajectory; obtaining obstacles to be tracked in the scene and screening out those outside the path constraint area; and performing multi-frame matching on the obstacles to be tracked inside the path constraint area to identify target obstacles with temporal continuity.

Description

Obstacle detection method, device, equipment and medium based on path constraint
Technical Field
The present application relates to the field of autonomous driving, and in particular to a method, apparatus, device, and medium for detecting obstacles based on path constraints.
Background
In automatic parking systems for autonomous driving, a BEV+Transformer architecture is commonly used for tasks such as 3D object detection and free-space detection. However, due to physical limitations of the cameras, problems remain: near-field perception blind zones, missed detection of special or same-colored obstacles, and invalid tracking.
For example, when a fisheye camera is close to the vehicle body, image distortion is high, which may cause the free-space polygon to collapse or the 3D bounding box of an obstacle to be inaccurate. Objects such as ground locks and thin poles have small ground-contact footprints and are easily suppressed by the model, and ultrasonic sensors also frequently miss them. As the distance between the camera and an obstacle shrinks, the obstacle occupies a large portion of the image, so the model cannot distinguish its true appearance; this causes various missed detections and ultimately collision risk.
In addition, when an obstacle is falsely detected, conservative safety fallback strategies in post-processing can narrow the usable parking space, shrinking the drivable area and lowering planning efficiency; for example, common planning algorithms based on heuristic optimization may then fail to find a solution.
Disclosure of Invention
The present application provides a method, apparatus, device, and medium for detecting obstacles based on path constraints, aimed at solving the problem of insufficient obstacle-perception accuracy in autonomous driving scenarios.
The technical solution adopted by the application is a path-constraint-based obstacle detection method comprising: generating a track area on a parking trajectory based on the position and direction of the trajectory; splicing multiple consecutive track areas into a path constraint area for the parking trajectory; obtaining obstacles to be tracked in the scene and screening out those outside the path constraint area; and performing multi-frame matching on the obstacles to be tracked inside the path constraint area to identify target obstacles with temporal continuity.
In some embodiments, generating the track area on the parking track based on the position and direction of the parking track comprises: selecting the first segment of the multi-segment continuous trajectory in the travel direction of the vehicle as the parking track, and constructing the track area on the parking track based on the distance and direction angle between a start discrete point and an end discrete point on the parking track.
In some embodiments, splicing the multiple continuous track areas into the path constraint area of the parking track comprises: splicing the track areas according to the normal vectors between the discrete points and the multiple continuous track areas, and expanding the spliced track areas based on the vehicle length and vehicle width to form the path constraint area.
In some embodiments, obtaining the obstacles to be tracked in the scene comprises: detecting obstacles in the scene, classifying them to judge their type, and taking obstacles whose classification result is static as the obstacles to be tracked in the scene.
In some embodiments, determining the type of the obstacle comprises: marking the obstacle as static in response to the obstacle having only a free-space polygon detection result, or marking the obstacle as dynamic in response to the obstacle having only a 3D bounding box detection result.
In some embodiments, the obstacle detection method further comprises treating the obstacle as an obstacle to be tracked in the scene in response to the obstacle intersecting an ultrasonically detected obstacle.
In some embodiments, performing multi-frame matching on the obstacles to be tracked in the path constraint area to identify target obstacles with temporal continuity comprises: continuously obtaining multiple frames of images of the obstacles to be tracked; performing multi-target tracking matching across these images; extracting, from each successfully matched obstacle pair, the vertices of the 3D detection box and the contour point set of the free-space polygon, where the point order of the contour point set meets the point-order requirement of the Boost geometry library; merging all contour point sets to generate a convex hull; and taking the convex hull as the geometric representation of the successfully matched obstacle.
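The merge-into-convex-hull step above can be sketched in pure Python. The patent names the Boost geometry library for this operation, so the monotone-chain implementation and the sample vertex sets below are illustrative stand-ins, not the actual implementation:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half: it repeats the start of the other half
    return lower[:-1] + upper[:-1]

# Hypothetical data: 3D-box vertices projected to the ground plane, merged
# with free-space contour points for one matched obstacle
box_vertices = [(0, 0), (2, 0), (2, 1), (0, 1)]
contour_points = [(1, 2), (3, 0.5)]
hull = convex_hull(box_vertices + contour_points)
```

The hull then serves as the single geometric representation of the matched obstacle for downstream modules.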
To solve the above technical problem, another technical solution adopted by the application is a path-constraint-based obstacle detection device, which comprises a generation module, a splicing module, a screening module, and an identification module. The generation module generates a track area on the parking trajectory based on the position and direction of the trajectory; the splicing module splices multiple continuous track areas into a path constraint area of the parking trajectory; the screening module acquires obstacles to be tracked in the scene and screens out those outside the path constraint area; and the identification module performs multi-frame matching on the obstacles to be tracked within the path constraint area to identify target obstacles with temporal continuity.
The application also provides computer equipment, which comprises a memory and at least one processor, wherein the memory stores instructions, and the at least one processor calls the instructions in the memory so that the computer equipment executes the obstacle detection method.
The present application also provides a computer-readable storage medium having instructions stored thereon that, when executed by a processor, implement the obstacle detection method as described above.
The beneficial effects of the application, distinct from the prior art, are as follows. A track area is generated on the parking trajectory based on its position and direction; multiple consecutive track areas are spliced into a path constraint area; obstacles to be tracked are acquired in the scene, and those outside the path constraint area are screened out. Constraining obstacle tracking to a limited area reduces data-processing redundancy and improves the efficiency of obstacle tracking and identification. Multi-frame matching of the obstacles to be tracked within the path constraint area then identifies target obstacles with temporal continuity, achieving effective tracking and identification of obstacles, increasing safety, and reducing collision risk.
Drawings
For a clearer description of the embodiments of the application or of prior-art solutions, the drawings required for that description are briefly introduced below. The drawings described below are only some embodiments of the application; a person skilled in the art can derive other drawings from them without inventive effort. In the drawings:
FIG. 1 is a schematic flow chart of an embodiment of a path constraint-based obstacle detection method provided by the application;
FIG. 2 is a schematic view of a path constraint area of a parking trajectory in the method of FIG. 1;
FIG. 3 is a flow chart of one embodiment of the method step 10 shown in FIG. 1;
FIG. 4 is a flow chart of one embodiment of the method step 20 shown in FIG. 1;
FIG. 5 is a flow chart of one embodiment of the method step 30 shown in FIG. 1;
FIG. 6 is a flow chart of one embodiment of the method step 40 shown in FIG. 1;
FIG. 7 is a schematic structural diagram of an embodiment of a path constraint-based obstacle detecting apparatus according to the present application;
fig. 8 is a schematic diagram of an embodiment of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," and the like in embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", and "a third" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a flow chart of an embodiment of a path-constraint-based obstacle detection method, which includes the following steps.
Step 10: generating a track area on the parking track based on the position and the direction of the parking track.
During automatic parking of an autonomous vehicle, a parking trajectory along the direction of travel is generated, composed of a plurality of successive track segments. Each track segment is defined by a plurality of discrete points; a corresponding rectangular area is generated as the track area of that segment by calculating the straight-line distance and direction angle between the points.
Referring to fig. 2, fig. 2 is a schematic view of a path constraint area of a parking trajectory in the method shown in fig. 1. During automatic parking, the vehicle travels along a predetermined path that is divided into a plurality of successive, shorter travel segments, each such segment being referred to as a track segment. The track segment is a basic unit constituting the entire parking track.
A trajectory is the entire path traveled by a vehicle from a starting parking location to a target parking location, and is made up of a series of successive trajectory segments. For example, during parking of a vehicle, a travel route is planned from a current position as a starting parking position of the vehicle to a target parking position of the vehicle, the travel route forming a continuous track segment extending from the current position to the target parking position.
These track segments are logically continuous, representing continuous motion during vehicle parking. To mathematically and computationally describe the track segments, each track segment is further subdivided into points, which are referred to as discrete points. The overall parking trajectory is planned and then divided into several shorter trajectory segments for implementing and controlling the trajectory. Each track segment is a portion of a track that collectively completes a parking task.
Discrete points are a specific representation of a track segment, and by connecting these points we can describe approximately the shape and orientation of the track segment. Discrete points are obtained by an algorithm or planning tool that samples uniformly or on demand over the track segment. The location and number of these points depends on the length, shape and computational accuracy requirements of the track segment.
The continuous track area is a series of rectangles: the length of each rectangle equals the straight-line distance between two points, and the width equals the vehicle width plus a compensation value, so that the vehicle body and a safety range are covered.
For any two adjacent discrete points on the track segment, the straight line distance is the Euclidean distance between the two discrete points, and can be obtained through simple geometric calculation. This distance reflects the actual distance the vehicle travels between these two points.
The direction angle refers to the angle between the tangential direction at a point on the track segment and a reference direction (e.g., the initial direction of the vehicle or a direction in the global coordinate system). The direction angle can be obtained by calculating the vector between two adjacent discrete points and utilizing the formula of the included angle between the vector and the reference direction. The direction angle is used to determine the direction of travel of the track segment at each point.
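The straight-line distance and direction angle just described can be computed directly from two adjacent discrete points; a minimal sketch (the 2D ground-plane coordinates and the global x-axis as reference direction are assumptions, not specified by the patent):

```python
import math

def segment_length_and_heading(p_start, p_end):
    """Euclidean distance between two discrete points, and the direction angle
    (radians, measured against the global x-axis) of the segment joining them."""
    dx, dy = p_end[0] - p_start[0], p_end[1] - p_start[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

# Illustrative points: a 3-4-5 segment
length, heading = segment_length_and_heading((0.0, 0.0), (3.0, 4.0))
# length = 5.0; heading = atan2(4, 3)
```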
Further, referring to fig. 3, step 10 includes the following steps.
Step 11: selecting the first segment (the first travel segment on the preset travel path) of the multi-segment continuous trajectory in the travel direction of the vehicle as the parking track.
The first track segment of the multi-segment continuous trajectory is selected as the starting segment of the parking track, with its start and end points defined as point A and point B, respectively. The length of the rectangular area is determined by calculating the straight-line distance from A to B, and its width by combining the vehicle width with a compensation value. In this way, a rectangular area corresponding to the first parking-track segment is generated, ensuring that the safety range of vehicle travel is covered.
Step 12: constructing a track area on the parking track based on the distance and the direction angle between the initial discrete point and the terminal discrete point on the parking track.
A corresponding rectangular area is generated by calculating the distance and direction angle between the discrete points on each track segment, ensuring that the length of each rectangle matches the distance between the points and that the width equals the vehicle width plus the compensation value.
For each discrete point on the track segment, a rectangular region is generated based on the linear distance (as the length of the rectangle) and the direction angle of the point, and the vehicle width plus a certain compensation value (as the width of the rectangle). This rectangular area covers the possible travel range of the vehicle at this point and takes into account a certain safety margin.
And connecting rectangular areas corresponding to all the discrete points on the track section to form a continuous track area. This trajectory region represents the range over which the vehicle is likely to travel over the entire parking trajectory and is used for subsequent obstacle tracking and obstacle avoidance planning.
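The per-segment rectangle described above can be sketched as follows. The rectangle is oriented along the segment using its unit normal; the vehicle width and compensation margin are illustrative parameters, not values from the patent:

```python
import math

def track_rectangle(p_start, p_end, vehicle_width, margin):
    """Rectangle covering one track segment: length = point-to-point distance,
    width = vehicle width plus a safety compensation value."""
    dx, dy = p_end[0] - p_start[0], p_end[1] - p_start[1]
    length = math.hypot(dx, dy)
    # Unit normal, perpendicular to the segment direction
    nx, ny = -dy / length, dx / length
    half_w = (vehicle_width + margin) / 2.0
    # Four corners, offset half a width to each side of the segment
    return [
        (p_start[0] + nx * half_w, p_start[1] + ny * half_w),
        (p_end[0] + nx * half_w, p_end[1] + ny * half_w),
        (p_end[0] - nx * half_w, p_end[1] - ny * half_w),
        (p_start[0] - nx * half_w, p_start[1] - ny * half_w),
    ]

rect = track_rectangle((0.0, 0.0), (4.0, 0.0), vehicle_width=1.8, margin=0.4)
# For this horizontal segment the corners sit at y = +/-1.1
```

Chaining one such rectangle per pair of adjacent discrete points yields the continuous track area to be spliced in step 20.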
Step 20: splicing the multiple continuous track areas into a path constraint area of the parking track.
By splicing the track areas one by one, a complete path constraint area is formed, the whole parking track is ensured to be covered, and the safety and the accuracy of the vehicle in the parking process are ensured.
The path constraint area is used for guiding the automatic driving vehicle to follow a preset path in the parking process, avoiding deviating from the track and ensuring the accuracy and safety of the parking operation. The area not only covers the actual running track of the vehicle, but also provides additional safe buffer space, and potential collision risks are effectively prevented.
Meanwhile, obstacles are screened and tracked through the path constraint area by judging whether each obstacle falls within it: if the polygon formed by the path constraint area intersects the obstacle polygon or line, the obstacle is added to the candidate queue.
Further, referring to fig. 4, step 20 includes the following steps.
Step 21: splicing the track areas according to the normal vectors between the discrete points and the multiple continuous track areas.
By calculating the normal vector among the discrete points, the smooth boundary of the rectangular area during splicing is ensured, and overlapping or gaps are avoided. And each track area is spliced one by one to form a continuous and seamless path constraint area, so that the whole parking track is covered, and the safety and the accuracy of the vehicle in the parking process are ensured.
The normal vector between discrete points refers to a direction vector perpendicular to the vector connecting two adjacent discrete points. The normal vector is used to determine the perpendicular direction of the tangential direction of the track segment at each discrete point, to determine the orientation of the rectangle, to ensure that the rectangle can properly cover the range of possible travel of the vehicle.
Step 22: expanding the spliced track area based on the vehicle length and vehicle width to form the path constraint area.
When expanding, the vehicle length and width are taken into account: rectangular areas are supplemented at the head, body, and tail of the vehicle, ensuring that the path constraint area covers the vehicle's full outline, preventing missed obstacle detections, and improving parking safety. Precise calculation ensures seamless connection between the expanded areas and the original track area, forming a complete path constraint area that guarantees the accuracy and safety of the automated parking operation.
Step 30: acquiring the obstacles to be tracked in the scene, and screening out those outside the path constraint area.
By comparing the positions of the obstacles with the boundaries of the path constraint areas, the obstacles which are not in the path constraint areas are accurately screened out, only potential risk points are ensured to be focused, and the pertinence and the efficiency of obstacle tracking are improved.
Some of the obstacles in the scene are selected as obstacles to be tracked and identified. Obstacles within the path constraint area are precisely screened using the intersection judgment between the track polygon and the obstacle polygon or line, and added to the candidate queue, ensuring that the tracked objects are valid and accurate and further optimizing the safety performance of the automated parking system.
The obstacles to be tracked are continuously tracked and matched by the system; some of them are selected for fusion, a Kalman filter is used for state prediction, and the results are sent to the downstream processing unit.
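The Kalman-filter state prediction mentioned above might, in its simplest form, look like the constant-velocity sketch below. The patent does not specify the state model or covariance handling, so this single-axis version with a simplified diagonal-only uncertainty update is purely illustrative:

```python
def kalman_predict(x, v, p_x, p_v, dt, q=0.1):
    """Constant-velocity prediction for one axis: propagate position and
    velocity, and grow the (diagonal, simplified) uncertainties by process
    noise q. A full filter would propagate the whole covariance matrix."""
    x_pred = x + v * dt
    return x_pred, v, p_x + p_v * dt * dt + q, p_v + q

# Illustrative obstacle state: 2.0 m along the axis, moving at 1.0 m/s
x, v, px, pv = kalman_predict(x=2.0, v=1.0, p_x=0.5, p_v=0.2, dt=0.1)
# position advances to 2.1 after 0.1 s at 1 m/s
```

The predicted state would then be matched against the next frame's detections before the update step.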
Further, referring to fig. 5, step 30 includes the following steps.
Step 31: detecting obstacles in the scene and classifying them to judge their type.
Obstacle data detected by each sensor are acquired, and preliminary screening is performed by calculating the geometric intersection of the track polygon and the obstacle boundary (polygon/line segment). By comparing the vertices of the trajectory polygons with the vertices of the obstacle boundaries, it is determined whether they intersect or overlap.
Optionally, collision detection is performed by simple geometric judgment or the separating axis theorem (SAT), or ray casting is used to judge whether an obstacle vertex lies within the track area.
The track polygon refers to a polygonal area constructed according to the expected travel trajectory of the vehicle in the automated parking system. This polygonal area covers the range over which the vehicle may travel and is used to determine which obstacles could collide with the vehicle. When constructing the track polygon, factors such as the vehicle's size, travel direction, and safety margin are generally considered. The track polygon is defined by a series of discrete points and their normal vectors, which are connected to form a rectangular or near-rectangular area covering the entire travel path of the vehicle.
The obstacle boundary refers to the outer contour of the obstacle's geometry in two or three dimensions. In an automated parking system, an obstacle may be static (e.g., a curb or a wall) or dynamic (e.g., a pedestrian or another vehicle). Obstacle boundaries are typically detected and determined by perception sensors (e.g., camera, radar, lidar). The boundary may be a polygon (a closed shape connected by multiple vertices) or a line segment (e.g., an edge representing a road edge). For collision detection and trajectory planning, the system needs accurate boundary information for each obstacle.
Geometric intersection refers to the case where two geometric shapes share a common region in space. In an automated parking system, geometric intersection is used to determine whether the track polygon overlaps, or is likely to contact, an obstacle boundary. If the track polygon intersects an obstacle boundary, the vehicle may collide with that obstacle when traveling along the current trajectory, so the system performs intersection detection to ensure the safety of the parking process.
Specifically, a triple-attribute analysis is performed on each obstacle, covering physical features (e.g., size, shape), dynamic features (e.g., velocity vector, acceleration), and semantic category (e.g., vehicle, pedestrian, non-motor vehicle).
In particular, the types of obstacles include, but are not limited to, motor vehicles (static/dynamic), non-motor vehicles, pedestrians, ground obstacles, and the like.
Specifically, the type of the obstacle is marked as static in response to the obstacle having only a free-space polygon detection result.
Alternatively, the type of the obstacle is marked as dynamic in response to the obstacle having only a 3D bounding box detection result.
In the autonomous driving perception system, the obstacle-type marking rules may be based on the form of the detection result: the free-space polygon detection result and the 3D bounding box detection result.
The free-space polygon detection result marks static obstacles by default: when an obstacle is identified only as a polygonal region by the free-space detection algorithm (e.g., a non-drivable region generated by semantic segmentation), the rationale is that free-space detection mainly targets fixed, ground-based obstacles (e.g., curbs, piers).
The 3D bounding box detection result applies when the obstacle passes 3D object detection and a bounding box is generated (such as a bbox output from the LiDAR point cloud or visual fusion); such an obstacle is marked as dynamic by default. The dynamic-characteristic judgment relies on geometric attributes of the bounding box, such as size and orientation, together with the velocity vector computed through time-series tracking.
If both detection results exist at the same time, the 3D bounding box data is preferentially adopted and multi-modal fusion verification is started; the free-space polygon can serve as supplementary verification of the 3D detection (e.g., confirming a static obstacle's position).
Through comprehensive analysis of the two detection results, the system can identify the obstacle state more accurately, improve the robustness of the perception system, and ensure the accuracy and safety of automatic driving decisions.
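As an illustrative sketch (not the patent's implementation), the marking rule above can be expressed as a small decision function; the `Detection` container and its field names are hypothetical simplifications:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    # hypothetical container: which detection forms exist for one obstacle
    freespace_polygon: Optional[list] = None  # vertices of a free space polygon, if any
    bbox_3d: Optional[dict] = None            # 3D bounding box parameters, if any

def classify_obstacle(det: Detection) -> str:
    """Mark an obstacle static/dynamic from which detection forms are present."""
    if det.freespace_polygon is not None and det.bbox_3d is None:
        return "static"   # only a free space polygon: ground-fixed obstacle
    if det.bbox_3d is not None and det.freespace_polygon is None:
        return "dynamic"  # only a 3D bounding box: potentially moving target
    if det.bbox_3d is not None and det.freespace_polygon is not None:
        # both present: prefer the 3D box, use the polygon as a supplementary check
        return "dynamic"
    return "unknown"
```

The priority of the 3D box when both results coexist follows the rule stated above.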
Step 32: taking the obstacles whose classification result is static as obstacles to be tracked in the scene.
Although static obstacles have no motion trend, they still need to be tracked continuously as obstacle avoidance constraints for path planning, to guard against sudden changes or newly appearing obstacles caused by environmental changes. By updating the position information of static obstacles in real time, the system can dynamically adjust the driving path and ensure the safe passage of the vehicle. Meanwhile, continuously monitoring the state of static obstacles allows potential risks to be identified, further improving the reliability and stability of the automatic driving system.
Alternatively, in response to an obstacle intersecting an obstacle that has an ultrasonic detection result, the obstacle is taken as an obstacle to be tracked in the scene.
For an obstacle intersecting an ultrasonic obstacle, although the obstacle is not directly output to a downstream system, tracking is still required to ensure that obstacle data is available for output when ultrasonic detection misses.
When geometric overlap exists between obstacle polygons detected by other sensors (such as LiDAR or vision) and the ultrasonic detection area, or when rapid collision detection using the Separating Axis Theorem (SAT) finds the intersection-area ratio to exceed a preset threshold (e.g., ≥ 15%), the obstacle is judged to intersect the ultrasonic obstacle. Obstacles meeting these conditions are assigned unique tracking IDs, their attributes are recorded, and ultrasonic ranging data is preferentially used to correct their positions.
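A minimal sketch of the SAT-based rapid collision test mentioned above, assuming 2D convex polygons given as vertex arrays (the intersection-area-ratio threshold step is omitted here):

```python
import numpy as np

def _axes(poly):
    # edge normals serve as the candidate separating axes
    edges = np.roll(poly, -1, axis=0) - poly
    return np.stack([-edges[:, 1], edges[:, 0]], axis=1)

def convex_polygons_intersect(p1, p2):
    """Separating Axis Theorem: two convex polygons overlap iff their
    projections overlap on every edge-normal axis of both polygons."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    for axis in np.vstack([_axes(p1), _axes(p2)]):
        a = p1 @ axis  # projections of p1's vertices onto the axis
        b = p2 @ axis
        if a.max() < b.min() or b.max() < a.min():
            return False  # a separating axis exists, so no overlap
    return True
```

In practice a positive SAT result would be followed by the intersection-area-ratio check against the preset threshold before assigning a tracking ID.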
Step 40: performing multi-frame matching on the obstacles to be tracked in the path constraint area, so as to identify target obstacles with continuous time sequence among the obstacles to be tracked.
When multi-frame matching is performed on the obstacles to be tracked in the path constraint area, a space-time alignment mechanism is established: the sensor data of consecutive frames are uniformly converted into the vehicle coordinate system, and the free space polygons are simplified into convex hulls to reduce the complexity of the subsequent matching computation.
A free space polygon is a geometric model describing the safe driving area of the vehicle in an automatic driving system. Sensors such as LiDAR, cameras, and 4D millimeter-wave radar detect obstacle boundaries in the environment and infer, in reverse, the obstacle-free area in which the vehicle can drive; the polygon is formed by connecting a series of vertices and covers the vehicle's potential safe path range at the current moment or over a prediction period. For example, in a parking scenario, the free space polygon may be determined jointly by the parking space boundaries and the vehicle dynamics constraints.
When multi-frame matching is performed on the obstacles in the path constraint area, the processing of the free space polygon specifically comprises: mapping the sensor data of consecutive frames into a unified vehicle coordinate system; extracting obstacle boundaries from the raw sensor data through clustering and edge detection algorithms; defining the area outside the obstacle boundaries as free space and constructing the corresponding polygon expression; and simplifying the complex free space polygon into a convex polygon to reduce the computational complexity of the subsequent matching algorithm.
When the simplified convex polygon and the obstacle track are subjected to geometric intersection detection, the calculation efficiency is remarkably improved due to the reduction of the number of vertexes.
The multi-frame matching core algorithm uses the Hungarian algorithm for inter-frame obstacle association; the matching criteria include the polygon overlap-area proportion and motion consistency. Successfully matched obstacles are assigned continuous tracking IDs, and their time-sequence trajectory points are recorded.
After tracking succeeds, the motion states (position, velocity, acceleration, etc.) of the obstacle in consecutive frames are recorded as time-sequenced points that form a space-time trajectory chain; the sequence points record the time information of the trajectory points.
Whether the two conform to the motion pattern of the same target is judged by combining the trajectory predicted by Kalman filtering with the motion-vector difference of the current-frame detection result.
In this embodiment, the Hungarian algorithm solves the minimum-cost maximum matching by constructing a bipartite graph model with the polygon overlap proportion and motion consistency as edge weights. A match succeeds when the total matching cost between the target obstacle and a certain trajectory is the lowest.
Optionally, an elimination rule is set whereby an obstacle that fails to match for 3 consecutive frames is moved out of the tracking list.
In this way, the system can not only identify dynamic obstacles efficiently but also accurately grasp the real-time state of static obstacles, ensuring the accuracy of path planning. Meanwhile, the introduction of the elimination mechanism effectively reduces false tracking and improves the response speed and decision reliability of the whole perception system.
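A minimal sketch of the Hungarian association step described above, using SciPy's `linear_sum_assignment` on a cost that fuses overlap proportion and motion consistency; the weights and gate threshold below are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(overlap, motion_cost, w_overlap=0.6, w_motion=0.4, gate=0.8):
    """Hungarian matching over a weighted cost of (1 - overlap proportion)
    and motion inconsistency; pairs whose cost exceeds `gate` are rejected."""
    cost = w_overlap * (1.0 - np.asarray(overlap, float)) \
         + w_motion * np.asarray(motion_cost, float)
    rows, cols = linear_sum_assignment(cost)  # minimum total cost assignment
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    unmatched_tracks = set(range(cost.shape[0])) - {r for r, _ in matches}
    unmatched_dets = set(range(cost.shape[1])) - {c for _, c in matches}
    return matches, unmatched_tracks, unmatched_dets
```

Tracks that stay in `unmatched_tracks` for 3 consecutive frames would then fall under the elimination rule above.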
Further, referring to fig. 6, step 40 further includes the steps of:
Step 41: continuously acquiring multiple frames of images of the obstacle to be tracked.
Through an image processing technology, the characteristic information of the obstacle is extracted and compared with multi-frame data, and the obstacle with successfully matched characteristic information is brought into a continuous tracking list, so that the tracking continuity is ensured.
Step 42: performing multi-target tracking matching through the images of the obstacle to be tracked.
In the multi-target tracking matching process, the image characteristic information and the space-time data are fused, so that the matching precision is further improved. The successfully matched obstacle will continue to remain in the tracking list and update its track information. For the obstacle with failed continuous multi-frame matching, the system moves the obstacle out of the tracking list according to a preset rule, so that the error tracking accumulation is avoided.
Specifically, 2D/3D bounding boxes, category probability distributions, and appearance features of detected targets are extracted from successive video frames, and a current-frame detection set and an existing tracked-target set are maintained. A cost matrix comprising two dimensions, spatial distance and semantic distance, is constructed; weighted fusion forms the comprehensive matching cost, and the Hungarian algorithm searches for the optimal matching scheme, ensuring that the global matching cost is minimized.
The motion trajectories of successfully matched targets are updated, unmatched detections initialize new trajectories, and continuously unmatched tracked targets are removed from the system.
Through the series of steps, the system can accurately identify and track the obstacle in real time, and reliable perception support is provided for automatic driving.
Step 43: extracting the vertices of the 3D detection box and the contour point set of the free space polygon from each successfully matched obstacle pair, wherein the point order of the contour point set meets the point-order requirements of the Boost algorithm library.
The 8 vertices of the 3D detection box and the contour point set of the free space polygon are extracted from the matched obstacle pair; all input polygons are merged, and the output result is the minimum circumscribed convex polygon covering all original obstacle areas.
An obstacle pair is the correspondence formed by matching, via a data association algorithm (such as the Hungarian algorithm), the same obstacle instance detected in adjacent frames or by different sensors.
Step 44: merging all contour point sets to generate a convex hull, which serves as the geometric representation of the successfully matched obstacle; the successfully matched obstacle is taken as the target obstacle.
On the basis of the time sequence, the matched obstacles are taken as an obstacle pair, and the contour point sets of the obstacle pair are found. The contour point sets comprise the bounding box of the 3D detection and the pure polygon of the free space detection. These point sets are input into the Boost algorithm library, and a union operation is used to generate a new convex hull polygon, ensuring that the polygon area is larger and the convex hull property is maintained.
The contour point set of an obstacle pair refers to the set of geometric feature points of two consecutive frames (e.g., frame t-1 and frame t) that are determined to be the same obstacle in the time-sequence multi-frame matching, including the 3D-detected bounding box point set and the free-space-detected pure polygon point set.
The fused convex hull is taken as the new geometric representation of the obstacle to update the obstacle state, and the position of the next frame is predicted through Kalman filtering to reduce the matching search range of the next round.
A new convex hull is generated by merging the contour point sets of the obstacle pair: the 3D bounding box vertices of the two frames and the free space polygon vertices are combined into a unified set, convex hull vertices are extracted from the combined point set, and the output polygon satisfies convexity. The new convex hull must cover all areas of the original point set, so its area is typically larger than that of a single-frame convex hull.
The convex hull is the smallest convex polygon containing all points, without concave regions. The fused convex hulls can more robustly represent the space-time movement trend of the obstacle.
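The hull-merging step can be sketched in pure Python; the patent uses the Boost library's union operation, and Andrew's monotone chain is substituted here purely for illustration:

```python
def convex_hull(points):
    """Andrew's monotone chain: smallest convex polygon containing all
    points, returned in counter-clockwise order with no concave regions."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a clockwise or collinear turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def merged_hull(box_points, polygon_points):
    # union of the 3D-box footprint vertices and the free space polygon vertices
    return convex_hull(list(box_points) + list(polygon_points))
```

The output hull covers both input point sets, matching the "area typically larger than a single-frame convex hull" property described above.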
Parameters such as the center point, length, and width of the fused convex hull are input into the Kalman filter to predict the position and velocity of the next frame, and the candidate matching region of the next frame is defined according to the prediction result.
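The prediction step can be sketched as a constant-velocity Kalman prediction over the hull's center; the state layout `[cx, cy, vx, vy]` and the noise parameter `q` are illustrative assumptions:

```python
import numpy as np

def kf_predict(x, P, dt=0.1, q=1e-2):
    """Constant-velocity Kalman prediction for state [cx, cy, vx, vy].
    Returns the predicted state and covariance, which can gate the
    candidate matching region of the next frame."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)  # state transition matrix
    Q = q * np.eye(4)                     # process noise covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```

The growing covariance `P_pred` would widen the search region when detections are missed, consistent with reducing the matching search range only where the prediction is confident.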
By means of the obstacle tracking mode, the system can effectively identify and process dynamic and static obstacles, and robustness of the sensing system is improved.
Through this trajectory tracking scheme, the system accurately identifies obstacles within a limited range, reduces redundant computation, and improves safety. Kalman filtering predicts positions and optimizes the matching search, ensuring efficient handling of dynamic and static obstacles and enhancing the stability of the perception system. The modular design is easy to maintain and can be flexibly adjusted for subsequent updates of the perception model, ensuring continuous optimization of the system.
Optionally, an area composed in another way is selected as the tracking area according to the specific service and scene. For example, a low-speed closed scene may adopt fixed grid partition plus IoU matching to balance precision and real-time performance, while a high-speed dynamic scene may fuse the millimeter-wave radar point cloud convex hull with the visual ROI region, matched by an adaptive Hungarian algorithm.
The fixed grid partition divides the scene into regular rectangular grid cells and manages target positions through grid indexes.
IoU (Intersection over Union) matching is used to measure the overlap ratio of target areas in adjacent frames.
The millimeter-wave radar point cloud convex hull extracts the contour vertices of the obstacle from radar point cloud data and constructs the minimum convex polygon to represent the geometric shape of the target. The millimeter-wave radar provides high-refresh-rate point cloud data, overcoming the motion-blur deficiency of vision at high speed.
The visual ROI (Region of Interest) is a region of interest delineated based on a camera detection box or a semantic segmentation result.
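The IoU criterion above can be sketched for axis-aligned boxes (a simplification: the embodiment also matches general polygons, not only boxes):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # clamp the overlap width/height at zero for disjoint boxes
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A pair of boxes in adjacent frames would be associated when their IoU exceeds a scene-dependent threshold.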
Optionally, the tracking algorithms adopted by the present application include, but are not limited to, the Kalman filter introduced in this embodiment, and the matching algorithm can be adjusted according to actual needs: Hungarian matching may be used, or a greedy matching or nearest-neighbor matching algorithm may be adopted to further reduce the amount of computation.
The method for detecting an obstacle based on path constraint in the embodiment of the present invention is described above, and the apparatus for detecting an obstacle based on path constraint in the embodiment of the present invention is described below, referring to fig. 7, and one embodiment of the apparatus for detecting an obstacle based on path constraint in the embodiment of the present invention includes:
The generating module 401 is configured to generate a track area on the parking track based on the position and the direction of the parking track.
A stitching module 402, configured to stitch the multiple continuous track areas into a path constraint area of the parking track.
And the screening module 403 is configured to obtain an obstacle to be tracked in the scene, and screen out the obstacle to be tracked outside the path constraint area.
The identifying module 404 is configured to perform multi-frame matching on the to-be-tracked obstacles in the path constraint area, so as to identify a target obstacle with continuous time sequence in the to-be-tracked obstacles.
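The screening performed by module 403 can be sketched as a point-in-polygon filter over the path constraint region; the obstacle representation (a dict with a `center` key) is a hypothetical simplification:

```python
def point_in_polygon(pt, polygon):
    """Ray casting test: does pt lie inside the (possibly concave) polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # count edges whose crossing with a rightward ray flips the parity
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_obstacles(obstacles, region):
    # keep only obstacles whose reference point falls inside the path constraint region
    return [ob for ob in obstacles if point_in_polygon(ob["center"], region)]
```

Obstacles screened out here never enter the multi-frame matching of module 404, which is the source of the computational savings claimed above.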
The above-described path constraint-based obstacle detection apparatus in the embodiment of the present invention is described in detail in fig. 7 from the point of view of modularized functional entities; the computer device in the embodiment of the present invention is described in detail below from the point of view of hardware processing.
Fig. 8 is a schematic diagram of a computer device according to an embodiment of the present invention. The computer device 500 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 510 (e.g., one or more processors), a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing application programs 533 or data 532. The memory 520 and the storage medium 530 may be transitory or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations on the computer device 500. Still further, the processor 510 may be arranged to communicate with the storage medium 530 to execute the series of instruction operations in the storage medium 530 on the computer device 500.
The computer device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the computer device architecture shown in fig. 8 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The present invention also provides a computer device, including a memory and a processor, where the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the path constraint-based obstacle detection method in the above embodiments.
The present invention also provides a computer-readable storage medium, which may be a non-volatile or a volatile computer-readable storage medium, storing instructions that, when run on a computer, cause the computer to perform the steps of the path constraint-based obstacle detection method.
Different from the prior art, the present application automatically detects obstacles in a parking scene by means of path constraint, and in particular provides a method of constructing the path constraint area. By reasonably constructing the path constraint area for obstacle screening, obstacle tracking efficiency is improved while safety is guaranteed; by optimizing the construction of the trajectory area and combining tracking algorithms such as the Kalman filter, efficient obstacle detection and fusion are realized, further reducing the computational load of the system and improving overall performance.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only illustrative of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (8)

1. A path constraint-based obstacle detection method, comprising: selecting the first segment of multiple continuous trajectories in the vehicle's driving direction as the parking trajectory; constructing a trajectory area on the parking trajectory based on the distance and direction angle between the starting discrete point and the terminal discrete point on the parking trajectory; splicing the trajectory areas according to the normal vectors between the discrete points and the multiple continuous trajectory areas; expanding the spliced trajectory area based on the length and the width of the vehicle to form the path constraint area; obtaining obstacles to be tracked in the scene, and filtering out obstacles to be tracked outside the path constraint area; and performing multi-frame matching on the obstacles to be tracked in the path constraint area to identify target obstacles that are continuous in time sequence among the obstacles to be tracked.
2. The obstacle detection method according to claim 1, wherein obtaining obstacles to be tracked in the scene comprises: detecting obstacles in the scene and classifying the obstacles to determine the type of the obstacles; and taking obstacles classified as static as obstacles to be tracked in the scene.
3. The obstacle detection method according to claim 2, wherein determining the type of the obstacle comprises: in response to the obstacle having one and only one detection result, a free space polygon, marking the type of the obstacle as static; or, in response to the obstacle having one and only one detection result, a 3D bounding box, marking the type of the obstacle as dynamic.
4. The obstacle detection method according to claim 2, further comprising: in response to the obstacle intersecting an obstacle having an ultrasonic detection result, taking the obstacle as an obstacle to be tracked in the scene.
5. The obstacle detection method according to claim 1, wherein performing multi-frame matching on the obstacles to be tracked in the path constraint area to identify target obstacles that are continuous in time sequence among the obstacles to be tracked comprises: continuously acquiring multiple frames of images of the obstacle to be tracked; performing multi-target tracking matching through the images of the obstacle to be tracked; extracting the vertices of the 3D detection box and the contour point set of the free space polygon from the successfully matched obstacle pairs, wherein the point order of the contour point set meets the point-order requirements of the Boost algorithm library; and merging all the contour point sets to generate a convex hull, using the convex hull as the geometric representation of the successfully matched obstacle, and taking the successfully matched obstacle as the target obstacle.
6. A path constraint-based obstacle detection apparatus, comprising: a generation module configured to generate a trajectory area on the parking trajectory based on the position and direction of the parking trajectory, the generation module being further configured to select the first segment of multiple continuous trajectories in the vehicle's driving direction as the parking trajectory and to construct the trajectory area on the parking trajectory based on the distance and direction angle between the starting discrete point and the terminal discrete point on the parking trajectory; a splicing module configured to splice multiple continuous trajectory areas into the path constraint area of the parking trajectory, the splicing module being further configured to splice the trajectory areas according to the normal vectors between the discrete points and the multiple continuous trajectory areas, and to expand the spliced trajectory area based on the length and the width of the vehicle to form the path constraint area; a screening module configured to obtain obstacles to be tracked in the scene and to screen out obstacles to be tracked outside the path constraint area; and an identification module configured to perform multi-frame matching on the obstacles to be tracked in the path constraint area to identify target obstacles that are continuous in time sequence among the obstacles to be tracked.
7. A computer device, comprising a memory and at least one processor, the memory storing instructions; the at least one processor invokes the instructions in the memory to cause the computer device to execute the obstacle detection method according to any one of claims 1 to 5.
8. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the obstacle detection method according to any one of claims 1 to 5.
CN202510776547.9A 2025-06-11 2025-06-11 Obstacle detection method, device, equipment and medium based on path constraint Active CN120270232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510776547.9A CN120270232B (en) 2025-06-11 2025-06-11 Obstacle detection method, device, equipment and medium based on path constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510776547.9A CN120270232B (en) 2025-06-11 2025-06-11 Obstacle detection method, device, equipment and medium based on path constraint

Publications (2)

Publication Number Publication Date
CN120270232A CN120270232A (en) 2025-07-08
CN120270232B true CN120270232B (en) 2025-09-23

Family

ID=96243951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510776547.9A Active CN120270232B (en) 2025-06-11 2025-06-11 Obstacle detection method, device, equipment and medium based on path constraint

Country Status (1)

Country Link
CN (1) CN120270232B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111516676A (en) * 2020-04-30 2020-08-11 重庆长安汽车股份有限公司 Automatic parking method, system, automobile and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005034700A1 (en) * 2005-07-26 2007-02-08 Robert Bosch Gmbh Park Pilot
CN115236651B (en) * 2021-12-17 2025-07-22 上海仙途智能科技有限公司 Obstacle detection method and electronic device
CN118701034A (en) * 2024-06-27 2024-09-27 惠州市德赛西威智能交通技术研究院有限公司 Parking path planning method, vehicle control system and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111516676A (en) * 2020-04-30 2020-08-11 重庆长安汽车股份有限公司 Automatic parking method, system, automobile and computer readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant