
CN115909277A - Obstacle detection method, obstacle detection device, electronic device and computer-readable storage medium - Google Patents


Info

Publication number
CN115909277A
Authority
CN
China
Prior art keywords
obstacle
grid
point cloud
cloud image
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211584513.2A
Other languages
Chinese (zh)
Inventor
陈海波
李翔
白凤君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenlan Artificial Intelligence Shenzhen Co Ltd
Original Assignee
Shenlan Artificial Intelligence Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenlan Artificial Intelligence Shenzhen Co Ltd filed Critical Shenlan Artificial Intelligence Shenzhen Co Ltd
Priority to CN202211584513.2A priority Critical patent/CN115909277A/en
Publication of CN115909277A publication Critical patent/CN115909277A/en


Landscapes

  • Image Processing (AREA)

Abstract

The application provides an obstacle detection method, an obstacle detection device, an electronic device and a computer-readable storage medium, wherein the method comprises the following steps: acquiring an original point cloud image in real time by using a radar; carrying out coordinate rectification on the original point cloud image to obtain a rectified point cloud image; performing grid division on the data in the region of interest of the rectified point cloud image to obtain a plurality of grids; obtaining an obstacle marking result for each grid; and acquiring the obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid. The method avoids the excessive cost of data collection, labeling and training associated with deep learning: it requires neither a large amount of data nor complex, time-consuming learning and computing operations, detects obstacles accurately in real time, substantially improves detection precision, and increases computation speed.

Description

Obstacle detection method, obstacle detection device, electronic device and computer-readable storage medium
Technical Field
The present application relates to the field of technologies of automatic driving, computer vision, and data processing, and in particular, to a method and an apparatus for detecting an obstacle, an electronic device, and a computer-readable storage medium.
Background
Automatic driving, also known as unmanned or driverless driving, is a leading-edge technology that relies on computer and artificial-intelligence techniques to achieve complete, safe and effective driving without manual operation; a vehicle so equipped is, in effect, a wheeled mobile robot.
Autopilot relies on an autopilot control system, which is generally divided into three parts: the system comprises an environment perception module, a decision planning module and a control execution module.
The environment perception module collects information around the vehicle through various sensors; commonly used sensors include cameras, lidar, millimeter-wave radar, integrated navigation units and the like. The information it collects enables the system to make decisions (steering, lane changing, accelerating and decelerating). Environmental perception covers the state of the vehicle itself, roads, pedestrians, traffic signals, traffic signs, traffic conditions, surrounding vehicles, etc.
The decision planning module is a technology which embodies the core of intelligence in unmanned driving, plans the current vehicle (speed planning, orientation planning, acceleration planning and the like) by comprehensively analyzing the information provided by the environment sensing system and routing and addressing results from a high-precision map, and generates corresponding decisions (vehicle following, lane changing, parking and the like). The planning technique also needs to take into account the mechanical, dynamic, and kinematic properties of the vehicle. Common decision making techniques include expert control, hidden markov models, bayesian networks, fuzzy logic, and the like.
The core technologies of the control execution module are longitudinal and lateral vehicle control. Longitudinal control, i.e. drive and braking control, achieves accurate tracking of the desired vehicle speed by coordinating throttle and brake. Lateral control achieves path tracking for the autonomous vehicle by adjusting the steering-wheel angle and controlling the tire forces.
Patent CN114693696A discloses a point cloud data processing method, which includes: acquiring initial point cloud data and determining a target area of the initial point cloud data, wherein the target area comprises each point cloud in the initial point cloud data; performing grid division on the target area to obtain a plurality of grids, wherein the feature data of the point cloud in each grid meets the feature distribution condition, and the feature data is used for representing the space geometric relationship between the point cloud and the neighborhood points of the point cloud; selecting a target point cloud of each grid from the point clouds contained in each grid, and deleting other point clouds except the target point cloud in each grid; and obtaining target point cloud data according to the target point cloud of each grid. The method can only obtain target point cloud data, cannot locate the obstacle, and does not relate to a method for detecting the obstacle.
Based on this, the present application provides an obstacle detection method, apparatus, electronic device, and computer-readable storage medium to improve the prior art.
Disclosure of Invention
The aim of the present application is to provide an obstacle detection method, an obstacle detection device, an electronic device and a computer-readable storage medium that detect obstacles using grids. This avoids the excessive cost of data collection, labeling and training in deep learning: no large dataset and no complex, time-consuming learning and computing operations are required, obstacles can be detected accurately in real time, detection precision is greatly improved, and computation speed is increased.
The purpose of the application is realized by adopting the following technical scheme:
in a first aspect, the present application provides a method of obstacle detection, the method comprising:
acquiring an original point cloud image in real time by using a radar;
carrying out coordinate correction on the original point cloud image to obtain a corrected point cloud image;
performing raster division on data in the region of interest of the corrected point cloud image to obtain a plurality of grids;
obtaining obstacle marking results of each grid;
and acquiring an obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid.
The technical scheme has the beneficial effects that: the method comprises the steps of utilizing radar (Lidar) to obtain an original point cloud image in real time, and carrying out coordinate correction on the original point cloud image because the ground of the original point cloud image is not parallel to the real ground generally so as to obtain a corrected point cloud image, thereby facilitating the subsequent calculation step. After the correction, an interested area is designed and obtained in the corrected image, data in the interested area are obtained, and grids are divided according to the data in the interested area, so that a plurality of grids are obtained. And detecting obstacles in the grid and marking the obstacles to obtain a marking result so as to obtain an obstacle detection result corresponding to the original point cloud image. The obstacle detection method can solve the problem of overhigh cost caused by data acquisition, labeling and training, can accurately detect the obstacle in real time without acquiring a large amount of data and carrying out complex and time-consuming learning and calculation operation, greatly improves the obstacle detection precision and improves the calculation speed.
In some optional embodiments, the performing coordinate rectification on the original point cloud image comprises:
acquiring a normal vector Np of an inclined ground corresponding to the original point cloud image under a preset rectangular coordinate system;
acquiring a normal vector Nz of the real ground corresponding to the original point cloud image under the preset rectangular coordinate system;
calculating a rotation transformation matrix between a normal vector Np of the inclined ground and a normal vector Nz of the real ground;
and carrying out rotation transformation on the original point cloud image by using the rotation transformation matrix to obtain the corrected point cloud image.
The technical scheme has the following beneficial effects: a rectangular coordinate system is first established and the original point cloud image is placed in it (in the usual case, the ground of the original point cloud image forms a non-zero angle with the real ground). The normal vector Np of the tilted ground and the normal vector Nz of the real ground are obtained, the rotation transformation matrix between them is calculated, and the original point cloud image is rotated by this matrix to rectify its coordinates and obtain the rectified point cloud image. Using a rotation transformation matrix has several advantages: the matrix for a given pose is unique, so no repeated calculation is needed; successive rotations can be composed directly, keeping the calculation simple; and the representation is convenient for coordinate transformations, making it well suited to rectifying the coordinates of the original point cloud image in unmanned driving. Rotating the original point cloud image with the rotation transformation matrix handles the rigid-body transformation of the point cloud efficiently, reducing the amount of computation while preserving accuracy.
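The patent does not give an implementation of this step. As a minimal sketch (function name and the example normal vectors are assumptions), the rotation transformation matrix aligning the tilted-ground normal Np with the true-ground normal Nz can be built with Rodrigues' formula and applied to every point:

```python
import numpy as np

def rotation_between(np_vec, nz_vec):
    """Rotation matrix that rotates np_vec onto nz_vec (Rodrigues' formula).
    The anti-parallel case (vectors pointing in opposite directions) is not handled."""
    a = np_vec / np.linalg.norm(np_vec)
    b = nz_vec / np.linalg.norm(nz_vec)
    v = np.cross(a, b)                  # rotation axis (unnormalized)
    c = np.dot(a, b)                    # cosine of the rotation angle
    if np.isclose(c, 1.0):              # already aligned
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

# Rectify: rotate every point of the original cloud (example normals)
Np = np.array([0.1, 0.0, 0.995])   # tilted-ground normal, e.g. from plane fitting
Nz = np.array([0.0, 0.0, 1.0])     # true-ground normal
R = rotation_between(Np, Nz)
points = np.random.rand(100, 3)    # placeholder for the original point cloud
rectified = points @ R.T           # each row is a rotated point
```

Because the rotation matrix for a given pose is unique, R can be computed once per frame and reused for every point.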
In some optional embodiments, the process of acquiring data within the region of interest of the rectified point cloud image comprises:
and aiming at the corrected point cloud image, reserving data in a set region of interest, and removing data outside the region of interest to obtain data in the region of interest of the corrected point cloud image.
The technical scheme has the following beneficial effects: a region of interest (ROI) is first set; data inside the ROI are retained and data outside it are removed. This reduces the amount of data to be computed, makes the processing more targeted, effectively reduces the number of points, and achieves point cloud data compression.
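A sketch of the ROI step, assuming an axis-aligned box ROI (the patent does not specify the ROI shape; the ranges below are illustrative):

```python
import numpy as np

def crop_roi(points, x_range, y_range, z_range):
    """Keep only points inside the axis-aligned region of interest."""
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    return points[m]

pts = np.array([[1.0, 0.0, 0.2],    # inside the ROI
                [60.0, 0.0, 0.2],   # too far ahead, removed
                [5.0, -3.0, 0.5]])  # inside the ROI
roi = crop_roi(pts, x_range=(0, 50), y_range=(-10, 10), z_range=(-1, 3))
```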
In some optional embodiments, the rasterizing the data within the region of interest of the rectified point cloud image comprises:
and performing raster division on data in the region of interest of the corrected point cloud image based on the set raster resolution, so that the point cloud data corresponding to the corrected point cloud image is divided into a plurality of rasters.
The technical scheme has the beneficial effects that: the method comprises the steps of acquiring the resolution of a grid in advance, dividing point cloud data into a plurality of grids on the premise of ensuring the resolution precision of the grid, wherein the grid data is more specific relative to vector expression, and the data information amount is more specific. The dividing step can selectively simplify the data in the point cloud data, and can more specifically select the data, so that the calculated data amount is reduced, and the calculation power loss of a server (a computer for managing calculation resources) is reduced.
In some alternative embodiments, the obtaining the obstacle labeling result for each of the grids includes:
counting the number of point clouds in each grid, and marking the grids with the number reaching a preset number threshold as effective grids;
sorting the height of the point clouds in each effective grid to obtain the maximum height value of each effective grid;
determining that the obstacle marking result of the effective grid is an obstacle when the maximum height value of the effective grid is greater than a first preset height threshold value.
The technical scheme has the following beneficial effects: a number threshold and a first height threshold are set. The number threshold decides, from the number of points in a grid, whether the grid is a valid grid; the first height threshold decides, from the maximum height value of a valid grid, whether the grid contains an obstacle. With these two thresholds, valid grids and obstacles can be judged directly; the judgment process is simple, fast and efficient, and consumes few resources. By marking valid grids and obstacles, the judgment result can be read off directly without repeating the judgment step, saving time and resources. When a receiving device receives the marking instructions, the marking operations can be carried out in an orderly, logical manner, improving operational efficiency and ensuring that each operation is executed correctly.
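The marking steps above can be sketched as follows; the threshold values are illustrative assumptions, since the patent leaves the presets unspecified:

```python
import numpy as np

MIN_POINTS = 5   # preset number threshold (assumed value)
H1 = 0.3         # first preset height threshold in metres (assumed value)

def mark_grid(grid_points):
    """Return 'obstacle', 'free', or 'invalid' for one grid cell.
    grid_points: (n, 3) array of the points falling in the cell."""
    if len(grid_points) < MIN_POINTS:
        return "invalid"                   # too few points to trust the cell
    z_max = np.max(grid_points[:, 2])      # maximum height value of the cell
    return "obstacle" if z_max > H1 else "free"
```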
In some optional embodiments, the obtaining the obstacle marking result for each of the grids further comprises:
when the maximum height value of the effective grid is not larger than the first preset height threshold value, calculating the maximum height difference between the effective grid and the grids in the surrounding neighborhood;
when the maximum height difference between the effective grid and the grids in the surrounding neighborhood is larger than a second preset height threshold value, determining that the obstacle marking result of the effective grid is an obstacle;
when the maximum height difference between the effective grid and the surrounding grid is not larger than the second preset height threshold value, determining that the obstacle marking result of the effective grid is a non-obstacle.
The technical scheme has the beneficial effects that: and when the maximum height value of the effective grid is not greater than a first preset height threshold value, calculating the maximum height difference between the effective grid and the surrounding neighborhood grid. And setting a second height threshold value, wherein the second height threshold value is used for judging whether the effective grid has obstacles or not according to the maximum height difference between the effective grid and the grids in the surrounding neighborhood. When the maximum height difference between the effective grid and the grids in the surrounding neighborhood is larger than a second preset height threshold value, determining that the effective grid has obstacles; and when the maximum height difference between the effective grid and the surrounding adjacent grids is not larger than a second preset height threshold value, determining that no barrier exists in the effective grid. By setting the second height threshold value, the obstacles in the effective grid can be intuitively judged, the judgment process is simple, the judgment speed is high, the judgment efficiency is high, and the consumed judgment resources are few. The judgment result can be directly obtained by marking the obstacles in the effective grid without repeatedly carrying out the judgment step, so that the time and the resource are saved. When the receiving equipment is used for receiving the marking instruction, the marking operation can be performed more orderly and logically, the operation efficiency is improved, and various operations are ensured to be executed correctly.
In some optional embodiments, the obtaining the obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid includes:
acquiring size information of the obstacle and barycentric coordinate information of the obstacle under a preset rectangular coordinate system, based on the point cloud data of each grid whose obstacle marking result is an obstacle;
calculating to obtain real coordinate information of each grid of the obstacle under a preset rectangular coordinate system based on the coordinate information of the region of interest and the index information of each grid;
and acquiring three-dimensional detection information of the obstacle as an obstacle detection result corresponding to the original point cloud image based on the size information of the obstacle, the barycentric coordinate of the obstacle and the real coordinate information of each grid of the obstacle.
The technical scheme has the beneficial effects that: from the point cloud data of each of the grids of the obstacle as the obstacle marking result, the size information of the obstacle and the barycentric coordinate information in the specific rectangular coordinate system can be obtained. This is advantageous for obtaining subsequent coordinate data. Also, the coordinate information of the region of interest and the index information of each grid can more conveniently calculate the real coordinate information of each obstacle in each grid under the preset rectangular coordinate system to determine the position of each obstacle. The three-dimensional detection information of the obstacle is obtained through the size information of the obstacle, the barycentric coordinate of the obstacle and the real coordinate information of each grid of the obstacle, and is used as the obstacle detection result corresponding to the original point cloud image, so that the space modeling difficulty in subsequent obstacle avoidance can be reduced, calculation can be performed more orderly and logically, the operation efficiency is improved, and various operations are guaranteed to be performed correctly.
In some optional embodiments, the process of acquiring the size information of the obstacle includes:
for each of the grids whose obstacle marking result is an obstacle, performing the following processing: traversing all point cloud data in the grid, and respectively finding out the maximum value and the minimum value corresponding to three coordinate axes of the preset rectangular coordinate system corresponding to the grid;
and calculating the size information of the obstacle based on the differences between the maximum and minimum values, along the three coordinate axes of the preset rectangular coordinate system, over all grids of the obstacle.
the technical scheme has the beneficial effects that: traversing all the point cloud data, namely adopting non-discarding type full processing on the rasterized obstacle point cloud data, ensuring the integrity of the point cloud data, establishing the subsequent threshold range on the full data, and respectively finding out the maximum value and the minimum value corresponding to three coordinate axes of a preset rectangular coordinate system corresponding to the grid. The integrity of barrier data is guaranteed, the size of the barrier is calculated by using the corresponding difference between the maximum value and the minimum value, for example, the size of the barrier can be output as a standard cuboid, the fitting degree of the size of the barrier can be improved, the space modeling difficulty in subsequent barrier avoidance is reduced, the real-time performance of the barrier avoidance is improved, meanwhile, the barrier completely covers the barrier, the subsequent barrier avoidance measures aiming at the size of the barrier can be guaranteed to be safe and reliable, and the situation that the obstacle is too small or too close to cause the rubbing or the collision due to the estimation of the barrier is avoided.
In some optional embodiments, the obtaining of barycentric coordinate information of the obstacle in a preset rectangular coordinate system includes:
calculating, for each grid whose obstacle marking result is an obstacle, the barycentric coordinate information of the grid under the preset rectangular coordinate system;
and calculating to obtain the barycentric coordinate information of the obstacle under the preset rectangular coordinate system based on the barycentric coordinate information of each grid under the preset rectangular coordinate system.
The technical scheme has the beneficial effects that: and calculating barycentric coordinate information of each grid under a preset rectangular coordinate system, and calculating barycentric coordinate information of the obstacle under the preset rectangular coordinate system according to the barycentric coordinate information. The step can further determine the barycentric coordinate information of the obstacle by calculating the barycentric coordinate information of the grid, so that the calculation range is reduced, repeated calculation steps are reduced, and time and resources are saved. If the barycentric coordinate information of the grid is not obtained in advance, the complexity and difficulty are increased when the barycentric coordinate information of the obstacle is calculated, and the error is not reduced.
In a second aspect, the present application provides an obstacle detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring an original point cloud image in real time by using a radar;
the correction module is used for carrying out coordinate correction on the original point cloud image to obtain a corrected point cloud image;
the dividing module is used for carrying out grid division on data in the region of interest of the corrected point cloud image to obtain a plurality of grids;
the marking module is used for acquiring an obstacle marking result of each grid;
and the result module is used for acquiring an obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid.
In some alternative embodiments, the orthotic module is configured to:
acquiring a normal vector Np of an inclined ground corresponding to the original point cloud image under a preset rectangular coordinate system;
acquiring a normal vector Nz of a real ground corresponding to the original point cloud image under the preset rectangular coordinate system;
calculating a rotation transformation matrix between a normal vector Np of the inclined ground and a normal vector Nz of the real ground;
and carrying out rotation transformation on the original point cloud image by using the rotation transformation matrix to obtain the corrected point cloud image.
In some optional embodiments, the dividing module is further configured to:
and aiming at the corrected point cloud image, reserving data in a set region of interest, and removing data outside the region of interest to obtain data in the region of interest of the corrected point cloud image.
In some optional embodiments, the dividing module is configured to:
and performing raster division on data in the region of interest of the corrected point cloud image based on the set raster resolution, so that the point cloud data corresponding to the corrected point cloud image is divided into a plurality of rasters.
In some optional embodiments, the tagging module is to:
counting the number of point clouds in each grid, and marking the grids with the number reaching a preset number threshold as effective grids;
sorting the height of the point cloud in each effective grid to obtain the maximum height value of each effective grid;
determining that the obstacle marking result of the effective grid is an obstacle when the maximum height value of the effective grid is greater than a first preset height threshold value.
In some optional embodiments, the tagging module is further configured to:
when the maximum height value of the effective grid is not larger than the first preset height threshold value, calculating the maximum height difference between the effective grid and the grids in the surrounding neighborhood;
when the maximum height difference between the effective grid and the grids in the surrounding neighborhood is larger than a second preset height threshold value, determining that the obstacle marking result of the effective grid is an obstacle;
when the maximum height difference between the effective grid and the surrounding grid is not larger than the second preset height threshold value, determining that the obstacle marking result of the effective grid is a non-obstacle.
In some optional embodiments, the results module is to:
acquiring size information of the obstacle and barycentric coordinate information of the obstacle under a preset rectangular coordinate system, based on the point cloud data of each grid whose obstacle marking result is an obstacle;
calculating to obtain real coordinate information of each grid of the barrier under a preset rectangular coordinate system based on the coordinate information of the region of interest and the index information of each grid;
and acquiring three-dimensional detection information of the obstacle as an obstacle detection result corresponding to the original point cloud image based on the size information of the obstacle, the barycentric coordinate of the obstacle and the real coordinate information of each grid of the obstacle.
In some optional embodiments, the process of acquiring the size information of the obstacle includes:
for each of the grids for which the obstacle labeling result is an obstacle, performing the following processing: traversing all point cloud data in the grid, and respectively finding out the maximum value and the minimum value corresponding to three coordinate axes of the preset rectangular coordinate system corresponding to the grid;
and calculating the size information of the obstacle based on the differences between the maximum and minimum values, along the three coordinate axes of the preset rectangular coordinate system, over all grids whose obstacle marking result is an obstacle.
In some optional embodiments, the obtaining of the barycentric coordinate information of the obstacle in the preset rectangular coordinate system includes:
calculating, for each grid whose obstacle marking result is an obstacle, the barycentric coordinate information of the grid under the preset rectangular coordinate system;
and calculating the barycentric coordinate information of the obstacle under the preset rectangular coordinate system based on the barycentric coordinate information of each grid under the preset rectangular coordinate system.
In a third aspect, the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of any of the methods described above.
Drawings
The present application is further described below with reference to the accompanying drawings and embodiments.
Fig. 1 shows a schematic flow chart of an obstacle detection method provided in the present application.
Fig. 2 shows a schematic flow chart of performing coordinate rectification on an original point cloud image according to the present application.
Fig. 3 is a schematic flow chart illustrating a method for obtaining an obstacle marking result for each grid according to the present application.
Fig. 4 is a schematic structural diagram illustrating another example of obtaining the obstacle marking result for each grid according to the present application.
Fig. 5 shows a schematic flowchart for obtaining an obstacle detection result according to the present application.
Fig. 6 shows a schematic flowchart for acquiring size information of an obstacle according to the present application.
Fig. 7 shows a schematic flowchart of acquiring barycentric coordinate information of an obstacle according to the present application.
Fig. 8 shows a schematic structural diagram of an obstacle detection device provided in the present application.
Fig. 9 shows a block diagram of an electronic device provided in the present application.
Fig. 10 shows a schematic structural diagram of a program product provided in the present application.
Fig. 11 shows a schematic structural diagram of a grid division provided in the present application.
Detailed Description
The following embodiments are further described with reference to the accompanying drawings and the detailed description, and it should be noted that, in the absence of conflict, any combination between the embodiments or technical features described below may form a new embodiment.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a and b, a and c, b and c, a and b and c, wherein a, b and c can be single or multiple. It is to be noted that "at least one item" may also be interpreted as "one or more item(s)".
It should also be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or descriptions. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
Method embodiment
Referring to fig. 1, fig. 1 shows a schematic flowchart of an obstacle detection method provided in an embodiment of the present application.
The embodiment of the application provides an obstacle detection method, which comprises the following steps:
step S101: acquiring an original point cloud image in real time by using a radar;
step S102: carrying out coordinate correction on the original point cloud image to obtain a corrected point cloud image;
step S103: performing raster division on data in the region of interest of the corrected point cloud image to obtain a plurality of grids;
step S104: obtaining obstacle marking results of each grid;
step S105: and acquiring an obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid.
Therefore, an original point cloud image is obtained in real time by using a radar (lidar). Because the ground in the original point cloud image is not parallel to the real ground, the original point cloud image needs coordinate correction, yielding a corrected point cloud image that facilitates the subsequent calculation steps. After correction, a region of interest is designed and obtained in the corrected image, the data in the region of interest are obtained, and grids are divided according to those data to obtain a plurality of grids. Obstacles in the grids are detected and marked to obtain marking results, from which an obstacle detection result corresponding to the original point cloud image is obtained. The obstacle detection method can solve the problem of the excessive cost of data acquisition, labeling and training in deep learning: obstacles can be detected accurately and in real time without collecting a large amount of data or performing complex, time-consuming learning and calculation operations, which greatly improves obstacle detection precision and increases calculation speed.
Radars are electronic devices that detect objects using electromagnetic waves. A radar emits electromagnetic waves to irradiate a target and receives the target's echo, thereby obtaining information such as the distance from the target to the electromagnetic wave emission point, the distance change rate (radial speed), the azimuth and the altitude. Taking the lidar as an example, the sensor data of the lidar are represented as a 3D point cloud, where each point corresponds to a measurement of a single lidar beam. Each point is described by coordinates on the X, Y and Z axes and other attributes, such as the intensity of the reflected laser pulse and even secondary echoes caused by partial reflections at object boundaries. Another representation of lidar data is the depth image. The depth image saves the 3D point cloud data as a 360-degree "picture" (depth image) of the scanned environment, where the row dimension represents the elevation angle of the laser beam and the column dimension represents the azimuth angle. With each incremental rotation about the Z-axis, the lidar sensor returns a number of range and intensity measurements, which are then stored in the corresponding cells of the depth image. Based on the depth image provided by the lidar, a visualization method is needed to convert the depth image into a 3D point cloud image. The visualization method can be as follows: reconstructing the X, Y and Z components of each point p from its pitch angle, yaw angle and measured range ρ; correcting the azimuth angle so that the center of the depth image corresponds to the direction of the X axis straight ahead of the vehicle; calculating the X, Y and Z coordinates for each depth image; and extracting all 3D point cloud data with a range greater than zero using the Open3D toolbox.
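As an illustrative sketch of the depth-image-to-point-cloud conversion described above (the function name and array layout are assumptions, not part of the claimed method), the per-cell reconstruction from pitch angle, yaw angle and range can be written as:

```python
import numpy as np

def depth_image_to_points(depth_img, pitch_angles, yaw_angles):
    """Convert a lidar depth (range) image to 3D points.

    depth_img:    (H, W) array of range measurements rho
    pitch_angles: (H,) elevation angle of each row, in radians
    yaw_angles:   (W,) azimuth angle of each column, in radians
    """
    pitch = pitch_angles[:, None]            # (H, 1), broadcast over columns
    yaw = yaw_angles[None, :]                # (1, W), broadcast over rows
    rho = depth_img
    # spherical -> Cartesian reconstruction of each cell
    x = rho * np.cos(pitch) * np.cos(yaw)
    y = rho * np.cos(pitch) * np.sin(yaw)
    z = rho * np.sin(pitch)
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # keep only cells with a range greater than zero (valid returns)
    return pts[depth_img.reshape(-1) > 0]
```

The resulting (N, 3) array could then be handed to a visualization toolbox such as Open3D for display.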
Referring to fig. 11, fig. 11 shows a schematic structural diagram of a grid division provided in the present application.
The embodiment of the application can be applied to unmanned equipment, such as unmanned vehicles, unmanned planes, wheeled robots, multi-legged robots (such as robot dogs) and the like.
The radar may be, for example, a pulse radar, a continuous wave (doppler) radar, an FMCW radar, a Moving Target Indication (MTI) radar, a synthetic aperture radar, a frequency agile radar, a cone scanning radar, a beyond-the-horizon radar, a microwave radar, a millimeter wave radar, a laser radar, or the like.
The number of radars loaded on the unmanned aerial vehicle is not limited in the embodiments of the present application, and may be, for example, 1, 2, 3, or the like.
The format of the original point cloud image is not limited in the embodiment of the present application, and may be, for example, BMP, JPG, PNG, TIF, GIF, PCX, PSD, CDR, PCD, UFO, EPS, RAW, and the like.
The format of the corrected point cloud image is not limited in the embodiment of the present application, and may be, for example, BMP, JPG, PNG, TIF, GIF, PCX, PSD, CDR, PCD, UFO, EPS, RAW, and the like.
The size of the original point cloud image is not limited in the embodiments of the present application, and may be, for example, 17KB, 20KB, 4MB, 9MB, and the like.
The size of the rectified point cloud image is not limited in the embodiments of the present application, and may be, for example, 17KB, 20KB, 4MB, 9MB, or the like.
The number of the regions of interest is not limited in the embodiments of the present application, and may be, for example, 1, 2, 3, 4, 5, 10, 20, 50, 100, and the like.
The number of the grids is not limited in the embodiments of the present application, and may be, for example, 1, 2, 3, 4, 5, 10, 20, 50, 100, 500, 1000, 10000, 100000, or the like.
The number of obstacles in the embodiments of the present application is not limited, and may be, for example, 1, 2, 3, 4, 5, 10, 20, 50, 100, 500, or the like.
The obstacle marking result is not limited in the embodiments of the present application, and may be represented, for example, by one or more of symbols, patterns, letters and numbers. For example, "Positive", "√", "1", "yes" or "a01" may be used to indicate that an obstacle exists in the grid, and "×", "0", "no" or "B02" may be used to indicate that the grid is non-obstacle.
The obstacle detection result is not limited in the embodiment of the present application, and may be, for example, three-dimensional coordinate data, contour data, a barycentric position, three-dimensional point cloud data, and the like of an obstacle in a preset area. The preset area is an area corresponding to an original point cloud image acquired by a radar in real time.
The number of the original point cloud images acquired by the radar is not limited in the embodiment of the present application, and may be, for example, 1, 2, 3, and the like. As an example, each original point cloud image may be rectified to yield one rectified point cloud image.
When a plurality of radars are adopted to obtain a plurality of original point cloud images in real time, the obstacle detection result corresponding to each original point cloud image can be respectively obtained, and then the data fusion mode is adopted to carry out data fusion on the obstacle detection result corresponding to each original point cloud image, so that the obstacle detection result of the target area is obtained. The target area may refer to, for example, a sector area in front of the unmanned aerial device (the sector area has a certain height and a horizontal cross section is a sector).
Referring to fig. 2, fig. 2 is a schematic diagram illustrating a process of performing coordinate rectification on the original point cloud image according to an embodiment of the present application.
In some optional embodiments, the step S102 includes:
step S201: acquiring a normal vector Np of an inclined ground corresponding to the original point cloud image under a preset rectangular coordinate system;
step S202: acquiring a normal vector Nz of a real ground corresponding to the original point cloud image under the preset rectangular coordinate system;
step S203: calculating a rotation transformation matrix between a normal vector Np of the inclined ground and a normal vector Nz of the real ground;
step S204: and carrying out rotation transformation on the original point cloud image by using the rotation transformation matrix to obtain the corrected point cloud image.
Therefore, a rectangular coordinate system is set first, and the original point cloud image is placed in the preset rectangular coordinate system (under normal conditions, the ground of the original point cloud image forms a certain angle with the real ground, that is, the included angle between them is not 0 degrees). The normal vector Np of the inclined ground and the normal vector Nz of the real ground corresponding to the original point cloud image are obtained, the rotation transformation matrix between them is calculated, and the original point cloud image is then rotated according to the calculated rotation transformation matrix to correct its coordinates, so that the corrected point cloud image is obtained. The advantages of calculating with the rotation transformation matrix are that the rotation transformation matrix for a given pose is unique, so no repeated calculation is needed; rotations can be composed directly, keeping the calculation process simple; and it is friendly to coordinate transformation, making it well suited to correcting the coordinates of the original point cloud image in unmanned driving. Rotating the original point cloud image through the rotation transformation matrix is helpful for processing the rigid transformation of the point cloud, reduces the amount of calculation and ensures calculation accuracy.
In some alternative embodiments, the normal vector Np of the inclined ground can be calculated first, with its orientation ensured to point along the positive Z-axis direction of the preset rectangular coordinate system (comprising the three coordinate axes X, Y and Z); the rotation transformation matrix between this normal and the Z-axis vector Nz = (0, 0, 1) of the preset rectangular coordinate system is then calculated and applied to the original point cloud image for rotation transformation. The specific calculation formulas are as follows:
1. Calculate the rotation axis by cross product: N_r = N_P × N_Z;
2. Calculate the included angle between the plane normal N_P and the standard Z axis: θ = arccos((N_P · N_Z) / (‖N_P‖ ‖N_Z‖));
3. Calculate the rotation transformation matrix between the two vectors (Rodrigues' rotation formula, where K is the skew-symmetric cross-product matrix of the unit rotation axis N_r / ‖N_r‖): R = I + sin(θ)·K + (1 − cos(θ))·K².
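The three calculation steps above can be sketched in Python with numpy as follows. This is an illustrative sketch, not the patented implementation; the anti-parallel case (where the cross product vanishes) is only partially handled:

```python
import numpy as np

def rotation_between(n_p, n_z):
    """Rotation matrix R such that R @ n_p = n_z, built with Rodrigues'
    rotation formula from the cross-product axis and included angle."""
    n_p = np.asarray(n_p, dtype=float)
    n_z = np.asarray(n_z, dtype=float)
    n_p = n_p / np.linalg.norm(n_p)
    n_z = n_z / np.linalg.norm(n_z)
    axis = np.cross(n_p, n_z)            # step 1: N_r = N_P x N_Z
    s = np.linalg.norm(axis)             # sin(theta)
    c = float(np.dot(n_p, n_z))          # cos(theta); step 2: theta = arccos(c)
    if s < 1e-12:                        # vectors (anti-)parallel; only the
        return np.eye(3)                 # aligned case is handled here
    k = axis / s                         # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])   # skew-symmetric cross matrix of k
    # step 3: Rodrigues' formula
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```

The resulting 3×3 matrix can then be applied to every point of the original point cloud, or embedded in a 4×4 homogeneous matrix when using a toolbox transform function.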
In some alternative embodiments, the rotation transformation may be performed by a combination of manual and intelligent methods. The manual method may be to calculate the rotation transformation matrix and correct the coordinates of the original point cloud image manually, receive the calculation result by using an interactive device, and transmit the calculation result to the user terminal. The intelligent method may be to complete the rectification operation on the original point cloud image in code; for example, the rotation transformation can be performed by using the python open3d library.
In some optional embodiments, the intelligent method can also combine the pcl library with ros to perform the rotation transformation, where the main function used for the point cloud transformation is void pcl::transformPointCloud(const pcl::PointCloud&lt;PointT&gt; &amp;cloud_in, pcl::PointCloud&lt;PointT&gt; &amp;cloud_out, const Eigen::Matrix4f &amp;transform). The parameter cloud_in is the source point cloud, cloud_out is the transformed point cloud, and transform is the transformation matrix.
In some optional embodiments, the corrected point cloud image may be further checked by using visualization to determine whether the ground of the corrected point cloud image is parallel to a horizontal axis of the preset rectangular coordinate system. Firstly setting coordinate axes, background colors and point display sizes, secondly defining R, G and B colors for the point cloud, outputting the point cloud to a viewer, using color management, and then setting a window position, namely displaying corrected coordinates after rotation transformation. And then, verifying whether the ground of the corrected image is parallel to a horizontal axis of a preset rectangular coordinate system or not through manual comparison.
In an embodiment of the present application, the rotation transformation matrix may include: a Rotation Matrix and a Transformation Matrix. The rotation matrix performs the rotation operation, and the transformation matrix performs the transformation operation. The transformation operations may in turn include one or more of Euclidean transformations, similarity transformations, affine transformations and projective transformations.
In the embodiments of the present application, rigid body motion has six degrees of freedom: the degrees of freedom of motion of a rigid body in three-dimensional space. Specifically, the rigid body can translate along three mutually perpendicular coordinate axes (forward/backward, up/down and left/right) and can rotate about three mutually perpendicular axes (the preset rectangular coordinate system); the three rotation directions are called pitch, yaw and roll.
The real ground is not limited in the embodiments of the present application, and may be, for example, a sea level in a geographic sense, a horizontal plane in a geographic sense, or a horizontal road surface during driving of an automobile.
In some optional embodiments, the process of acquiring data within the region of interest of the rectified point cloud image comprises:
and aiming at the corrected point cloud image, reserving data in a set region of interest, and removing data outside the region of interest to obtain data in the region of interest of the corrected point cloud image.
Therefore, a region of interest (ROI) is set first, data within the set region of interest are retained, and data outside the region of interest are rejected (i.e., removed). This reduces the amount of data to be calculated, makes the calculation more targeted, effectively reduces the number of point clouds and realizes point cloud data compression.
In some alternative embodiments, a statistical outlier removal filter (statistical outlierer removal filter) may be used to determine and remove data outside the region of interest from the rectified point cloud image data. In a specific implementation, the parameters of the statistical outlier elimination filter can be set as follows: the search neighborhood for a certain point cloud in the corrected point cloud image is a set neighborhood value, and the standard deviation multiple of the reference is a set multiple value, for example, the set neighborhood value may be 30, and the standard deviation multiple may be 1. And for a certain point cloud in the corrected point cloud image in the set search neighborhood range, if the average distance between the point cloud and the rest point clouds in the search neighborhood is greater than the standard range of point cloud distribution in the search neighborhood, determining the point cloud as an outlier (namely data outside the region of interest) and deleting the outlier. In specific implementation, the average distance from each point cloud in the corrected point cloud image to all neighborhood points in the search neighborhood range is calculated, the obtained result is assumed to be Gaussian distribution, the shape of the Gaussian distribution is determined by the mean value and the standard deviation, and the point with the corresponding average distance outside the standard range is determined to be the outlier deletion, wherein the standard range is the range corresponding to the product of the standard deviation and the set standard deviation multiple.
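A minimal numpy sketch of such a statistical outlier removal filter is shown below (brute-force neighbour search, suitable only for small clouds; real implementations such as pcl's StatisticalOutlierRemoval use a KD-tree, and the function name and defaults here are assumptions):

```python
import numpy as np

def statistical_outlier_filter(points, k=30, std_ratio=1.0):
    """Keep a point only if its mean distance to its k nearest neighbours
    is within (global_mean + std_ratio * global_std) of those mean
    distances, assuming they follow a Gaussian distribution."""
    points = np.asarray(points, dtype=float)
    # full pairwise distance matrix; O(n^2) memory, sketch only
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip self-distance in column 0
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]       # points beyond thresh are outliers
```

With the parameters from this embodiment (neighbourhood value 30, standard deviation multiple 1), the call would be statistical_outlier_filter(cloud, k=30, std_ratio=1.0).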
In some optional embodiments, a data set with the target as the main body may also be acquired to obtain attribute information (such as coordinate values) corresponding to the target, and the attribute information is mapped to points in a high-dimensional space. The high-dimensional information points are clustered by using DBSCAN, the information of the center point of the largest cluster of each information point in the data set is counted, and the range from the maximum value to the minimum value of all data in the data set is taken and set as the region-of-interest threshold. The region-of-interest threshold is used to judge whether a target is in the region of interest, and data outside the region of interest are rejected to obtain the data within the region of interest of the corrected point cloud image. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm. It defines clusters as the largest sets of density-connected points, can divide areas with sufficiently high density into clusters, and can find clusters of arbitrary shape in a noisy spatial database.
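Assuming cluster labels have already been produced by a DBSCAN implementation (with -1 marking noise, as in common libraries), the region-of-interest threshold step described above might be sketched as follows; the function name and margin parameter are illustrative assumptions:

```python
import numpy as np

def roi_from_largest_cluster(points, labels, margin=0.0):
    """Derive an ROI box from the min..max range of the largest cluster
    and keep only the points that fall inside it."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    valid = labels >= 0                       # -1 marks DBSCAN noise points
    biggest = np.bincount(labels[valid]).argmax()
    cluster = points[labels == biggest]
    lo = cluster.min(axis=0) - margin         # ROI threshold: minimum values
    hi = cluster.max(axis=0) + margin         # ROI threshold: maximum values
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    return points[inside], (lo, hi)
```

The returned (lo, hi) pair plays the role of the region-of-interest threshold: any later point can be tested against it without re-clustering.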
The number of radar point cloud data within the region of interest is not limited in the embodiments of the present application, and may be, for example, 1, 2, 3, 4, 5, 10, 20, 50, 100, 500, 1000, 10000, and the like.
In some optional embodiments, the rasterizing the data within the region of interest of the rectified point cloud image comprises:
and performing raster division on data in the region of interest of the corrected point cloud image based on the set raster resolution, so that the point cloud data corresponding to the corrected point cloud image is divided into a plurality of grids.
Therefore, the grid resolution is acquired in advance (set manually), and the point cloud data are divided into a plurality of grids on the premise of ensuring the precision of the grid resolution; compared with a vector representation, the grid data are more concrete and carry more specific information. The dividing step can selectively reduce the data in the point cloud data and select data in a more targeted manner, thereby reducing the amount of data to be calculated and reducing the computing load on the server (the computer that manages the computing resources).
In the embodiments of the present application, the resolution of the grid refers to the size of the cells in the grid data set and the ratio between the screen pixels and the image pixels at the current map scale. For example, one screen pixel may be the result of resampling nine image pixels to one pixel, when the grid resolution is 1.
In some optional embodiments, the operation of viewing the grid resolution may be right-clicking a layer in the content list, and then clicking an attribute; clicking a display tab; selecting a grid resolution to be displayed in the content list; click on determine to see grid resolution.
In some alternative embodiments, the operation of altering the resolution of the grid may be to alter the resolution of the grid using some resolution altering tool. If one grid has finer resolution than the other grids, the finer resolution grid may be resampled to the same resolution as the coarser grid, resulting in the same resolution for all grid data sets. This can increase processing speed and reduce data size. Unlike the pel size setting in the analysis environment, the resolution modification tool is only applicable to the generated grid.
In some alternative embodiments, the method of altering the resolution of the grid includes interpolation and aggregation. The interpolation method works through a resampling tool in a "grid" toolset in a "data management" toolset. The tool adopts a NEAREST neighbor method (NEAREST), a BILINEAR method (BILINEAR), a CUBIC interpolation method (CUBIC) or a mode resampling Method (MAJORITY) and the like to change the resolution of the value of the input grid. The aggregation method utilizes a statistical aggregation method specified in the neighborhood to obtain values of the output grids at different resolutions. This method works through aggregation and block statistics tools. The aggregation method may aggregate values of a set of pixels to generate a coarser resolution pixel. The types of statistical data that can be used to aggregate input values are Sum, min, max, mean, and Median. By means of which the required grid resolution can be set.
In the embodiment of the present application, the grid division of the region of interest includes: dividing the region of interest into a plurality of grids according to the set grid resolution; for each grid which does not meet the division condition, the division processing is not carried out; executing the following steps for each grid meeting the dividing condition: dividing a target grid into a plurality of grids, wherein the target grid is each grid meeting the dividing conditions; the division condition is whether the feature data of the point cloud in the grid meets one or more of a resolution condition, a feature distribution condition, a geometric shape condition and the like. In the present application, a uniform grid division method may be adopted, and a non-uniform grid division method may also be adopted. For example, after acquiring the external cuboid of the point cloud data corresponding to the corrected point cloud image, the external cuboid may be uniformly divided according to the set grid size, and the external cuboid may be divided into a plurality of grids. The length of each side of the grid corresponding to the set grid size can be the same or different. For example, the grid size may be set to be 50 × 40 × 30cm, that is, when the grid corresponding to the grid size is set to be a rectangular parallelepiped with each side length being 50, 40, or 30cm, the circumscribed rectangular parallelepiped is uniformly divided into a plurality of grids with the size of 50 × 40 × 30 cm. For example, the product of each side length of the circumscribed cuboid and a setting coefficient may be respectively determined as each grid side length corresponding to the set grid size, and the setting coefficient is greater than 0 and not greater than 1. 
For example, if the coefficient is set to 0.1%, and when the size of the external rectangular solid is 100 × 80 × 40 meters, the side lengths of the respective grids corresponding to the grid sizes are set to 100 × 0.1% =0.1 meter, 80 × 0.1% =0.08 meter, and 40 × 0.1% =0.04 meter, that is, the grid size is set to 10 × 8 × 4 cm, the data processing apparatus uniformly divides the external rectangular solid into a plurality of grids having sizes of 10 × 8 × 4 cm; for example, when the sides of the circumscribed rectangular parallelepiped are all 50 meters, that is, when the circumscribed rectangular parallelepiped is a cube, the grid size is set to 5 × 5 × 5 cm.
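The uniform division described above can be sketched as mapping each point to an (i, j, k) cell index over the circumscribed cuboid; the function name and the default grid size (50 × 40 × 30 cm, as in the example) are assumptions:

```python
import numpy as np

def divide_into_grids(points, grid_size=(0.5, 0.4, 0.3)):
    """Uniformly divide the circumscribed cuboid of `points` into cells of
    `grid_size` (metres) and return each point's (i, j, k) cell index."""
    points = np.asarray(points, dtype=float)
    size = np.asarray(grid_size, dtype=float)
    lo = points.min(axis=0)                  # corner of the circumscribed cuboid
    return np.floor((points - lo) / size).astype(int)
```

The per-cell point counts needed later (step S301) can then be obtained with np.unique(indices, axis=0, return_counts=True).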
In a specific embodiment, for example, the resolution may also be set to 500dpi, when the resolution of the grid is not less than 500dpi, the grid is retained and divided, and when the resolution of the grid is less than 500dpi, the grid is not divided, and the grid is directly removed.
Referring to fig. 3, fig. 3 is a schematic flow chart illustrating a result of obtaining an obstacle marking result for each grid according to an embodiment of the present application.
In some embodiments, the step S104 includes:
step S301: counting the number of point clouds in each grid, and marking the grids with the number reaching a preset number threshold as effective grids;
step S302: sorting the height of the point clouds in each effective grid to obtain the maximum height value of each effective grid;
step S303: determining that the obstacle marking result of the effective grid is an obstacle when the maximum height value of the effective grid is greater than a first preset height threshold value.
Therefore, a number threshold and a first height threshold are set first. The number threshold is used to judge whether a grid is a valid grid according to the number of point clouds in the grid, and the first height threshold is used to judge whether an obstacle exists in a valid grid according to the maximum height value of the valid grid. By setting the number threshold and the first height threshold, valid grids and obstacles can be judged intuitively; the judging process is simple, fast and efficient, and consumes few resources. By marking the valid grids and the obstacles, the judgment results can be obtained directly without repeating the judging steps, saving time and resources. When a receiving device is used to receive the marking instruction, the marking operation can be performed in a more orderly and logical manner, improving operation efficiency and ensuring that each operation is executed correctly.
In the embodiment of the present application, the point cloud number reaching the preset number threshold means that the point cloud number is not less than (greater than or equal to) the preset number threshold.
In some optional embodiments, the method may further comprise: marking the grids whose point cloud number is smaller than the preset number threshold as invalid grids. In other optional embodiments, the method may further comprise: performing no operation on the grids whose point cloud number is smaller than the preset number threshold.
In one particular embodiment, the number of point clouds in the grid may be counted using pcl.
In a specific embodiment, the sorting the heights of the point clouds in each effective grid to obtain the maximum height value of each effective grid includes sorting the heights of the point clouds in the effective grids, selecting a part of the point clouds (which may be 1, 2, 3, 4, 5, 10, 20, 50, 100, 500, 1000, 10000, for example), calculating the average height value, and using the average height value to represent the maximum height value of the effective grid. And if the maximum height value of the effective grid is larger than a preset first height threshold value, marking the obstacle of the effective grid as an obstacle.
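The grid marking of steps S301 to S303, using the averaged top-k height as the "maximum height value" as described in this embodiment, can be sketched as follows (the defaults for min_points, height_thresh and top_k are illustrative assumptions):

```python
from collections import defaultdict

def mark_obstacle_grids(points, cell_idx, min_points=5,
                        height_thresh=0.3, top_k=3):
    """S301: a grid is 'valid' if it holds at least min_points points.
    S302/S303: a valid grid is marked as an obstacle (True) if the mean
    of its top_k point heights exceeds height_thresh. Grids that are not
    valid do not appear in the result at all."""
    cells = defaultdict(list)
    for p, idx in zip(points, cell_idx):
        cells[tuple(idx)].append(float(p[2]))   # collect z (height) per grid
    marks = {}
    for key, heights in cells.items():
        if len(heights) < min_points:
            continue                            # S301: not a valid grid
        top = sorted(heights, reverse=True)[:top_k]   # S302: sort heights
        max_h = sum(top) / len(top)             # averaged "maximum height value"
        marks[key] = bool(max_h > height_thresh)  # S303: obstacle marking result
    return marks
```

Here cell_idx would be the per-point grid indices produced by the division step, and the returned dictionary maps each valid grid to its obstacle marking result.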
In one embodiment, the method of sorting the heights of the point clouds in each valid grid can be manual or intelligent. The intelligent method may be, for example, a divide-and-conquer method: a large file containing a large amount of data is divided into a plurality of small files to keep the data discrete; the data in each small file are then sorted using quick sort, bubble sort, selection sort, insertion sort or heap sort; the small files are then merged, where the current "maximum value" of each small file may be cached in a variable and written to the large file in batches once a certain number has accumulated, after which the variable is emptied and the cycle repeats until all small files have been written. In the field of automatic driving, because the number of point clouds in an image is large, using an intelligent sorting method can improve sorting efficiency and thus the efficiency of point cloud data processing in practical applications.
In another embodiment, the specific operation of determining whether the number of point clouds in the grid reaches the preset number threshold may be to perform point determination on the point cloud data in the grid by using a point determination model to obtain a point determination result of the grid, where the point determination result of the grid is used to indicate whether the number of point clouds in the grid reaches the preset number threshold. The training process of the point number judgment model comprises the following steps: acquiring a first training set, wherein the first training set comprises a plurality of first training data, each first training data comprises point cloud data of a sample grid and marking data of a point judgment result of the sample grid, and the point judgment result of the sample grid is used for indicating whether the point cloud number of the sample grid reaches a preset number threshold value; for each first training data in the first training set, performing the following: inputting point cloud data in a sample grid in the first training data into a preset first deep learning model to obtain prediction data of a point number judgment result of the sample grid; updating model parameters of the first deep learning model based on prediction data and marking data of the point judgment result of the sample grid; detecting whether a preset first training end condition is met; if yes, taking the trained first deep learning model as the point judgment model; and if not, continuing to train the first deep learning model by utilizing the next first training data.
In another embodiment, the specific operation of determining whether the maximum height value of the effective grid is greater than the first preset height threshold may be to perform height value determination on the point cloud data of the effective grid by using a height value determination model to obtain a maximum height value determination result of the effective grid, where the maximum height value determination result of the effective grid is used to indicate whether the maximum height value of the effective grid is greater than the first preset height threshold. The training process of the height value judgment model comprises the following steps: acquiring a second training set, wherein the second training set comprises a plurality of second training data, each second training data comprises point cloud data of a sample effective grid and marking data of a maximum height value judgment result of the sample effective grid, and the maximum height value judgment result of the effective grid is used for indicating whether the maximum height value of the effective grid is greater than a first preset height threshold value or not; for each second training data in the second training set, performing the following: inputting point cloud data of the sample effective grid in the second training data into a preset second deep learning model to obtain prediction data of a maximum height value judgment result of the sample effective grid; updating model parameters of the second deep learning model based on prediction data and marking data of a maximum height value judgment result of the sample effective grid; detecting whether a preset second training end condition is met or not; if so, taking the trained second deep learning model as the height value judgment model; and if not, continuing to train the second deep learning model by utilizing the next second training data.
The number of training data (including the first training data to the third training data) is not limited in the embodiments of the present application, and may be, for example, 1, 2, 3, 4, 5, 10, 20, 50, 100, 500, 1000, 10000.
The method for acquiring the annotation data in the embodiment of the present application is not limited, and for example, a manual annotation method may be adopted, and an automatic annotation method or a semi-automatic annotation method may also be adopted.
The embodiment of the present application does not limit the model training process, and for example, a supervised learning training mode may be adopted, or a semi-supervised learning training mode may be adopted, or an unsupervised learning training mode may be adopted.
In the embodiment of the present application, each training end condition (including the first training end condition to the third training end condition) is not limited; it may be, for example, that the number of training iterations reaches a preset number (for example, 1, 3, 10, 100, 1000, or 10000), that all training data in the corresponding training set have completed one or more rounds of training, or that the total loss value obtained in the current round of training is not greater than a preset loss value.
The number threshold is not limited in the embodiment of the present application, and may be, for example, 1, 2, 3, 4, 5, 10, 20, 50, 100, 500, 1000, 10000.
The first height threshold is not limited in the embodiments of the present application, and may be, for example, 1mm, 1cm, 10cm, 30cm, 50cm, 1m, 2m, 3m, 4m, 5m, 10m, 20m, 50m, 100m, 500m, 1000m, and 10000m.
The interactive device is not limited in this embodiment, and may be, for example, a mobile phone, a tablet computer, a notebook computer, a desktop computer, an intelligent wearable device, or an intelligent terminal device with a mouse, a touch pad, and a touch pen, or the interactive device may be a workstation or a console.
Referring to fig. 4, fig. 4 is a schematic flow chart illustrating another method for obtaining the obstacle marking result of each grid according to the embodiment of the present application.
In some embodiments, the step S104 may further include:
step S401: when the maximum height value of the effective grid is not larger than the first preset height threshold value, calculating the maximum height difference between the effective grid and the surrounding neighborhood grid;
step S402: when the maximum height difference between the effective grid and the grids in the surrounding neighborhood is larger than a second preset height threshold value, determining that the obstacle marking result of the effective grid is an obstacle;
step S403: when the maximum height difference between the effective grid and the surrounding neighborhood grid is not larger than the second preset height threshold value, determining that the obstacle marking result of the effective grid is a non-obstacle.
Therefore, when the maximum height value of the effective grid is not greater than the first preset height threshold, the maximum height difference between the effective grid and the grids in its surrounding neighborhood is calculated. A second preset height threshold is set for judging, from this maximum height difference, whether an obstacle exists in the effective grid. When the maximum height difference between the effective grid and the surrounding neighborhood grids is greater than the second preset height threshold, it is determined that an obstacle exists in the effective grid; when it is not greater than the second preset height threshold, it is determined that no obstacle exists in the effective grid.
By setting the second height threshold, obstacles in the effective grid can be judged intuitively; the judgment process is simple, the judgment speed is fast, the judgment efficiency is high, and few computing resources are consumed. The judgment result is obtained directly by marking the obstacles in the effective grid, without repeating the judgment step, which saves time and resources. When a receiving device is used to receive the marking instruction, the marking operation can be performed in a more orderly and logical manner, improving operation efficiency and ensuring that the various operations are executed correctly.
In some alternative embodiments, the neighborhood grids may be obtained by computing neighborhood statistics through a surrounding-neighborhood operation. The surrounding-neighborhood operation is also called a focal operation, and involves a focus pixel and a group of surrounding pixels. The surrounding pixels are selected according to their distance and/or direction relative to the focus pixel. One necessary parameter of the surrounding-neighborhood operation is the neighborhood type. Neighborhood types generally include rectangles, circles, and wedges; in this application, the neighborhood is formed by the divided grids. A rectangular neighborhood is defined by a width and height in pixels, such as a 3 x 3 window centered on the focus pixel. The neighborhood statistic is computed from the pixel values within the neighborhood and then assigned to the focus pixel. To complete a grid neighborhood operation, the focus pixel is moved from one pixel to the next until all pixels have been visited. The surrounding-neighborhood operation uses the pixel values of the defined neighborhood rather than those of a different input grid. For the output grid, the neighborhood statistic may be a value such as the minimum, maximum, range, sum, mean, median, or standard deviation, or a measurement such as the mode, minority, or category. In the present application, the maximum height of the surrounding neighborhood grids needs to be obtained.
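The grid marking rules described above (point-count threshold for effective grids, first preset height threshold, and the surrounding-neighborhood height difference against the second preset height threshold) may be sketched as follows. The cell layout, the 3 x 3 focal window, and the reading of "maximum height difference" as the grid's maximum height minus the lowest neighboring maximum are illustrative assumptions.

```python
import numpy as np

def mark_grids(cells, rows, cols, count_thresh, h1, h2):
    # cells: dict mapping (row, col) -> list of z-heights of the points
    # that fell into that grid. Returns a boolean obstacle map.
    max_h = np.full((rows, cols), -np.inf)
    valid = np.zeros((rows, cols), dtype=bool)
    for (i, j), zs in cells.items():
        if len(zs) >= count_thresh:        # enough points: effective grid
            valid[i, j] = True
            max_h[i, j] = max(zs)          # maximum height value
    obstacle = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            if not valid[i, j]:
                continue
            if max_h[i, j] > h1:           # first preset height threshold
                obstacle[i, j] = True
                continue
            # 3x3 focal window: height difference against the effective
            # grids in the surrounding neighborhood
            nb = max_h[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            finite = nb[np.isfinite(nb)]
            if max_h[i, j] - finite.min() > h2:
                obstacle[i, j] = True      # second preset height threshold
    return obstacle
```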
In some optional embodiments, whether the maximum height difference between the effective grid and the surrounding neighborhood grids is greater than the second preset height threshold may be determined by applying a height difference judgment model to the point cloud data of the effective grid and the surrounding neighborhood grids to obtain a maximum height difference judgment result, where the judgment result indicates whether the maximum height difference between the effective grid and the surrounding neighborhood grids is greater than the second preset height threshold. The training process of the height difference judgment model includes: acquiring a third training set, where the third training set includes a plurality of third training data, each third training data includes point cloud data of a sample effective grid and its surrounding neighborhood grids and labeled data of the maximum height difference judgment result of the sample effective grid and the surrounding neighborhood grids, and the judgment result indicates whether the maximum height difference between the sample effective grid and the surrounding neighborhood grids is greater than the second preset height threshold; for each third training data in the third training set, performing the following: inputting the point cloud data of the sample effective grid and the surrounding neighborhood grids in the third training data into a preset third deep learning model to obtain predicted data of the maximum height difference judgment result of the sample effective grid and the surrounding neighborhood grids; updating model parameters of the third deep learning model based on the predicted data and the labeled data of the maximum height difference judgment result of the sample effective grid and the surrounding neighborhood grids; detecting whether a preset third training end condition is met; if so, taking the trained third deep learning model as the height difference judgment model; and if not, continuing to train the third deep learning model with the next third training data.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating a process for obtaining an obstacle detection result according to an embodiment of the present application.
In some embodiments, the step S105 includes:
step S501: acquiring size information of the obstacle and barycentric coordinate information of the obstacle under a preset rectangular coordinate system based on the point cloud data of each grid of the obstacle marked by the obstacle marking result;
step S502: calculating to obtain real coordinate information of each grid of the obstacle under a preset rectangular coordinate system based on the coordinate information of the region of interest and the index information of each grid;
step S503: and acquiring three-dimensional detection information of the obstacle as an obstacle detection result corresponding to the original point cloud image based on the size information of the obstacle, the barycentric coordinates of the obstacle and the real coordinate information of each grid of the obstacle.
Thus, the size information of the obstacle and its barycentric coordinate information in the preset rectangular coordinate system can be obtained further from the point cloud data of each grid whose obstacle marking result is an obstacle, which facilitates obtaining the subsequent coordinate data. Moreover, from the coordinate information of the region of interest and the index information of each grid, the real coordinate information of each grid of the obstacle in the preset rectangular coordinate system can be conveniently calculated to determine the position of each obstacle. The three-dimensional detection information of the obstacle, obtained from the size information of the obstacle, the barycentric coordinates of the obstacle, and the real coordinate information of each grid of the obstacle, serves as the obstacle detection result corresponding to the original point cloud image; this reduces the difficulty of spatial modeling in subsequent obstacle avoidance, allows the calculation to proceed in a more orderly and logical manner, improves operation efficiency, and ensures that the various operations are executed correctly.
The index information is not limited in the embodiment of the present application, and may be, for example, R tree series spatial index information, quadtree spatial index information, spatial index information of fixed grid division, or customized index information suitable for the present application.
In some optional embodiments, the barycentric coordinate information of the obstacle in the preset rectangular coordinate system may be calculated with OpenCV: the input obstacle image to be calculated is color-converted and a barycenter image is drawn, from which the coordinate information of the barycenter can be calculated. Taking a spatial triangle as an example, for any point P in the spatial triangle ABC there exist three numbers w1, w2, and w3 with w1 + w2 + w3 = 1 such that P = w1*A + w2*B + w3*C (that is, P is a linear combination of A, B, and C), and (w1, w2, w3) is referred to as the barycentric coordinates of the point P with respect to the spatial triangle.
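The spatial-triangle barycentric coordinates described above may be computed, for example, by solving the 2 x 2 linear system given by the triangle's edge vectors; the function name below is illustrative.

```python
import numpy as np

def barycentric(P, A, B, C):
    # Barycentric coordinates (w1, w2, w3) of point P in triangle ABC,
    # so that P = w1*A + w2*B + w3*C and w1 + w2 + w3 = 1.
    v0, v1, v2 = B - A, C - A, P - A          # edge and query vectors
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w2 = (d11 * d20 - d01 * d21) / denom      # weight of B
    w3 = (d00 * d21 - d01 * d20) / denom      # weight of C
    w1 = 1.0 - w2 - w3                        # weight of A
    return w1, w2, w3
```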
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a process for acquiring size information of an obstacle according to an embodiment of the present application.
In some embodiments, the acquiring of the size information of the obstacle based on the obstacle marking result of each of the grids includes:
step S601: for each of the grids for which the obstacle labeling result is an obstacle, performing the following processing: traversing all point cloud data in the grid, and respectively finding out the maximum value and the minimum value corresponding to three coordinate axes of the preset rectangular coordinate system corresponding to the grid;
step S602: and calculating the size information of the obstacle based on the differences between the maximum values and the minimum values corresponding to the three coordinate axes of the preset rectangular coordinate system over all grids whose obstacle marking result is an obstacle.
Therefore, traversing all the point cloud data, that is, performing non-discarding full processing on the rasterized obstacle point cloud data, guarantees the integrity of the point cloud data, sets the subsequent threshold ranges on the full data, and finds the maximum and minimum values corresponding to the three coordinate axes of the preset rectangular coordinate system for each grid. With the integrity of the obstacle data guaranteed, the obstacle size is calculated from the differences between the corresponding maximum and minimum values; for example, the obstacle size may be output as a standard cuboid. This improves the fit of the obstacle size, reduces the difficulty of spatial modeling in subsequent obstacle avoidance, and improves the real-time performance of obstacle avoidance. At the same time, since the cuboid completely covers the obstacle, subsequent obstacle avoidance measures based on the obstacle size are guaranteed to be safe and reliable, avoiding scraping or collision caused by underestimating the obstacle's size or distance.
In the present embodiment, the output shape of the obstacle size may be a cuboid, cube, cylinder, cone, sphere, hemisphere, terrace, or other irregular shape.
In some optional embodiments, when traversing the data, a parallel-process-then-merge form may be adopted, which significantly increases speed; alternatively, a sequential execution form may be adopted, in which the previous point is marked and released after each point is processed, which keeps memory usage small. Both schemes are feasible, and the choice can be customized according to different scene requirements and resource limitations. The final data output format is the maximum and minimum values on the three standard axes of the spatial coordinate system, that is, a format that completely covers all the rasterized obstacle point cloud data.
In a specific embodiment, for each obstacle grid, all point clouds in the grid are traversed to find the maximum and minimum values on the X, Y, and Z axes; the differences between the maximum and minimum values on the X, Y, and Z axes are then calculated to obtain the length, width, and height of the obstacle, that is, the size information of the obstacle.
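The specific embodiment above may be sketched as follows, assuming the point cloud of each obstacle grid is an (N, 3) array of x, y, z values:

```python
import numpy as np

def obstacle_size(obstacle_grids):
    # Length/width/height of an obstacle from the point clouds of all
    # grids marked as that obstacle (full, non-discarding traversal).
    pts = np.vstack(obstacle_grids)        # stack every grid's (N, 3) points
    mins = pts.min(axis=0)                 # per-axis minimum (X, Y, Z)
    maxs = pts.max(axis=0)                 # per-axis maximum (X, Y, Z)
    length, width, height = maxs - mins    # extents along X, Y, Z
    return length, width, height
```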
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a process of acquiring barycentric coordinate information of an obstacle according to an embodiment of the present application.
In some embodiments, the acquiring of the barycentric coordinate information of the obstacle in a preset rectangular coordinate system includes:
step S701: calculating to obtain a marking result of the obstacle as gravity center coordinate information of each grid of the obstacle under the preset rectangular coordinate system;
step S702: and calculating to obtain the barycentric coordinate information of the obstacle under the preset rectangular coordinate system based on the barycentric coordinate information of each grid under the preset rectangular coordinate system.
Therefore, the barycentric coordinate information of each grid in the preset rectangular coordinate system is calculated, and from it the barycentric coordinate information of the obstacle in the preset rectangular coordinate system is calculated. This step determines the barycentric coordinates of the obstacle via the barycentric coordinates of the grids, which narrows the calculation range, reduces repeated calculation steps, and saves time and resources. If the barycentric coordinate information of the grids were not obtained in advance, calculating the barycentric coordinates of the obstacle directly would be more complex and difficult, and the errors would be harder to reduce.
In a specific embodiment, the barycentric coordinate value of the obstacle point cloud can be calculated in a traversing manner, the real coordinate value of the grid is calculated according to the starting point of the region of interest and the grid index, and barycentric coordinate information of the obstacle under the preset rectangular coordinate system is obtained through calculation.
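The calculation above may be sketched as follows, assuming the grid index is a (row, column) pair with columns advancing along the x-axis and rows along the y-axis from the region-of-interest starting point, and taking the obstacle's center of gravity as the point-count-weighted mean of the per-grid centroids (equivalent to the centroid of all the obstacle's points); these conventions are illustrative assumptions.

```python
import numpy as np

def grid_real_origin(roi_origin, grid_index, resolution):
    # Real (x, y) of a grid cell from the ROI starting point and the
    # grid's (row, col) index; resolution is the grid cell size.
    i, j = grid_index
    return (roi_origin[0] + j * resolution,   # column j along x
            roi_origin[1] + i * resolution)   # row i along y

def obstacle_centroid(grids_points):
    # Center of gravity of an obstacle: mean of the per-grid centroids,
    # weighted by each grid's point count.
    per_grid = [(np.mean(g, axis=0), len(g)) for g in grids_points]
    total = sum(n for _, n in per_grid)
    return sum(c * n for c, n in per_grid) / total
```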
Referring to fig. 8, fig. 8 shows a schematic structural diagram of an obstacle detection device provided in an embodiment of the present application.
The embodiment of the present application further provides an obstacle detection device, and the specific implementation manner of the obstacle detection device is consistent with the implementation manner and the achieved technical effect described in the implementation manner of the method, and details are not repeated.
The device comprises:
the acquisition module 101 is used for acquiring an original point cloud image in real time by using a radar;
the correction module 102 is configured to perform coordinate correction on the original point cloud image to obtain a corrected point cloud image;
a dividing module 103, configured to perform raster division on data within the region of interest of the corrected point cloud image to obtain multiple grids;
a marking module 104, configured to obtain an obstacle marking result for each grid;
a result module 105, configured to obtain an obstacle detection result corresponding to the original point cloud image based on an obstacle marking result of each grid.
In some optional embodiments, the correction module 102 is configured to:
acquiring a normal vector Np of an inclined ground corresponding to the original point cloud image under a preset rectangular coordinate system;
acquiring a normal vector Nz of the real ground corresponding to the original point cloud image under the preset rectangular coordinate system;
calculating a rotation transformation matrix between a normal vector Np of the inclined ground and a normal vector Nz of the real ground;
and carrying out rotation transformation on the original point cloud image by using the rotation transformation matrix to obtain the corrected point cloud image.
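The rotation transformation between the normal vector Np of the inclined ground and the normal vector Nz of the real ground may be computed, for example, with Rodrigues' rotation formula; the function name and degenerate-case handling below are illustrative choices, not prescribed by the embodiment.

```python
import numpy as np

def rotation_between(np_vec, nz_vec):
    # Rotation matrix taking the tilted-ground normal Np onto the true
    # ground normal Nz (Rodrigues' formula for unit vectors a -> b).
    a = np_vec / np.linalg.norm(np_vec)
    b = nz_vec / np.linalg.norm(nz_vec)
    v = np.cross(a, b)                 # rotation axis (unnormalized)
    c = np.dot(a, b)                   # cosine of the rotation angle
    if np.isclose(c, -1.0):            # opposite normals: axis ambiguous
        raise ValueError("degenerate case: Np is opposite to Nz")
    K = np.array([[0.0, -v[2], v[1]],  # skew-symmetric cross-product matrix
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)
```

Applying the returned matrix to every point of the original point cloud image then yields the corrected point cloud image.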
In some optional embodiments, the correction module 102 may be further configured to:
and aiming at the corrected point cloud image, reserving data in a set region of interest, and removing data outside the region of interest to obtain data in the region of interest of the corrected point cloud image.
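A minimal sketch of this region-of-interest cropping, assuming the region of interest is an axis-aligned box given by its minimum and maximum corners:

```python
import numpy as np

def crop_to_roi(points, roi_min, roi_max):
    # Keep only the points inside the set region of interest (the box
    # [roi_min, roi_max]); data outside the ROI is removed.
    mask = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return points[mask]
```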
In some optional embodiments, the dividing module 103 may be configured to:
and performing raster division on data in the region of interest of the corrected point cloud image based on the set raster resolution, so that the point cloud data corresponding to the corrected point cloud image is divided into a plurality of rasters.
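A minimal sketch of the raster division, assuming a square grid of the set resolution laid over the x-y plane of the region of interest; the index convention is illustrative.

```python
import numpy as np

def rasterize(points, roi_min, resolution):
    # Assign each ROI point to a grid cell of the set raster resolution;
    # returns {(x_index, y_index): [points in that cell]}.
    idx = np.floor((points[:, :2] - roi_min) / resolution).astype(int)
    cells = {}
    for (ix, iy), p in zip(idx, points):
        cells.setdefault((int(ix), int(iy)), []).append(p)
    return cells
```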
In some alternative embodiments, the tagging module 104 may be configured to:
counting the number of point clouds in each grid, and marking the grids with the number reaching a preset number threshold as effective grids;
sorting the height of the point clouds in each effective grid to obtain the maximum height value of each effective grid;
determining that the obstacle marking result of the effective grid is an obstacle when the maximum height value of the effective grid is greater than a first preset height threshold value.
In some optional embodiments, the tagging module 104 may be further configured to:
when the maximum height value of the effective grid is not larger than the first preset height threshold value, calculating the maximum height difference between the effective grid and the grids in the surrounding neighborhood;
when the maximum height difference between the effective grid and the grids in the surrounding neighborhood is larger than a second preset height threshold value, determining that the obstacle marking result of the effective grid is an obstacle;
when the maximum height difference between the effective grid and the surrounding neighborhood grid is not larger than the second preset height threshold value, determining that the obstacle marking result of the effective grid is a non-obstacle.
In some alternative embodiments, the results module 105 may be configured to:
acquiring size information of the obstacle and barycentric coordinate information of the obstacle in a preset rectangular coordinate system based on the point cloud data of each grid whose obstacle marking result is an obstacle;
calculating real coordinate information of each grid of the obstacle in the preset rectangular coordinate system based on the coordinate information of the region of interest and the index information of each grid;
and acquiring three-dimensional detection information of the obstacle as an obstacle detection result corresponding to the original point cloud image based on the size information of the obstacle, the barycentric coordinate of the obstacle and the real coordinate information of each grid of the obstacle.
In some optional embodiments, the process of acquiring the size information of the obstacle includes:
for each of the grids for which the obstacle labeling result is an obstacle, performing the following processing: traversing all point cloud data in the grid, and respectively finding out the maximum value and the minimum value corresponding to three coordinate axes of the preset rectangular coordinate system corresponding to the grid;
calculating the size information of the obstacle based on the differences between the maximum values and the minimum values corresponding to the three coordinate axes of the preset rectangular coordinate system over all grids whose obstacle marking result is an obstacle.
in some optional embodiments, the obtaining of the barycentric coordinate information of the obstacle in a preset rectangular coordinate system includes:
calculating to obtain a marking result of the obstacle as gravity center coordinate information of each grid of the obstacle under the preset rectangular coordinate system;
and calculating to obtain the barycentric coordinate information of the obstacle under the preset rectangular coordinate system based on the barycentric coordinate information of each grid under the preset rectangular coordinate system.
Device embodiments
Referring to fig. 9, fig. 9 shows a block diagram of an electronic device 200 according to an embodiment of the present disclosure.
The electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
The memory 210 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 211 and/or cache memory 212, and may further include Read Only Memory (ROM) 213.
The memory 210 further stores a computer program executable by the processor 220, so that the processor 220 implements the steps of any one of the methods described above; the specific implementation manner is consistent with the implementation manner and technical effects described for the method embodiments, and is not repeated here.
Memory 210 may also include a utility 214 having at least one program module 215, such program modules 215 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Accordingly, the processor 220 may execute the computer programs described above, and may execute the utility 214.
The processor 220 may employ one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
Bus 230 may be one or more of any of several types of bus structures including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a local bus using any of a variety of bus architectures.
The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may be through input-output interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
Media embodiments
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of any one of the methods are implemented, and a specific implementation manner of the method is consistent with the implementation manner and the achieved technical effect described in the implementation manner of the method, and some details are not repeated.
Referring to fig. 10, fig. 10 is a schematic structural diagram illustrating a program product provided in an embodiment of the present application.
The program product is for implementing any of the methods described above. The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this respect, and in the embodiments of the present application, the readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic or optical forms, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ and conventional procedural programming languages such as C and Python. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
While the present application is described in terms of various aspects, including exemplary embodiments, the principles of the invention should not be limited to the disclosed embodiments, but are also intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An obstacle detection method, characterized in that the method comprises:
acquiring an original point cloud image in real time by using a radar;
carrying out coordinate correction on the original point cloud image to obtain a corrected point cloud image;
performing raster division on data in the region of interest of the corrected point cloud image to obtain a plurality of grids;
obtaining obstacle marking results of each grid;
and acquiring an obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid.
2. The obstacle detection method according to claim 1, wherein the performing coordinate correction on the original point cloud image comprises:
acquiring a normal vector Np of an inclined ground corresponding to the original point cloud image under a preset rectangular coordinate system;
acquiring a normal vector Nz of a real ground corresponding to the original point cloud image under the preset rectangular coordinate system;
calculating a rotation transformation matrix between a normal vector Np of the inclined ground and a normal vector Nz of the real ground;
and performing rotation transformation on the original point cloud image by using the rotation transformation matrix to obtain the corrected point cloud image.
3. The obstacle detection method according to claim 1, wherein the process of acquiring data within the region of interest of the corrected point cloud image includes:
and aiming at the corrected point cloud image, reserving data in a set region of interest, and removing data outside the region of interest to obtain data in the region of interest of the corrected point cloud image.
4. The obstacle detection method according to claim 1, wherein the performing raster division on the data within the region of interest of the corrected point cloud image comprises:
and performing raster division on data in the region of interest of the corrected point cloud image based on the set raster resolution, so that the point cloud data corresponding to the corrected point cloud image is divided into a plurality of grids.
5. The obstacle detection method according to claim 1, wherein the acquiring of an obstacle marking result for each of the grids comprises:
counting the number of points in each grid, and marking grids whose count reaches a preset number threshold as effective grids;
sorting the heights of the points in each effective grid to obtain a maximum height value of each effective grid;
and when the maximum height value of an effective grid is greater than a first preset height threshold, determining that the obstacle marking result of the effective grid is an obstacle.
6. The obstacle detection method according to claim 5, wherein the acquiring of an obstacle marking result for each of the grids further comprises:
when the maximum height value of an effective grid is not greater than the first preset height threshold, calculating the maximum height difference between the effective grid and the grids in its surrounding neighborhood;
when the maximum height difference between the effective grid and the grids in its surrounding neighborhood is greater than a second preset height threshold, determining that the obstacle marking result of the effective grid is an obstacle;
and when the maximum height difference between the effective grid and the grids in its surrounding neighborhood is not greater than the second preset height threshold, determining that the obstacle marking result of the effective grid is a non-obstacle.
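The two-stage marking rule of claims 5 and 6 (point-count threshold, then absolute height, then neighborhood height difference) can be sketched as follows. Illustrative only; the three threshold values are assumptions, the claims leave them unspecified, and an 8-neighborhood is assumed for "surrounding neighborhood":

```python
import numpy as np

# Assumed, illustrative thresholds (the claims do not fix their values)
MIN_POINTS = 3   # preset number threshold (claim 5)
H_ABS = 0.5      # first preset height threshold, metres
H_DIFF = 0.2     # second preset height threshold, metres

def mark_grids(cell_max_height, cell_count):
    """cell_max_height, cell_count: 2-D arrays over the grid.
    Returns a boolean map: True where the grid is marked as an obstacle."""
    valid = cell_count >= MIN_POINTS                 # effective grids
    obstacle = valid & (cell_max_height > H_ABS)     # claim 5: absolute height
    rows, cols = cell_max_height.shape
    for r in range(rows):
        for c in range(cols):
            if not valid[r, c] or obstacle[r, c]:
                continue
            # claim 6: max height difference against the 8-neighbourhood
            r0, r1 = max(r - 1, 0), min(r + 2, rows)
            c0, c1 = max(c - 1, 0), min(c + 2, cols)
            neigh = cell_max_height[r0:r1, c0:c1]
            if np.max(np.abs(cell_max_height[r, c] - neigh)) > H_DIFF:
                obstacle[r, c] = True
    return obstacle
```

The neighborhood test is what lets the method catch low obstacles (curbs, ramps edges) that the absolute-height test alone would miss.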
7. The obstacle detection method according to claim 1, wherein the obtaining of the obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid comprises:
acquiring size information of the obstacle and barycentric coordinate information of the obstacle in a preset rectangular coordinate system based on the point cloud data of each grid whose obstacle marking result is an obstacle;
calculating real coordinate information, in the preset rectangular coordinate system, of each grid whose obstacle marking result is an obstacle, based on the coordinate information of the region of interest and the index information of each grid;
and acquiring three-dimensional detection information of the obstacle, as the obstacle detection result corresponding to the original point cloud image, based on the size information of the obstacle, the barycentric coordinate information of the obstacle, and the real coordinate information of each grid of the obstacle.
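The index-to-real-coordinate step of claim 7 inverts the grid division: the ROI origin plus the index times the resolution recovers a cell's position. A sketch with assumed ROI origin and resolution values (the claims fix neither):

```python
def grid_center_to_world(ix, iy, roi_min_x, roi_min_y, resolution=0.2):
    """Real (x, y) of the centre of grid cell (ix, iy) in the preset
    rectangular coordinate system, recovered from the ROI origin and the
    grid index."""
    return (roi_min_x + (ix + 0.5) * resolution,
            roi_min_y + (iy + 0.5) * resolution)
```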
8. The obstacle detection method according to claim 7, wherein the process of acquiring the size information of the obstacle comprises:
for each grid whose obstacle marking result is an obstacle, performing the following processing: traversing all point cloud data in the grid, and finding the maximum value and the minimum value along each of the three coordinate axes of the preset rectangular coordinate system;
and calculating the size information of the obstacle based on the differences between the maximum values and the minimum values along the three coordinate axes over all grids whose obstacle marking result is an obstacle.
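The per-axis max-minus-min of claim 8 yields the obstacle's axis-aligned extents. A minimal sketch, illustrative only, with the points of all of one obstacle's grids already stacked into a single array:

```python
import numpy as np

def obstacle_size(points):
    """Axis-aligned extents (dx, dy, dz): per-axis maximum minus minimum over
    every point in all grids marked as this obstacle."""
    return points.max(axis=0) - points.min(axis=0)
```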
9. The obstacle detection method according to claim 7, wherein the process of acquiring the barycentric coordinate information of the obstacle in the preset rectangular coordinate system comprises:
calculating barycentric coordinate information, in the preset rectangular coordinate system, of each grid whose obstacle marking result is an obstacle;
and calculating the barycentric coordinate information of the obstacle in the preset rectangular coordinate system based on the barycentric coordinate information of each such grid.
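The two-level barycenter of claim 9 can be sketched as a per-grid mean followed by an aggregation over grids. The point-count-weighted average below is one plausible aggregation (it equals the barycenter of all the obstacle's points); the claim leaves the combination rule open, so treat this as an assumption:

```python
import numpy as np

def obstacle_barycenter(grid_point_sets):
    """grid_point_sets: one (Ni, 3) array per grid marked as the obstacle.
    Per-grid barycenters are combined with a point-count-weighted average."""
    centers = np.array([g.mean(axis=0) for g in grid_point_sets])
    weights = np.array([len(g) for g in grid_point_sets], dtype=float)
    return (centers * weights[:, None]).sum(axis=0) / weights.sum()
```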
10. An obstacle detection device, characterized in that the device comprises:
the acquisition module is used for acquiring an original point cloud image in real time by using a radar;
the correction module is used for carrying out coordinate correction on the original point cloud image to obtain a corrected point cloud image;
the dividing module is used for performing grid division on the data in the region of interest of the corrected point cloud image to obtain a plurality of grids;
the marking module is used for acquiring an obstacle marking result of each grid;
and the result module is used for acquiring an obstacle detection result corresponding to the original point cloud image based on the obstacle marking result of each grid.
11. An electronic device, characterized in that the electronic device comprises a memory storing a computer program and a processor implementing the steps of the method according to any of claims 1-9 when the processor executes the computer program.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN202211584513.2A 2022-12-09 2022-12-09 Obstacle detection method, obstacle detection device, electronic device and computer-readable storage medium Pending CN115909277A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211584513.2A CN115909277A (en) 2022-12-09 2022-12-09 Obstacle detection method, obstacle detection device, electronic device and computer-readable storage medium


Publications (1)

Publication Number Publication Date
CN115909277A 2023-04-04

Family

ID=86493706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211584513.2A Pending CN115909277A (en) 2022-12-09 2022-12-09 Obstacle detection method, obstacle detection device, electronic device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115909277A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024217388A1 (en) * 2023-04-21 2024-10-24 深圳绿米联创科技有限公司 Object determination method and apparatus, and storage medium and computer device


Similar Documents

Publication Publication Date Title
CN111880191B (en) Map generation method based on multi-agent laser radar and visual information fusion
CN108509820B (en) Obstacle segmentation method and device, computer equipment and readable medium
CN111192295A (en) Target detection and tracking method, related device and computer readable storage medium
CN110632617B (en) A method and device for laser radar point cloud data processing
CN108470174B (en) Obstacle segmentation method and device, computer equipment and readable medium
CN115205391A (en) Target prediction method based on three-dimensional laser radar and vision fusion
WO2021056516A1 (en) Method and device for target detection, and movable platform
US20230049383A1 (en) Systems and methods for determining road traversability using real time data and a trained model
WO2024001969A1 (en) Image processing method and apparatus, and storage medium and computer program product
CN114241448A (en) Method, device, electronic device and vehicle for obtaining obstacle course angle
WO2020157138A1 (en) Object detection apparatus, system and method
CN112823353A (en) Object localization using machine learning
CN113112491A (en) Cliff detection method and device, robot and storage medium
CN111736167B (en) Method and device for obtaining laser point cloud density
CN116710809A (en) Systems and methods for monitoring LiDAR sensor health
WO2024012211A1 (en) Autonomous-driving environmental perception method, medium and vehicle
CN112639822A (en) Data processing method and device
CN117302263A (en) Method, device, equipment and storage medium for detecting drivable area
CN117516560A (en) An unstructured environment map construction method and system based on semantic information
CN117423102A (en) Point cloud data processing method and related equipment
CN115909277A (en) Obstacle detection method, obstacle detection device, electronic device and computer-readable storage medium
CN115171378B (en) A long-distance multi-vehicle high-precision detection and tracking method based on roadside radar
CN116664809A (en) Three-dimensional information acquisition method, three-dimensional information acquisition device, computer equipment, storage medium and product
CN115239841A (en) Lane line generation method, device, computer equipment and storage medium
CN116559847A (en) Lidar external parameter calibration method, device and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination