CN114120255A - Target identification method and device based on laser radar speed measurement
- Publication number: CN114120255A
- Application number: CN202111273263.6A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The invention provides a target identification method and device based on laser radar (lidar) speed measurement, wherein the method comprises the following steps: extracting two adjacent frames of point cloud images from the acquired lidar point cloud data and splicing them to obtain spliced point cloud data; and inputting the spliced point cloud data into a target recognition model to obtain a target recognition result output by the model, the target recognition model being trained on point cloud training data and corresponding target recognition truth values. By performing target identification on the spliced point cloud data through the target recognition model, the invention improves target identification accuracy, avoids false detection of, or false association with, stationary structures crossing the road, and thereby ensures the safety of automatic driving and reduces the probability of accidents.
Description
Technical Field
The invention relates to the technical field of automatic driving, in particular to a target identification method and device based on laser radar speed measurement.
Background
An autonomous vehicle (self-driving car), also called an unmanned vehicle, is an intelligent vehicle that achieves driverless operation through a computer system. Automatic driving covers both driver-assistance vehicles, which help the driver drive, and fully automated unmanned vehicles. As research on automatic driving continues to deepen at home and abroad, the technology is gradually advancing from assisted driving toward unmanned driving; at every stage of research and development, vehicle performance must be tested to confirm or improve safety. Measuring the speed of a target ahead of the vehicle while driving is especially important.
The forward target is generally a vehicle travelling ahead of the current vehicle. In existing assisted or automatic driving, forward-target speed measurement is mostly performed by millimeter-wave radar: when the target approaches the radar antenna, the frequency of the reflected signal is higher than the transmitted frequency; conversely, when the target moves away from the antenna, the reflected frequency is lower than the transmitted frequency. The relative speed between the target and the millimeter-wave radar can therefore be calculated from this frequency shift (the Doppler effect).
However, millimeter-wave radar cannot determine the specific type of a target. When the vehicle drives on a road, the radar may report a gantry, overpass, or similar structure ahead as a stationary target crossing the road, causing the vehicle to brake by mistake in the automatic driving state.
Disclosure of Invention
The invention provides a target identification method and device based on lidar speed measurement, which address the defect in the prior art that a vehicle brakes by mistake because millimeter-wave radar cannot determine the specific type of a target during speed measurement. The method measures the speed of a target and identifies its type at the same time, improving target detection performance.
The invention provides a target identification method based on laser radar speed measurement, which comprises the following steps: extracting two adjacent frames of point cloud images from the acquired laser radar point cloud data for splicing to obtain spliced point cloud data; inputting the spliced point cloud data into a target recognition model to obtain a target recognition result output by the target recognition model; the target recognition model is obtained based on point cloud training data and a corresponding target recognition truth value.
According to the target identification method based on laser radar speed measurement provided by the invention, the method for extracting two adjacent frames of point cloud images from the acquired laser radar point cloud data for splicing comprises the following steps: extracting two adjacent frames of point cloud images from the acquired laser radar point cloud data; performing coordinate conversion on a previous frame point cloud image in the two adjacent frames of point cloud images based on a next frame point cloud image; and splicing the previous frame of point cloud image after the coordinate conversion to the next frame of point cloud image.
According to the target identification method based on laser radar speed measurement provided by the invention, the coordinate conversion of the previous frame point cloud image in the two adjacent frame point cloud images based on the next frame point cloud image comprises the following steps: performing coordinate conversion on a previous frame point cloud image in the two adjacent frames of point cloud images based on a world coordinate system to obtain intermediate conversion data; and carrying out coordinate conversion on the intermediate conversion data based on the next frame point cloud image in the two adjacent frame point cloud images.
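The two-step conversion above (previous frame to world coordinates as intermediate data, then world coordinates to the next frame) can be sketched with 4x4 homogeneous transforms. The pose-matrix inputs `T_prev_to_world` and `T_next_to_world` are assumed here to come from the vehicle's ego-motion estimate; they are not specified by the patent:

```python
import numpy as np

def stitch_frames(prev_pts, next_pts, T_prev_to_world, T_next_to_world):
    """Transform the previous frame into the next frame's coordinate
    system via the world frame, then concatenate the two point sets.

    prev_pts, next_pts: (N, 3) arrays of lidar points.
    T_*_to_world: 4x4 homogeneous lidar-to-world pose matrices
    (hypothetical inputs, e.g. from an ego-motion estimate).
    """
    # Step 1: previous frame -> world coordinates (intermediate data).
    homog = np.hstack([prev_pts, np.ones((prev_pts.shape[0], 1))])
    world = homog @ T_prev_to_world.T
    # Step 2: world -> the next frame's coordinate system.
    T_world_to_next = np.linalg.inv(T_next_to_world)
    prev_in_next = (world @ T_world_to_next.T)[:, :3]
    # Splice: append the converted previous frame onto the next frame.
    return np.vstack([prev_in_next, next_pts])
```

Chaining through the world frame means only each frame's own pose needs to be known; the relative transform between the two lidar frames falls out of the matrix product.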
According to the target identification method based on laser radar speed measurement provided by the invention, before the splicing the former frame point cloud image after the coordinate conversion to the latter frame point cloud image, the method further comprises the following steps: adding a first feature as a time feature to the previous frame point cloud image; and adding a second feature as a time feature to the subsequent frame of point cloud image.
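One simple way to realise these time features is a plain extra channel per point. The concrete encoding below (0.0 for the previous frame, 1.0 for the next frame) is an illustrative choice, not specified by the patent:

```python
import numpy as np

def add_time_feature(points, time_flag):
    """Append a scalar time feature as an extra channel so the model can
    tell which frame each point in the spliced cloud came from.

    points: (N, 3) array of lidar points.
    time_flag: illustrative frame marker, e.g. 0.0 = previous frame
    (first feature), 1.0 = next frame (second feature).
    """
    col = np.full((points.shape[0], 1), float(time_flag))
    return np.hstack([points, col])

# Hypothetical usage before splicing:
# prev_feat = add_time_feature(prev_in_next, 0.0)  # first feature
# next_feat = add_time_feature(next_pts, 1.0)      # second feature
# stitched = np.vstack([prev_feat, next_feat])
```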
According to the target identification method based on laser radar speed measurement provided by the invention, the training of the target identification model comprises the following steps: acquiring point cloud training data; obtaining a target identification truth value according to the point cloud training data; extracting two adjacent frames of point clouds from the point cloud training data for splicing, and inputting the spliced point cloud training data into a target recognition model to obtain a target training result; and constructing a loss function based on the target training result and the target identification truth value, converging based on the loss function, and finishing training.
According to the target identification method based on laser radar speed measurement provided by the invention, the target identification true value comprises a speed true value, and the target identification true value is obtained according to the point cloud training data, and the method comprises the following steps: obtaining a motion track of a target according to the point cloud training data; and calculating the speed of the target in each frame of point cloud training data according to the motion trail to obtain a true speed value.
The invention also provides a target identification device based on laser radar speed measurement, which comprises: the splicing module extracts two adjacent frames of point cloud images from the acquired point cloud data for splicing to obtain spliced point cloud data; the point cloud data is obtained by detecting the advancing road of the vehicle based on the laser radar; the identification module is used for inputting the spliced point cloud data into a target identification model to obtain a target identification result output by the target identification model; the target recognition model is obtained based on point cloud training data and a corresponding target recognition truth value.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps of any one of the above target identification methods based on laser radar speed measurement.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when being executed by a processor, implements the steps of the method for identifying a target based on lidar velocity measurement as described in any of the above.
The present invention also provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the steps of the method for identifying a target based on lidar speed measurement as described in any one of the above embodiments are implemented.
According to the target identification method and device based on lidar speed measurement, splicing the point cloud images of two adjacent frames of lidar point cloud data makes the point cloud denser, which improves the recognition accuracy of the subsequent target recognition model. Performing target identification on the spliced point cloud data through the target recognition model improves target detection performance, avoids false detection of, or false association with, road-crossing stationary structures such as gantries during radar speed measurement, ensures the safety of automatic driving, and reduces the probability of accidents.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a target identification method based on laser radar speed measurement provided by the invention;
FIG. 2 is a schematic diagram of a training process of a target recognition model provided by the present invention;
FIG. 3 is a schematic structural diagram of a target identification device based on laser radar speed measurement provided by the invention;
FIG. 4 is a schematic diagram of a training module according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a schematic flow chart of a target identification method based on laser radar speed measurement, where the method includes:
S01, extracting two adjacent frames of point cloud images from the acquired point cloud data and splicing them to obtain spliced point cloud data, the point cloud data being obtained by detecting the road ahead of the vehicle with the lidar;
S02, inputting the spliced point cloud data into a target recognition model to obtain a target recognition result output by the target recognition model, the target recognition model being obtained based on point cloud training data and corresponding target recognition truth values.
It should be noted that S0N in this specification does not represent the order of the target identification method based on laser radar speed measurement, and the target identification method based on laser radar speed measurement according to the present invention is described below with reference to fig. 2 specifically.
Step S01, extracting two adjacent frames of point cloud images from the acquired point cloud data for splicing to obtain spliced point cloud data; the point cloud data is obtained by detecting the advancing road of the vehicle based on the laser radar.
In this embodiment, extracting two adjacent frames of point cloud images from the acquired point cloud data for stitching includes: extracting two adjacent frames of point cloud images from the acquired point cloud data; performing coordinate conversion on a previous frame point cloud image in two adjacent frames of point cloud images based on a next frame point cloud image; and splicing the former frame of point cloud image after coordinate conversion to the latter frame of point cloud image. It should be noted that the stitched point cloud image includes information of each point in the corresponding point cloud data, specifically includes position information of each point and speed information of each point.
Specifically, coordinate conversion is performed on a previous frame point cloud in two adjacent frame point clouds based on a next frame point cloud image, and the coordinate conversion includes: performing coordinate conversion on a previous frame point cloud image in two adjacent frames of point cloud images based on a world coordinate system to obtain intermediate conversion data; and performing coordinate conversion on the intermediate conversion data based on the next frame point cloud image in the two adjacent frames of point cloud images. Further, coordinate transformation is performed on the intermediate transformation data based on a subsequent frame point cloud image in two adjacent frame point cloud images, including: performing coordinate conversion on the next frame of point cloud image based on a world coordinate system to obtain reference coordinate data; and converting the intermediate conversion data into a coordinate system of the next frame of point cloud image based on the reference coordinate data.
It should be noted that, through the world coordinate system, the previous frame of point cloud image is firstly converted from the coordinate system where the previous frame of point cloud image is located to the world coordinate system, and then the previous frame of point cloud image is converted from the world coordinate system to the coordinate system where the next frame of point cloud image is located based on the relationship between the coordinate system where the next frame of point cloud image is located and the world coordinate system, so that the previous frame of point cloud image can be spliced to the next frame of point cloud image conveniently.
In an optional embodiment, before stitching the coordinate-converted previous frame point cloud image to the next frame point cloud image, the method further includes: adding a first feature as a time feature to a previous frame of point cloud image; and adding a second feature as a time feature to the next frame of point cloud image.
It should be noted that the first feature and the second feature are used as time features to distinguish the point cloud image belonging to the previous frame from the point cloud image belonging to the next frame in the stitched point cloud data, so that the subsequent learning of the speed information of the corresponding target based on the time features is facilitated. In addition, the time characteristics are added, so that the speed information of the target can be conveniently learned by using the target recognition model subsequently, and the target speed is further obtained.
In an optional embodiment, before extracting two adjacent frames of point clouds from the acquired point cloud data for splicing, the method comprises the following steps: and detecting the advancing road of the vehicle based on the laser radar to obtain point cloud data. It should be noted that the point cloud data, i.e., the three-dimensional lidar point cloud data, is data obtained by scanning the road ahead of the vehicle by the lidar; the laser radar is an active remote sensing device using a laser as a transmitting light source and adopting a photoelectric detection technical means, and mainly comprises a transmitting system, a receiving system, an information processing system and the like, which is not limited in this embodiment.
In addition, the autonomous vehicle may be provided with a laser radar device (e.g., a front and rear laser radar device, etc.), a camera device (e.g., a front camera, etc.), a sensing device (e.g., a left and rear wheel sensor, etc.), and the like, which is not limited in this embodiment. The laser radar device can scan the surrounding environment with a certain radius length, and the result is displayed in a 3D map mode, so that the most initial judgment basis is given to the computer equipment. The front and rear laser radar devices can also be combined with the front camera to measure the distance between the autonomous vehicle and each object in front, rear, left and right.
It should be noted that, in order to improve the accuracy of subsequent model prediction, after the point cloud data is acquired, the point cloud data may also be preprocessed to effectively filter clutter interference of a single radar point. Specifically, the point cloud data is smoothed by median filtering, and isolated noise points are removed. In other embodiments, other preprocessing means may be adopted to remove clutter interference of a single radar point.
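A minimal sketch of the median-filter smoothing step, applied here to a 1-D sequence of range returns for illustration (real pipelines may filter per beam or in 3-D; the kernel size `k` is a hypothetical parameter):

```python
import numpy as np

def median_smooth(ranges, k=3):
    """Median-filter a 1-D sequence of lidar range returns to suppress
    isolated noise points, i.e. clutter from a single radar point.

    ranges: 1-D array of range measurements along one scan line.
    k: odd window size (illustrative default).
    """
    pad = k // 2
    # Edge padding keeps the output the same length as the input.
    padded = np.pad(ranges, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, k)
    return np.median(windows, axis=1)
```

A single spurious return surrounded by consistent neighbours is replaced by the neighbourhood median, which is exactly the "remove isolated noise points" behaviour the preprocessing step describes.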
In an optional embodiment, after acquiring the point cloud data, the method further includes: and clustering the point cloud data, so that the point cloud data corresponding to the similar targets are clustered into the same cluster, and a subsequent target identification model is convenient to identify each cluster of data in a clustering result, thereby reducing the calculation amount of the subsequent target identification model.
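The clustering step could look like the following naive Euclidean flood-fill. The `eps` radius is a hypothetical parameter, and a production system would use an indexed method such as DBSCAN with a KD-tree instead of this O(n^2) scan:

```python
import numpy as np

def euclidean_cluster(points, eps=0.5):
    """Group nearby points into clusters so that each cluster likely
    corresponds to one object; the recognition model then processes
    clusters instead of the raw cloud. Naive O(n^2) flood-fill,
    illustrative only -- not the patent's (unspecified) algorithm.

    Returns an (N,) array of integer cluster labels.
    """
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = current
        while stack:
            j = stack.pop()
            # All still-unlabelled points within eps of point j.
            near = np.where(
                (labels == -1)
                & (np.linalg.norm(points - points[j], axis=1) <= eps)
            )[0]
            labels[near] = current
            stack.extend(near.tolist())
        current += 1
    return labels
```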
Step S02, inputting the spliced point cloud data into a target recognition model to obtain a target recognition result output by the target recognition model; the target recognition model is obtained based on point cloud training data and a corresponding target recognition truth value.
In this embodiment, the stitched point cloud data includes a plurality of stitched point cloud images, and the target recognition model is configured to perform target recognition based on the speed feature and the position feature extracted from the stitched point cloud data to output a target recognition result corresponding to a target, where the target recognition result includes information such as a target category, a target speed, a target position, and a target size, so as to control automatic driving of the vehicle.
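The fields of the target recognition result described above can be pictured as a simple container; the field names, types, and units here are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TargetRecognition:
    """Illustrative container for one target recognition result
    (category, speed, position, size). All fields are assumptions."""
    category: str                          # e.g. "vehicle", "pedestrian", "gantry"
    speed: Tuple[float, float]             # (vx, vy) in m/s
    position: Tuple[float, float, float]   # (x, y, z) in the lidar frame
    size: Tuple[float, float, float]       # (length, width, height) in m
```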
It should be noted that, when the target identification model identifies the spliced point cloud data, the spliced data includes height information perpendicular to the ground plane, so the model can determine the specific type of a target: vehicles, pedestrians, and stationary obstacles on the driving road can be distinguished from structures such as gantries and overpasses, which are identified as stationary targets crossing above the road. This avoids subsequent false detection of such road-crossing structures and prevents the vehicle's automatic driving from being controlled according to a falsely detected target, further ensuring the safety of automatic driving and reducing the probability of accidents.
Referring to fig. 2, in an optional embodiment, the target identification method based on laser radar speed measurement further includes training a target identification model, specifically including:
S11, acquiring point cloud training data;
S12, obtaining a target identification truth value according to the point cloud training data;
S13, extracting two adjacent frames of point clouds from the point cloud training data for splicing, and inputting the spliced point cloud training data into the target recognition model to obtain a target training result;
S14, constructing a loss function based on the target training result and the target identification truth value, converging based on the loss function, and ending the training.
Specifically, the method comprises the following steps:
First, point cloud training data is obtained. In this embodiment, the acquired point cloud training data is point cloud data used for training; the j-th point in the i-th frame of the point cloud training data is denoted (x_ij, y_ij, z_ij).
Secondly, a target identification truth value is obtained according to the point cloud training data. In this embodiment, the target identification truth value includes a category truth value corresponding to the category of the target, a speed truth value corresponding to the speed of the target, a position truth value corresponding to the position of the target, and a size truth value corresponding to the size of the target. For example, when the target identification truth value includes a speed truth value, obtaining it from the point cloud training data includes: obtaining the motion track of the target from the point cloud training data, and calculating the speed of the target in each frame of point cloud training data according to the motion track. The speed truth value at time t is computed by a central difference over the track:

v_xt = (x_{t+Δt_next} - x_{t-Δt_prev}) / (Δt_next + Δt_prev)
v_yt = (y_{t+Δt_next} - y_{t-Δt_prev}) / (Δt_next + Δt_prev)

where v_xt and v_yt are the x- and y-direction speeds at the current time t; Δt_next is the time difference between the next track point and t, Δt_prev is the time difference between t and the previous track point, so Δt_next + Δt_prev is the time span between the two neighbouring track points; x_{t+Δt_next} and x_{t-Δt_prev} are the target's x-coordinates at the next and previous track points, and y_{t+Δt_next} and y_{t-Δt_prev} are the corresponding y-coordinates.
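The central-difference speed truth described above translates directly into code; the function below is a sketch for a single track point:

```python
def speed_truth(x_next, x_prev, y_next, y_prev, dt_next, dt_prev):
    """Central-difference ground-truth speed at time t: displacement
    between the neighbouring track points divided by the total time
    span between them.

    x_next, y_next: target position at the next track point.
    x_prev, y_prev: target position at the previous track point.
    dt_next, dt_prev: time offsets to the next / previous track point.
    """
    span = dt_next + dt_prev
    vx = (x_next - x_prev) / span
    vy = (y_next - y_prev) / span
    return vx, vy
```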
The forward direction of the vehicle body may be defined as the x-axis direction, the direction perpendicular to the x-axis on the ground plane and the leftward direction of the vehicle body may be defined as the y-axis direction, and the upward direction perpendicular to the ground plane may be defined as the z-axis direction.
And then, extracting two adjacent frames of point clouds from the point cloud training data for splicing, and inputting the spliced point cloud training data into a target recognition model to obtain a target training result. In this embodiment, the target training result includes information such as a target training category, a target training speed, a target training position, and a target training size.
Finally, a loss function is constructed based on the target training result and the target identification truth value, and training ends once the loss converges. Taking the target training speed output by the model as an example, to improve the accuracy of target vehicle speed prediction, the corresponding loss function L is the average error magnitude between the target training speed and the true speed in the target recognition truth value:

L = (1/N) * Σ_{i=1..N} ( |v_xi - v̂_xi| + |v_yi - v̂_yi| )

where N is the number of training samples, v_xi is the i-th x-direction target training speed, v̂_xi is the x-direction true speed in the i-th target recognition truth value, v_yi is the i-th y-direction target training speed, and v̂_yi is the y-direction true speed in the i-th target recognition truth value.
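Read as a mean absolute error over the training samples, the speed loss described above can be sketched as follows (the exact functional form is an assumption consistent with the "average error magnitude" description):

```python
import numpy as np

def speed_loss(v_pred, v_true):
    """L1-style speed loss: for each sample, sum the absolute x- and
    y-direction speed errors, then average over all samples.

    v_pred, v_true: (N, 2) arrays of (vx, vy) per sample.
    """
    v_pred = np.asarray(v_pred, dtype=float)
    v_true = np.asarray(v_true, dtype=float)
    return float(np.mean(np.abs(v_pred - v_true).sum(axis=1)))
```

A smaller L means the predicted speeds track the ground truth more closely; training stops once L stops decreasing, i.e. converges.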
It should be noted that the smaller the loss function, the higher the accuracy of the target recognition model. Training ends when the calculated loss function tends to converge. In addition, the target training type, the target training position and the target training size output by the model can be calculated by respectively referring to the loss function construction mode according to the type true value, the position true value and the size true value, so that the prediction accuracy of the model on the target type, the target position and the target size is improved.
In summary, splicing the point cloud images of two adjacent frames of lidar point cloud data makes the point cloud denser, which improves the recognition accuracy of the subsequent target recognition model; performing target identification on the spliced point cloud data through the target recognition model improves target detection performance, avoids false detection of, or false association with, road-crossing stationary structures such as gantries during radar speed measurement, ensures the safety of automatic driving, and reduces the probability of accidents.
The target identification device based on the laser radar speed measurement provided by the invention is described below, and the target identification device based on the laser radar speed measurement described below and the target identification method based on the laser radar speed measurement described above can be referred to correspondingly.
Fig. 3 shows a schematic structural diagram of a target identification device based on laser radar speed measurement, which includes:
the splicing module 31 is used for extracting two adjacent frames of point cloud images from the acquired point cloud data and splicing to obtain spliced point cloud data; the point cloud data is obtained by detecting the advancing road of the vehicle based on the laser radar;
the identification module 32 is used for inputting the spliced point cloud data into the target identification model to obtain a target identification result output by the target identification model; the target recognition model is obtained based on point cloud training data and a corresponding target recognition truth value.
In this embodiment, the splicing module 31 includes: the point cloud extraction unit is used for extracting two adjacent frames of point cloud images from the acquired point cloud data; the coordinate conversion unit is used for carrying out coordinate conversion on a previous frame point cloud image in two adjacent frames of point cloud images based on a next frame point cloud image; and the splicing unit is used for splicing the previous frame of point cloud image subjected to coordinate conversion to the next frame of point cloud image. It should be noted that the stitched point cloud image includes information of each point in the corresponding point cloud data, specifically includes position information of each point and speed information of each point.
Still further, a coordinate conversion unit includes: the first conversion subunit performs coordinate conversion on a previous frame point cloud image in two adjacent frames of point cloud images based on a world coordinate system to obtain intermediate conversion data; and the second conversion subunit performs coordinate conversion on the intermediate conversion data based on the next frame point cloud image in the two adjacent frame point cloud images. More specifically, the second converting subunit includes: the reference coordinate conversion unit is used for carrying out coordinate conversion on the next frame of point cloud image based on a world coordinate system to obtain reference coordinate data; and the point cloud coordinate conversion unit is used for converting the intermediate conversion data into a coordinate system of the next frame of point cloud image based on the reference coordinate data.
In an optional embodiment, the splicing module 31 further includes: a first feature adding unit, which adds a first feature to the previous frame of point cloud image as a time feature; and a second feature adding unit, which adds a second feature to the next frame of point cloud image as a time feature. The first and second features distinguish, within the stitched point cloud data, the points belonging to the previous frame from those belonging to the next frame, so that the target recognition model can subsequently learn the speed information of each target from the time features and output the target speed.
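A sketch of the time-feature units: each frame's point attributes gain one extra constant column before stitching. The values 0.0 and 1.0 are an assumption; the embodiment only requires that the two frames be distinguishable:

```python
import numpy as np

def add_time_feature(points, value):
    """Append a constant time-feature column to an (N, C) array of
    point attributes, e.g. 0.0 for the previous frame and 1.0 for
    the next frame (hypothetical encoding)."""
    col = np.full((points.shape[0], 1), float(value))
    return np.hstack([points, col])
```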
In an optional embodiment, the apparatus further comprises a data acquisition module, which detects the road ahead of the vehicle with the lidar to obtain the point cloud data. It should be noted that the point cloud data, i.e. three-dimensional lidar point cloud data, is obtained by the lidar scanning the road ahead of the vehicle; the lidar is an active remote sensing device that uses a laser as the transmitting light source and photoelectric detection as the receiving means, and mainly comprises a transmitting system, a receiving system and an information processing system, which is not limited in this embodiment.
In an optional embodiment, the apparatus further comprises a data preprocessing module, which preprocesses the point cloud data to filter out clutter interference from single radar points. Specifically, the data preprocessing module comprises a smoothing unit, which smooths the point cloud data with a median filter to remove isolated noise points.
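A minimal sketch of the median smoothing over per-beam range readings; the window size k=3 is an assumption, since the embodiment only states that median filtering removes isolated noise points:

```python
import numpy as np

def median_smooth(ranges, k=3):
    """Median-filter a 1-D array of per-beam range readings so that
    an isolated spurious return is replaced by the median of its
    neighbourhood (edge-padded sliding window)."""
    half = k // 2
    padded = np.pad(ranges, half, mode="edge")
    return np.array([np.median(padded[i:i + k])
                     for i in range(len(ranges))])
```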
In an optional embodiment, the apparatus further comprises a clustering module, which clusters the point cloud data so that points belonging to the same target fall into the same cluster; the subsequent target recognition model then identifies each cluster in the clustering result, which reduces its computational load.
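The embodiment does not fix a clustering algorithm; as one hypothetical choice, a naive Euclidean clustering (points chained within an assumed radius form one cluster) can be sketched as:

```python
import numpy as np

def euclidean_cluster(points, radius=0.5):
    """Group points whose pairwise distances chain within `radius`
    into clusters; naive O(N^2) flood-fill sketch. Returns one
    integer cluster label per point."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cluster
        while stack:
            i = stack.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.flatnonzero((d <= radius) & (labels == -1)):
                labels[j] = cluster
                stack.append(j)
        cluster += 1
    return labels
```

In practice a spatial index (k-d tree) or a library routine such as DBSCAN would replace the O(N^2) scan.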
The identification module 32 comprises: an input unit, which feeds the stitched point cloud data into the target recognition model unit; the target recognition model unit, which recognizes the input stitched point cloud data to obtain the target recognition result; and an output unit, which outputs the target recognition result. In this embodiment, the stitched point cloud data includes a plurality of stitched point cloud images, and the target recognition model extracts speed and position features from the stitched point cloud data to output the recognition result of each target; the target recognition result includes the target category, target speed, target position and target size, which are then used to control the automatic driving of the vehicle.
It should be noted that, because the stitched point cloud data includes position information along the direction perpendicular to the ground plane (i.e. height), the target recognition model can determine the specific type of each target from this information: vehicles, pedestrians and stationary obstacles on the driving road are distinguished from overhead static structures spanning the road, such as gantries and overpasses. This prevents a gantry or similar overhead structure from being falsely detected as an obstacle, avoids controlling the automatic driving of the vehicle according to such a false detection, further ensures the safety of automatic driving, and reduces the probability of accidents.
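One hedged illustration of the height reasoning above: a cluster whose lowest point sits above the vehicle's clearance height can be flagged as an overhead structure rather than an obstacle. The 4.5 m threshold is a hypothetical value, not one given in the embodiment:

```python
def is_overhead_structure(cluster_heights, clearance=4.5):
    """Flag a cluster as an overhead static structure (gantry,
    overpass) when even its lowest point lies above an assumed
    vehicle clearance height (4.5 m is hypothetical)."""
    return min(cluster_heights) > clearance
```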
In an alternative embodiment, the apparatus further comprises a training module 33 for training the target recognition model unit.
Referring to fig. 4, the training module comprises: a data acquisition unit 41, which acquires point cloud training data; a truth acquisition unit 42, which obtains the target recognition ground truth from the point cloud training data; a training unit 43, which extracts two adjacent frames of point clouds from the point cloud training data, stitches them, and inputs the stitched point cloud training data into the target recognition model to obtain a target training result; and a loss function unit 44, which constructs a loss function from the target training result and the target recognition ground truth and terminates training when the loss function converges.
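The flow of units 41-44 can be sketched as a training loop; `model_step(stitched, truth) -> loss` is a hypothetical callback standing in for the network forward pass and parameter update, since the patent fixes neither the model nor the optimiser:

```python
import numpy as np

def train(frames, truths, model_step, max_epochs=100, tol=1e-4):
    """Stitch each pair of adjacent frames, run one optimisation
    step per pair, and terminate once the mean loss converges
    (changes by less than `tol` between epochs)."""
    prev_loss = float("inf")
    loss = prev_loss
    for _ in range(max_epochs):
        losses = [model_step(np.vstack([frames[k], frames[k + 1]]),
                             truths[k + 1])
                  for k in range(len(frames) - 1)]
        loss = float(np.mean(losses))
        if abs(prev_loss - loss) < tol:   # loss function converged
            break
        prev_loss = loss
    return loss
```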
Specifically, the truth acquisition unit 42 includes: a track acquisition subunit, which obtains the motion trajectory of each target from the point cloud training data; and a truth acquisition subunit, which calculates the speed of the target in each frame of point cloud training data from the motion trajectory to obtain the speed ground truth.
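The speed ground truth described above reduces to displacement between consecutive trajectory points divided by the frame interval; a minimal sketch, assuming the trajectory is an (F, 3) array of target centre positions:

```python
import numpy as np

def speed_truth(track, frame_dt):
    """Per-frame speed ground truth from a target's motion
    trajectory: displacement between consecutive frame centres
    divided by the lidar frame period `frame_dt`."""
    displacement = np.linalg.norm(np.diff(track, axis=0), axis=1)
    return displacement / frame_dt
```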
Fig. 5 illustrates the physical structure of an electronic device, which, as shown in fig. 5, may include: a processor 51, a communication interface 52, a memory 53 and a communication bus 54, wherein the processor 51, the communication interface 52 and the memory 53 communicate with each other through the communication bus 54. The processor 51 may invoke logic instructions in the memory 53 to perform the target identification method based on lidar speed measurement, the method comprising: extracting two adjacent frames of point cloud images from the acquired point cloud data and stitching them to obtain stitched point cloud data, the point cloud data being obtained by detecting the road ahead of the vehicle with the lidar; and inputting the stitched point cloud data into a target recognition model to obtain the target recognition result output by the model, the target recognition model being trained on point cloud training data and the corresponding target recognition ground truth.
In addition, the logic instructions in the memory 53 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied as a software product stored on a storage medium and including instructions for causing a computer device (a personal computer, a server or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention further provides a computer program product comprising a computer program that may be stored on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer can perform the target identification method based on lidar speed measurement provided by the above methods, the method comprising: extracting two adjacent frames of point cloud images from the acquired point cloud data and stitching them to obtain stitched point cloud data, the point cloud data being obtained by detecting the road ahead of the vehicle with the lidar; and inputting the stitched point cloud data into a target recognition model to obtain the target recognition result output by the model, the target recognition model being trained on point cloud training data and the corresponding target recognition ground truth.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the target identification method based on lidar speed measurement, the method comprising: extracting two adjacent frames of point cloud images from the acquired point cloud data and stitching them to obtain stitched point cloud data, the point cloud data being obtained by detecting the road ahead of the vehicle with the lidar; and inputting the stitched point cloud data into a target recognition model to obtain the target recognition result output by the model, the target recognition model being trained on point cloud training data and the corresponding target recognition ground truth.
The above-described apparatus embodiments are merely illustrative; the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or by hardware. Based on this understanding, the above technical solutions may be embodied as a software product stored in a computer-readable storage medium such as a ROM/RAM, magnetic disk or optical disk, and including instructions for causing a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or parts thereof.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111273263.6A CN114120255A (en) | 2021-10-29 | 2021-10-29 | Target identification method and device based on laser radar speed measurement |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114120255A true CN114120255A (en) | 2022-03-01 |
Family
ID=80379541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111273263.6A Pending CN114120255A (en) | 2021-10-29 | 2021-10-29 | Target identification method and device based on laser radar speed measurement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114120255A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115097159A (en) * | 2022-05-06 | 2022-09-23 | 北京市农林科学院智能装备技术研究中心 | Airflow field measuring device, airflow field measuring method and plant protection spraying machine |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110246159A (en) * | 2019-06-14 | 2019-09-17 | 湖南大学 | The 3D target motion analysis method of view-based access control model and radar information fusion |
CN110942449A (en) * | 2019-10-30 | 2020-03-31 | 华南理工大学 | Vehicle detection method based on laser and vision fusion |
CN111429514A (en) * | 2020-03-11 | 2020-07-17 | 浙江大学 | A 3D real-time target detection method for lidar based on fusion of multi-frame time series point clouds |
CN111950428A (en) * | 2020-08-06 | 2020-11-17 | 东软睿驰汽车技术(沈阳)有限公司 | Target obstacle identification method, device and vehicle |
CN112731338A (en) * | 2020-12-30 | 2021-04-30 | 潍柴动力股份有限公司 | Storage logistics AGV trolley obstacle detection method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11320833B2 (en) | Data processing method, apparatus and terminal | |
CN111611853B (en) | Sensing information fusion method, device and storage medium | |
WO2021012254A1 (en) | Target detection method, system, and mobile platform | |
Asvadi et al. | 3D object tracking using RGB and LIDAR data | |
WO2020029706A1 (en) | Dummy lane line elimination method and apparatus | |
CN109212532B (en) | Method and apparatus for detecting obstacles | |
WO2018177026A1 (en) | Device and method for determining road edge | |
CN110988912A (en) | Road target and distance detection method, system and device for automatic driving vehicle | |
CN115496923B (en) | Multi-mode fusion target detection method and device based on uncertainty perception | |
CN110794406A (en) | Multi-source sensor data fusion system and method | |
CN110674705A (en) | Small-sized obstacle detection method and device based on multi-line laser radar | |
CN112733678A (en) | Ranging method, ranging device, computer equipment and storage medium | |
CN114296095A (en) | Method, device, vehicle and medium for extracting effective target of automatic driving vehicle | |
CN114241448A (en) | Method, device, electronic device and vehicle for obtaining obstacle course angle | |
CN117830642B (en) | Target speed prediction method and device based on millimeter wave radar and storage medium | |
CN112683228A (en) | Monocular camera ranging method and device | |
CN114119729A (en) | Obstacle identification method and device | |
CN114694115A (en) | Road obstacle detection method, device, equipment and storage medium | |
CN117516560A (en) | An unstructured environment map construction method and system based on semantic information | |
CN114120255A (en) | Target identification method and device based on laser radar speed measurement | |
EP3428876A1 (en) | Image processing device, apparatus control system, imaging device, image processing method, and program | |
CN113935946B (en) | Method and device for detecting underground obstacle in real time | |
CN112906519B (en) | Vehicle type identification method and device | |
CN116543032B (en) | Impact object ranging method, device, ranging equipment and storage medium | |
CN113887294B (en) | Wheel contact point detection method, device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||