CN112837410B - Training model and point cloud processing method and device - Google Patents
- Publication number
- CN112837410B (application CN202110195817.9A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- sub
- processed
- point
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Architecture (AREA)
- Geometry (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Image Analysis (AREA)
Abstract
This specification discloses a model training method and a point cloud processing method and apparatus. An original sub-point cloud segmented from an original point cloud is densified using a preset complete point cloud corresponding to a target object, yielding a dense sub-point cloud that serves as a training sample label. The densification processing model is trained with these dense sub-point clouds, and the trained model is then used to densify the sub-point clouds of a point cloud to be processed that contain the target object. Because the point cloud to be processed can be densified without relying on a two-dimensional image, no noise is introduced, and the densified point cloud is more accurate.
Description
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a model training method and apparatus and a point cloud processing method and apparatus.
Background
Currently, unmanned driving technology has matured, and environmental perception is a very important component of it.
Generally, radar is used to sense the surrounding environment. The denser the point cloud obtained by the radar, the more accurate and reliable the subsequent processing based on that point cloud, such as environmental perception; how to obtain a dense point cloud is therefore a problem to be solved. The radar used for environmental perception includes at least a lidar.
In the prior art, a depth image is typically used to densify the original point cloud obtained by the radar.
However, a depth image is still a two-dimensional image, which has inherent deficiencies in structural information; densifying the point cloud with such an image therefore inevitably introduces noise, resulting in a poor densification effect.
Disclosure of Invention
The embodiment of the specification provides a training model and a processing method and device of point cloud, so as to partially solve the problems existing in the prior art.
The embodiment of the specification adopts the following technical scheme:
a method of training a model provided herein includes:
acquiring an original point cloud;
splitting the original point cloud to obtain a plurality of original sub point clouds;
identifying, for each original sub-point cloud, whether at least a partial point cloud corresponding to a target object exists in the original sub-point cloud, and if so, obtaining a dense sub-point cloud corresponding to the original sub-point cloud by using a preset complete point cloud corresponding to the target object and the original sub-point cloud;
inputting the original sub-point cloud into a densification processing model as a training sample, using the dense sub-point cloud corresponding to the original sub-point cloud as a first label of the training sample, and training the densification processing model with the goal of minimizing the difference between the densified sub-point cloud output by the densification processing model and the first label, where the densification processing model is used to densify an input point cloud.
Optionally, identifying whether at least part of the point clouds corresponding to the target object exists in the original sub-point clouds specifically includes:
identifying whether at least partial point clouds corresponding to the target object exist in the original sub point clouds through an identification model;
the recognition model is obtained by training the following method:
determining a sample point cloud and point clouds corresponding to all targets in the sample point cloud;
dividing the sample point cloud to obtain a plurality of sample sub point clouds;
for each sample sub-point cloud, if the intersection-over-union of the sample sub-point cloud and the point cloud corresponding to at least one target object is greater than a set threshold, labeling the sample sub-point cloud as containing at least a partial point cloud corresponding to a target object; if the intersection-over-union of the sample sub-point cloud and the point cloud corresponding to every target object is not greater than the set threshold, labeling the sample sub-point cloud as not containing at least a partial point cloud corresponding to a target object;
and training the recognition model according to each sample sub-point cloud and the label of each sample sub-point cloud.
Optionally, presetting the complete point cloud corresponding to the target object specifically includes:
pre-establishing a three-dimensional model corresponding to the target object;
and obtaining the complete point cloud corresponding to the target object based on the three-dimensional model corresponding to the target object.
Optionally, obtaining a dense sub-point cloud corresponding to the original sub-point cloud by using a complete point cloud corresponding to the target object and the original sub-point cloud, which specifically includes:
determining the attribute of a target object according to at least part of point clouds corresponding to the target object contained in the original sub-point clouds, wherein the attribute comprises at least one of the size and the pose of the target object;
according to the attribute of the target object, adjusting a preset complete point cloud corresponding to the target object;
and merging the adjusted complete point cloud corresponding to the target object into the original sub-point cloud according to the position of at least part of the point cloud corresponding to the target object in the original sub-point cloud, so as to obtain a dense sub-point cloud corresponding to the original sub-point cloud.
Optionally, the method further comprises:
and inputting the original sub-point cloud into a completion processing model as a training sample, using the adjusted complete point cloud corresponding to the target object as a second label of the training sample, and training the completion processing model with the goal of minimizing the difference between the complete point cloud corresponding to the target object output by the completion processing model and the second label, where the completion processing model is used to complete at least a partial point cloud corresponding to the target object in an input point cloud.
Optionally, the method further comprises:
when the point cloud to be processed is processed, the point cloud to be processed is segmented to obtain a plurality of sub point clouds to be processed;
for each sub-point cloud to be processed, identifying whether at least part of point clouds corresponding to the target object exist in the sub-point clouds to be processed, if so, inputting the sub-point clouds to be processed into a trained densification processing model to obtain the sub-point clouds to be processed after the densification processing output by the trained densification processing model;
and merging all the sub-point clouds to be processed, which are not input into the trained densification processing model, and all the sub-point clouds to be processed, which are subjected to densification processing, so as to obtain the point clouds after densification processing.
Optionally, the method further comprises:
when the point cloud to be processed is processed, the point cloud to be processed is segmented to obtain a plurality of sub point clouds to be processed;
identifying, for each sub-point cloud to be processed, whether at least a partial point cloud corresponding to the target object exists in the sub-point cloud to be processed, and if so, respectively inputting the sub-point cloud to be processed into a trained densification processing model and a trained completion processing model;
merging the densified sub-point cloud to be processed output by the trained densification processing model and the completed sub-point cloud to be processed output by the trained completion processing model to obtain a completed-and-densified sub-point cloud corresponding to the sub-point cloud to be processed;
and merging all sub-point clouds to be processed that were not input into the trained densification processing model and the trained completion processing model with all completed-and-densified sub-point clouds to obtain a completed-and-densified point cloud.
The processing method of the point cloud provided by the specification comprises the following steps:
acquiring point cloud to be processed;
dividing the point cloud to be processed to obtain a plurality of sub point clouds to be processed;
for each sub-point cloud to be processed, identifying whether at least part of point clouds corresponding to the target object exist in the sub-point clouds to be processed, if so, inputting the sub-point clouds to be processed into a trained densification processing model to obtain the sub-point clouds to be processed after the densification processing output by the trained densification processing model;
and merging all the sub-point clouds to be processed, which are not input into the trained densification processing model, and all the sub-point clouds to be processed, which are subjected to densification processing, so as to obtain the point clouds after densification processing.
Optionally, inputting the to-be-processed sub-point cloud into a trained densification processing model to obtain a densified to-be-processed sub-point cloud output by the trained densification processing model, which specifically includes:
respectively inputting the sub-point cloud to be processed into the trained densification processing model and a trained completion processing model to obtain the densified sub-point cloud to be processed output by the trained densification processing model and the completed sub-point cloud to be processed output by the trained completion processing model;
Merging the sub-point clouds to be processed that were not input into the trained densification processing model with the densified sub-point clouds to be processed to obtain the densified point cloud specifically includes:
merging the densified sub-point cloud to be processed output by the trained densification processing model and the completed sub-point cloud to be processed output by the trained completion processing model to obtain a completed-and-densified sub-point cloud corresponding to the sub-point cloud to be processed;
and merging all sub-point clouds to be processed that were not input into the trained densification processing model and the trained completion processing model with all completed-and-densified sub-point clouds to obtain a completed-and-densified point cloud.
The present specification provides an apparatus for training a model, comprising:
the acquisition module is used for acquiring an original point cloud;
the segmentation module is used for segmenting the original point cloud to obtain a plurality of original sub point clouds;
the sample generation module is used for identifying, for each original sub-point cloud, whether at least a partial point cloud corresponding to the target object exists in the original sub-point cloud, and if so, obtaining a dense sub-point cloud corresponding to the original sub-point cloud by using a preset complete point cloud corresponding to the target object and the original sub-point cloud;
The training module is used for inputting the original sub-point cloud into a densification processing model as a training sample, using the dense sub-point cloud corresponding to the original sub-point cloud as a first label of the training sample, and training the densification processing model with the goal of minimizing the difference between the densified sub-point cloud output by the densification processing model and the first label, where the densification processing model is used to densify an input point cloud.
The processing apparatus of a point cloud provided in the present specification includes:
the acquisition module is used for acquiring the point cloud to be processed;
the segmentation module is used for segmenting the point cloud to be processed to obtain a plurality of sub point clouds to be processed;
the processing module is used for identifying, for each sub-point cloud to be processed, whether at least a partial point cloud corresponding to the target object exists in the sub-point cloud to be processed, and if so, inputting the sub-point cloud to be processed into the trained densification processing model to obtain the densified sub-point cloud to be processed output by the trained densification processing model;
and the merging module is used for merging the sub-point clouds to be processed which are not input into the trained densification processing model and the sub-point clouds to be processed which are subjected to densification processing to obtain the point clouds after densification processing.
A computer readable storage medium is provided in the present specification; the storage medium stores a computer program which, when executed by a processor, implements the model training method and the point cloud processing method described above.
The electronic device provided by the specification comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the method for training the model and the method for processing the point cloud when executing the program.
At least one of the above technical solutions adopted in the embodiments of the specification can achieve the following beneficial effects:
according to the embodiment of the specification, the original sub-point cloud cut from the original point cloud is condensed by utilizing the complete point cloud corresponding to the preset target object, so that the dense sub-point cloud serving as a training sample is obtained, the dense sub-point cloud is utilized to train the dense model, the trained dense model is utilized to carry out the dense processing on the sub-point cloud to be processed containing the target object in the point cloud to be processed, the point cloud can be subjected to the dense processing without utilizing a two-dimensional image, and noise is not introduced, so that the condensed point cloud is more accurate.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate exemplary embodiments of the specification and, together with the description, serve to explain the specification without unduly limiting it. In the drawings:
FIG. 1 is a schematic flow chart of a preparation phase provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a complete point cloud of a three-dimensional model of an acquisition target provided in the present specification;
FIG. 3 is a schematic flow chart of a sample generation stage provided in an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an original point cloud segmentation provided in the present specification;
FIG. 5 is a schematic diagram of a process for determining a dense sub-point cloud corresponding to an original sub-point cloud provided in the present specification;
FIG. 6 is a schematic flow chart of generating a sample according to an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart of a training phase according to an embodiment of the present disclosure;
FIG. 8 is a schematic flow chart of a practical use stage according to the embodiment of the present disclosure;
fig. 9 and 10 are schematic diagrams of the point cloud densification process to be processed provided in the present specification;
FIG. 11 is a schematic flow chart of a training model according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a training model device according to an embodiment of the present disclosure;
Fig. 13 is a schematic structural diagram of a processing device for point cloud according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the specification abandon the approach of densifying a point cloud with a two-dimensional image. The point cloud densification method here mainly comprises four stages:
in the preparation stage, firstly, three-dimensional modeling is carried out on a target object to obtain a three-dimensional model of the target object, and then, the complete point cloud of the three-dimensional model is obtained according to the three-dimensional model of the target object.
In the sample generation stage, an original sub-point cloud that is segmented from the original point cloud and contains the target object is used as a training sample, and the original sub-point cloud is densified using the complete point cloud corresponding to the target object to obtain a dense sub-point cloud that serves as its label.
In the training stage, the original sub-point cloud is input into the densification processing model as a training sample, and the dense sub-point cloud is used as its label to train the densification processing model.
In the actual use stage, the point cloud to be processed is first segmented into a plurality of sub-point clouds to be processed; the sub-point clouds to be processed that contain the target object are input into the trained densification processing model to obtain densified sub-point clouds to be processed; finally, the sub-point clouds to be processed that do not contain the target object are merged with the densified sub-point clouds to be processed to obtain the densified point cloud.
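For illustration only, the actual-use stage described above can be sketched roughly as follows. This is a minimal sketch under assumed interfaces, not the patent's implementation: `split_point_cloud`, `recognition_model` and `densification_model` are hypothetical callables supplied by the caller.

```python
import numpy as np

def process_point_cloud(raw_points, split_point_cloud, recognition_model,
                        densification_model, num_sub_clouds=64):
    """Actual-use stage: segment, densify the sub-point clouds that contain a target, merge."""
    sub_clouds = split_point_cloud(raw_points, num_sub_clouds)    # segmentation step
    outputs = []
    for sub in sub_clouds:
        if recognition_model(sub):                    # at least a partial target point cloud exists
            outputs.append(densification_model(sub))  # densified sub-point cloud to be processed
        else:
            outputs.append(sub)                       # sub-point clouds without targets stay unchanged
    return np.concatenate(outputs, axis=0)            # merge into the densified point cloud
```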
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a preparation phase provided in an embodiment of the present disclosure, including:
s100: and aiming at a preset target object, carrying out three-dimensional modeling on the target object to obtain a three-dimensional model of the target object.
In one or more embodiments of the present disclosure, a densification model that can densify a point cloud is obtained through training by a machine learning method, so as to solve the problem that noise is introduced and the point cloud densification is not ideal when the existing two-dimensional image-based point cloud densification process is performed. The effect of model training is generally related to the quality of the training sample, so in this specification, the data required for generating the training sample is prepared before generating the training sample, that is, the preparation stage shown in fig. 1.
In general, since sample generation and model training require substantial resources such as storage and computing resources, they are usually performed by a server. For the same reason, the flow of the preparation stage in the point cloud densification method is also described below with a server as the executing party. The server may be a single device or a system composed of multiple devices (e.g., a distributed system); this specification does not limit this, and it may be set as needed.
Specifically, in the preparation stage, the server can perform three-dimensional modeling of a target object, where the target object is a preset object whose point cloud needs to be densified. The server can model the target object according to three-dimensional data of the target object acquired in advance. The three-dimensional data can be obtained by laser three-dimensional scanning or the like, and modeling is performed based on it. How to obtain the three-dimensional data and how to perform the modeling are mature technologies and are not repeated in this specification.
In addition, since the object to be densified is generally required to include various types of obstacles, the server can model each type of object separately to obtain models of various types of objects. The types of the target objects can be distinguished according to different application scenes of the point cloud, and can be roughly divided into various vehicles running on roads such as motor vehicles and bicycles, objects arranged on roads such as signboards, street lamp poles, traffic lamp poles and trees, and pedestrians.
Further, the above objects of various types are also present in actual scenes, and there is generally a significant difference in their appearance and dimensions. For example, there are large differences in profile between small and medium trucks, and there are differences in length, height, and width even for small cars. Therefore, in order to make the point cloud of the subsequent densification more accurate, the server in the present specification may also respectively obtain three-dimensional data of each type of object of different appearance sizes for each type of object, and respectively establish a plurality of three-dimensional models corresponding to the type of object.
S102: and obtaining the complete point cloud of the three-dimensional model according to the three-dimensional model of the target object.
In one or more embodiments of the present disclosure, after step S100, a three-dimensional model of the target object is built in a three-dimensional space for the target object, and the preparation performed in the preparation stage is to obtain a training sample capable of training the point cloud densification process, and what is needed is a point cloud. After determining the three-dimensional model of the target object, the server can determine the complete point cloud of the three-dimensional model by setting a virtual sensor according to the three-dimensional model of the target object. The sensor may specifically be a camera, or other devices capable of collecting the target object point cloud, which is not limited in this specification, and the sensor is taken as an example for the following description.
Specifically, for each obtained three-dimensional model of each target object, the server may set virtual cameras for acquiring point cloud data in the three-dimensional space, obtain images of the three-dimensional model of the target object through the virtual cameras, and then determine the complete point cloud of the three-dimensional model by marking points in the images. The complete point cloud is a densified point cloud. To achieve this, a plurality of virtual cameras may be set in the three-dimensional space, and images of the three-dimensional model may be acquired from different distances and angles to determine the point clouds of the three-dimensional model at the respective positions, as shown in fig. 2.
Fig. 2 is a schematic diagram of acquiring the complete point cloud of a three-dimensional model of a target object provided in this specification. The center of the figure is the three-dimensional model of the target object, and the surrounding items labeled as cameras are virtual cameras set in the three-dimensional space. The virtual cameras acquire images of the three-dimensional model, and the coordinates of each marked point are determined by marking points on the three-dimensional model in the images; these marked points can be regarded as points of the point cloud. The server can therefore obtain the complete point cloud corresponding to the target object by marking points in the images acquired by the plurality of virtual cameras. Unlike an actual scene, where the radar can only scan the target object from one direction, in the three-dimensional space images of the three-dimensional model can be acquired by a plurality of virtual cameras, point clouds corresponding to the target object can be determined from different directions, and the point clouds obtained in the three-dimensional space can be merged into the complete point cloud of the three-dimensional model, i.e., the complete point cloud corresponding to the target object, which is naturally a densified point cloud.
In one embodiment provided herein, the server may reconstruct a complete point cloud of a three-dimensional model of an object by simulating camera positions and observing the three-dimensional model from 80 directions in three-dimensional space.
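As a rough sketch of how such multi-view acquisition might be simulated (an assumption, not the patent's implementation), the snippet below samples points on the model surface and keeps, for each of 80 virtual camera positions, the points visible from that position using Open3D's hidden-point removal; the model file name and all numeric parameters are illustrative placeholders.

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("target_model.ply")          # hypothetical 3D model of the target
surface = mesh.sample_points_uniformly(number_of_points=100_000)

center = surface.get_center()
extent = np.linalg.norm(np.asarray(surface.points) - center, axis=1).max()

visible = set()
for k in range(80):                                            # 80 viewing directions, as above
    # Place a virtual camera on a sphere around the model (Fibonacci spiral placement).
    phi = np.arccos(1.0 - 2.0 * (k + 0.5) / 80)
    theta = np.pi * (1.0 + 5.0 ** 0.5) * k
    eye = center + 3.0 * extent * np.array(
        [np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi)])
    # Keep only the surface points visible from this camera position.
    _, idx = surface.hidden_point_removal(eye, extent * 100.0)
    visible.update(idx)

complete_point_cloud = surface.select_by_index(list(visible))  # merged complete point cloud
```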
In the present specification, even if the image captured by one camera contains an occluded area, the point cloud of the occluded area can still be determined from images captured by cameras set at other positions, so the complete point cloud obtained by the server contains no occluded areas.
In addition, in the present specification, the server may also move the positions of the virtual cameras within a small range in the three-dimensional space, so that more images of the three-dimensional model of the target object can be obtained through each virtual camera and more points can be determined. Of course, other methods of increasing the density of the point cloud obtained in the three-dimensional space may also be used and can be set as needed, for example setting more virtual cameras or adjusting the poses of the virtual cameras by a step size; they are not detailed in this specification.
Through the flow of the preparation stage shown in fig. 1, the server can determine an already-dense complete point cloud for each type of target object that needs to be densified. In the subsequent sample generation stage, a densified sub-point cloud can be obtained based on this dense complete point cloud and used as the label of a training sample, so that a densification processing model capable of densifying the original point cloud acquired by a sensor can be trained.
Fig. 3 is a schematic flow chart of a sample generation stage in the training model method according to the embodiment of the present disclosure, including:
s200: an original point cloud is obtained.
In one or more embodiments provided herein, when training samples for training the densification processing model need to be generated, the server may first obtain an original point cloud. The original point cloud is acquired by a radar in an actual scene, for example by a vehicle equipped with the radar while traveling in that scene; this specification does not limit how the original point cloud is acquired.
Of course, in the unmanned technical field, driving decisions are generally performed according to point clouds acquired in real time, so that the original point cloud is data corresponding to point clouds of a single frame in the specification.
S202: and cutting the original point cloud to obtain a plurality of original sub point clouds.
In the present disclosure, the distribution of target objects in an actual scene is uneven; some areas may contain many target objects while others contain none. Therefore, to improve efficiency and make the densification more accurate, after the server obtains the original point cloud, it may segment the original point cloud into a plurality of original sub-point clouds, and each original sub-point cloud is processed separately when samples are subsequently generated.
The method for splitting the original point cloud can be set as needed, for example using farthest point sampling (FPS); likewise, how many original sub-point clouds the original point cloud is split into can be set as needed, and this specification does not limit it.
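A minimal numpy sketch of such FPS-based splitting is shown below; it is an assumption about one straightforward implementation (FPS picks seed points, every point is assigned to its nearest seed), not the patent's own code, and all names are illustrative.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, num_seeds: int) -> np.ndarray:
    """Indices of num_seeds points chosen by FPS from an (N, 3) array."""
    seeds = [0]                                          # start from an arbitrary point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(num_seeds - 1):
        seeds.append(int(dist.argmax()))                 # farthest point from all chosen seeds
        dist = np.minimum(dist, np.linalg.norm(points - points[seeds[-1]], axis=1))
    return np.array(seeds)

def split_point_cloud(points: np.ndarray, num_sub_clouds: int) -> list:
    """Split the original point cloud into sub-point clouds around FPS seeds."""
    seeds = points[farthest_point_sampling(points, num_sub_clouds)]
    # Assign every point to its nearest seed, giving one original sub-point cloud per seed.
    nearest = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=2).argmin(axis=1)
    return [points[nearest == k] for k in range(num_sub_clouds)]
```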
In this specification, after the server obtains each original sub-point cloud, data such as a position and a shape of each original sub-point cloud in the original point cloud may be determined. So that the original point cloud can be reconstructed according to the original sub point clouds when needed, or the processed original point clouds can be obtained after the original sub point clouds are processed.
S204: and identifying whether at least part of point clouds corresponding to the target object exist in the original point clouds according to each original point cloud, and if so, obtaining dense point clouds corresponding to the original point clouds by adopting a preset complete point cloud corresponding to the target object and the original point clouds.
In the present specification, since the purpose of the trained densification model is to "learn" how to automatically densify the point cloud while ensuring the densification accuracy, when the model is applied, the acquired original point cloud or original sub-point cloud may be input into the densification model to obtain the densified point cloud output by the densification model, so each original sub-point cloud obtained by the segmentation in step S202 may be used as a training sample. The server can also respectively carry out densification processing on each original sub-point cloud, and the original sub-point cloud after the densification processing is used as a label of a training sample.
A segmented original sub-point cloud may not contain any point cloud corresponding to a target object; such an original sub-point cloud is not used as a training sample, and its densified label does not need to be determined. However, since the positions of target objects in the original point cloud cannot be known in advance, the segmentation does not consider whether the point cloud of a target object is cut into different original sub-point clouds, so an original sub-point cloud may contain the complete point cloud corresponding to a target object or only a partial point cloud corresponding to a target object, as shown in fig. 4.
Fig. 4 is a schematic view of the segmentation of the original point cloud provided in the present specification, for convenience of understanding, fig. 4 is a top view of the original point cloud, a dashed box encloses a plurality of original sub-point clouds, a top view of a vehicle represents a point cloud corresponding to a target object in the original point cloud, and a thick line portion represents a portion where the point cloud is perceived. It can be seen that after the original point cloud is split, the point cloud corresponding to the vehicle is split into 3 original sub-point clouds.
If the partial point cloud of a target object contained in a segmented original sub-point cloud is too small, using that original sub-point cloud as a training sample forces the densification processing model to learn how to densify from a very small number of points; more noise is introduced during learning, the training effect suffers, and training efficiency is low. Therefore, in this specification, for each original sub-point cloud, the server may identify which original sub-point clouds can be used as training samples according to the intersection-over-union (IOU) between the original sub-point cloud and the point clouds corresponding to target objects, and determine their labels.
In one or more embodiments of the present disclosure, first, for each original sub-point cloud, the server may identify whether at least a partial point cloud corresponding to a target object exists in the original sub-point cloud. Specifically, the server may input the original sub-point cloud into a pre-trained recognition model to determine whether at least a partial point cloud corresponding to a target object exists in it.
Secondly, when the identification result shows that at least part of point clouds of the target object are contained in the original sub-point clouds, the server obtains dense sub-point clouds corresponding to the original sub-point clouds by adopting the complete point clouds corresponding to the preset target object and the original sub-point clouds.
Specifically, the server may determine the attributes of the target object according to the at least partial point cloud corresponding to the target object in the original sub-point cloud. The attributes include at least one of the size and the pose of the target object. Since the size of the three-dimensional model of the target object generated in the preparation stage may not exactly match the size of the target object in the original point cloud, determining the size of the target object yields a more accurate densified point cloud in the subsequent densification. For example, a 28-inch bicycle is not exactly the same size as a 26-inch bicycle, yet the two are nearly indistinguishable in appearance. The pose of the target object in the original sub-point cloud may include its position, orientation and attitude, and can be used to accurately determine where and how the target object is placed.
Then, according to the category and type of the target object, the server can select the complete point cloud corresponding to the target object in the original sub-point cloud from the preset complete point clouds corresponding to the various target objects. For example, when the target object in the original sub-point cloud is a mountain bike, the server may select, from the complete point clouds generated in the preparation stage, the complete point cloud whose category is bicycle and whose type is mountain bike. The preset complete point cloud corresponding to the target object can then be adjusted according to the attributes of the target object so that the adjusted complete point cloud conforms to those attributes.
Next, the server merges the adjusted complete point cloud corresponding to the target object into the original sub-point cloud according to the position of the at least partial point cloud corresponding to the target object in the original sub-point cloud, thereby obtaining a dense sub-point cloud corresponding to the original sub-point cloud.
Fig. 5 is a schematic diagram of a process for determining a dense sub-point cloud corresponding to an original sub-point cloud provided in the present specification. The method comprises the steps of determining attributes according to at least part of point clouds corresponding to a target object in original sub-point clouds, adjusting preset complete point clouds according to the attributes, and merging the adjusted complete point clouds into the original sub-point clouds according to the positions of the target object to obtain dense sub-point clouds.
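A minimal sketch of the adjust-and-merge step of fig. 5 might look as follows; it assumes the target's size, rotation and position have already been estimated from the partial point cloud in the original sub-point cloud (the attribute estimation itself is not shown), and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def densify_sub_cloud(original_sub_cloud: np.ndarray,   # (N, 3) original sub-point cloud
                      complete_cloud: np.ndarray,       # (M, 3) preset complete point cloud
                      target_size: np.ndarray,          # (3,) estimated extents of the target
                      target_rotation: np.ndarray,      # (3, 3) rotation matrix of the target pose
                      target_center: np.ndarray         # (3,) target position in the sub-cloud
                      ) -> np.ndarray:
    # 1. Scale the canonical complete point cloud so its extents match the target's size.
    centered = complete_cloud - complete_cloud.mean(axis=0)
    scale = target_size / (centered.max(axis=0) - centered.min(axis=0))
    adjusted = centered * scale
    # 2. Rotate and translate it to the target's pose and position in the sub-point cloud.
    adjusted = adjusted @ target_rotation.T + target_center
    # 3. Merge the adjusted complete point cloud into the original sub-point cloud.
    return np.concatenate([original_sub_cloud, adjusted], axis=0)
```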
It should be noted that, in the present disclosure, an original sub-point cloud may contain a plurality of target objects. In that case the server needs to perform the above process for each target object in the original sub-point cloud, adjust the preset complete point cloud of each target object, and then merge all the adjusted complete point clouds into the original sub-point cloud to determine a dense sub-point cloud containing the complete point clouds corresponding to all the target objects.
Of course, since the preset complete point cloud of the three-dimensional model is already a densified point cloud, the dense sub-point cloud can be considered obtained as soon as this complete point cloud is merged into the original sub-point cloud.
Further, in the present specification, when the original sub-point cloud contains a plurality of target objects, the intersection-over-union may be relatively high for some target objects and relatively low for others. In this case, the server may, on one hand, choose a finer-grained segmentation so that each original sub-point cloud contains a single target object, or, on the other hand, merge the preset complete point cloud only for the target objects whose intersection-over-union exceeds a preset value. Alternatively, as long as at least a partial point cloud corresponding to a target object exists in the original sub-point cloud, the dense sub-point cloud may be determined from the preset complete point clouds of all the target objects and the original sub-point cloud.
S206: and taking the original sub-point cloud as a training sample, and taking a dense sub-point cloud corresponding to the original sub-point cloud as a first annotation of the training sample.
In this specification, the server may identify an original sub-point cloud having at least a portion of the point cloud corresponding to the target object as a training sample, and use a dense sub-point cloud corresponding to the original sub-point cloud obtained in step S204 as a first label of the training sample. And training a densification processing model according to each generated training sample in a training stage of the densification point cloud.
Based on the flow of the sample generation stage shown in fig. 3, it can be seen that the original point cloud is segmented into original sub-point clouds that are suitable for model training, and the original sub-point clouds usable as training samples are determined by identifying whether they contain at least a partial point cloud of a target object. The complete point clouds of the various target objects obtained in the preparation stage are then combined with the attributes of the target objects in the original sub-point clouds serving as training samples to obtain the dense sub-point clouds used as the first labels of the training samples. Because the dense complete point cloud obtained from the actual target object is merged into the original sub-point cloud, the label of the training sample is more accurate, and the output of the trained densification processing model is therefore more accurate. Moreover, the label does not add points aimlessly but densifies only the target object, so the training samples train the densification processing model more effectively.
The recognition model used by the server in step S204 may be obtained through training.
Specifically, during training of the identification model, the server may determine a sample point cloud, where the sample point cloud may be a point cloud collected historically, and the collection manner may be the same as the manner of collecting the original point cloud described in step S200. That is, a sample point cloud is obtained after the point cloud is acquired for the actual scene.
The point cloud corresponding to each target object present in the sample point cloud can also be determined, for example by manual labeling, in which the points inside a manually annotated three-dimensional bounding box are taken as the point cloud corresponding to the target object.
Then, the server may segment the sample point cloud and obtain a plurality of sample sub-point clouds, where the segmentation mode is consistent with the method adopted in step S202, and this will not be described in detail in this specification.
Then, for each sample sub-point cloud, if the intersection-over-union of the sample sub-point cloud and the point cloud corresponding to at least one target object is greater than a set threshold, the sample sub-point cloud is labeled as containing at least a partial point cloud corresponding to a target object, i.e., as a positive sample for the recognition model; if the intersection-over-union of the sample sub-point cloud and the point cloud corresponding to every target object is not greater than the set threshold, the sample sub-point cloud is labeled as not containing at least a partial point cloud corresponding to a target object, i.e., as a negative sample for the recognition model. When determining the intersection-over-union between a sample sub-point cloud and the point cloud corresponding to a target object, the server may determine the target object from the three-dimensional bounding box contained in the sample sub-point cloud, and then determine the intersection-over-union from the points corresponding to that target object in the sample sub-point cloud and the sample sub-point cloud itself.
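As a rough sketch (an assumption about one possible realization, not the patent's code), the labeling rule can be computed on point index sets, where each sample sub-point cloud and each annotated target point cloud are represented by indices into the full sample point cloud; the threshold value is an arbitrary placeholder.

```python
import numpy as np

def label_sample_sub_cloud(sub_cloud_indices: np.ndarray,
                           target_point_indices: list,    # one index array per annotated target
                           iou_threshold: float = 0.5) -> int:
    """Return 1 if the sub-point cloud contains at least a partial target point cloud, else 0."""
    sub = set(sub_cloud_indices.tolist())
    for target in target_point_indices:
        tgt = set(np.asarray(target).tolist())
        union = len(sub | tgt)
        if union and len(sub & tgt) / union > iou_threshold:
            return 1   # positive sample for the recognition model
    return 0           # negative sample for the recognition model
```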
Finally, the recognition model is trained according to each sample sub-point cloud and its label. For example, each sample sub-point cloud is input into the recognition model, the loss is determined from the recognition result output by the recognition model and the label of the sample sub-point cloud, and the parameters of the recognition model are adjusted with the goal of minimizing the loss, until the accuracy of the recognition results output by the recognition model meets the training end condition or the number of training iterations reaches the training end condition.
Further, in one or more embodiments of the present disclosure, in the sample generation flow shown in fig. 3, the first label of a generated training sample is obtained by merging the complete point clouds of the target objects into the original sub-point cloud (i.e., by densifying the target objects). However, when the point cloud corresponding to a target object in the original sub-point cloud is partially missing, it is difficult for a model trained only on the first labels to densify such a target object. The server may therefore perform another sample generation process, in which the generated training samples are used to train a model to complete, rather than densify, the point cloud of the target object in the original sub-point cloud, i.e., when the point cloud corresponding to the target object is missing, the missing portion can also be completed.
Fig. 6 is a schematic flow chart of generating a sample according to an embodiment of the present disclosure, including:
s300: an original point cloud is obtained.
S302: and cutting the original point cloud to obtain a plurality of original sub point clouds.
S304: for each original sub-point cloud, identifying whether at least part of point clouds corresponding to a target object exist in the original sub-point clouds, and if so, adjusting a preset complete point cloud corresponding to the target object according to the attribute of the target object.
S306: and taking the original sub-point cloud as a training sample, and taking the adjusted complete point cloud corresponding to the target object as a second annotation of the training sample.
In one or more embodiments of the present disclosure, the operations performed in steps S300 and S302 may be the same as those in steps S200 and S202 and are not repeated. The training samples obtained here are no different from those obtained in fig. 3; what differs is the label determined subsequently. In this specification, the label of a training sample generated by the flow of fig. 6 is referred to as the second label.
Since the generated training samples are to be used for training the completion processing model to complete the target object, to determine the second label of the training samples, the server may determine, for each original sub-point cloud, an attribute of the target object included in the original sub-point cloud.
Then, the preset complete point cloud corresponding to the target object is adjusted according to the attributes of the target object and is used as the second label of the training sample.
In addition, it should be noted that because the training samples with the second label are used to train the completion processing model to complete the point cloud corresponding to the target object in the original sub-point cloud, the original sub-point clouds can be filtered in the same way as in step S204 to determine which original sub-point clouds can be used as training samples. This is not repeated here.
Further, in this case it is also unnecessary to consider whether the intersection-over-union between the partial point cloud corresponding to the target object in the training sample and the original sub-point cloud is greater than the threshold. That is, for each original sub-point cloud, besides determining through the recognition model whether it can be used as a training sample, a training sample can also be determined based on a manual judgment of whether a partial point cloud corresponding to a target object exists in the original sub-point cloud, which increases the number of training samples for training the "completion" capability.
Further, this specification provides two sample generation schemes. In the subsequent training stage, the densification processing model can be trained with the training samples carrying the first label and the completion processing model with the training samples carrying the second label; alternatively, both kinds of training samples can be used at the same time, the losses of the densification processing model and the completion processing model can be determined jointly, and the two models can be trained jointly.
Fig. 7 is a schematic flow chart of a training phase provided in an embodiment of the present disclosure, including:
s400: for each original sub-point cloud serving as a training sample, the original sub-point cloud serving as the training sample is input into a densification processing model.
In one or more embodiments of the present disclosure, after obtaining the training samples through the generate samples stage, the server may train the densification model to be trained according to the training samples and their labels.
As is common in model training, this specification trains the densification processing model in a supervised manner. For each training sample, the server may input the training sample, i.e., the original sub-point cloud, into the densification processing model and determine the densified sub-point cloud it outputs. The loss is then determined from the label and the output sub-point cloud to train the densification processing model.
S402: and taking the dense sub-point cloud corresponding to the original sub-point cloud as a first label of the training sample, and training the dense processing model by taking the minimum difference between the sub-point cloud subjected to the dense processing output by the dense processing model and the first label as a training target.
In the present specification, after obtaining, for each training sample, the densified sub-point cloud output by the densification processing model, the server may take the difference between the densified sub-point cloud and the first label of the training sample as the loss and adjust the parameters of the densification processing model with the goal of minimizing this loss.
In addition, in the sample generation embodiment provided in this disclosure, the server may also generate training samples with the second label. For each such training sample, the original sub-point cloud is input into the completion processing model as a training sample, the complete point cloud corresponding to the target object output by the completion processing model is determined, and the loss is then determined from the difference between the second label of the training sample (i.e., the complete point cloud corresponding to the target object, adjusted according to the attributes of the target object contained in the training sample) and the complete point cloud output by the completion processing model. The parameters of the completion processing model are adjusted with the goal of minimizing this loss. The completion processing model is used to complete at least a partial point cloud corresponding to the target object in an input point cloud.
Further, in this specification, the server may jointly train the densification processing model and the completion processing model, and the structure of the whole model may then be as shown in fig. 8.
Specifically, in this specification, for each training sample, the server may input the original sub-point cloud as a training sample into the densification processing model and the completion processing model respectively, and determine the densified sub-point cloud output by the densification processing model and the complete point cloud corresponding to the target object output by the completion processing model. The joint loss is determined from the densified sub-point cloud and the first label of the training sample, and from the complete point cloud corresponding to the target object and the second label. The parameters of the densification processing model and the completion processing model are then adjusted with the goal of minimizing the joint loss.
In particular, the joint loss can be formulated with the Chamfer distance

$$L_{CD}(S_1, S_2) = \frac{1}{|S_1|}\sum_{x \in S_1}\min_{y \in S_2}\lVert x - y\rVert_2 + \frac{1}{|S_2|}\sum_{y \in S_2}\min_{x \in S_1}\lVert x - y\rVert_2,$$

where $L_{CD}$ denotes the joint-loss term, and $S_1$ and $S_2$ respectively denote the point cloud output by a model and the label of the training sample. The joint loss contains two such terms, representing the losses of the densification processing model and the completion processing model respectively. In the densification term, $x \in S_1$ denotes the point cloud output by the densification processing model and $y \in S_2$ denotes the first label of the training sample input into the densification processing model; the joint loss is optimized with the goal of minimizing their difference, and the term is the average loss over the training samples input into the densification processing model. Similarly, in the completion term, $x \in S_1$ denotes the point cloud output by the completion processing model and $y \in S_2$ denotes the second label of the training sample input into the completion processing model; in this case the average loss is determined by the difference between the second label, i.e., the complete point cloud, and the output completed point cloud, and the joint loss is optimized with the goal of minimizing this difference.
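A minimal PyTorch sketch of such a joint Chamfer-distance loss is given below. It is an illustrative assumption (point clouds as (N, 3) tensors, hypothetical names), not the patent's implementation.

```python
import torch

def chamfer_distance(pred: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between a predicted point cloud and its label, both (N, 3)."""
    dists = torch.cdist(pred, label)                   # pairwise Euclidean distances
    pred_to_label = dists.min(dim=1).values.mean()     # mean over x in S1 of min_y ||x - y||_2
    label_to_pred = dists.min(dim=0).values.mean()     # mean over y in S2 of min_x ||x - y||_2
    return pred_to_label + label_to_pred

def joint_loss(densified_out, first_label, completed_out, second_label):
    """One Chamfer term per model, summed for joint training of both models."""
    return (chamfer_distance(densified_out, first_label)
            + chamfer_distance(completed_out, second_label))
```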
Based on the training stage flow shown in fig. 7, the server can train the densification processing model using the training samples carrying the first label. Because the first label is obtained by merging the complete point cloud corresponding to the target object into the original sub-point cloud according to at least one of the size and the pose of the target object in the original sub-point cloud, and the complete point cloud is a dense point cloud obtained in advance from the actual target object, the first label is equivalent to a dense point cloud obtained by independently scanning the target object in the original sub-point cloud multiple times. Therefore, training based on the difference between the first label and the densified point cloud output by the densification processing model enables the model to learn how to densify the point cloud of the target object, so that the densified point cloud is closer to what would actually be obtained by collecting the point cloud multiple times, and the output of the densification processing model is more accurate.
In addition, the training process can be combined with the training of the completion processing model, so that the completion processing model and the densification processing model can mutually learn the hidden variables used for completing the point cloud of the target object and the hidden variables used for densifying the point cloud of the target object, making the outputs of both models more accurate. Meanwhile, the situation in which the output result is flawed because the point cloud corresponding to the target object in the original sub-point cloud serving as the training sample is itself incomplete is effectively reduced.
Fig. 8 is a schematic flow chart of the actual use stage provided in the embodiment of the present disclosure, where the actual use stage is a processing procedure of the point cloud, and includes:
s500: and acquiring the point cloud to be processed.
In one or more embodiments of the present description, the point cloud densification method may be applied in the field of unmanned driving technology, and the actual use stage may therefore generally be performed by an unmanned device. During driving, the unmanned device can plan a path, determine a control strategy, or avoid obstacles based on the acquired point cloud; which specific operations are subsequently performed based on the point cloud is neither further described nor limited in this specification.
Thus, in the present specification, the unmanned device can acquire the point cloud to be processed through the sensor provided on it. Similar to the description in step S102, the sensor may specifically be a laser radar, or another device capable of collecting the target point cloud, which is not limited in this specification. Similar to the explanation in step S200, the point cloud to be processed may be acquired by the unmanned device in real time during traveling, and is therefore also a single-frame point cloud.
S502: and cutting the point cloud to be processed to obtain a plurality of sub point clouds to be processed.
In one or more embodiments of the present disclosure, after the unmanned device acquires the point cloud to be processed, since the training samples used during training are the original sub-point clouds segmented from the original point cloud, in order to perform densification on the point cloud to be processed through the densification processing model, the unmanned device may further segment the point cloud to be processed to obtain a plurality of sub-point clouds to be processed.
The specific splitting manner and method may refer to the description in step S202, and the manner of splitting the point cloud to be processed is the same as the manner of splitting the original point cloud, which is not described in detail in this specification.
S504: and identifying whether at least part of point clouds corresponding to the target object exist in each sub-point cloud to be processed, and if so, inputting the sub-point clouds to be processed into a trained densification processing model to obtain the densely processed sub-point clouds output by the trained densification processing model.
In one or more embodiments of the present disclosure, after segmenting the point cloud to be processed into sub-point clouds, the unmanned device may further identify, for each sub-point cloud to be processed, whether at least part of the point cloud corresponding to a target object exists in the sub-point cloud to be processed through the recognition model described in step S204.
If at least part of the point cloud corresponding to a target object exists in the sub-point cloud to be processed, the sub-point cloud to be processed contains a target object that needs to be densified, so the unmanned device can further input the sub-point cloud to be processed into the trained densification processing model to obtain the densified sub-point cloud to be processed output by the trained densification processing model.
If at least part of the point clouds corresponding to the target object does not exist in the sub-point clouds to be processed, the target object needing to be subjected to densification processing does not exist in the sub-point clouds to be processed, so that the unmanned equipment does not process the sub-point clouds to be processed.
S506: and merging all the sub-point clouds to be processed, which are not input into the trained densification processing model, and all the sub-point clouds to be processed, which are subjected to densification processing, so as to obtain the point clouds after densification processing.
In one or more embodiments of the present disclosure, after identifying each sub-point cloud to be processed and performing densification according to a densification model, the unmanned apparatus may combine the sub-point cloud to be processed that is not subjected to densification with the sub-point cloud to be processed that is obtained after performing densification according to the densification model, to obtain a point cloud after densification.
Fig. 9 is a schematic diagram of a process for densifying a point cloud to be processed provided in the present specification, where the dotted lines represent the segmentation of the point cloud to be processed, the dark filled dotted frames are sub-point clouds to be processed in which part of the point cloud corresponding to a target object is identified, and the light filled dotted frames are sub-point clouds to be processed in which no part of the point cloud corresponding to a target object is identified. The figure contains a plurality of target objects, and the thickness of a target object's boundary represents the point cloud corresponding to that target object: the denser the point cloud, the thicker the boundary line. The former sub-point clouds, after passing through the densification processing model, have the boundaries of their target objects thickened and completed, while the latter are not input into the densification processing model, so they remain unchanged. Finally, all sub-point clouds to be processed are merged to obtain the densified point cloud.
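For illustration only, the actual use stage of steps S500 to S506 can be sketched as follows. The `segment`, `recognition_model`, and `densification_model` callables are hypothetical placeholders standing in for the segmentation rule, the trained recognition model, and the trained densification processing model described above; they are not interfaces defined by this specification.

```python
import numpy as np

def densify(point_cloud, segment, recognition_model, densification_model):
    """S500-S506: segment the point cloud, densify the sub-point clouds that contain
    part of a target object, and merge everything into the densified point cloud."""
    sub_clouds = segment(point_cloud)                 # S502: sub-point clouds to be processed
    results = []
    for sub in sub_clouds:
        if recognition_model(sub):                    # S504: contains part of a target object?
            results.append(densification_model(sub))  # densified sub-point cloud to be processed
        else:
            results.append(sub)                       # not input into the model, kept unchanged
    return np.concatenate(results, axis=0)            # S506: merged densified point cloud
```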
In addition, in the present specification, after identifying whether each sub-point cloud to be processed contains at least part of the point cloud corresponding to a target object, if a plurality of sub-point clouds to be processed contain at least part of the point cloud of the same target object, then in order to improve the efficiency of the densification process and reduce repeated densification of the point cloud of the same target object, the plurality of sub-point clouds to be processed containing the same target object may be merged and input into the densification processing model as one sub-point cloud to be processed, so as to obtain the densified sub-point cloud to be processed.
Specifically, the unmanned device can determine, according to the attribute of the target object in each sub-point cloud to be processed and the position of each sub-point cloud to be processed in the point cloud to be processed, whether a plurality of sub-point clouds to be processed contain the point cloud of the same target object. If so, these sub-point clouds to be processed are merged into one sub-point cloud to be processed and input into the trained densification processing model to obtain the densified sub-point cloud to be processed output by the densification processing model. Similarly, the merged sub-point cloud to be processed may also be input into the trained completion processing model to obtain the completed sub-point cloud to be processed output by the completion processing model, as shown in fig. 10.
Fig. 10 illustrates the practical application stage and is similar to fig. 9. After obtaining the point cloud to be processed, the unmanned device first segments it, then identifies the sub-point clouds to be processed containing part of the point cloud corresponding to a target object, and, upon determining that sub-point clouds to be processed A, B, and C contain part of the point cloud corresponding to the same target object, merges them into sub-point cloud to be processed D. Sub-point cloud D is then input into the densification processing model and the completion processing model respectively, to obtain the densified and the completed sub-point clouds to be processed. Finally, all sub-point clouds to be processed are merged to obtain the densified point cloud.
In this specification, merging the sub-point clouds to be processed that contain the same target object may specifically be performed in step S504 of the practical application stage. After identifying whether at least part of the point cloud corresponding to a target object exists in each sub-point cloud to be processed, the unmanned device may further determine whether there are several sub-point clouds to be processed containing at least part of the point cloud of the same target object. If so, these sub-point clouds to be processed are merged and then input into the densification processing model, or input into the densification processing model and the completion processing model respectively, to obtain the densified sub-point cloud to be processed.
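The merging of sub-point clouds that contain the same target object might be sketched as follows, assuming the recognition step can additionally report an identifier for the target object found in each sub-point cloud; this identifier is an illustrative assumption, since the specification only states that the attribute of the target object and the positions of the sub-point clouds are used for this decision.

```python
from collections import defaultdict
import numpy as np

def merge_same_target(sub_clouds, target_ids):
    """Merge all sub-point clouds to be processed that contain part of the same target object."""
    groups = defaultdict(list)
    for sub, tid in zip(sub_clouds, target_ids):
        groups[tid].append(sub)
    # Each merged group is then fed to the densification (and completion) model
    # as a single sub-point cloud to be processed.
    return [np.concatenate(parts, axis=0) for parts in groups.values()]
```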
Of course, in this case, in order to make the output results of the densification processing model and the completion processing model more accurate, in the stage of generating the sample, after identifying, for each original sub-point cloud, whether at least part of the point clouds corresponding to the target object exists in the original sub-point clouds, the server may further combine the plurality of original sub-point clouds including at least part of the point clouds of the same target object, and use the combined plurality of original sub-point clouds as training samples.
Then, when obtaining the corresponding dense sub-point cloud using the preset complete point cloud corresponding to the target object and the merged original sub-point clouds, the server may determine the attribute of the target object in the merged original sub-point clouds using the same method as in step S204 of the sample generation stage, and adjust the complete point cloud corresponding to the target object according to that attribute. The adjusted complete point cloud corresponding to the target object is then merged into the merged original sub-point clouds to obtain the dense sub-point cloud corresponding to them, which serves as the first label of the training sample.
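For illustration, the adjustment and merging described above might look as follows, assuming the attribute of the target object is expressed as a scale factor, a rotation matrix, and a translation, and that the preset complete point cloud is centred at the origin; these representational choices are assumptions, not requirements of this specification.

```python
import numpy as np

def build_first_label(original_sub_cloud, complete_cloud, scale, rotation, translation):
    """Adjust the preset complete point cloud to the target's size and pose,
    then merge it into the (merged) original sub-point cloud to form the first label.

    original_sub_cloud: (N, 3) merged original sub-point cloud(s)
    complete_cloud:     (M, 3) preset complete point cloud of the target object
    scale: float, rotation: (3, 3), translation: (3,)
    """
    adjusted = (scale * complete_cloud) @ rotation.T + translation
    return np.concatenate([original_sub_cloud, adjusted], axis=0)  # dense sub-point cloud (first label)
```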
Of course, since the completion processing model completes the point cloud of the target object based on the partial point cloud of the target object in the original sub-point cloud, the training sample may also be determined by the above method of merging original sub-point clouds, and the second label of the merged original sub-point clouds serving as the training sample may be determined by the methods of step S304 and step S306.
The completion processing model trained with such training samples and second labels can then complete the missing partial point cloud of the target object in the input merged original sub-point clouds.
Further, as described above for the sample generation stage and the training stage, in one or more embodiments provided herein, the server may further determine the second label of the training sample for training the completion processing model.
Then, in step S504 of the practical application stage, for each sub-point cloud to be processed containing at least part of the point cloud corresponding to a target object, the unmanned device may input the sub-point cloud to be processed into the trained densification processing model and the trained completion processing model respectively, obtain the densified sub-point cloud to be processed output by the densification processing model and the completed sub-point cloud to be processed output by the completion processing model, and merge the two to obtain the complemented and densified sub-point cloud corresponding to that sub-point cloud to be processed.
Then, in step S506, the sub-point clouds to be processed that were not input into the densification processing model and the completion processing model are merged with the complemented and densified sub-point clouds, to obtain the complemented and densified point cloud.
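A brief sketch of this final merging step is shown below; the plain concatenation of point arrays is an assumption made for illustration, and the specification does not state whether duplicate points are filtered.

```python
import numpy as np

def complement_and_densify(densified_sub, completed_sub):
    """Merge the densified and the completed output for one sub-point cloud to be processed."""
    return np.concatenate([densified_sub, completed_sub], axis=0)

def assemble(untouched_sub_clouds, complemented_densified_sub_clouds):
    """S506: merge untouched sub-point clouds with the complemented and densified ones."""
    return np.concatenate(list(untouched_sub_clouds) + list(complemented_densified_sub_clouds), axis=0)
```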
Based on the point cloud densification process shown in fig. 1 to fig. 8, the original sub-point clouds segmented from the original point cloud are densified using the preset complete point cloud corresponding to the target object, the dense sub-point cloud serving as the first label of the training sample is obtained, the densification processing model is trained with these training samples, and the trained densification processing model is used to densify the sub-point clouds to be processed that contain a target object in the point cloud to be processed. The point cloud can thus be densified without relying on a two-dimensional image, so no noise is introduced and the densified point cloud is more accurate.
In addition, the preparation stage, the sample generation stage and the training stage in the process of the above-mentioned dense point cloud provided in the embodiment of the present disclosure may be summarized as a process of training a model, as shown in fig. 11, where fig. 11 includes:
s600: an original point cloud is obtained.
S602: and cutting the original point cloud to obtain a plurality of original sub point clouds.
S604: and identifying whether at least part of point clouds corresponding to the target object exist in the original point clouds according to each original point cloud, and if so, obtaining dense point clouds corresponding to the original point clouds by adopting a preset complete point cloud corresponding to the target object and the original point clouds.
S606: the original sub-point cloud is used as a training sample to be input into a densification processing model, the dense sub-point cloud corresponding to the original sub-point cloud is used as a first label of the training sample, the difference between the sub-point cloud after densification processing output by the densification processing model and the first label is minimum as a training target, the densification processing model is trained, and the densification processing model is used for carrying out densification processing on the input point cloud.
Of course, the content of each step has been described in detail above, and this description is not repeated here.
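As a non-limiting sketch, the training flow of steps S600 to S606 could be organised as below, reusing the `chamfer_distance` function sketched earlier. The `model.forward` and `optimizer.step` calls are hypothetical placeholders; the concrete network architecture and optimisation procedure are not specified in this document.

```python
def train_densification_model(training_samples, first_labels, model, optimizer, epochs=10):
    """S606: adjust the model so that the difference between its densified output
    and the first label (measured here by the Chamfer distance) is minimised."""
    for _ in range(epochs):
        for original_sub_cloud, first_label in zip(training_samples, first_labels):
            densified = model.forward(original_sub_cloud)    # hypothetical model interface
            loss = chamfer_distance(densified, first_label)  # difference to the first label
            optimizer.step(model, loss)                      # hypothetical parameter update
    return model
```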
In the embodiment of the present disclosure, the processing procedure of the point cloud may be applied to an unmanned device, where the unmanned device is used to determine a dense point cloud, and the unmanned device may be an unmanned vehicle or a vehicle with a driving assistance function. The unmanned device may be a vehicle for delivery service, such as an unmanned delivery vehicle.
The above describes the point cloud densification method and the generalized model training method provided in the embodiments of the present specification. Based on the same concept, the present specification further provides a corresponding apparatus, storage medium, and electronic device.
Fig. 12 is a schematic structural diagram of a training model device according to an embodiment of the present disclosure, where the training model device includes:
an acquisition module 700, configured to acquire an original point cloud;
the segmentation module 702 is configured to segment the original point cloud to obtain a plurality of original sub-point clouds;
the sample generating module 704 is configured to identify, for each original sub-point cloud, whether at least a portion of the point clouds corresponding to the target object exists in the original sub-point clouds, and if so, obtain a dense sub-point cloud corresponding to the original sub-point cloud by using a preset complete point cloud corresponding to the target object and the original sub-point clouds;
the training module 706 is configured to input the original sub-point cloud as a training sample to a densification model, use a dense sub-point cloud corresponding to the original sub-point cloud as a first label of the training sample, and train the densification model with a minimum difference between the sub-point cloud after densification output by the densification model and the first label as a training target, where the densification model is configured to perform densification on the input point cloud.
Optionally, the sample generating module 704 identifies whether at least part of the point clouds corresponding to the target objects exist in the original sub-point clouds through an identification model, where the identification model is obtained through training by the following method:
Determining a sample point cloud and the point cloud corresponding to each target object in the sample point cloud; segmenting the sample point cloud to obtain a plurality of sample sub-point clouds; for each sample sub-point cloud, labeling the sample sub-point cloud as containing at least part of the point cloud corresponding to a target object if its intersection ratio with the point cloud corresponding to at least one target object is larger than a set threshold, and labeling it as containing no part of the point cloud corresponding to any target object if its intersection ratio with the point cloud corresponding to every target object is not larger than the set threshold; and training the recognition model according to each sample sub-point cloud and its label.
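One possible reading of this labelling rule is sketched below, where the intersection ratio is computed over (rounded) point coordinates; both this way of computing the ratio and the threshold value are assumptions made for illustration, since the specification only requires an intersection ratio and a set threshold.

```python
import numpy as np

def label_sample_sub_cloud(sample_sub_cloud, target_point_clouds, threshold=0.5):
    """Return 1 if the sample sub-point cloud's intersection ratio with at least one
    target object's point cloud exceeds the threshold, otherwise 0."""
    sample_set = {tuple(p) for p in np.round(sample_sub_cloud, 3)}
    for target in target_point_clouds:
        target_set = {tuple(p) for p in np.round(target, 3)}
        union = len(sample_set | target_set)
        if union and len(sample_set & target_set) / union > threshold:
            return 1   # at least part of a target object's point cloud is present
    return 0           # no target object's point cloud is present
```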
Optionally, the sample generating module 704 pre-establishes a three-dimensional model corresponding to the target object, and obtains a complete point cloud corresponding to the target object based on the three-dimensional model corresponding to the target object.
Optionally, the sample generating module 704 determines, according to at least a portion of the point clouds corresponding to the target object included in the original sub-point clouds, an attribute of the target object, where the attribute includes at least one of a size and a pose of the target object, adjusts, according to the attribute of the target object, a preset complete point cloud corresponding to the target object, and according to a position of at least a portion of the point clouds corresponding to the target object in the original sub-point clouds, merges the adjusted complete point clouds corresponding to the target object into the original sub-point clouds, so as to obtain a dense sub-point cloud corresponding to the original sub-point clouds.
Optionally, the training module 706 inputs the original sub-point cloud as a training sample into a complement processing model, uses the adjusted complete point cloud corresponding to the target object as a second label of the training sample, uses a minimum difference between the complete point cloud corresponding to the target object output by the complement processing model and the second label as a training target, and trains the complement processing model, where the complement processing model is used for performing complement processing on at least part of point clouds corresponding to the target object existing in the input point clouds.
The apparatus further comprises:
the online processing module 708 is configured to, when a point cloud to be processed is to be processed, segment the point cloud to be processed to obtain a plurality of sub-point clouds to be processed; identify, for each sub-point cloud to be processed, whether at least part of the point cloud corresponding to a target object exists in it; if so, input the sub-point cloud to be processed into the trained densification processing model to obtain the densified sub-point cloud to be processed output by the trained densification processing model; and merge the sub-point clouds to be processed that were not input into the trained densification processing model with the densified sub-point clouds to be processed, to obtain the densified point cloud.
Optionally, when the point cloud to be processed is processed, the online processing module 708 segments the point cloud to be processed to obtain a plurality of sub-point clouds to be processed, identifies, for each sub-point cloud to be processed, whether at least part of the point cloud corresponding to a target object exists in it, and if so, inputs the sub-point cloud to be processed into the trained densification processing model and the trained completion processing model respectively, merges the densified sub-point cloud to be processed output by the trained densification processing model and the completed sub-point cloud to be processed output by the trained completion processing model to obtain the complemented and densified sub-point cloud corresponding to the sub-point cloud to be processed, and merges the sub-point clouds to be processed that were not input into the trained densification processing model and the trained completion processing model with the complemented and densified sub-point clouds to obtain the complemented and densified point cloud.
Fig. 13 is a schematic structural diagram of a processing device for point cloud according to an embodiment of the present disclosure, where the device includes:
an obtaining module 800, configured to obtain a point cloud to be processed;
a segmentation module 802, configured to segment the point cloud to be processed to obtain a plurality of sub-point clouds to be processed;
A processing module 804, configured to identify, for each sub-point cloud to be processed, whether at least a portion of the point clouds corresponding to the target object exists in the sub-point clouds to be processed, and if so, input the sub-point clouds to be processed into a trained densification processing model to obtain a densified sub-point cloud to be processed output by the trained densification processing model;
and a merging module 806, configured to merge each to-be-processed sub-point cloud not input into the trained densification model and each to-be-processed sub-point cloud after densification, to obtain a densified point cloud.
Optionally, the processing module 804 inputs the sub-point cloud to be processed into a trained densification processing model and a trained complement processing model respectively, so as to obtain a sub-point cloud to be processed after densification processing output by the trained densification processing model and a sub-point cloud to be processed after complement processing output by the trained complement processing model;
the merging module 806 merges the densified sub-point cloud to be processed output by the trained densification processing model and the completed sub-point cloud to be processed output by the trained completion processing model to obtain the complemented and densified sub-point cloud corresponding to the sub-point cloud to be processed, and merges the sub-point clouds to be processed that were not input into the trained densification processing model and the trained completion processing model with the complemented and densified sub-point clouds to obtain the complemented and densified point cloud.
The present specification also provides a computer readable storage medium storing a computer program which when executed by a processor is operable to perform the training model method and the point cloud processing method contained in each stage of the method of densifying a point cloud provided above.
Based on the methods provided above, the embodiment of the present disclosure further provides a schematic structural diagram of the electronic device shown in fig. 14. At the hardware level, as shown in fig. 14, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include hardware required for other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to implement the training model method and the point cloud processing method described above.
Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded by the present description; that is, the execution subject of the above processing flows is not limited to logic units, and may also be hardware or logic devices.
In the 1990s, improvements to a technology could be clearly distinguished as improvements in hardware (e.g., improvements to circuit structures such as diodes, transistors, and switches) or improvements in software (improvements to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code before compiling must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included in it for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description.
Claims (11)
1. A method of training a model, comprising:
acquiring an original point cloud;
splitting the original point cloud to obtain a plurality of original sub point clouds;
identifying whether at least a partial point cloud corresponding to a target object exists in the original sub-point cloud or not for each original sub-point cloud, if so, determining the attribute of the target object according to the at least partial point cloud corresponding to the target object contained in the original sub-point cloud, wherein the attribute comprises at least one of the size and the pose of the target object, adjusting the preset complete point cloud corresponding to the target object according to the attribute of the target object, and obtaining a dense sub-point cloud corresponding to the original sub-point cloud by adopting the preset complete point cloud corresponding to the target object and the original sub-point cloud;
Inputting the original sub-point cloud as a training sample into a densification processing model, taking the dense sub-point cloud corresponding to the original sub-point cloud as a first label of the training sample, and training the densification processing model by taking the minimum difference between the sub-point cloud subjected to the densification processing output by the densification processing model and the first label as a training target, wherein the densification processing model is used for carrying out densification processing on the input point cloud;
and taking the original sub-point cloud as a training sample to be input into a complement processing model, taking the adjusted complete point cloud corresponding to the target object as a second label of the training sample, taking the minimum difference between the complete point cloud corresponding to the target object output by the complement processing model and the second label as a training target, and training the complement processing model, wherein the complement processing model is used for carrying out complement processing on at least part of point clouds corresponding to the target object in the input point clouds.
2. The method of claim 1, wherein identifying whether at least a partial point cloud corresponding to the target object exists in the original sub-point cloud specifically comprises:
identifying whether at least partial point clouds corresponding to the target object exist in the original sub point clouds through an identification model;
The recognition model is obtained by training the following method:
determining a sample point cloud and point clouds corresponding to all targets in the sample point cloud;
dividing the sample point cloud to obtain a plurality of sample sub point clouds;
for each sample sub-point cloud, if the intersection ratio of the sample sub-point cloud and the point cloud corresponding to at least one target object is larger than a set threshold, marking the sample sub-point cloud as at least partial point cloud corresponding to the target object, and if the intersection ratio of the sample sub-point cloud and the point cloud corresponding to any target object is not larger than the set threshold, marking the sample sub-point cloud as at least partial point cloud corresponding to the target object does not exist;
and training the recognition model according to each sample sub-point cloud and the label of each sample sub-point cloud.
3. The method of claim 1, wherein presetting the complete point cloud corresponding to the target object specifically comprises:
pre-establishing a three-dimensional model corresponding to the target object;
and obtaining the complete point cloud corresponding to the target object based on the three-dimensional model corresponding to the target object.
4. The method of claim 1, wherein obtaining a dense sub-point cloud corresponding to the original sub-point cloud by using a complete point cloud corresponding to the target object and the original sub-point cloud, specifically includes:
And merging the adjusted complete point cloud corresponding to the target object into the original sub-point cloud according to the position of at least part of the point cloud corresponding to the target object in the original sub-point cloud, so as to obtain a dense sub-point cloud corresponding to the original sub-point cloud.
5. The method of claim 1, wherein the method further comprises:
when the point cloud to be processed is processed, the point cloud to be processed is segmented to obtain a plurality of sub point clouds to be processed;
for each sub-point cloud to be processed, identifying whether at least part of point clouds corresponding to the target object exist in the sub-point clouds to be processed, if so, inputting the sub-point clouds to be processed into a trained densification processing model to obtain the sub-point clouds to be processed after the densification processing output by the trained densification processing model;
and merging all the sub-point clouds to be processed, which are not input into the trained densification processing model, and all the sub-point clouds to be processed, which are subjected to densification processing, so as to obtain the point clouds after densification processing.
6. The method of claim 1, wherein the method further comprises:
when the point cloud to be processed is processed, the point cloud to be processed is segmented to obtain a plurality of sub point clouds to be processed;
Identifying whether at least partial point clouds corresponding to the target object exist in each sub-point cloud to be processed, and if so, respectively inputting the sub-point clouds to be processed into a trained densification processing model and a trained complement processing model;
combining the densely processed sub-point cloud to be processed output by the trained densely processed model and the fully processed sub-point cloud to be processed output by the trained fully processed model to obtain a fully condensed sub-point cloud corresponding to the sub-point cloud to be processed;
and merging all the sub-point clouds to be processed and all the complementary dense sub-point clouds which are not input into the trained dense processing model and the trained complementary processing model to obtain the complementary dense point clouds.
7. A method for processing a point cloud, comprising:
acquiring point cloud to be processed;
dividing the point cloud to be processed to obtain a plurality of sub point clouds to be processed;
identifying whether at least part of point clouds corresponding to the target object exist in each sub-point cloud to be processed or not, if so, respectively inputting the sub-point clouds to be processed into a trained densification processing model and a trained complement processing model to obtain the sub-point clouds to be processed after densification processing output by the trained densification processing model and the complement processed sub-point clouds to be processed output by the trained complement processing model;
Combining the densely processed sub-point cloud to be processed output by the trained densely processed model and the fully processed sub-point cloud to be processed output by the trained fully processed model to obtain a fully condensed sub-point cloud corresponding to the sub-point cloud to be processed;
and merging all the sub-point clouds to be processed and all the complementary dense sub-point clouds which are not input into the trained dense processing model and the trained complementary processing model to obtain the complementary dense point clouds.
8. An apparatus for training a model, comprising:
the acquisition module is used for acquiring an original point cloud;
the segmentation module is used for segmenting the original point cloud to obtain a plurality of original sub point clouds;
the sample generation module is used for identifying, for each original sub-point cloud, whether at least a partial point cloud corresponding to the target object exists in the original sub-point cloud, if so, determining the attribute of the target object according to the at least partial point cloud corresponding to the target object contained in the original sub-point cloud, wherein the attribute comprises at least one of the size and the pose of the target object, adjusting the preset complete point cloud corresponding to the target object according to the attribute of the target object, and obtaining a dense sub-point cloud corresponding to the original sub-point cloud by adopting the preset complete point cloud corresponding to the target object and the original sub-point cloud;
The training module is used for inputting the original sub-point cloud as a training sample into a densification processing model, taking the dense sub-point cloud corresponding to the original sub-point cloud as a first label of the training sample, training the densification processing model by taking the minimum difference between the sub-point cloud after the densification processing output by the densification processing model and the first label as a training target, and carrying out densification processing on the input point cloud by the densification processing model; the method comprises the steps of inputting an original sub-point cloud as a training sample into a complement processing model, taking an adjusted complete point cloud corresponding to a target object as a second label of the training sample, training the complement processing model by taking the minimum difference between the complete point cloud corresponding to the target object output by the complement processing model and the second label as a training target, and performing complement processing on at least part of point clouds corresponding to the target object in the input point clouds by the complement processing model.
9. A processing apparatus for a point cloud, comprising:
the acquisition module is used for acquiring the point cloud to be processed;
the segmentation module is used for segmenting the point cloud to be processed to obtain a plurality of sub point clouds to be processed;
The processing module is used for identifying whether at least part of point clouds corresponding to the target object exist in each sub-point cloud to be processed, if so, inputting the sub-point clouds to be processed into a trained dense processing model and a trained complement processing model respectively to obtain the sub-point clouds to be processed after dense processing output by the trained dense processing model and the sub-point clouds to be processed after complement processing output by the trained complement processing model;
the merging module is used for merging the densely processed sub-point cloud to be processed output by the trained densely processed model and the fully processed sub-point cloud to be processed output by the trained fully processed model to obtain the fully condensed sub-point cloud corresponding to the sub-point cloud to be processed; and the method is used for merging all the sub-point clouds to be processed and all the complementary dense sub-point clouds which are not input into the trained dense processing model and the trained complementary processing model to obtain the complementary dense point clouds.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-7 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110195817.9A CN112837410B (en) | 2021-02-19 | 2021-02-19 | Training model and point cloud processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110195817.9A CN112837410B (en) | 2021-02-19 | 2021-02-19 | Training model and point cloud processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112837410A CN112837410A (en) | 2021-05-25 |
CN112837410B true CN112837410B (en) | 2023-07-18 |
Family
ID=75934214
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110195817.9A Active CN112837410B (en) | 2021-02-19 | 2021-02-19 | Training model and point cloud processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112837410B (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104346608B (en) * | 2013-07-26 | 2017-09-08 | 株式会社理光 | Sparse depth figure denseization method and apparatus |
US10740914B2 (en) * | 2018-04-10 | 2020-08-11 | Pony Ai Inc. | Enhanced three-dimensional training data generation |
CN109345510A (en) * | 2018-09-07 | 2019-02-15 | 百度在线网络技术(北京)有限公司 | Object detecting method, device, equipment, storage medium and vehicle |
CN109493407B (en) * | 2018-11-19 | 2022-03-25 | 腾讯科技(深圳)有限公司 | Method and device for realizing laser point cloud densification and computer equipment |
CN111694903B (en) * | 2019-03-11 | 2023-09-12 | 北京地平线机器人技术研发有限公司 | Map construction method, device, equipment and readable storage medium |
CN112329547B (en) * | 2020-10-15 | 2024-11-26 | 北京三快在线科技有限公司 | A data processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN112837410A (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111311709B (en) | Method and device for generating high-precision map | |
CN111340922B (en) | Positioning and map construction method and electronic equipment | |
CN110991489B (en) | Marking method, device and system for driving data | |
CN111508258B (en) | Positioning method and device | |
CN112036462B (en) | Model training and target detection method and device | |
CN113642620B (en) | Obstacle detection model training and obstacle detection method and device | |
CN112990099B (en) | Method and device for detecting lane line | |
CN113887351B (en) | Obstacle detection method and obstacle detection device for unmanned driving | |
CN117593454B (en) | Three-dimensional reconstruction and target surface Ping Miandian cloud generation method | |
CN117649779A (en) | AR technology-based parking management method and system | |
CN115600157B (en) | Data processing method and device, storage medium and electronic equipment | |
CN111414818A (en) | Positioning method and device based on environment image | |
CN112837410B (en) | Training model and point cloud processing method and device | |
CN112329547B (en) | A data processing method and device | |
CN111426299B (en) | Method and device for ranging based on depth of field of target object | |
CN114332189B (en) | A high-precision map construction method, device, storage medium and electronic device | |
CN112712595B (en) | Method and device for generating simulation environment | |
CN112184901B (en) | Depth map determining method and device | |
CN111524190B (en) | Training of visual positioning network and control method and device of unmanned equipment | |
CN114529983A (en) | Event and video fusion action identification method and device | |
CN112257548A (en) | Method and apparatus for generating pedestrian image and storage medium | |
CN117635850B (en) | Data processing method and device | |
CN117095244B (en) | An infrared target recognition method, device, equipment and medium | |
Qiu et al. | TICMapNet: A Tightly Coupled Temporal Fusion Pipeline for Vectorized HD Map Learning | |
CN118781564A (en) | A semantic occupancy prediction method based on LiDAR point cloud and image fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||