
CN112154454B - Target object detection method, system, device and storage medium - Google Patents


Info

Publication number
CN112154454B
CN112154454B (application CN201980033130.6A)
Authority
CN
China
Prior art keywords
target object
point cloud
determining
point
dimensional
Prior art date
Legal status
Active
Application number
CN201980033130.6A
Other languages
Chinese (zh)
Other versions
CN112154454A
Inventor
周游
蔡剑钊
武志远
Current Assignee
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
Shenzhen Zhuoyu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhuoyu Technology Co., Ltd.
Publication of CN112154454A
Application granted
Publication of CN112154454B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method, system, device, and storage medium for detecting a target object. A three-dimensional point cloud detected by a detection device mounted on a movable platform is clustered to obtain a point cloud cluster corresponding to a target object; during clustering, the height of the clustering center of the point cloud cluster is required to meet a preset height condition. A target detection model is then determined according to the distance between the target object and the movable platform and the correspondence between distance and detection model, and the point cloud cluster corresponding to the target object is detected by the target detection model to determine the object type of the target object. That is, target objects at different distances from the movable platform are detected with different detection models, which improves the detection precision of the target object.

Description

Target object detection method, system, equipment and storage medium
Technical Field
The embodiments of the present application relate to the field of movable platforms, and in particular to a target object detection method, system, device, and storage medium.
Background
In an automated driving system or an assisted driving system, it is necessary to detect vehicles on a road in order to perform vehicle avoidance.
In the prior art, a photographing device is generally provided in the automatic driving system or the assisted driving system, and surrounding vehicles are detected from the two-dimensional images collected by the photographing device; however, detecting surrounding vehicles from two-dimensional images alone does not provide sufficient detection accuracy.
Disclosure of Invention
The embodiment of the application provides a target object detection method, a target object detection system, target object detection equipment and a storage medium, so as to improve the detection precision of a target object.
A first aspect of an embodiment of the present application provides a method for detecting a target object, which is applied to a movable platform, where the movable platform is provided with a detection device, and the detection device is configured to detect an environment around the movable platform to obtain a three-dimensional point cloud, and the method includes:
acquiring the three-dimensional point cloud;
clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of the clustering center of the clustered point cloud cluster meets a preset height condition;
determining a target detection model according to the distance between the first target object and the movable platform and the corresponding relation between the distance and the detection model;
and detecting the point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
A second aspect of an embodiment of the present application provides a detection system for a target object, including a detection device, a memory, and a processor;
the detection device is configured to detect the environment around the movable platform to obtain a three-dimensional point cloud;
the memory is configured to store program code;
the processor invokes the program code, which, when executed, is operable to:
acquire the three-dimensional point cloud;
cluster the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of the clustering center of the clustered point cloud cluster meets a preset height condition;
determine a target detection model according to the distance between the first target object and the movable platform and the correspondence between distance and detection model;
and detect the point cloud cluster corresponding to the first target object through the target detection model, and determine the object type of the first target object.
A third aspect of an embodiment of the present application provides a movable platform, including:
a body;
a power system, arranged on the body and configured to provide moving power; and
the detection system for a target object according to the second aspect.
A fourth aspect of an embodiment of the present application provides a computer readable storage medium having stored thereon a computer program for execution by a processor to implement the method of the first aspect.
According to the target object detection method, system, device, and storage medium provided above, the three-dimensional point cloud detected by the detection device mounted on the movable platform is clustered to obtain the point cloud cluster corresponding to the target object, and during clustering the height of the clustering center of the point cloud cluster is required to meet a preset height condition. A target detection model is then determined according to the distance between the target object and the movable platform and the correspondence between distance and detection model, and the point cloud cluster corresponding to the target object is detected by the target detection model to determine the object type of the target object. In other words, target objects at different distances from the movable platform are detected with different detection models, which improves the detection precision of the target object.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flowchart of a method for detecting a target object according to an embodiment of the present application;
fig. 3 is a schematic diagram of another application scenario provided in an embodiment of the present application;
fig. 4 is a schematic diagram of another application scenario provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a detection model according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for detecting a target object according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a three-dimensional point cloud projected onto a two-dimensional image according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a two-dimensional feature point according to an embodiment of the present application;
FIG. 9 is a flowchart of a method for detecting a target object according to another embodiment of the present application;
FIG. 10 is a schematic diagram of a three-dimensional point cloud according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another three-dimensional point cloud according to an embodiment of the present application;
FIG. 12 is a schematic view of yet another three-dimensional point cloud provided by an embodiment of the present application;
Fig. 13 is a block diagram of a target object detection system according to an embodiment of the present application.
Reference numerals:
11: vehicle; 12: server; 13: vehicle; 14: vehicle; 15: three-dimensional point cloud; 30: ground point cloud;
31, 32: point cloud clusters; 41, 42: first target objects; 80: first image;
81: projection area; 82: two-dimensional feature point; 1001: right area;
1002: upper-left image; 1003: lower-left image; 100: white arc;
101: first target object; 102: second target object; 103: first target object;
104: three-dimensional point cloud; 105, 106: identification frames; 130: target object detection system;
131: detection device; 132: memory; 133: processor.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It will be understood that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
The embodiment of the application provides a target object detection method. The method is applied to a movable platform, wherein the movable platform is provided with a detection device, and the detection device is used for detecting the surrounding environment of the movable platform to obtain a three-dimensional point cloud. In this embodiment, the movable platform may be an unmanned aerial vehicle, a movable robot, or a vehicle.
In the embodiments of the present application, a vehicle is taken as an example of the movable platform; the vehicle may be an unmanned vehicle, a vehicle with an advanced driver assistance system (ADAS), or the like. As shown in fig. 1, the vehicle 11 is a carrier on which a detection device is mounted; the detection device may specifically be a binocular stereo camera, a time-of-flight (TOF) camera, and/or a lidar. During the running of the vehicle 11, the detection device detects the environment around the vehicle 11 in real time to obtain a three-dimensional point cloud. The environment around the vehicle 11 includes the objects around the vehicle 11, such as the ground around the vehicle 11, pedestrians, other vehicles, and the like.
Taking a laser radar as an example, when a beam of laser light emitted by the laser radar irradiates on the surface of an object, the surface of the object will reflect the beam of laser light, and the laser radar can determine information such as the azimuth, the distance and the like of the object relative to the laser radar according to the laser light reflected by the surface of the object. If the laser beam emitted by the laser radar is scanned according to a certain track, for example, 360-degree rotation scanning, a large number of laser points are obtained, so that laser point cloud data of the object, that is, three-dimensional point cloud, can be formed.
In addition, this embodiment does not limit the execution subject of the target object detection method: the method may be executed by an in-vehicle device in the vehicle, or by another device having a data processing function. For example, as shown in fig. 1, the vehicle 11 and the server 12 may communicate wirelessly or by wire; the vehicle 11 may transmit the three-dimensional point cloud detected by the detection device to the server 12, and the server 12 may execute the target object detection method. The method provided by the embodiment of the present application is described below taking the in-vehicle device as an example. The in-vehicle device may be a device with a data processing function integrated in the console of the vehicle, or a tablet computer, mobile phone, notebook computer, or the like placed in the vehicle.
Fig. 2 is a flowchart of a method for detecting a target object according to an embodiment of the present application. As shown in fig. 2, the method in this embodiment may include:
S201, acquiring the three-dimensional point cloud.
As shown in fig. 1, during the running of the vehicle 11, the detection device mounted on the vehicle 11 detects the environment around the vehicle 11 in real time to obtain a three-dimensional point cloud, and the detection device can be communicatively connected with the in-vehicle device on the vehicle 11, so that the in-vehicle device can acquire the detected three-dimensional point cloud in real time, for example, three-dimensional point clouds of the ground around the vehicle 11, of pedestrians, and of other vehicles such as the vehicle 13 and the vehicle 14.
S202, clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, wherein the height of a clustering center of the clustered point cloud cluster meets a preset height condition.
As shown in fig. 3, the three-dimensional point cloud 15 is a three-dimensional point cloud detected by a detection device mounted on the vehicle 11. The three-dimensional point cloud 15 includes a plurality of three-dimensional points, that is, the three-dimensional point cloud is a set of a large number of three-dimensional points. In addition, three-dimensional points may also be referred to as point cloud points. Since the point cloud points in the three-dimensional point cloud obtained by the detection device at each sampling time point carry position information, the position information may specifically be a three-dimensional coordinate of the point cloud point in a three-dimensional coordinate system, and the three-dimensional coordinate system is not limited in this embodiment, and for example, the three-dimensional coordinate system may specifically be a vehicle body coordinate system, an earth coordinate system, or a world coordinate system. Therefore, the height of each point cloud point relative to the ground can be determined according to the position information of each point cloud point.
In the process of clustering the three-dimensional point cloud 15, a K-means clustering algorithm may specifically be used, with the point cloud points in the three-dimensional point cloud 15 whose height above the ground is close to a preset height weighted together, so that the height value of the clustering center is close to the preset height. The preset height is recorded as H/2, where H represents the vehicle height. Typically the height of a car is about 1.6 meters and that of a large vehicle, such as a bus, is about 3 meters; here the preset height H/2 may take the value of 1.1 meters. Alternatively, H/2 can take two values, H1 = 0.8 m and H2 = 1.5 m, and clustering is performed with H1 and H2 separately, to obtain clusters whose cluster-center height is close to H1 and clusters whose cluster-center height is close to H2. Taking the preset height of 1.1 meters as an example, assume that P1 and P2 are any two three-dimensional points in the three-dimensional point cloud 15, each with its own three-dimensional coordinate; the coordinate of P1 on the z-axis, i.e., in the height direction, may be denoted P1(z), and the coordinate of P2 on the z-axis may be denoted P2(z). If the function value Loss calculated by the following formula (1) is less than or equal to a certain threshold, it is determined that P1 and P2 may be aggregated into one cluster:

Loss = ||P1 - P2|| + k * ( |P1(z) - H/2| + |P2(z) - H/2| )        (1)

where k may be a constant. It can be appreciated that, when the three-dimensional point cloud 15 is clustered, the aggregation between any other three-dimensional points in the three-dimensional point cloud 15 proceeds in the same way as described by formula (1), and is not repeated here.
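For illustration, the height-constrained clustering described above can be sketched in Python as follows. This is a minimal sketch only: the function name, the lam weighting of the height prior, and the use of a Lloyd-style K-means loop are assumptions made for this example, not details taken from the embodiment.

import numpy as np

def height_weighted_kmeans(points, n_clusters, preset_height, lam=1.0,
                           iters=50, seed=0):
    """Lloyd-style K-means whose cluster-center heights are pulled toward
    preset_height (e.g. H/2, half the expected vehicle height).
    points: (N, 3) array of x, y, z coordinates; lam plays the role of
    the constant k in formula (1)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=n_clusters,
                                replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign every point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_clusters):
            members = points[labels == j]
            if len(members) == 0:
                continue
            centers[j] = members.mean(axis=0)
            # Bias the center height toward the preset height: minimizing
            # sum ||p - c||^2 + lam * N * (c_z - h0)^2 over c_z gives
            # c_z = (mean_z + lam * h0) / (1 + lam).
            centers[j, 2] = (members[:, 2].mean()
                             + lam * preset_height) / (1.0 + lam)
    return labels, centers

Clusters whose final center height still deviates strongly from the preset height can then be discarded as non-vehicle clutter.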
As shown in fig. 3, after the three-dimensional point cloud 15 is clustered, a point cloud cluster 31 and a point cloud cluster 32 are obtained, wherein the heights of the clustering centers of the point cloud cluster 31 and the point cloud cluster 32 are close to a preset height. Further, the first target object 41 shown in fig. 4 can be obtained from the point cloud cluster 31, and the first target object 42 shown in fig. 4 can be obtained from the point cloud cluster 32.
It will be appreciated that the first target object is only schematically illustrated herein, and the number of first target objects is not limited.
S203, determining a target detection model according to the distance of the first target object relative to the movable platform and the corresponding relation between the distance and the detection model.
The point cloud cluster 31 and the point cloud cluster 32 shown in fig. 3 each include a plurality of point cloud points. Since the point cloud points in the three-dimensional point cloud obtained by the detection device at each sampling time carry position information, the distance between each point cloud point and the detection device can be calculated from its position information. Further, the distance between a point cloud cluster and the vehicle body carrying the detection device can be calculated from the distances of the cluster's point cloud points to the detection device, which yields the distance between the first target object corresponding to that point cloud cluster and the vehicle body, for example, the distance between the first target object 41 and the vehicle 11 and the distance between the first target object 42 and the vehicle 11.
As shown in fig. 4, the distance of the first target object 41 from the vehicle 11 is smaller than the distance of the first target object 42 from the vehicle 11, for example, the distance of the first target object 41 from the vehicle 11 is denoted as L1, and the distance of the first target object 42 from the vehicle 11 is denoted as L2. In the present embodiment, the in-vehicle apparatus may determine the target detection model corresponding to L1 from the distance L1 of the first target object 41 with respect to the vehicle 11 and the correspondence relationship of the distance and the detection model. The target detection model corresponding to L2 is determined from the distance L2 of the first target object 42 with respect to the vehicle 11 and the correspondence relationship between the distance and the detection model.
In an alternative embodiment, the detection models corresponding to different distances may be trained in advance.
For example, as shown in fig. 5, the sample objects may be divided, according to their distance from the movable platform that detected them (e.g., a collection vehicle), into sample objects within 0-90 meters, sample objects within 75-165 meters, and sample objects within 125-200 meters of the collection vehicle. The collection vehicle may be the vehicle 11 described above, or a vehicle other than the vehicle 11. Specifically, detection model 1 is trained on the sample objects within 0-90 meters of the collection vehicle, detection model 2 on the sample objects within 75-165 meters, and detection model 3 on the sample objects within 125-200 meters, thereby obtaining the correspondence between distance and detection model.
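As a sketch of how this correspondence between distance and detection model might be stored and queried; the table layout and the tie-breaking rule for the overlapping ranges are assumptions of this example, not of the embodiment:

from typing import Callable, List, Tuple

# (range_start_m, range_end_m, model) triples; the ranges overlap on
# purpose (0-90, 75-165, 125-200) so objects near a boundary are
# covered by two models.
ModelTable = List[Tuple[float, float, Callable]]

def select_detection_model(distance_m: float, table: ModelTable) -> Callable:
    """Return the model whose training range contains the distance;
    on overlap, prefer the range whose center is closest."""
    candidates = [(lo, hi, m) for lo, hi, m in table if lo <= distance_m <= hi]
    if not candidates:
        raise ValueError(f"no detection model trained for {distance_m:.1f} m")
    _lo, _hi, model = min(candidates,
                          key=lambda r: abs(distance_m - (r[0] + r[1]) / 2))
    return model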
In another alternative embodiment, the detection model may be adapted according to the actually acquired distance. For example, the detection model may have a settable parameter that is adjusted as a function of distance. In a specific implementation, the distance of the first target object is obtained, and the parameter in the detection model is then set according to that distance to obtain the target detection model.
S204, detecting a point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
For example, if the vehicle-mounted device determines that the distance L1 of the first target object 41 from the vehicle 11 is in the range of 0-90 meters, the detection model 1 is used to detect the point cloud cluster corresponding to the first target object 41, so as to determine the object type of the first target object 41. If the distance L2 of the first target object 42 relative to the vehicle 11 is in the range of 75 meters to 165 meters, the point cloud cluster corresponding to the first target object 42 is detected by using the detection model 2 to determine the object type of the first target object 42.
It is worth noting that the point cloud distribution characteristics of vehicles in different distance ranges are different. For example, the distribution of the point clouds corresponding to the long-range targets is sparse, while the distribution of the point clouds corresponding to the short-range targets is dense. The point clouds corresponding to short range vehicles often represent vehicle side point clouds, while the point clouds corresponding to medium range vehicles represent more vehicle tail point clouds. Therefore, a plurality of detection models are trained for different distances, and the identification of the target object can be performed more accurately.
In addition, the object types as described above may include types of road sign lines, vehicles, pedestrians, road signs, and the like. Further, the specific type of the vehicle can be identified according to the characteristics of the point cloud cluster, for example, engineering vehicles, cars, buses and the like can be identified.
It will be appreciated that the first target object in this embodiment is only for distinguishing from the second target object in the subsequent embodiments, and that both the first target object and the second target object may refer to target objects detectable by the detection device.
In this embodiment, the three-dimensional point cloud detected by the detection device mounted on the movable platform is clustered to obtain the point cloud cluster corresponding to the target object, and during clustering the height of the clustering center of the point cloud cluster is required to meet a preset height condition. A target detection model is then determined according to the distance between the target object and the movable platform and the correspondence between distance and detection model, and the point cloud cluster corresponding to the target object is detected by the target detection model to determine the object type of the target object. That is, target objects at different distances from the movable platform are detected with different detection models, which improves the detection precision of the target object.
On the basis of the embodiment, before clustering the three-dimensional point cloud to obtain the point cloud cluster corresponding to the first target object, the method further comprises removing a specific point cloud in the three-dimensional point cloud, wherein the specific point cloud comprises a ground point cloud.
As shown in fig. 3, the three-dimensional point cloud 15 obtained by the detection device includes not only the point cloud corresponding to the target object but also a specific point cloud, for example, the ground point cloud 30. Therefore, before clustering the three-dimensional point cloud 15, the ground point cloud 30 in the three-dimensional point cloud 15 may be identified by a plane fitting method and removed, and the three-dimensional point cloud remaining after removal of the ground point cloud 30 is then clustered.
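A minimal sketch of such plane-fitting ground removal, using a pure-NumPy RANSAC fit; the distance threshold and iteration count are assumed values chosen for the example:

import numpy as np

def remove_ground(points, dist_thresh=0.15, iters=200, seed=0):
    """RANSAC plane fit: repeatedly fit a plane through 3 random points,
    keep the plane with the most inliers, then drop those inliers (the
    ground point cloud). points: (N, 3) array."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample, skip
            continue
        normal = normal / norm
        inliers = np.abs((points - p0) @ normal) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]  # cloud with the ground plane removed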
According to the method, the specific point cloud in the three-dimensional point cloud obtained by detection of the detection equipment carried on the movable platform is removed, the three-dimensional point cloud after the specific point cloud is removed is clustered, the point cloud cluster corresponding to the target object is obtained, the influence of the specific point cloud on the detection of the target object can be avoided, and therefore the detection precision of the target object is further improved.
The embodiment of the application provides a target object detection method. Fig. 6 is a flowchart of a method for detecting a target object according to another embodiment of the present application. As shown in fig. 6, on the basis of the foregoing embodiment, before the detecting, by the target detection model, the point cloud cluster corresponding to the first target object and determining the object type of the first target object, the method further includes determining a movement direction of the first target object, and adjusting the movement direction of the first target object to a preset direction.
As a possible implementation manner, the determining the movement direction of the first target object includes determining the movement direction of the first target object according to a three-dimensional point cloud corresponding to the first target object at a first moment and a three-dimensional point cloud corresponding to the first target object at a second moment.
Specifically, the first time is the previous time, and the second time is the current time. Taking the first target object 41 as an example, since the first target object 41 may be in a moving state, the position information of the first target object 41 may be changed in real time. In addition, the detection device on the vehicle 11 detects the surrounding environment in real time, and thus, the vehicle-mounted device can acquire and process the three-dimensional point cloud detected by the detection device in real time. The three-dimensional point cloud corresponding to the first target object 41 at the previous time and the three-dimensional point cloud corresponding to the first target object 41 at the current time may be changed, so that the movement direction of the first target object 41 may be determined according to the three-dimensional point cloud corresponding to the first target object 41 at the previous time and the three-dimensional point cloud corresponding to the first target object 41 at the current time.
Optionally, the determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment includes projecting the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into a world coordinate system respectively, and determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
For example, the three-dimensional point cloud corresponding to the first target object 41 at the previous moment and the three-dimensional point cloud corresponding to the first target object 41 at the current moment are respectively projected into a world coordinate system; the relative position relationship between the two point clouds is then calculated by an iterative closest point (ICP) algorithm. The relative position relationship includes a rotation relationship and a translation relationship, and the movement direction of the first target object 41 can be determined from the translation relationship. In one possible implementation, the translation relationship is taken as the movement direction of the first target object 41.
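A sketch of this step, assuming the Open3D library is available for the ICP registration; the correspondence distance is an assumed value, and the embodiment does not prescribe a particular ICP implementation:

import numpy as np
import open3d as o3d

def motion_direction_icp(cloud_prev, cloud_curr, max_corr_dist=1.0):
    """Register the previous-frame point cloud of a target onto its
    current-frame point cloud with ICP and read the motion direction
    off the translation part of the transform. Inputs are (N, 3)
    float64 arrays already expressed in the world coordinate system."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cloud_prev))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cloud_curr))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    t = result.transformation[:3, 3]         # translation relationship
    return t / (np.linalg.norm(t) + 1e-12)   # unit movement direction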
As another possible implementation manner, the determining the movement direction of the first target object includes the following steps:
s601, projecting a three-dimensional point cloud corresponding to the first target object at a first moment into a two-dimensional image at the first moment to obtain a first projection point.
S602, projecting the three-dimensional point cloud corresponding to the first target object at the second moment into the two-dimensional image at the second moment to obtain a second projection point.
In the present embodiment, the vehicle 11 may also be mounted with a photographing device that can be used to photograph an image of the surroundings of the vehicle 11, in particular a two-dimensional image. The period of the three-dimensional point cloud obtained by the detection device and the period of the image captured by the capturing device may be the same or different. For example, the photographing apparatus photographs a frame of two-dimensional image while the detecting apparatus detects the three-dimensional point cloud of the first target object 41 obtained at the previous timing. The photographing apparatus photographs another frame of two-dimensional image while the detecting apparatus detects the three-dimensional point cloud of the first target object 41 obtained at the present time. Here, a two-dimensional image obtained by the photographing apparatus photographing at the previous time may be referred to as a first image, and a two-dimensional image obtained by the photographing apparatus photographing at the current time may be referred to as a second image. Specifically, the three-dimensional point cloud of the first target object 41 at the previous moment may be projected onto the first image, to obtain the first projection point. The three-dimensional point cloud of the first target object 41 at the present moment is projected onto the second image, resulting in a second projected point. As shown in fig. 7, the left area represents a three-dimensional point cloud obtained by detection by the detection device at a certain moment, the right area represents a projection area of the three-dimensional point cloud on the two-dimensional image, and the projection area includes projection points.
In an alternative embodiment, projecting the three-dimensional point cloud into the two-dimensional image includes projecting some or all of the points in the three-dimensional point cloud along the Z-axis onto a two-dimensional plane. Wherein, the Z axis can be the Z axis under the vehicle body coordinate system. Or if the coordinates of the three-dimensional point cloud have been corrected to the earth coordinate system, the Z-axis may be the Z-axis of the earth coordinate system.
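For the camera-image variant of S601 and S602, a pinhole-model projection sketch is given below; the intrinsic matrix K and the lidar-to-camera extrinsic transform T_cam_lidar are assumed to be known from calibration and are not part of the embodiment:

import numpy as np

def project_to_image(points_lidar, T_cam_lidar, K):
    """Project lidar points into the camera image (pinhole model).
    points_lidar: (N, 3); T_cam_lidar: 4x4 lidar-to-camera transform;
    K: 3x3 intrinsic matrix. Returns pixel coordinates of the points
    in front of the camera and the corresponding mask."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1          # keep points ahead of the camera
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    return uv, in_front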
S603, determining three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point with a position relationship which accords with a preset position relationship with the first projection point.
For convenience of distinction, a projection point of the three-dimensional point cloud of the first target object 41 on the first image at the previous time is denoted as a first projection point, a feature point on the first image is denoted as a first feature point, and a positional relationship between the first feature point and the first projection point conforms to a preset positional relationship.
Optionally, the determining the three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment includes determining a weight coefficient corresponding to the first projection point according to a distance between the first projection point and the first feature point in the two-dimensional image at the first moment, and determining the three-dimensional information of the first feature point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
As shown in fig. 8, 80 denotes the first image captured by the photographing device at the previous time, and 81 denotes the projection area formed by projecting the three-dimensional point cloud of the first target object 41 at the previous time onto the first image 80. In the projection area 81, a two-dimensional feature point, i.e., a first feature point, can be extracted. A two-dimensional feature point is not necessarily a projection point, that is, it does not necessarily carry three-dimensional information; instead, the three-dimensional information of the two-dimensional feature point can be estimated through a Gaussian distribution. As shown in fig. 8, 82 denotes any two-dimensional feature point in the projection area 81. A preset range around the two-dimensional feature point 82 is determined, for example a 10×10 pixel area, and A, B, C, and D denote the projection points within this preset range. The pixel coordinates of the two-dimensional feature point 82 on the first image 80 are denoted (μ0, v0), and the pixel coordinates of the projection points A, B, C, and D on the first image 80 are denoted (μ1, v1), (μ2, v2), (μ3, v3), and (μ4, v4), respectively. The distances between the two-dimensional feature point 82 and the projection points A, B, C, and D are denoted D1, D2, D3, and D4, respectively. The three-dimensional information of the three-dimensional points corresponding to the projection points A, B, C, and D is denoted P1, P2, P3, and P4, respectively, where each of P1, P2, P3, and P4 is a vector containing the x, y, and z coordinates of the point.
The three-dimensional information of the two-dimensional feature point 82 is denoted P0, and P0 can be calculated by the following formulas (2) and (3):

ω_i = exp( -D_i² / (2σ²) )        (2)

P0 = ( Σ_{i=1..n} ω_i · P_i ) / ( Σ_{i=1..n} ω_i )        (3)

where n represents the number of projection points within the preset range around the two-dimensional feature point 82, and ω_i represents the weight coefficient of projection point i; different projection points may correspond to different weight coefficients or to the same weight coefficient. σ is an adjustable parameter, for example one tuned empirically.
It is to be understood that the process of calculating the three-dimensional information of the other two-dimensional feature points in the projection area 81 is similar to the process of calculating the three-dimensional information of the two-dimensional feature points 82 described above, and will not be repeated here.
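A compact sketch of this Gaussian-weighted estimate, following formulas (2) and (3) above; the default sigma is an assumed value:

import numpy as np

def feature_point_3d(feat_uv, proj_uv, proj_xyz, sigma=3.0):
    """Estimate the 3-D information of a 2-D feature point from the
    projection points in its neighborhood, weighted by a Gaussian of
    the pixel distance. feat_uv: (2,); proj_uv: (n, 2); proj_xyz: (n, 3)."""
    d2 = np.sum((proj_uv - feat_uv) ** 2, axis=1)   # squared pixel distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # formula (2)
    return (w[:, None] * proj_xyz).sum(axis=0) / w.sum()  # formula (3)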
S604, determining three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point whose positional relationship with the second projection point conforms to the preset positional relationship, and the second feature point corresponds to the first feature point.
For convenience of distinction, a projection point of the three-dimensional point cloud of the first target object 41 on the second image at the current moment is denoted as a second projection point, a feature point on the second image is denoted as a second feature point, and a positional relationship between the second feature point and the second projection point conforms to a preset positional relationship.
From the first feature point on the first image 80, the corresponding second feature point on the second image can be calculated using the Kanade-Lucas-Tomasi (KLT) corner tracking algorithm.
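With OpenCV, this correspondence can be obtained with the pyramidal KLT tracker, for example as in the following sketch; the input images are assumed to be BGR frames from the photographing device:

import cv2
import numpy as np

def track_features_klt(first_img, second_img, first_pts):
    """Track feature points from the first image to the second with
    cv2.calcOpticalFlowPyrLK. first_pts: (N, 2) float pixel coords.
    Returns the matched (first, second) point pairs."""
    g1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    p0 = first_pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(g1, g2, p0, None)
    ok = status.ravel() == 1                 # successfully tracked points
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)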
Optionally, the determining the three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment includes determining a weight coefficient corresponding to the second projection point according to a distance between the second projection point and the second feature point in the two-dimensional image at the second moment, and determining the three-dimensional information of the second feature point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
Specifically, the process of calculating the three-dimensional information of the second feature point on the second image is similar to the process of calculating the three-dimensional information of the first feature point on the first image, and will not be repeated here.
S605, determining the movement direction of the first target object according to the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point.
Specifically, the three-dimensional information of the first feature point is the three-dimensional information P0 of the two-dimensional feature point 82 described above, and the three-dimensional information of the second feature point is the three-dimensional information of the two-dimensional feature point in the second image that corresponds to the two-dimensional feature point 82, denoted P′0. From P0 and P′0, the movement direction of the first target object 41 can be determined; specifically, the position change between P0 and P′0 is the movement direction of the first target object 41.
Optionally, before the moving direction of the first target object is determined according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the method further comprises converting the three-dimensional information of the first feature point and the three-dimensional information of the second feature point into a world coordinate system respectively.
For example, P0 and P′0 are respectively converted into the world coordinate system, and the position change between P0 and P′0 in the world coordinate system is the movement direction of the first target object 41.
It will be appreciated that the movement direction of other first target objects than the first target object 41 may also be determined by several possible implementations as described above, and will not be described in detail herein.
After determining the movement direction of the first target object, the movement direction of the first target object may be further adjusted to a preset direction. Optionally, the preset direction is a movement direction of a sample object used for training the detection model.
For example, the movement direction of a sample object used to train the detection model is due north, or toward the front or rear of the collection vehicle that detected the sample object. Taking due north as an example: in order for the detection model to accurately detect the first target object 41 or the first target object 42, the movement direction of the first target object 41 or the first target object 42 needs to be adjusted to due north. For example, if the included angle between the movement direction of the first target object 41 or 42 and due north is θ, the three-dimensional point cloud corresponding to the first target object 41 or 42 is rotated by the rotation matrix R_z(θ) of formula (4), so that its movement direction becomes due north:

R_z(θ) = [ cos θ   -sin θ   0 ]
         [ sin θ    cos θ   0 ]        (4)
         [   0        0     1 ]
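Applying R_z(θ) to a target's point cloud can be sketched as follows; the convention that due north is the +y axis is an assumption of this example:

import numpy as np

def rotate_to_north(points, motion_dir_xy):
    """Rotate a point cloud about the z-axis so the target's motion
    direction points due north (+y here, by assumption).
    points: (N, 3); motion_dir_xy: (2,) motion direction in the x-y plane."""
    theta = np.arctan2(motion_dir_xy[0], motion_dir_xy[1])  # angle to +y
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])         # formula (4)
    return points @ Rz.T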
In this embodiment, the movement direction of the target object is determined and adjusted to be the preset direction, and since the preset direction is the movement direction of the sample object for training the detection model, the detection accuracy of the target object can be further improved by adjusting the movement direction of the target object to be the preset direction and then detecting the target object by the detection model.
The embodiment of the application further provides a target object detection method. On the basis of the above embodiments, after the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined, the method further comprises: if the first target object is determined to be a vehicle through the target detection model, verifying the detection result of the target detection model according to a preset condition.
For example, when it is determined that the first target object 41 is a vehicle by the target detection model, the detection result is further verified by a preset condition.
Optionally, the preset condition includes at least one of the size of the first target object meeting a preset size, and the space overlapping degree between the first target object and other target objects around the first target object being smaller than a preset threshold.
For example, when the first target object 41 is detected as a vehicle by the target detection model, it is further checked whether the width of the first target object 41 exceeds a preset width range; the preset width range may be the width range of a normal vehicle, for example, 2.8 m to 3 m. If the width of the first target object 41 exceeds the preset width range, it is determined that the detection result of the detection model for the first target object 41 is deviated, that is, the first target object 41 may not be a vehicle. If the width of the first target object 41 is within the preset width range, the spatial overlap between the first target object 41 and the other target objects around it is further checked; this spatial overlap may specifically be the spatial overlap between the identification frame representing the first target object 41 and the identification frames representing the surrounding target objects. If the spatial overlap is greater than a preset threshold, it is determined that the detection result of the detection model for the first target object 41 is deviated, that is, the first target object 41 may not be a vehicle. If the spatial overlap is smaller than the preset threshold, it is determined that the detection result of the detection model for the first target object 41 is correct.
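A sketch of this two-stage verification on 2-D footprints of the identification frames; the overlap threshold is an assumed value, while the 3 m width bound follows the example above:

def verify_vehicle(box, boxes_other, max_width=3.0, overlap_thresh=0.3):
    """Plausibility check for a 'vehicle' detection. box and boxes_other
    are axis-aligned (xmin, ymin, xmax, ymax) footprints of the
    identification frames."""
    if box[2] - box[0] > max_width:          # width exceeds preset range
        return False
    for other in boxes_other:
        ix = max(0.0, min(box[2], other[2]) - max(box[0], other[0]))
        iy = max(0.0, min(box[3], other[3]) - max(box[1], other[1]))
        inter = ix * iy
        union = ((box[2] - box[0]) * (box[3] - box[1])
                 + (other[2] - other[0]) * (other[3] - other[1]) - inter)
        if union > 0 and inter / union > overlap_thresh:
            return False                     # too much spatial overlap
    return True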
According to this embodiment, after the target object is detected by the target detection model corresponding to its distance from the movable platform, if the object type of the target object is determined to be a vehicle, the detection result of the target detection model is further verified according to a preset condition: when the preset condition is met, the detection result is determined to be correct, and when the preset condition is not met, the detection result is determined to be deviated, which further improves the detection precision of the target object.
The embodiment of the application provides a target object detection method. Fig. 9 is a flowchart of a method for detecting a target object according to another embodiment of the present application. As shown in fig. 9, on the basis of the above embodiment, the distance between the first target object and the movable platform is smaller than or equal to a first preset distance. As shown in fig. 10, the right area 1001 is a three-dimensional point cloud detected by the detection device, the upper-left image 1002 represents an image obtained by removing height information from the three-dimensional point cloud, and the lower-left image 1003 represents a two-dimensional image. The concentric white circles in the right area 1001 represent the ground point cloud, and the white arc 100 represents a first preset distance relative to the detection device, for example, 80 meters. 101, 102, and 103 each denote a first target object whose distance to the detection device is less than or equal to 80 meters. As can be seen from fig. 10, there are no white circles beyond 80 meters, that is, no ground point cloud is detected beyond 80 meters. This embodiment provides a method for determining the ground point cloud beyond the first preset distance and detecting a second target object beyond the first preset distance.
The method further comprises the following steps after the point cloud cluster corresponding to the first target object is detected through the target detection model and the object type of the first target object is determined:
And S901, if the first target object is determined to be a vehicle through the target detection model, determining a ground point cloud outside the first preset distance according to the position of the first target object.
In this embodiment, assume that the in-vehicle device determines that the first target object 101 is a vehicle using the target detection model corresponding to the distance between the first target object 101 and the detection device, determines that the first target object 102 is a vehicle using the target detection model corresponding to the distance between the first target object 102 and the detection device, and determines that the first target object 103 is a vehicle using the target detection model corresponding to the distance between the first target object 103 and the detection device. The in-vehicle device may then further determine the ground point cloud beyond 80 meters relative to the detection device according to the positions of the first target object 101, the first target object 102, and the first target object 103.
Optionally, the determining the ground point cloud outside the first preset distance according to the position of the first target object includes determining a gradient of the ground where the first target object is located according to the position of the first target object, and determining the ground point cloud outside the first preset distance according to the gradient of the ground.
Specifically, the gradients of the ground on which the first target object 101, the first target object 102, and the first target object 103 are located are determined according to the positions of the first target object 101, the first target object 102, and the first target object 103, and the ground point cloud outside 80 meters with respect to the detection device is determined according to the gradients of the ground. It is to be understood that the present embodiment does not limit the number of the first target objects.
Optionally, determining the gradient of the ground where the first target object is located according to the positions of the first target objects includes determining the gradient of a plane formed by at least three first target objects according to the positions of at least three first target objects, where the gradient of the plane is the gradient of the ground where the first target object is located.
For example, when the first target object 101, the first target object 102, and the first target object 103 are all vehicles, the three vehicles determine a plane. Let the coordinates of the first target object 101 be A(x1, y1, z1), the coordinates of the first target object 102 be B(x2, y2, z2), and the coordinates of the first target object 103 be C(x3, y3, z3); then the vector AB = (x2 - x1, y2 - y1, z2 - z1) and the vector AC = (x3 - x1, y3 - y1, z3 - z1). The normal vector of the plane containing AB and AC is AB × AC = (a, b, c), where:
a = (y2 - y1)(z3 - z1) - (z2 - z1)(y3 - y1)
b = (z2 - z1)(x3 - x1) - (z3 - z1)(x2 - x1)
c = (x2 - x1)(y3 - y1) - (x3 - x1)(y2 - y1)
Specifically, according to the normal vector of the plane where AB and AC are located, the gradient of the plane formed by the first target object 101, the first target object 102, and the first target object 103 may be determined, where the gradient of the plane may specifically be the gradient of the ground where the first target object 101, the first target object 102, and the first target object 103 are located.
It will be appreciated that when the number of first target objects is greater than 3, a plane can be determined for each 3 first target objects, so that a plurality of planes can be obtained, and the gradient of the plurality of planes can be calculated by the method for calculating the gradient of the plane as described above, and at this time, the gradient of the ground can be fitted according to the gradient of the plurality of planes.
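The normal-vector and slope computation above reduces to a few lines of NumPy; a sketch, with illustrative function names:

import numpy as np

def ground_normal(p_a, p_b, p_c):
    """Unit normal (a, b, c) of the plane through three detected
    vehicles, via the cross product AB x AC given above."""
    n = np.cross(np.asarray(p_b) - np.asarray(p_a),
                 np.asarray(p_c) - np.asarray(p_a))
    return n / np.linalg.norm(n)

def ground_slope_deg(normal):
    """Gradient of the ground as the angle between the fitted plane
    and the horizontal, i.e. between the normal and the z-axis."""
    cos_tilt = abs(normal[2])
    return float(np.degrees(np.arccos(np.clip(cos_tilt, 0.0, 1.0))))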
It will be appreciated that, depending on the gradient of the ground, it can be determined whether the ground is a level road, an overpass, or a slope. In some embodiments, the ground on which the first target object is located may not be level; it may, for example, be an overpass or a slope. Therefore, according to the gradient of the ground, it can also be determined whether the first target object is on an overpass or a slope.
After the gradient of the ground on which the first target object is located is determined, that ground can be extended according to the gradient to obtain the ground point cloud beyond 80 meters. For example, the ground is extended linearly beyond 80 meters in accordance with the width of the road surface on which the first target object is located. Here, the ground beyond 80 meters may be assumed to be horizontal; the cases of a slope or an overpass beyond 80 meters may temporarily be left out of consideration.
S902, determining the object type of a second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
For example, from the ground point cloud outside 80 meters, an object type of the second target object outside 80 meters is determined.
Optionally, the determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance includes: determining, according to the ground point cloud beyond the first preset distance, a point cloud cluster corresponding to the second target object beyond the first preset distance, wherein the bottom of the second target object is in the same plane as the bottom of the first target object; and detecting the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, to determine the object type of the second target object.
For example, the point cloud cluster corresponding to the second target object beyond 80 meters is determined according to the ground point cloud beyond 80 meters. As shown in fig. 11, because the second target object beyond 80 meters is blocked by objects near it, the number of points in the distant three-dimensional point cloud 104 is small; that is, the distant three-dimensional point cloud 104 may cover only the upper part of the second target object. At this time, the remaining part of the three-dimensional point cloud of the second target object needs to be supplemented according to the ground point cloud beyond 80 meters; for example, the three-dimensional point cloud of the lower half of the second target object is supplemented so that the bottom of the second target object is in the same plane as the bottoms of the first target object 101, the first target object 102, and the first target object 103. The partial three-dimensional point cloud of the upper part of the second target object and the supplemented three-dimensional point cloud of the lower half together form the point cloud cluster corresponding to the second target object.
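One way to realize this completion step is sketched below: each observed point is extended downward in a column until it reaches the extended ground plane, so that the cluster bottom becomes coplanar with the nearby vehicles. The column spacing and the column-fill strategy are assumptions of this sketch, and `ground_z_fn` is a hypothetical helper returning the height of the fitted ground plane at (x, y).

```python
import numpy as np

def complete_cluster_to_ground(upper_points, ground_z_fn, step=0.3):
    # Supplement the occluded lower half of a distant cluster by filling
    # each (x, y) column from the extended ground up to the observed point.
    pts = np.asarray(upper_points, float)
    filled = [pts]
    for x, y, z in pts:
        zg = ground_z_fn(x, y)             # fitted ground height at (x, y)
        if z > zg + step:
            zs = np.arange(zg, z, step)
            filled.append(np.column_stack([np.full_like(zs, x),
                                           np.full_like(zs, y), zs]))
    return np.vstack(filled)
```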
Further, according to the distance between the second target object and the detection device, the point cloud cluster corresponding to the second target object is detected by a detection model corresponding to that distance; that is, the detection model detects whether the second target object is a pedestrian, a vehicle, or another object. The number of second target objects is not limited and may be one or more. Since the distance between the second target object and the detection device is greater than the first preset distance, the second target object can be detected by a detection model corresponding to a second preset distance that is greater than the first preset distance.
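The distance-to-model correspondence can be as simple as a lookup table; the sketch below is an assumed illustration, where the model names and the 80/200-meter ranges are placeholders not taken from the patent.

```python
def near_range_model(cluster):
    ...                                   # placeholder: detector for <= 80 m

def far_range_model(cluster):
    ...                                   # placeholder: detector for > 80 m

# Hypothetical correspondence between distance ranges and detection models.
DETECTION_MODELS = {
    (0.0, 80.0): near_range_model,
    (80.0, 200.0): far_range_model,
}

def select_model(distance_m):
    for (lo, hi), model in DETECTION_MODELS.items():
        if lo <= distance_m < hi:
            return model
    raise ValueError(f"no detection model covers {distance_m} m")
```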
Optionally, the determining the point cloud cluster corresponding to the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance includes: clustering the three-dimensional point cloud beyond the first preset distance from which the ground point cloud has been removed, to obtain a partial point cloud corresponding to the second target object; and determining the point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud beyond the first preset distance.
For example, the three-dimensional point cloud acquired by the detection device at distances beyond 80 meters may include a ground point cloud. Therefore, the ground point cloud in the three-dimensional point cloud beyond 80 meters needs to be removed, and the three-dimensional point cloud remaining after the ground point cloud is removed is clustered, so as to obtain a partial point cloud corresponding to the second target object, for example, the three-dimensional point cloud 104 shown in fig. 11.
Alternatively, after the gradient of the ground on which the first target object is located is determined, the ground on which the first target object is located is extended according to that gradient to obtain the ground point cloud beyond 80 meters. When a second target object beyond 80 meters is detected, the extended ground point cloud beyond 80 meters is removed, and the three-dimensional point cloud beyond 80 meters with the ground point cloud removed is clustered, so as to obtain a partial point cloud corresponding to the second target object. Further, the point cloud cluster corresponding to the second target object is determined according to the partial point cloud corresponding to the second target object and the ground point cloud beyond 80 meters; specifically, the lower half of the second target object is supplemented so that the bottom of the second target object is in the same plane as the bottoms of the first target object 101, the first target object 102, and the first target object 103.
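A minimal sketch of the remove-ground-then-cluster step follows, using a simple radius-based Euclidean grouping as a stand-in for the clustering described above; the ground tolerance, neighborhood radius, and minimum cluster size are assumptions, and `ground_z_fn` is the same hypothetical ground-height helper as before.

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_after_ground_removal(points, ground_z_fn, ground_tol=0.2,
                                 radius=1.0, min_pts=5):
    pts = np.asarray(points, float)
    # Remove points lying on (or within ground_tol of) the extended ground.
    ground_z = np.array([ground_z_fn(x, y) for x, y, _ in pts])
    pts = pts[pts[:, 2] > ground_z + ground_tol]
    # Flood-fill connected components whose points lie within `radius`.
    tree = cKDTree(pts)
    labels = np.full(len(pts), -1, dtype=int)
    current = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            for k in tree.query_ball_point(pts[j], radius):
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return [pts[labels == c] for c in range(current)
            if np.count_nonzero(labels == c) >= min_pts]
```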
Specifically, the clustering process is similar to the clustering process described above, and is not repeated here. The difference is that the vehicle height H used in the clustering process here is larger than the vehicle height H used in the clustering process described above; for example, the vehicle height H used here may take a value of 1.6 meters or 2.5 meters. Optionally, if the second target object is a vehicle and the width of the second target object is smaller than or equal to a first width, three-dimensional point clouds with a height greater than or equal to a first height are removed from the point cloud cluster corresponding to the second target object, so as to obtain the remaining three-dimensional point cloud corresponding to the second target object. If the second target object is a vehicle and the width of the second target object is greater than the first width and smaller than or equal to a second width, three-dimensional point clouds with a height greater than or equal to a second height are removed from the point cloud cluster corresponding to the second target object, so as to obtain the remaining three-dimensional point cloud corresponding to the second target object. An identification frame for representing the vehicle is then generated according to the remaining three-dimensional point cloud corresponding to the second target object, and the identification frame is used for the navigation decision of the movable platform, wherein the second width is greater than the first width and the second height is greater than the first height.
It can be understood that small objects such as guideboards or branches may be present above the second target object. Since such small objects may be very close to the second target object, when the point cloud cluster corresponding to the second target object is obtained through clustering, the three-dimensional point cloud of the guideboard, branch, or other small object may be included in that point cloud cluster. Therefore, when the vehicle-mounted device determines, using the detection model corresponding to the distance between the second target object and the detection device, that the second target object is a vehicle, further processing needs to be performed on the point cloud cluster corresponding to the second target object.
Specifically, according to the width of the second target object, it is determined whether the second target object is a small vehicle or a large vehicle. For example, if the width of the second target object is smaller than or equal to the first width, the second target object is determined to be a small vehicle; if the width of the second target object is greater than the first width and smaller than or equal to the second width, the second target object is determined to be a large vehicle, the second width being greater than the first width. Further, if the second target object is a small vehicle, three-dimensional point clouds with a height greater than or equal to the first height, for example 1.8 meters, are removed from the point cloud cluster corresponding to the second target object, so as to obtain the remaining three-dimensional point cloud corresponding to the second target object. If the second target object is a large vehicle, three-dimensional point clouds with a height greater than or equal to the second height, for example 3.2 meters, are removed from the point cloud cluster corresponding to the second target object, so as to obtain the remaining three-dimensional point cloud corresponding to the second target object. As shown in fig. 12, the three-dimensional point cloud in the circle 105 is the three-dimensional point cloud corresponding to a branch. Further, an identification frame for representing the vehicle is generated according to the remaining three-dimensional point cloud corresponding to the second target object. For example, the three-dimensional point cloud corresponding to the branch in the circle 105 shown in fig. 12 is removed from the three-dimensional point cloud 104 shown in fig. 11 to obtain the remaining three-dimensional point cloud corresponding to the second target object; further, the lower half of the second target object is supplemented according to the ground point cloud beyond 80 meters, so that the bottom of the second target object is in the same plane as the bottoms of the first target object 101, the first target object 102, and the first target object 103, thereby obtaining the identification frame 106 for characterizing the vehicle shown in fig. 12. Further, the vehicle on which the detection device is mounted, for example the vehicle 11, may make a navigation decision based on the identification frame 106, for example plan a travel route of the vehicle 11 in advance, control the vehicle 11 to switch to another lane in advance, or control the speed of the vehicle 11 in advance.
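The following sketch illustrates the width-dependent height cut-off and the generation of an axis-aligned identification box. The height thresholds (1.8 m and 3.2 m) come from the example above; the width thresholds, the choice of x as the lateral axis, and the use of the cluster floor as the local ground reference are assumptions of this sketch.

```python
import numpy as np

FIRST_WIDTH, SECOND_WIDTH = 2.2, 2.6    # assumed widths (m): small/large vehicle
FIRST_HEIGHT, SECOND_HEIGHT = 1.8, 3.2  # cut-off heights (m) quoted above

def prune_and_box(cluster):
    # Remove overhanging points (e.g. branches, guideboards) above a
    # width-dependent cut-off, then build an axis-aligned identification box.
    pts = np.asarray(cluster, float)
    width = np.ptp(pts[:, 0])           # lateral extent; x-axis assumed lateral
    if width <= FIRST_WIDTH:
        cutoff = FIRST_HEIGHT           # small vehicle
    elif width <= SECOND_WIDTH:
        cutoff = SECOND_HEIGHT          # large vehicle
    else:
        cutoff = np.inf                 # wider clusters: rule not applied
    floor = pts[:, 2].min()             # cluster floor as local ground proxy
    remaining = pts[pts[:, 2] < floor + cutoff]
    box = np.stack([remaining.min(axis=0), remaining.max(axis=0)])
    return remaining, box
```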
According to the method, the distant ground point cloud is determined according to the position of the nearby first target object, and the distant second target object is detected according to that ground point cloud, so that the movable platform carrying the detection device can make navigation decisions according to the distant second target object, which improves the safety of the movable platform. In addition, by detecting whether the second target object is a large vehicle or a small vehicle and removing, according to the height corresponding to a large vehicle or a small vehicle, the three-dimensional points that may belong to small objects such as guideboards or branches from the three-dimensional point cloud corresponding to the second target object, the detection precision of the second target object is improved. Moreover, the gradient of the plane formed by at least three first target objects is determined according to their positions, and the gradient of the ground on which the first target objects are located is determined from the gradient of that plane. Whether the ground is level ground, a viaduct, a slope, or the like can thus be determined from the gradient of the ground, which improves the precision of ground identification. When the ground point cloud is removed, not only the point cloud of level ground but also the point cloud of road surfaces such as viaducts or slopes can be removed, so that the influence of these ground point clouds on the detection of the first target object or the second target object is reduced, and the detection precision of the first target object or the second target object is further improved.
The embodiment of the application provides a target object detection system. Fig. 13 is a block diagram of a target object detection system according to an embodiment of the present application. As shown in fig. 13, the target object detection system 130 includes a detection device 131, a memory 132, and a processor 133. The detection device 131 is configured to detect the environment around the movable platform to obtain a three-dimensional point cloud. The processor 133 may specifically be a component in the in-vehicle apparatus in the above embodiment, or another component, device, or assembly with a data processing function mounted in the vehicle. Specifically, the memory 132 is configured to store program code, and the processor 133 is configured to invoke the program code and, when the program code is executed, to: acquire the three-dimensional point cloud; cluster the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, wherein the height of the clustering center of the clustered point cloud cluster meets a preset height condition; determine a target detection model according to the distance between the first target object and the movable platform and the correspondence between distances and detection models; and detect the point cloud cluster corresponding to the first target object through the target detection model to determine the object type of the first target object.
Optionally, the processor 133 is further configured to determine a movement direction of the first target object and adjust the movement direction of the first target object to a preset direction before determining the object type of the first target object by detecting the point cloud cluster corresponding to the first target object through the target detection model.
Optionally, the preset direction is a movement direction of a sample object used for training the detection model.
Optionally, when determining the moving direction of the first target object, the processor 133 is specifically configured to determine the moving direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment.
Optionally, when determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment, the processor 133 is specifically configured to respectively project the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into a world coordinate system, and determine the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
Optionally, when determining the moving direction of the first target object, the processor 133 is specifically configured to: project the three-dimensional point cloud corresponding to the first target object at a first moment into the two-dimensional image at the first moment to obtain a first projection point; project the three-dimensional point cloud corresponding to the first target object at a second moment into the two-dimensional image at the second moment to obtain a second projection point; determine three-dimensional information of a first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point whose positional relationship with the first projection point conforms to a preset positional relationship; determine three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point whose positional relationship with the second projection point conforms to the preset positional relationship, and the second feature point corresponds to the first feature point; and determine the moving direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point.
Optionally, when determining the three-dimensional information of the first feature point according to the first projection point and the first feature point in the two-dimensional image at the first moment, the processor 133 is specifically configured to determine a weight coefficient corresponding to the first projection point according to a distance between the first projection point and the first feature point in the two-dimensional image at the first moment, and determine the three-dimensional information of the first feature point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
Optionally, when determining the three-dimensional information of the second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, the processor 133 is specifically configured to determine a weight coefficient corresponding to the second projection point according to a distance between the second projection point and the second feature point in the two-dimensional image at the second moment, and determine the three-dimensional information of the second feature point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
Optionally, before determining the moving direction of the first target object according to the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the processor 133 is further configured to convert the three-dimensional information of the first feature point and the three-dimensional information of the second feature point into a world coordinate system respectively.
Optionally, after detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, the processor 133 is further configured to verify the detection result of the target detection model according to a preset condition if the first target object is determined to be a vehicle by the target detection model.
Optionally, the preset condition includes at least one of the following: the size of the first target object meets a preset size; and the space overlap ratio between the first target object and other target objects around the first target object is smaller than a preset threshold.
Optionally, before clustering the three-dimensional point clouds to obtain a point cloud cluster corresponding to the first target object, the processor 133 is further configured to remove a specific point cloud in the three-dimensional point clouds, where the specific point cloud includes a ground point cloud.
Optionally, the distance between the first target object and the movable platform is smaller than or equal to a first preset distance. After detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, the processor 133 is further configured to: if the first target object is determined to be a vehicle through the target detection model, determine a ground point cloud beyond the first preset distance according to the position of the first target object; and determine the object type of a second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
Optionally, when determining the ground point cloud outside the first preset distance according to the position of the first target object, the processor 133 is specifically configured to determine a gradient of the ground where the first target object is located according to the position of the first target object, and determine the ground point cloud outside the first preset distance according to the gradient of the ground.
Optionally, the processor 133 is specifically configured to determine, according to the positions of the at least three first target objects, a gradient of a plane formed by the at least three first target objects, where the gradient of the plane is a gradient of a ground on which the first target objects are located when determining the gradient of the ground on which the first target objects are located.
Optionally, when determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance, the processor 133 is specifically configured to determine, according to the ground point cloud beyond the first preset distance, a point cloud cluster corresponding to the second target object beyond the first preset distance, wherein the bottom of the second target object is in the same plane with the bottom of the first target object, detect, by a detection model corresponding to a distance between the second target object and the movable platform, the point cloud cluster corresponding to the second target object, and determine the object type of the second target object.
Optionally, when determining the point cloud cluster corresponding to the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, the processor 133 is specifically configured to cluster the three-dimensional point cloud after the ground point cloud is removed in the three-dimensional point cloud outside the first preset distance to obtain a partial point cloud corresponding to the second target object, and determine the point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud outside the first preset distance.
Optionally, the processor 133 is further configured to: if the second target object is a vehicle and the width of the second target object is smaller than or equal to a first width, remove three-dimensional point clouds with a height greater than or equal to a first height from the point cloud cluster corresponding to the second target object, to obtain the remaining three-dimensional point cloud corresponding to the second target object; if the second target object is a vehicle and the width of the second target object is greater than the first width and smaller than or equal to a second width, remove three-dimensional point clouds with a height greater than or equal to a second height from the point cloud cluster corresponding to the second target object, to obtain the remaining three-dimensional point cloud corresponding to the second target object; and generate, according to the remaining three-dimensional point cloud corresponding to the second target object, an identification frame for characterizing the vehicle, the identification frame being used for the navigation decision of the movable platform, wherein the second width is greater than the first width and the second height is greater than the first height.
The specific principle and implementation manner of the target object detection system provided in the embodiment of the present application are similar to those of the foregoing embodiment, and are not repeated herein.
The embodiment of the application provides a movable platform. The movable platform comprises a body, a power system, and the target object detection system described in the above embodiment. The power system is mounted on the body and is configured to provide power for movement. The target object detection system may implement the target object detection method described above; the specific principles and implementations are similar to those of the foregoing embodiments and are not repeated here. The present embodiment does not limit the specific form of the movable platform; for example, the movable platform may be an unmanned aerial vehicle, a mobile robot, a vehicle, or the like.
In addition, the present embodiment also provides a computer-readable storage medium having stored thereon a computer program that is executed by a processor to implement the target object detection method described in the above embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working process of the above-described device may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present application.

Claims (57)

1. The method for detecting the target object is characterized by being applied to a movable platform, wherein the movable platform is provided with a detection device, and the detection device is used for detecting the surrounding environment of the movable platform to obtain a three-dimensional point cloud, and the method comprises the following steps:
acquiring the three-dimensional point cloud;
clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, wherein the height of a clustering center of the clustered point cloud cluster accords with a preset height condition;
determining a target detection model according to the distance between the first target object and the movable platform and the corresponding relation between the distance and the detection model;
And detecting a point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
2. The method of claim 1, wherein the detecting, by the object detection model, the point cloud cluster corresponding to the first target object, before determining the object type of the first target object, the method further comprises:
determining a movement direction of the first target object;
And adjusting the movement direction of the first target object to be a preset direction.
3. The method according to claim 2, wherein the preset direction is a movement direction of a sample object for training the detection model.
4. A method according to claim 3, wherein said determining the direction of movement of the first target object comprises:
And determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment.
5. The method of claim 4, wherein determining the direction of motion of the first target object based on the three-dimensional point cloud corresponding to the first target object at the first time and the three-dimensional point cloud corresponding to the first target object at the second time comprises:
respectively projecting the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into a world coordinate system;
And determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
6. A method according to claim 3, wherein said determining the direction of movement of the first target object comprises:
Projecting the three-dimensional point cloud corresponding to the first target object at the first moment into a two-dimensional image at the first moment to obtain a first projection point;
Projecting the three-dimensional point cloud corresponding to the first target object at the second moment into a two-dimensional image at the second moment to obtain a second projection point;
Determining three-dimensional information of the first feature point according to the first projection point and a first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point with a position relationship conforming to a preset position relationship with the first projection point;
Determining three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point with a position relationship conforming to a preset position relationship with the second projection point, and the second feature point corresponds to the first feature point;
And determining the movement direction of the first target object according to the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point.
7. The method of claim 6, wherein determining three-dimensional information of the first feature point from the first projection point and a first feature point in the two-dimensional image at the first time instant comprises:
Determining a weight coefficient corresponding to the first projection point according to the distance between the first projection point and a first characteristic point in the two-dimensional image at the first moment;
And determining the three-dimensional information of the first characteristic point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
8. The method of claim 6, wherein determining three-dimensional information of the second feature point from the second projection point and a second feature point in the two-dimensional image at the second time instant comprises:
determining a weight coefficient corresponding to the second projection point according to the distance between the second projection point and a second characteristic point in the two-dimensional image at the second moment;
And determining the three-dimensional information of the second characteristic point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
9. The method according to any one of claims 6-8, wherein before determining the direction of motion of the first target object based on the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the method further comprises:
And respectively converting the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point into a world coordinate system.
10. The method according to any one of claims 1-8, wherein after detecting a point cloud cluster corresponding to the first target object by the target detection model and determining an object type of the first target object, the method further comprises:
And if the first target object is determined to be the vehicle through the target detection model, verifying the detection result of the target detection model according to preset conditions.
11. The method according to claim 9, wherein after detecting the point cloud cluster corresponding to the first target object by the target detection model and determining the object type of the first target object, the method further comprises:
And if the first target object is determined to be the vehicle through the target detection model, verifying the detection result of the target detection model according to preset conditions.
12. The method of claim 10, wherein the preset conditions include at least one of:
The size of the first target object meets a preset size;
and the space overlap ratio between the first target object and other target objects around the first target object is smaller than a preset threshold value.
13. The method of claim 11, wherein the preset conditions include at least one of:
The size of the first target object meets a preset size;
and the space overlap ratio between the first target object and other target objects around the first target object is smaller than a preset threshold value.
14. The method according to any one of claims 1-8 or 11-13, wherein before clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the method further comprises:
And removing a specific point cloud in the three-dimensional point cloud, wherein the specific point cloud comprises a ground point cloud.
15. The method of claim 9, wherein prior to clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the method further comprises:
And removing a specific point cloud in the three-dimensional point cloud, wherein the specific point cloud comprises a ground point cloud.
16. The method of claim 10, wherein prior to clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the method further comprises:
And removing a specific point cloud in the three-dimensional point cloud, wherein the specific point cloud comprises a ground point cloud.
17. The method of any one of claims 1-8, wherein a distance of the first target object relative to the movable platform is less than or equal to a first preset distance;
after detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, the method further comprises:
If the first target object is determined to be a vehicle through the target detection model, determining a ground point cloud outside the first preset distance according to the position of the first target object;
And determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
18. The method of claim 9, wherein the first target object is less than or equal to a first predetermined distance from the movable platform;
after detecting the point cloud cluster corresponding to the first target object through the target detection model and determining the object type of the first target object, the method further comprises:
If the first target object is determined to be a vehicle through the target detection model, determining a ground point cloud outside the first preset distance according to the position of the first target object;
And determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
19. The method of claim 17, wherein the determining the ground point cloud outside the first predetermined distance based on the location of the first target object comprises:
Determining the gradient of the ground where the first target object is located according to the position of the first target object;
And determining the ground point cloud outside the first preset distance according to the gradient of the ground.
20. The method of claim 18, wherein determining the ground point cloud outside the first predetermined distance based on the location of the first target object comprises:
Determining the gradient of the ground where the first target object is located according to the position of the first target object;
And determining the ground point cloud outside the first preset distance according to the gradient of the ground.
21. The method of claim 19, wherein determining the slope of the ground on which the first target object is located based on the location of the first target object comprises:
And determining the gradient of a plane formed by at least three first target objects according to the positions of the at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
22. The method of claim 20, wherein determining the slope of the ground on which the first target object is located based on the location of the first target object comprises:
And determining the gradient of a plane formed by at least three first target objects according to the positions of the at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
23. The method of claim 17, wherein determining the object type of the second target object outside the first preset distance from the ground point cloud outside the first preset distance comprises:
determining a point cloud cluster corresponding to a second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object are in the same plane;
And detecting a point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, and determining the object type of the second target object.
24. The method according to any one of claims 18-22, wherein determining the object type of the second target object outside the first preset distance from the ground point cloud outside the first preset distance comprises:
determining a point cloud cluster corresponding to a second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object are in the same plane;
And detecting a point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, and determining the object type of the second target object.
25. The method of claim 23, wherein the determining, according to the ground point cloud outside the first preset distance, the point cloud cluster corresponding to the second target object outside the first preset distance includes:
Clustering three-dimensional point clouds of the three-dimensional point clouds outside the first preset distance after the ground point clouds are removed, and obtaining partial point clouds corresponding to the second target object;
And determining a point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud outside the first preset distance.
26. The method of claim 24, wherein the determining, according to the ground point cloud outside the first preset distance, a point cloud cluster corresponding to the second target object outside the first preset distance includes:
Clustering three-dimensional point clouds of the three-dimensional point clouds outside the first preset distance after the ground point clouds are removed, and obtaining partial point clouds corresponding to the second target object;
And determining a point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud outside the first preset distance.
27. The method according to claim 25 or 26, characterized in that the method further comprises:
If the second target object is a vehicle and the width of the second target object is smaller than or equal to the first width, removing three-dimensional point clouds with the height larger than or equal to the first height in the point cloud cluster corresponding to the second target object to obtain residual three-dimensional point clouds corresponding to the second target object;
If the second target object is a vehicle, the width of the second target object is larger than the first width, and the width of the second target object is smaller than or equal to the second width, removing three-dimensional point clouds with the height larger than or equal to the second height in the point cloud cluster corresponding to the second target object, and obtaining residual three-dimensional point clouds corresponding to the second target object;
Generating an identification frame for representing the vehicle according to the residual three-dimensional point cloud corresponding to the second target object, wherein the identification frame is used for the movable platform to carry out navigation decision;
wherein the second width is greater than the first width and the second height is greater than the first height.
28. A detection system of a target object is characterized by comprising a detection device, a memory and a processor;
The detection equipment is used for detecting the surrounding environment of the movable platform to obtain a three-dimensional point cloud;
The memory is used for storing program codes;
the processor invokes the program code, which when executed, is operable to:
acquiring the three-dimensional point cloud;
clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, wherein the height of a clustering center of the clustered point cloud cluster accords with a preset height condition;
determining a target detection model according to the distance between the first target object and the movable platform and the corresponding relation between the distance and the detection model;
And detecting a point cloud cluster corresponding to the first target object through the target detection model, and determining the object type of the first target object.
29. The system of claim 28, wherein the processor, prior to detecting, by the object detection model, a point cloud cluster corresponding to the first target object, is further configured to:
determining a movement direction of the first target object;
And adjusting the movement direction of the first target object to be a preset direction.
30. The system of claim 29, wherein the predetermined direction is a direction of movement of a sample object used to train the detection model.
31. The system of claim 30, wherein the processor, when determining the direction of motion of the first target object, is specifically configured to:
And determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment.
32. The system of claim 31, wherein the processor is configured to determine the direction of movement of the first target object based on the three-dimensional point cloud corresponding to the first target object at a first time and the three-dimensional point cloud corresponding to the first target object at a second time when:
respectively projecting the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment into a world coordinate system;
And determining the movement direction of the first target object according to the three-dimensional point cloud corresponding to the first target object at the first moment and the three-dimensional point cloud corresponding to the first target object at the second moment in the world coordinate system.
33. The system of claim 30, wherein the processor, when determining the direction of motion of the first target object, is specifically configured to:
Projecting the three-dimensional point cloud corresponding to the first target object at the first moment into a two-dimensional image at the first moment to obtain a first projection point;
Projecting the three-dimensional point cloud corresponding to the first target object at the second moment into a two-dimensional image at the second moment to obtain a second projection point;
Determining three-dimensional information of the first feature point according to the first projection point and a first feature point in the two-dimensional image at the first moment, wherein the first feature point is a feature point with a position relationship conforming to a preset position relationship with the first projection point;
Determining three-dimensional information of a second feature point according to the second projection point and the second feature point in the two-dimensional image at the second moment, wherein the second feature point is a feature point with a position relationship conforming to a preset position relationship with the second projection point, and the second feature point corresponds to the first feature point;
And determining the movement direction of the first target object according to the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point.
34. The system of claim 33, wherein the processor is configured to, when determining the three-dimensional information of the first feature point based on the first projection point and the first feature point in the two-dimensional image at the first time, specifically:
Determining a weight coefficient corresponding to the first projection point according to the distance between the first projection point and a first characteristic point in the two-dimensional image at the first moment;
And determining the three-dimensional information of the first characteristic point according to the weight coefficient corresponding to the first projection point and the three-dimensional information of the first projection point.
35. The system of claim 33, wherein the processor is configured to, when determining the three-dimensional information of the second feature point based on the second projection point and the second feature point in the two-dimensional image at the second time, specifically:
determining a weight coefficient corresponding to the second projection point according to the distance between the second projection point and a second characteristic point in the two-dimensional image at the second moment;
And determining the three-dimensional information of the second characteristic point according to the weight coefficient corresponding to the second projection point and the three-dimensional information of the second projection point.
36. The system of any one of claims 33-35, wherein prior to determining the direction of motion of the first target object based on the three-dimensional information of the first feature point and the three-dimensional information of the second feature point, the processor is further configured to:
And respectively converting the three-dimensional information of the first characteristic point and the three-dimensional information of the second characteristic point into a world coordinate system.
37. The system of any one of claims 28-35, wherein the processor, after detecting, by the object detection model, a point cloud cluster corresponding to the first target object, determines an object type of the first target object, is further configured to:
And if the first target object is determined to be the vehicle through the target detection model, verifying the detection result of the target detection model according to preset conditions.
38. The system of claim 36, wherein the processor, after detecting the point cloud cluster corresponding to the first target object by the target detection model and determining the object type of the first target object, is further configured to:
And if the first target object is determined to be the vehicle through the target detection model, verifying the detection result of the target detection model according to preset conditions.
39. The system of claim 37, wherein the preset conditions include at least one of:
The size of the first target object meets a preset size;
And the space overlap ratio between the first target object and other target objects around the first target object is smaller than a preset threshold value.
40. The system of claim 38, wherein the preset conditions include at least one of:
The size of the first target object meets a preset size;
And the space overlap ratio between the first target object and other target objects around the first target object is smaller than a preset threshold value.
41. The system of any one of claims 28-35 or 38-40, wherein prior to clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, the processor is further configured to:
And removing a specific point cloud in the three-dimensional point cloud, wherein the specific point cloud comprises a ground point cloud.
42. The system of claim 36, wherein prior to clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to the first target object, the processor is further configured to:
And removing a specific point cloud in the three-dimensional point cloud, wherein the specific point cloud comprises a ground point cloud.
43. The system of claim 37, wherein prior to clustering the three-dimensional point cloud to obtain a point cloud cluster corresponding to a first target object, the processor is further configured to:
And removing a specific point cloud in the three-dimensional point cloud, wherein the specific point cloud comprises a ground point cloud.
44. The system of any one of claims 28-35, wherein a distance of the first target object relative to the movable platform is less than or equal to a first preset distance;
the processor detects a point cloud cluster corresponding to the first target object through the target detection model, and after determining the object type of the first target object, the processor is further configured to:
If the first target object is determined to be a vehicle through the target detection model, determining a ground point cloud outside the first preset distance according to the position of the first target object;
And determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
45. The system of claim 36, wherein the first target object is less than or equal to a first predetermined distance from the movable platform;
the processor detects a point cloud cluster corresponding to the first target object through the target detection model, and after determining the object type of the first target object, the processor is further configured to:
If the first target object is determined to be a vehicle through the target detection model, determining a ground point cloud outside the first preset distance according to the position of the first target object;
And determining the object type of the second target object beyond the first preset distance according to the ground point cloud beyond the first preset distance.
46. The system of claim 44, wherein the processor is configured to, based on the location of the first target object, determine a cloud of ground points outside the first predetermined distance by:
Determining the gradient of the ground where the first target object is located according to the position of the first target object;
And determining the ground point cloud outside the first preset distance according to the gradient of the ground.
47. The system of claim 45, wherein the processor is configured to, based on the location of the first target object, determine a cloud of ground points outside the first predetermined distance by:
Determining the gradient of the ground where the first target object is located according to the position of the first target object;
And determining the ground point cloud outside the first preset distance according to the gradient of the ground.
48. The system of claim 46, wherein the processor is configured to, based on the location of the first target object, determine a grade of a ground surface on which the first target object is located, specifically:
And determining the gradient of a plane formed by at least three first target objects according to the positions of the at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
49. The system of claim 47, wherein the processor is configured to, based on the location of the first target object, determine a slope of a ground surface on which the first target object is located, specifically:
And determining the gradient of a plane formed by at least three first target objects according to the positions of the at least three first target objects, wherein the gradient of the plane is the gradient of the ground on which the first target objects are positioned.
50. The system of claim 44, wherein, when determining the object type of the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, the processor is specifically configured to:
determine a point cloud cluster corresponding to the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object lie in the same plane;
and detect the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, to determine the object type of the second target object.
51. The system of any one of claims 45-49, wherein, when determining the object type of the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, the processor is specifically configured to:
determine a point cloud cluster corresponding to the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, wherein the bottom of the second target object and the bottom of the first target object lie in the same plane;
and detect the point cloud cluster corresponding to the second target object through a detection model corresponding to the distance between the second target object and the movable platform, to determine the object type of the second target object.
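Claims 50-51 tie the choice of detection model to the second target's distance from the platform. A minimal sketch; the range buckets and model placeholders are illustrative assumptions:

```python
import numpy as np

def pick_detection_model(cluster_points, models_by_range):
    """cluster_points: (N, 3) point cloud cluster of the second target.
    models_by_range: list of (max_distance, model), sorted ascending."""
    distance = np.linalg.norm(cluster_points.mean(axis=0))
    for max_distance, model in models_by_range:
        if distance <= max_distance:
            return model
    return models_by_range[-1][1]  # beyond all buckets: farthest model

models = [(30.0, "near_model"), (60.0, "mid_model"), (120.0, "far_model")]
cluster = np.array([[45.0, 1.0, 0.2], [46.0, 0.8, 0.4]])
print(pick_detection_model(cluster, models))  # -> mid_model
```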
52. The system of claim 50, wherein, when determining the point cloud cluster corresponding to the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, the processor is specifically configured to:
cluster the three-dimensional point cloud outside the first preset distance from which the ground point cloud has been removed, to obtain a partial point cloud corresponding to the second target object;
and determine the point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud outside the first preset distance.
53. The system of claim 51, wherein, when determining the point cloud cluster corresponding to the second target object outside the first preset distance according to the ground point cloud outside the first preset distance, the processor is specifically configured to:
cluster the three-dimensional point cloud outside the first preset distance from which the ground point cloud has been removed, to obtain a partial point cloud corresponding to the second target object;
and determine the point cloud cluster corresponding to the second target object according to the partial point cloud corresponding to the second target object and the ground point cloud outside the first preset distance.
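Claims 52-53 cluster the far-field points once the ground points have been removed. The claims do not name a clustering algorithm; DBSCAN from scikit-learn is used below as one common density-based choice, with illustrative `eps` and `min_samples` values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_far_objects(far_points, ground_mask, eps=0.7, min_samples=5):
    """far_points: (N, 3) points beyond the first preset distance.
    ground_mask: (N,) boolean array, True where a point is ground.
    Returns one partial point cloud per detected object."""
    objects = far_points[~ground_mask]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(objects)
    # Label -1 marks points DBSCAN considers noise; skip them.
    return [objects[labels == k] for k in set(labels) if k != -1]
```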
54. The system of claim 52 or 53, wherein the processor is further configured to:
if the second target object is a vehicle and its width is less than or equal to a first width, remove the three-dimensional points whose height is greater than or equal to a first height from the point cloud cluster corresponding to the second target object, to obtain a remaining three-dimensional point cloud corresponding to the second target object;
if the second target object is a vehicle and its width is greater than the first width and less than or equal to a second width, remove the three-dimensional points whose height is greater than or equal to a second height from the point cloud cluster corresponding to the second target object, to obtain the remaining three-dimensional point cloud corresponding to the second target object;
and generate, according to the remaining three-dimensional point cloud corresponding to the second target object, an identification frame representing the vehicle, the identification frame being used by the movable platform for navigation decisions, wherein the second width is greater than the first width and the second height is greater than the first height.
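Claim 54 caps a far vehicle cluster's height as a function of its width before generating the identification frame, which suppresses overhead returns (for example foliage above a car) that would otherwise inflate the frame. A minimal sketch; the threshold values and the axis convention (y lateral, z up) are assumptions:

```python
import numpy as np

CAR_WIDTH, TRUCK_WIDTH = 2.2, 3.0    # "first width", "second width" (m)
CAR_HEIGHT, TRUCK_HEIGHT = 2.0, 4.0  # "first height", "second height" (m)

def identification_frame(cluster):
    """cluster: (N, 3) points of a second target classified as a vehicle.
    Returns (min_corner, max_corner) of an axis-aligned frame, or None."""
    width = cluster[:, 1].max() - cluster[:, 1].min()  # lateral extent
    if width <= CAR_WIDTH:
        kept = cluster[cluster[:, 2] < CAR_HEIGHT]     # narrow: cap low
    elif width <= TRUCK_WIDTH:
        kept = cluster[cluster[:, 2] < TRUCK_HEIGHT]   # wide: cap higher
    else:
        kept = cluster  # wider than the second width: outside claim 54
    if kept.size == 0:
        return None
    return kept.min(axis=0), kept.max(axis=0)
```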
55. A movable platform, comprising:
a body;
a power system, mounted on the body, configured to provide power for movement;
and a target object detection system according to any one of claims 28-54.
56. The movable platform of claim 55, wherein the movable platform comprises an unmanned aerial vehicle, a mobile robot, or a vehicle.
57. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1-27.
CN201980033130.6A 2019-09-10 2019-09-10 Target object detection method, system, device and storage medium Active CN112154454B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/105158 WO2021046716A1 (en) 2019-09-10 2019-09-10 Method, system and device for detecting target object and storage medium

Publications (2)

Publication Number Publication Date
CN112154454A CN112154454A (en) 2020-12-29
CN112154454B true CN112154454B (en) 2025-03-04

Family

ID=73891475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980033130.6A Active CN112154454B (en) 2019-09-10 2019-09-10 Target object detection method, system, device and storage medium

Country Status (2)

Country Link
CN (1) CN112154454B (en)
WO (1) WO2021046716A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12243158B2 (en) * 2020-12-29 2025-03-04 Volvo Car Corporation Ensemble learning for cross-range 3D object detection in driver assist and autonomous driving systems
CN112835061B (en) * 2021-02-04 2024-02-13 郑州衡量科技股份有限公司 ToF sensor-based dynamic vehicle separation and width-height detection method and system
CN112906519B (en) * 2021-02-04 2023-09-26 北京邮电大学 Vehicle type identification method and device
CN112907745B (en) * 2021-03-23 2022-04-01 北京三快在线科技有限公司 Method and device for generating digital orthophoto map
CN113076922B (en) * 2021-04-21 2024-05-10 北京经纬恒润科技股份有限公司 Object detection method and device
CN113610967B (en) * 2021-08-13 2024-03-26 北京市商汤科技开发有限公司 Three-dimensional point detection method, three-dimensional point detection device, electronic equipment and storage medium
CN113894050B (en) * 2021-09-14 2023-05-23 深圳玩智商科技有限公司 Logistics part sorting method, sorting equipment and storage medium
CN113781639B (en) * 2021-09-22 2023-11-28 交通运输部公路科学研究所 Quick construction method for digital model of large-scene road infrastructure
CN113838196A (en) * 2021-11-24 2021-12-24 阿里巴巴达摩院(杭州)科技有限公司 Point cloud data processing method, device, equipment and storage medium
CN114565887A (en) * 2021-12-20 2022-05-31 高新兴创联科技有限公司 A detection method for forklift safety operation based on safety helmet wearing recognition
CN114162126B (en) * 2021-12-28 2024-07-05 上海洛轲智能科技有限公司 Vehicle control method, device, equipment, medium and product
CN115018910A (en) * 2022-04-19 2022-09-06 京东科技信息技术有限公司 Method and device for detecting target in point cloud data and computer readable storage medium
CN115082857A * 2022-06-24 2022-09-20 深圳市镭神智能系统有限公司 A target object detection method, apparatus, device and storage medium
CN115457496B (en) * 2022-09-09 2023-12-08 北京百度网讯科技有限公司 Automatic driving retaining wall detection method and device and vehicle
CN115600395B (en) * 2022-10-09 2023-07-18 南京领鹊科技有限公司 Indoor engineering quality acceptance evaluation method and device
CN119027683B (en) * 2024-10-24 2025-01-03 深圳大学 Scene completion method and system based on air point cloud priori guidance

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975907A (en) * 2016-04-27 2016-09-28 江苏华通晟云科技有限公司 SVM model pedestrian detection method based on distributed platform
CN106204586A * 2016-07-08 2016-12-07 华南农业大学 A tracking-based moving target detection method in complex scenes

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8983201B2 (en) * 2012-07-30 2015-03-17 Microsoft Technology Licensing, Llc Three-dimensional visual phrases for object recognition
CN107895386A * 2017-11-14 2018-04-10 中国航空工业集团公司西安飞机设计研究所 A multi-platform joint target autonomous recognition method
CN108171796A * 2017-12-25 2018-06-15 燕山大学 An inspection robot vision system and control method based on three-dimensional point cloud
CN108197566B (en) * 2017-12-29 2022-03-25 成都三零凯天通信实业有限公司 Monitoring video behavior detection method based on multi-path neural network
CN108317953A * 2018-01-19 2018-07-24 东北电力大学 A binocular vision target surface 3D detection method and system based on an unmanned aerial vehicle
CN108319920B (en) * 2018-02-05 2021-02-09 武汉光谷卓越科技股份有限公司 Road marking detection and parameter calculation method based on line scanning three-dimensional point cloud
CN108680100B (en) * 2018-03-07 2020-04-17 福建农林大学 Method for matching three-dimensional laser point cloud data with unmanned aerial vehicle point cloud data
CN109813277B (en) * 2019-02-26 2021-07-16 北京中科慧眼科技有限公司 Construction method of ranging model, ranging method and device and automatic driving system
CN109902629A (en) * 2019-03-01 2019-06-18 成都康乔电子有限责任公司 A Real-time Vehicle Object Detection Model in Complex Traffic Scenarios

Also Published As

Publication number Publication date
WO2021046716A1 (en) 2021-03-18
CN112154454A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112154454B (en) Target object detection method, system, device and storage medium
US11320833B2 (en) Data processing method, apparatus and terminal
EP3631494B1 (en) Integrated sensor calibration in natural scenes
CN109624974B (en) Vehicle control device, vehicle control method, and storage medium
US9070289B2 (en) System and method for detecting, tracking and estimating the speed of vehicles from a mobile platform
US9025825B2 (en) System and method for visual motion based object segmentation and tracking
CN111815641A (en) Camera and radar fusion
JP6804991B2 (en) Information processing equipment, information processing methods, and information processing programs
JP6826421B2 (en) Equipment patrol system and equipment patrol method
EP3349143B1 (en) Nformation processing device, information processing method, and computer-readable medium
CN112912920A (en) Point cloud data conversion method and system for 2D convolutional neural network
KR20200136905A (en) Signal processing device and signal processing method, program, and moving object
US20180273031A1 (en) Travel Control Method and Travel Control Apparatus
CN108572663A (en) Target Tracking
EP3552388B1 (en) Feature recognition assisted super-resolution method
CN110837814A (en) Vehicle navigation method, device and computer readable storage medium
KR102117313B1 (en) Gradient estimation device, gradient estimation method, computer program, and controlling system
CN112805766A (en) Apparatus and method for updating detailed map
CN112700486B (en) Method and device for estimating depth of road surface lane line in image
CN113743171A (en) Target detection method and device
CN112166446B (en) Method, system, device and computer-readable storage medium for identifying accessibility
CN111731304B (en) Vehicle control device, vehicle control method, and storage medium
JP6370234B2 (en) MAP DATA GENERATION DEVICE, MAP DATA GENERATION METHOD, MAP DATA GENERATION COMPUTER PROGRAM, AND VEHICLE POSITION DETECTION DEVICE
CN112020722A (en) Road shoulder identification based on three-dimensional sensor data
CN117635721A (en) Target positioning method, related system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240515
Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634
Applicant after: Shenzhen Zhuoyu Technology Co.,Ltd.
Country or region after: China
Address before: 518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province
Applicant before: SZ DJI TECHNOLOGY Co.,Ltd.
Country or region before: China

GR01 Patent grant