
CN112883909B - Obstacle position detection method and device based on bounding box and electronic equipment - Google Patents


Info

Publication number
CN112883909B
Authority
CN
China
Prior art keywords
target
obstacle
landing point
alternative
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110283571.0A
Other languages
Chinese (zh)
Other versions
CN112883909A
Inventor
苏英菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202110283571.0A
Publication of CN112883909A
Application granted
Publication of CN112883909B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a bounding-box-based obstacle position detection method and device and an electronic device, relating to the technical field of vehicle driving. The method comprises the following steps: acquiring, in real time, an image frame for a target obstacle at each moment; detecting a plurality of alternative landing points of the target obstacle in each image frame through a bounding box; selecting, based on trajectory characteristics of the target obstacle, the target landing point closest to a fixed horizon from the alternative landing points of the image frame at the current moment; and determining the position of the target obstacle at the current moment with the target landing point as a ground reference. By determining an accurate landing point through the obstacle's trajectory, an accurate obstacle position with a small error is obtained, ensuring the reliability of automatic driving.

Description

Obstacle position detection method and device based on bounding box and electronic equipment
Technical Field
The invention relates to the technical field of vehicle driving, and in particular to a bounding-box-based obstacle position detection method and device and an electronic device.
Background
With the development of vehicle technology, accurate detection of obstacle position information plays a key role in realizing automatic driving. At present, a bounding box detection method is often used to detect the landing point of an obstacle vehicle and, from it, the position information of the obstacle.
However, the inventor found through research that, because the driving environment of a vehicle is complex and changeable, an obstacle vehicle is affected by road conditions and the like during driving and assumes different driving postures. It therefore cannot be guaranteed that the obstacle vehicle is parallel to the acquisition direction of the acquisition device, so the landing point detected through the bounding box is inaccurate, which affects the detection accuracy of the obstacle position and undermines the safety and reliability of automatic driving.
Disclosure of Invention
In view of the above, the present invention aims to provide a method, a device and an electronic device for detecting a position of an obstacle based on a bounding box, which determine an accurate landing point through an obstacle track, thereby obtaining a relatively accurate obstacle position with a small error and ensuring the reliability of automatic driving.
In a first aspect, an embodiment provides a method for detecting a position of an obstacle based on a bounding box, including:
acquiring, in real time, an image frame for a target obstacle at each moment;
detecting a plurality of alternative landing points of the target obstacle in each image frame through a bounding box;
selecting, based on trajectory characteristics of the target obstacle, a target landing point closest to a fixed horizon from a plurality of alternative landing points of an image frame at a current moment; and
determining the position of the target obstacle at the current moment with the target landing point as a ground reference.
In an alternative embodiment, the step of selecting the target landing point closest to the fixed horizon from a plurality of alternative landing points in the image frame at the current moment based on the trajectory characteristics of the target obstacle includes:
Acquiring a track characteristic from the track of the target obstacle, wherein the track characteristic comprises a slope;
If the slope is positive, taking a first alternative landing point positioned on the left side in the image frame at the current moment as a target landing point;
and if the slope is negative, taking a second alternative landing point positioned on the right side in the image frame at the current moment as a target landing point.
In an alternative embodiment, the step of detecting, by a bounding box, a plurality of alternative landing points of the target obstacle and a trajectory of the target obstacle in each of the image frames includes:
detecting a plurality of alternative landing points of the target obstacle in each image frame through the bounding box;
Determining a trajectory of the target obstacle from a third alternative landing point of any one of the plurality of alternative landing points of an image frame at a current time and the third alternative landing point of the image frame at a plurality of historical times consecutive to the current time.
In an alternative embodiment, the step of detecting a plurality of alternative places of landing of the target obstacle in each of the image frames by a bounding box includes:
Detecting the box body frame outline of the target obstacle in each image frame through the bounding box;
And determining a plurality of alternative landing points according to the end points of the lower frame line of the box body frame outline, wherein the left end point of the lower frame line is a first alternative landing point, and the right end point of the lower frame line is a second alternative landing point.
In an alternative embodiment, the step of determining the trajectory of the target obstacle according to a third alternative landing point of any one of the plurality of alternative landing points of the image frame at the current time and the third alternative landing point of the image frame at a plurality of history times consecutive to the current time includes:
Determining the track of the target obstacle according to a first alternative landing point in an image frame at the current moment and first alternative landing points in image frames at a plurality of historical moments which are continuous with the current moment;
Or alternatively
Determining the track of the target obstacle according to the second alternative landing point in the image frame at the current moment and the second alternative landing points in the image frames at a plurality of historical moments which are continuous with the current moment.
In an alternative embodiment, the step of determining the position of the target obstacle at the current moment according to the target landing point as a ground reference includes:
determining depth information of the target landing point by a monocular camera detection method, wherein the depth information is two-dimensional image coordinates;
And converting the depth information into target obstacle position coordinates in a world coordinate system based on a camera imaging principle and a geometric relation by taking the target landing point as a ground reference.
In an alternative embodiment, before the step of acquiring the image frame for the target obstacle at each moment in time, the method further includes:
identifying the image frame acquired at each moment, and judging whether a target obstacle exists in the image frame;
If so, executing the step of acquiring the image frame aiming at the target obstacle at each moment in real time.
In a second aspect, an embodiment provides an obstacle position detection device based on a bounding box, the device comprising:
an acquisition module, configured to acquire, in real time, an image frame for a target obstacle at each moment;
a detection module, configured to detect a plurality of alternative landing points of the target obstacle and a track of the target obstacle in each image frame through a bounding box;
a selection module, configured to select a target landing point closest to a fixed horizon from a plurality of alternative landing points of an image frame at a current moment based on track characteristics of the target obstacle; and
a determining module, configured to determine the position of the target obstacle at the current moment with the target landing point as a ground reference.
In a third aspect, an embodiment provides an electronic device, including a memory, a processor, where the memory stores a computer program executable on the processor, and where the processor implements the steps of the method according to any of the foregoing embodiments when the computer program is executed.
In a fourth aspect, embodiments provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the steps of the method of any of the preceding embodiments.
According to the bounding-box-based obstacle position detection method and device and the electronic device provided by the invention, a bounding box is used to detect a plurality of alternative landing points of the target obstacle in each image frame; the track of the target obstacle is determined from the landing points of the target obstacle at a plurality of moments; a target landing point closest to the fixed horizon is then selected from the alternative landing points according to the track characteristics; and the position of the target obstacle at the current moment is determined based on that target landing point.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the techniques of the disclosure.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of landing point detection by a bounding box according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another case of landing point detection by a bounding box according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for detecting a position of an obstacle based on a bounding box according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of an obstacle position detecting device based on bounding boxes according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware architecture of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Current bounding-box-based obstacle detection algorithms can precisely frame the outline of an obstacle, but cannot determine the specific position of each part of the obstacle; for example, the positions of the window, wheel and door structures of an obstacle vehicle cannot be identified. In general, the accurate position of an obstacle is calculated through its landing point, i.e. the intersection of a wheel with the ground, which such a detection algorithm cannot obtain directly. To nevertheless detect the landing point, it is usually determined from the lower frame line of the bounding box with reference to a fixed horizon, where each frame line of the bounding box is parallel to a coordinate plane, as shown in fig. 1.
However, the inventor found through research that, because the posture of the obstacle vehicle to be detected varies, the vehicle may be inclined and not parallel to the fixed horizon. In that case, the vehicle landing point detected through the bounding box may have a large error: one of the tire landing points, that of tire A, is far away from the lower frame line, as shown in fig. 2. If the tire A landing point is determined from the lower frame line, the error is large, which in turn affects the determination of the obstacle position and the safety and reliability of automatic driving.
Based on the above, the bounding-box-based obstacle position detection method provided by the embodiment of the invention determines an accurate landing point through the obstacle track, thereby obtaining an accurate obstacle position with a small error and ensuring the reliability of automatic driving.
To facilitate understanding of this embodiment, the bounding-box-based obstacle position detection method disclosed in the embodiment of the invention is first described in detail. The method may be applied to control devices such as a personal computer (PC), a controller, a server or an intelligent device; such control devices may be provided independently or integrated in terminals such as a vehicle-mounted control terminal or road infrastructure. As an optional embodiment, the road infrastructure exchanges information with the current vehicle, receives image frames collected by the current vehicle or by the road infrastructure itself, detects the position of an obstacle during the driving of the current vehicle through the method provided by the embodiment of the invention, and sends the obstacle position to the control device of the current vehicle, so that the current vehicle performs corresponding operations to avoid the obstacle, thereby ensuring the safety and reliability of automatic driving.
Fig. 3 is a flowchart of a method for detecting a position of an obstacle based on a bounding box according to an embodiment of the present invention.
As shown in fig. 3, the method comprises the steps of:
step S102, acquiring an image frame aiming at a target obstacle at each moment in real time;
During the driving of the current vehicle, each moment corresponds to one image frame in the driving direction, and the image frames that include the target obstacle are acquired in real time at each moment. As a preferred embodiment, the acquisition direction of the acquisition device is kept parallel to the driving direction of the vehicle, and the acquisition device may be mounted on the vehicle or on infrastructure along the road on which the vehicle travels. An image frame may contain one target obstacle, multiple target obstacles, or none. Here, a target obstacle is a target object that affects the safe driving of the current vehicle, such as an obstacle vehicle, a large stone or a pedestrian on the road. The embodiment of the invention determines the position of the target obstacle from the image frames that contain it, so that the current vehicle can perform corresponding operations and avoid the obstacle.
In some embodiments, before step S102, the following steps are further included:
Step 1.1), identifying the image frame acquired at each moment, and judging whether a target obstacle exists in the image frame.
Step 1.2), if any, a step of acquiring an image frame for the target obstacle at each moment in real time is performed, i.e., step S102.
Step 1.3), if not, the acquisition operation is not executed for the image frame, so that the operation steps are simplified, and the resources are saved.
Step S104, detecting a plurality of alternative landing points of the target obstacle and the track of the target obstacle in each image frame through a bounding box;
Here, directly determining the target obstacle position from an alternative landing point detected by the two-dimensional bounding box may introduce a large error; to ensure the accuracy of the obstacle position, the embodiment of the invention further selects, from the alternative landing points, a target landing point with a small error.
Step S106, selecting a target landing point closest to a fixed horizon from a plurality of alternative landing points of an image frame at the current moment based on the track characteristics of the target obstacle;
The inventor found through research that the error in determining the position of the target obstacle is smallest when it is based on the detected landing point closest to the fixed horizon; therefore, the alternative landing point closest to the fixed horizon is determined as the target landing point.
And step S108, determining the position of the target obstacle at the current moment according to the target landing point serving as a ground reference.
In a preferred embodiment for practical application, a plurality of alternative landing points of the target obstacle in each image frame are detected based on a bounding box; the track of the target obstacle is determined from the landing points of the target obstacle at a plurality of moments; a target landing point closest to the fixed horizon is then selected from the alternative landing points according to the track characteristics; and the position of the target obstacle at the current moment is determined based on that target landing point.
In an alternative embodiment, step S106 further comprises the steps of:
Step 2.1), obtaining track characteristics from the track of the target obstacle, wherein the track characteristics comprise slopes;
Wherein the track can be determined based on a plurality of alternative places of landing of the target obstacle in the image frame at each moment, the characteristics of the track are extracted, and as an alternative embodiment, the slope of the track is calculated and taken as the current track characteristics. It should be noted that the track characteristic may be adjusted according to the practical application, and is not limited to the slope.
Step 2.2), if the slope is positive, taking a first alternative landing point positioned at the left side in the image frame at the current moment as a target landing point; the first alternative landing point to the left in the image frame is closest to the fixed horizon and the error for the obstacle position calculation is minimal.
Step 2.3), if the slope is negative, taking the second alternative landing point, positioned on the right side in the image frame at the current moment, as the target landing point, for the same reason as above; a detailed description is omitted.
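The selection rule in steps 2.1) to 2.3) can be sketched as follows. This is an illustrative sketch, not code from the patent: the function name and the (u, v) point representation are assumptions, and since the patent does not say how a slope of exactly zero is handled, the sketch falls through to the second alternative landing point in that case.

```python
def select_target_landing_point(first_candidate, second_candidate, slope):
    """Choose the target landing point from the two alternative landing
    points of the current image frame, using the sign of the track slope.

    first_candidate  -- (u, v) pixel coordinates of the left (first) alternative landing point
    second_candidate -- (u, v) pixel coordinates of the right (second) alternative landing point
    slope            -- slope of the target obstacle's track
    """
    if slope > 0:
        # Positive slope: the left point is closest to the fixed horizon.
        return first_candidate
    # Negative (or zero) slope: use the right point.
    return second_candidate
```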
As an alternative embodiment, the bounding box detection algorithm determines a plurality of alternative landing points through feature points of the bounding box's lower frame line, such as its left end point, right end point and midpoint.
In some embodiments, the present application selects the feature points of the left and right endpoints as the alternative landing points, and step S104 may further be implemented by the following steps:
Step 3.1), detecting a plurality of alternative landing points of the target obstacle in each image frame through the bounding box;
Illustratively, a box frame contour of the target obstacle in each of the image frames may be detected by a bounding box; and determining a plurality of alternative landing points according to the end points of the lower frame line of the box body frame outline, wherein the left end point of the lower frame line is a first alternative landing point (such as the left lower corner point of the bounding box in fig. 1), and the right end point of the lower frame line is a second alternative landing point (such as the right lower corner point of the bounding box in fig. 1).
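The end-point construction just described can be sketched as follows; the bounding-box tuple layout and the function name are illustrative assumptions, not part of the patent.

```python
def alternative_landing_points(bbox):
    """Derive the two alternative landing points from a 2D bounding box.

    bbox -- (x_min, y_min, x_max, y_max) in image coordinates, with y
            increasing downward, so y = y_max is the lower frame line.

    Returns the left end point (first alternative landing point) and the
    right end point (second alternative landing point) of the lower frame line.
    """
    x_min, _, x_max, y_max = bbox
    first = (x_min, y_max)   # lower-left corner of the bounding box
    second = (x_max, y_max)  # lower-right corner of the bounding box
    return first, second
```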
If the vehicle is in the posture shown in fig. 2, the tire A landing point is a long way from the lower frame line of the bounding box. If the second alternative landing point were used as the target landing point in the subsequent calculation of the obstacle position, the error would be large; therefore the first alternative landing point, which is closer to the tire, is selected as the target landing point for calculating the position of the target obstacle. By confirming and screening the plurality of alternative landing points in this way, the embodiment of the invention further ensures the driving reliability of the vehicle.
Step 3.2) determining a trajectory of the target obstacle from a third alternative landing point of any one of the plurality of alternative landing points of the image frame at the current time and the third alternative landing point of the image frame at a plurality of history times consecutive to the current time.
Wherein the third alternative landing point denotes any one of the plurality of alternative landing points; it may be the first alternative landing point, the second alternative landing point, or another alternative landing point.
In some embodiments, the trajectory of the target obstacle may be determined from a first alternate landing point in an image frame at a current time and a first alternate landing point in image frames at a plurality of historical times consecutive to the current time;
Or alternatively
Determining the track of the target obstacle according to the second alternative landing point in the image frame at the current moment and the second alternative landing points in the image frames at a plurality of historical moments which are continuous with the current moment.
As can be seen from the above steps, when determining the movement track of the target obstacle, the alternative landing point in the image frames at the current and historical moments can be chosen freely, but the same alternative landing point must be chosen in the image frame at every moment; that is, the first alternative landing point is used at both the current and the historical moments, or the second, and so on.
Here, as an alternative embodiment, three consecutive image frames are generally selected to determine the track of the obstacle; for example, the image frames at the current moment, the previous moment and the moment before that are selected to determine the track of the target obstacle.
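As a sketch of how such a track characteristic might be computed, the slope of one consistently chosen alternative landing point over consecutive frames can be estimated with a least-squares line fit. The patent does not prescribe a fitting method or an image-coordinate convention, so both are assumptions here.

```python
def trajectory_slope(points):
    """Estimate the slope of the target obstacle's track from the SAME
    alternative landing point (e.g. always the first / left one) observed
    in consecutive image frames, ordered from oldest to current.

    points -- list of (u, v) pixel coordinates; three points in the
              three-consecutive-frame example above.

    Uses a least-squares fit of v against u; returns 0.0 if the point
    does not move horizontally (vertical track, slope undefined).
    """
    n = len(points)
    mean_u = sum(u for u, _ in points) / n
    mean_v = sum(v for _, v in points) / n
    num = sum((u - mean_u) * (v - mean_v) for u, v in points)
    den = sum((u - mean_u) ** 2 for u, _ in points)
    return num / den if den else 0.0
```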
In some embodiments, step S108 may be implemented by steps comprising:
step 4.1), determining depth information of the target landing point by a monocular camera detection method, wherein the depth information is two-dimensional image coordinates;
Monocular detection can determine the depth information, relative to a fixed ground, of pixels on a certain plane in space; for example, the depth information of all pixels on a plane one metre above the ground.
And 4.2), taking the target landing point as a ground reference, and converting the depth information into target obstacle position coordinates in a world coordinate system based on a camera imaging principle and a geometric relation, wherein the target obstacle position coordinates are space three-dimensional coordinates.
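Steps 4.1) and 4.2) can be illustrated with a textbook pinhole-camera sketch. The patent only states that the conversion uses the camera imaging principle and geometric relations; the flat-ground assumption, the camera-frame convention, the function name and its parameters below are all illustrative assumptions.

```python
import numpy as np

def ground_point_from_pixel(u, v, K, camera_height):
    """Back-project a pixel assumed to lie on the ground plane into 3D
    camera coordinates, using the pinhole model and a flat-ground
    assumption (the landing point serves as the ground reference).

    u, v          -- pixel coordinates of the target landing point
    K             -- 3x3 camera intrinsic matrix
    camera_height -- camera height above the ground, in metres

    Camera frame: x right, y down, z forward; the ground plane is
    y = camera_height.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray of the pixel
    if ray[1] <= 0:
        raise ValueError("pixel at or above the horizon has no ground intersection")
    scale = camera_height / ray[1]  # intersect the ray with y = camera_height
    return ray * scale              # (x, y, z) in camera coordinates
```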
In the existing bounding box detection method, a landing point on the lower frame line is selected arbitrarily for the calculation of the obstacle position. If the road on which the target obstacle vehicle travels is complex, for example rugged or potholed, the current vehicle cannot guarantee that the acquired obstacle image frame is parallel to the acquisition device; that is, the situation in fig. 2 may occur, with the target obstacle inclined relative to the fixed horizon.
As shown in fig. 4, an embodiment of the present invention further provides an apparatus 200 for detecting a position of an obstacle based on a bounding box, the apparatus including:
an acquisition module 201, configured to acquire, in real time, an image frame for a target obstacle at each moment;
a detection module 202, configured to detect a plurality of alternative landing points of the target obstacle and a track of the target obstacle in each image frame through a bounding box;
a selection module 203, configured to select a target landing point closest to a fixed horizon from a plurality of alternative landing points of an image frame at a current moment based on track characteristics of the target obstacle; and
a determining module 204, configured to determine the position of the target obstacle at the current moment with the target landing point as a ground reference.
According to the embodiment of the invention, the landing point and the target obstacle position are determined based on the bounding box detection method, and a more accurate landing point is obtained through the track characteristics of the target obstacle. Even if the environment of the current vehicle is complex and changeable, an accurate target landing point can be identified without recognizing each part of the vehicle, and the target obstacle position is calculated based on that landing point, so that the vehicle can accelerate, decelerate, steer and so on to accurately avoid obstacles in its driving direction, ensuring the driving safety of the user.
In an alternative embodiment, the selecting module is further configured to obtain a trajectory feature from a trajectory of the target obstacle, where the trajectory feature includes a slope; if the slope is positive, taking a first alternative landing point positioned on the left side in the image frame at the current moment as a target landing point; and if the slope is negative, taking a second alternative landing point positioned on the right side in the image frame at the current moment as a target landing point.
In an alternative embodiment, the detection module is further configured to detect a plurality of alternative landing points of the target obstacle in each of the image frames through a bounding box; determining a trajectory of the target obstacle from a third alternative landing point of any one of the plurality of alternative landing points of an image frame at a current time and the third alternative landing point of the image frame at a plurality of historical times consecutive to the current time.
In an optional embodiment, the detection module is further configured to detect a box frame contour of the target obstacle in each of the image frames through a bounding box; and determining a plurality of alternative landing points according to the end points of the lower frame line of the box body frame outline, wherein the left end point of the lower frame line is a first alternative landing point, and the right end point of the lower frame line is a second alternative landing point.
In an alternative embodiment, the detection module is further configured to determine the trajectory of the target obstacle according to a first alternative landing point in an image frame at a current time and a first alternative landing point in image frames at a plurality of historical time points that are continuous with the current time; or determining the track of the target obstacle according to the second alternative landing point in the image frame at the current moment and the second alternative landing points in the image frames at a plurality of historical moments which are continuous with the current moment.
In an alternative embodiment, the determining module is further configured to determine depth information of the target landing point through a monocular camera detection method, where the depth information is two-dimensional image coordinates; and converting the depth information into target obstacle position coordinates in a world coordinate system based on a camera imaging principle and a geometric relation by taking the target landing point as a ground reference.
In an optional embodiment, before the step of acquiring the image frame of each time point for the target obstacle in real time, the acquiring module is further configured to identify the image frame acquired at each time point, and determine whether the target obstacle exists in the image frame; if so, executing the step of acquiring the image frame aiming at the target obstacle at each moment in real time.
Fig. 5 is a schematic hardware architecture of an electronic device 300 according to an embodiment of the present invention. Referring to Fig. 5, the electronic device 300 includes: a machine-readable storage medium 301 and a processor 302, and may further include a non-volatile storage medium 303, a communication interface 304, and a bus 305; the machine-readable storage medium 301, the processor 302, the non-volatile storage medium 303, and the communication interface 304 communicate with each other via the bus 305. The processor 302 may perform the bounding-box-based obstacle position detection method described in the above embodiments by reading and executing the machine-executable instructions for bounding-box-based obstacle position detection stored in the machine-readable storage medium 301.
The machine-readable storage medium referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, or the like. For example, a machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
The non-volatile medium may be a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar non-volatile storage medium, or a combination thereof.
It can be understood that the specific operation method of each functional module in this embodiment may refer to the detailed description of the corresponding steps in the above method embodiment, and the detailed description is not repeated here.
The embodiment of the invention further provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed, the bounding-box-based obstacle position detection method according to any one of the above embodiments is implemented. For the specific implementation, reference may be made to the method embodiments, which are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; may be mechanically or electrically connected; and may be directly connected, indirectly connected through an intermediate medium, or in internal communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate its technical solutions rather than to limit them, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently substituted, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (9)

1. An obstacle position detection method based on bounding boxes is characterized by comprising the following steps:
acquiring, in real time, an image frame for a target obstacle at each moment;
detecting a plurality of alternative landing points of the target obstacle and a track of the target obstacle in each image frame through a bounding box;
acquiring a track characteristic from the track of the target obstacle, wherein the track characteristic comprises a slope;
if the slope is positive, taking a first alternative landing point located on the left side in the image frame at the current moment as a target landing point;
if the slope is negative, taking a second alternative landing point located on the right side in the image frame at the current moment as a target landing point;
and determining the position of the target obstacle at the current moment with the target landing point serving as a ground reference.
2. The method of claim 1, wherein the step of detecting a plurality of alternative landing points of the target obstacle and the track of the target obstacle in each image frame through a bounding box comprises:
detecting a plurality of alternative landing points of the target obstacle in each image frame through the bounding box;
determining the track of the target obstacle according to a third alternative landing point, which is any one of the plurality of alternative landing points, in the image frame at the current moment and the third alternative landing point in the image frames at a plurality of historical moments consecutive with the current moment.
3. The method of claim 2, wherein the step of detecting a plurality of alternative landing points of the target obstacle in each of the image frames through the bounding box comprises:
detecting the box frame contour of the target obstacle in each image frame through the bounding box;
and determining a plurality of alternative landing points according to the endpoints of the lower frame line of the box frame contour, wherein the left endpoint of the lower frame line is a first alternative landing point, and the right endpoint of the lower frame line is a second alternative landing point.
4. The method of claim 3, wherein the step of determining the track of the target obstacle according to a third alternative landing point, which is any one of the plurality of alternative landing points, in the image frame at the current moment and the third alternative landing point in the image frames at a plurality of historical moments consecutive with the current moment comprises:
determining the track of the target obstacle according to the first alternative landing point in the image frame at the current moment and the first alternative landing points in the image frames at a plurality of historical moments consecutive with the current moment;
or
determining the track of the target obstacle according to the second alternative landing point in the image frame at the current moment and the second alternative landing points in the image frames at a plurality of historical moments consecutive with the current moment.
5. The method of claim 1, wherein the step of determining the position of the target obstacle at the current time based on the target landing point as a ground reference comprises:
determining depth information of the target landing point by a monocular camera detection method, wherein the depth information is two-dimensional image coordinates;
and converting the depth information into target obstacle position coordinates in a world coordinate system based on a camera imaging principle and a geometric relation, with the target landing point taken as a ground reference.
6. The method of claim 1, further comprising, prior to the step of acquiring in real time image frames for the target obstacle at each instant of time:
identifying the image frame acquired at each moment, and judging whether the target obstacle exists in the image frame;
if so, executing the step of acquiring, in real time, the image frame for the target obstacle at each moment.
7. An obstacle position detection device based on bounding boxes, the device comprising:
an acquisition module, configured to acquire, in real time, an image frame for the target obstacle at each moment;
a detection module, configured to detect a plurality of alternative landing points of the target obstacle and the track of the target obstacle in each image frame through a bounding box;
a selection module, configured to acquire a track characteristic from the track of the target obstacle, wherein the track characteristic comprises a slope; if the slope is positive, to take a first alternative landing point located on the left side in the image frame at the current moment as a target landing point; and if the slope is negative, to take a second alternative landing point located on the right side in the image frame at the current moment as a target landing point;
and a determining module, configured to determine the position of the target obstacle at the current moment with the target landing point serving as a ground reference.
8. An electronic device comprising a memory, a processor, the memory having stored therein a computer program executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any of the preceding claims 1 to 6.
9. A machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the steps of the method of any one of claims 1 to 6.
CN202110283571.0A 2021-03-16 2021-03-16 Obstacle position detection method and device based on bounding box and electronic equipment Active CN112883909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110283571.0A CN112883909B (en) 2021-03-16 2021-03-16 Obstacle position detection method and device based on bounding box and electronic equipment


Publications (2)

Publication Number Publication Date
CN112883909A CN112883909A (en) 2021-06-01
CN112883909B true CN112883909B (en) 2024-06-14

Family

ID=76042699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110283571.0A Active CN112883909B (en) 2021-03-16 2021-03-16 Obstacle position detection method and device based on bounding box and electronic equipment

Country Status (1)

Country Link
CN (1) CN112883909B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299466B (en) * 2021-12-29 2025-03-18 东软睿驰汽车技术(沈阳)有限公司 Vehicle posture determination method, device and electronic equipment based on monocular camera

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9315192B1 (en) * 2013-09-30 2016-04-19 Google Inc. Methods and systems for pedestrian avoidance using LIDAR
WO2015068249A1 (en) * 2013-11-08 2015-05-14 株式会社日立製作所 Autonomous driving vehicle and autonomous driving system
JP6440411B2 (en) * 2014-08-26 2018-12-19 日立オートモティブシステムズ株式会社 Object detection device
JP6557958B2 (en) * 2014-10-22 2019-08-14 株式会社Soken Obstacle detection device for vehicle
US9694498B2 (en) * 2015-03-30 2017-07-04 X Development Llc Imager for detecting visual light and projected patterns
CN106808482B (en) * 2015-12-02 2019-07-19 中国科学院沈阳自动化研究所 A kind of inspection robot multi-sensor system and inspection method
US10403153B2 (en) * 2016-01-05 2019-09-03 United States Of America As Represented By The Administrator Of Nasa Autonomous emergency flight management system for an unmanned aerial system
US10571926B1 (en) * 2016-08-29 2020-02-25 Trifo, Inc. Autonomous platform guidance systems with auxiliary sensors and obstacle avoidance
WO2018069757A2 (en) * 2016-10-11 2018-04-19 Mobileye Vision Technologies Ltd. Navigating a vehicle based on a detected barrier
CN106873600A (en) * 2017-03-31 2017-06-20 深圳市靖洲科技有限公司 It is a kind of towards the local obstacle-avoiding route planning method without person bicycle
CN109509210B (en) * 2017-09-15 2020-11-24 百度在线网络技术(北京)有限公司 Obstacle tracking method and device
CN109521756B (en) * 2017-09-18 2022-03-08 阿波罗智能技术(北京)有限公司 Obstacle motion information generation method and apparatus for unmanned vehicle
CN108544490B (en) * 2018-01-05 2021-02-23 广东雷洋智能科技股份有限公司 Obstacle avoidance method for unmanned intelligent robot road
CN110378168B (en) * 2018-04-12 2023-05-30 海信集团有限公司 Method, device and terminal for fusing multiple types of barriers
CN110807347B (en) * 2018-08-06 2023-07-25 海信集团有限公司 Obstacle detection method, obstacle detection device and terminal
CN109271944B (en) * 2018-09-27 2021-03-12 百度在线网络技术(北京)有限公司 Obstacle detection method, obstacle detection device, electronic apparatus, vehicle, and storage medium
CN112084810B (en) * 2019-06-12 2024-03-08 杭州海康威视数字技术股份有限公司 Obstacle detection method and device, electronic equipment and storage medium
CN110246183B (en) * 2019-06-24 2022-07-15 百度在线网络技术(北京)有限公司 Wheel grounding point detection method, device and storage medium
CN110428505B (en) * 2019-07-22 2023-08-25 高新兴科技集团股份有限公司 Method for removing video projection interferents in three-dimensional map and computer storage medium
CN110550029B (en) * 2019-08-12 2021-02-09 华为技术有限公司 Obstacle avoidance method and device
CN110765929A (en) * 2019-10-21 2020-02-07 东软睿驰汽车技术(沈阳)有限公司 Vehicle obstacle detection method and device
CN111027381A (en) * 2019-11-06 2020-04-17 杭州飞步科技有限公司 Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN111060923B (en) * 2019-11-26 2022-05-13 武汉乐庭软件技术有限公司 Multi-laser-radar automobile driving obstacle detection method and system
CN111046809B (en) * 2019-12-16 2023-09-12 昆山微电子技术研究院 Obstacle detection method, device, equipment and computer readable storage medium
CN111353453B (en) * 2020-03-06 2023-08-25 北京百度网讯科技有限公司 Obstacle detection method and device for vehicle
CN111337941B (en) * 2020-03-18 2022-03-04 中国科学技术大学 A dynamic obstacle tracking method based on sparse lidar data
CN111488812B (en) * 2020-04-01 2022-02-22 腾讯科技(深圳)有限公司 Obstacle position recognition method and device, computer equipment and storage medium
CN111398924B (en) * 2020-04-29 2023-07-25 上海英恒电子有限公司 Radar installation angle calibration method and system
CN111612760B (en) * 2020-05-20 2023-11-17 阿波罗智联(北京)科技有限公司 Method and device for detecting obstacles
CN111666876B (en) * 2020-06-05 2023-06-09 阿波罗智联(北京)科技有限公司 Method and device for detecting obstacle, electronic equipment and road side equipment
CN111665852B (en) * 2020-06-30 2022-09-06 中国第一汽车股份有限公司 Obstacle avoiding method and device, vehicle and storage medium
CN111950428A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target obstacle identification method, device and vehicle
CN112092809A (en) * 2020-09-15 2020-12-18 北京罗克维尔斯科技有限公司 Auxiliary reversing method, device and system and vehicle
CN112329552B (en) * 2020-10-16 2023-07-14 爱驰汽车(上海)有限公司 Obstacle detection method and device based on automobile
CN112462368B (en) * 2020-11-25 2022-07-12 中国第一汽车股份有限公司 Obstacle detection method and device, vehicle and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"一种基于深度信息的障碍物检测方法";杨磊等;《计算机技术与发展》;20150831;第25卷(第8期);第43-47页 *
"智能车辆障碍物检测技术综述";李洋;《大众科技》;20190630;第21卷(第6期);第65-68页 *


Similar Documents

Publication Publication Date Title
JP6273352B2 (en) Object detection apparatus, object detection method, and mobile robot
Broggi et al. Self-calibration of a stereo vision system for automotive applications
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
CN111829484B (en) Target distance measuring and calculating method based on vision
CN110794406A (en) Multi-source sensor data fusion system and method
JP2001242934A (en) Obstacle detection device, obstacle detection method, and recording medium recording obstacle detection program
CN113432533B (en) Robot positioning method and device, robot and storage medium
CN112991550B (en) Obstacle position detection method and device based on pseudo point cloud and electronic equipment
KR101030317B1 (en) Apparatus and method for tracking obstacles using stereo vision
CN114119729B (en) Obstacle identification method and device
KR20110060315A (en) Magnetic Position Recognition Method of Road Driving Robot
US20220309776A1 (en) Method and system for determining ground level using an artificial neural network
CN112883909B (en) Obstacle position detection method and device based on bounding box and electronic equipment
US20250029401A1 (en) Image processing device
CN111553342A (en) Visual positioning method and device, computer equipment and storage medium
EP3905113A1 (en) Camera height calculation method and image processing apparatus
JP4539388B2 (en) Obstacle detection device
CN113030976A (en) Method for eliminating interference of metal well lid on millimeter wave radar by using laser radar
CN118314179A (en) Error-time-actual detection method, device and equipment on AI binocular stitching camera
KR101784584B1 (en) Apparatus and method for determing 3d object using rotation of laser
CN114299466B (en) Vehicle posture determination method, device and electronic equipment based on monocular camera
JP5903901B2 (en) Vehicle position calculation device
JP5891802B2 (en) Vehicle position calculation device
CN112380963B (en) Depth information determining method and device based on panoramic looking-around system
WO2023068034A1 (en) Image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant