CN112146620B - Target object ranging method and device
- Publication number: CN112146620B (application CN202011333374.7A)
- Authority: CN (China)
- Prior art keywords: target, frame, vanishing point, determining, camera
- Legal status: Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/10—Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument
Abstract
The invention discloses a distance measuring method and device for a target object. The method includes: determining an angle change value of the target camera between a kth frame and a (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by a target sensor; determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; when the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between the target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point. The invention solves the technical problem of low accuracy of distance measurement results in autonomous driving.
Description
Technical Field
The invention relates to the field of computers, in particular to a distance measuring method and device for a target object.
Background
In autonomous driving environment perception, perceiving dynamic obstacles is an important capability. Although lidar can sense dynamic obstacles, its large size and high price make it difficult to apply widely in the autonomous driving field.
Currently, the industry has tried to achieve ranging with a monocular camera, but most monocular ranging work still has to be fused with other expensive sensors such as lidar and millimeter-wave radar. The few purely visual research schemes rest on overly strong assumptions and lack the ability to estimate dynamically. For example, prior art solutions assume that the projection coordinate on the image plane of a ray parallel to the ground plane is fixed; this projection coordinate may be called the vanishing point. However, when the road bumps or the vehicle goes up or down a slope during actual driving, the vanishing point moves. Other algorithms attempt to calculate the vanishing point from parallel lane lines, but the calculation fails when parallel lane lines are not detected or when the lane lines are not parallel. Existing distance measurement methods therefore degrade ranging precision and can even produce wildly inaccurate distance values, so that the autonomous driving system cannot correctly output its ranging result, posing a serious hidden danger to driving safety.
For the problem of low accuracy of distance measurement results in autonomous driving, no effective solution currently exists in the related art.
Disclosure of Invention
The embodiment of the invention provides a distance measuring method and device for a target object, which at least solve the technical problem of low accuracy of distance measuring results in automatic driving.
According to an aspect of the embodiments of the present invention, there is provided a ranging method of a target object, including: determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; determining a position change value of the position of a target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in a (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between a target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point.
According to another aspect of the embodiments of the present invention, there is also provided a ranging apparatus for a target object, including: the first determination module is used for determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; a second determining module, configured to determine, according to the angle change value and the focal length of the target camera, a position change value of a position of a target vanishing point between the kth frame and the (k + 1) th frame; a third determining module, configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera when a position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition; a fourth determining module, configured to determine, according to the (k + 1) th position of the target vanishing point, a distance between the target object in the (k + 1) th frame of image and the target camera.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned distance measuring method for a target object when running.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the above-mentioned distance measuring method for a target object through the computer program.
In the embodiment of the invention, the angle change value of the target camera between the kth frame and the (k + 1) th frame is determined according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor, the target camera and the target sensor are arranged on the target vehicle, and k is a natural number; determining a position change value of the position of a target vanishing point between a kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point. The purpose of distance measurement according to the position of the vanishing point in the image is achieved, the technical effect of improving the accuracy of the distance measurement result is achieved, and the technical problem that the accuracy of the distance measurement result in automatic driving is low is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an alternative method for measuring distance of a target object according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of ranging a target object according to an embodiment of the present invention;
FIG. 3 is a first schematic diagram in accordance with an alternative embodiment of the present invention;
FIG. 4 is a schematic illustration of the effect of camera changes on vanishing points in accordance with an alternative embodiment of the present invention;
FIG. 5 is a schematic illustration of a vanishing point in an image in accordance with an alternative embodiment of the invention;
FIG. 6 is a second schematic diagram in accordance with an alternative embodiment of the present invention;
FIG. 7 is a top view of a target object lateral position estimate according to an alternative embodiment of the present invention;
FIG. 8 is a schematic diagram of an alternative target object ranging apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The following explains the terms referred to in the present application:
Dynamic obstacle ranging: measuring the actual distance of a dynamic object (vehicle, pedestrian, etc.) from the current vehicle, where the distance comprises longitudinal (x) and lateral (y) coordinate position information.
Vanishing point: in linear perspective, the point at which two or more image lines representing parallel lines converge as they extend toward the horizon.
According to an aspect of the embodiments of the present invention, there is provided a method for ranging a target object. Optionally, as an implementation manner, the method for ranging a target object may be applied to, but not limited to, a scene as shown in fig. 1, where the scene includes: a first vehicle 102 and a second vehicle 104.
The first vehicle 102 travels ahead of the second vehicle 104, and the second vehicle 104 may have a target camera and a target sensor mounted on it. The target camera mounted on the second vehicle 104 is used to photograph vehicles traveling ahead of the second vehicle 104, including but not limited to the first vehicle 102. The target sensor is used to detect the angular velocity of the vehicle and includes, but is not limited to, an Inertial Measurement Unit (IMU) sensor.
The above is merely an example, and this is not limited in this embodiment.
Specifically, the distance measurement of the target object is realized through the following steps:
In step S102, determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; in step S104, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; in step S106, when the result of determining the position of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image captured by the target camera according to the position change value; in step S108, determining the distance between the target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point.
Alternatively, the main body of the above steps may be a server, or may be a processor installed on the second vehicle 104, but is not limited thereto.
Optionally, as an implementation manner, as shown in fig. 2, the distance measuring method for the target object includes:
step S202, determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number;
step S204, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
step S206, under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value;
step S208, determining the distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point.
Through the steps, determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; determining a position change value of the position of a target vanishing point between a kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point. The purpose of distance measurement according to the position of the vanishing point in the image is achieved, the technical effect of improving the accuracy of the distance measurement result is achieved, and the technical problem that the accuracy of the distance measurement result in automatic driving is low is solved.
As an optional embodiment, the target sensor may be an IMU (Inertial Measurement Unit) sensor, and the target camera may be a monocular camera. This embodiment provides a real-time dynamic obstacle ranging algorithm based on the fusion of a monocular camera and an IMU. The target object may be an obstacle; in general, an object located in front of the target vehicle may be considered an obstacle, including but not limited to a vehicle, a pedestrian, and the like. The obstacle may be dynamic, such as a moving vehicle or a walking pedestrian.
As a preferred embodiment, the following description will be given taking an example in which the target sensor is an inertial measurement unit, the target camera is a monocular camera, and the target object is a preceding vehicle that travels ahead of the target vehicle. Fig. 3 is a first schematic diagram according to an alternative embodiment of the present invention, which includes a front vehicle 302 and a target vehicle 304, wherein the target vehicle 304 is mounted with a monocular camera 306 and an inertial measurement unit 308, and the monocular camera and the inertial measurement unit can be mounted at any position of the target vehicle, and the arrangement positions of the monocular camera and the inertial measurement unit in fig. 3 are only for illustrating the embodiment and are not limited thereto. In the embodiment, the monocular camera is oriented in front of the target vehicle, and the front vehicle can be photographed in real time.
As an alternative embodiment, the kth frame and the (k + 1) th frame may be times at which the target sensor and the target camera acquire data simultaneously: the time at which the target sensor acquires the kth frame angular velocity is also the time at which the target camera captures the kth frame image, and likewise the time at which the target sensor acquires the (k + 1) th frame angular velocity is also the time at which the target camera captures the (k + 1) th frame image. In this embodiment, the grounding point of the dynamic target and the lane lines in each frame of image, including the kth and (k + 1) th frame images, can be detected with a deep learning detection algorithm, and the vanishing point on each frame of image can be calculated from the parallel lane lines. The kth frame angular velocity and the (k + 1) th frame angular velocity output by the IMU sensor are synchronized with the images; the change of the pitch angle between the kth frame image and the (k + 1) th frame image in the camera coordinate system is calculated using the gyroscope of the IMU sensor, converted into a position change value of the vanishing point in the image coordinate system, and the lane-line vanishing point and the IMU-calculated vanishing point are fused through filtering. Finally, the position of the target in the (k + 1) th frame image is calculated under the plane assumption from the fused vanishing point, the grounding point of the target, and the extrinsic parameters of the camera.
Optionally, the determining, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera includes: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; under the condition that a lane line meeting a second preset condition is detected in the (k + 1) th frame image, determining a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image according to the detected lane line; and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position.
As an alternative implementation, FIG. 4 is a schematic diagram of the effect of camera changes on vanishing points according to an alternative embodiment of the invention, in which $v_k$ is the k-th position of the target vanishing point in the k-th frame image taken by the target camera, $v_{k+1}$ is the estimated position of the target vanishing point in the (k + 1) th frame image taken by the target camera, $\Delta v$ is the position change value, $\Delta\theta$ is the angle change value, and $f$ is the camera focal length. In the present embodiment, $\Delta v = f \cdot \tan(\Delta\theta)$.
As an optional implementation manner, the (k + 1) th frame image captured by the target camera contains lane lines, and if it contains at least two lane lines, the location of the vanishing point in the (k + 1) th frame image can be determined from them. In this embodiment, the vanishing point position determined by the lane lines in the (k + 1) th frame image is the visual observation value. Since the preceding vehicle is dynamic while the visual observation is static, the accuracy and robustness of the resulting vanishing point position in the (k + 1) th frame image may not be high. In this embodiment, the (k + 1) th observation position and the (k + 1) th estimated position can be fused, adding motion information to the vanishing point estimate; combined with the visual observation information, the estimate of the target vanishing point becomes more accurate and more robust.
Optionally, the determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position includes: filtering the (k + 1) th estimated position and the (k + 1) th observation position to obtain a (k + 1) th position; or carrying out weighted average processing on the (k + 1) th estimated position and the (k + 1) th observation position to obtain the (k + 1) th position.
As an optional implementation manner, the filtering process may be Kalman filtering, through which the (k + 1) th estimated position and the (k + 1) th observation position are fused. In this embodiment, the visually observed position of the target vanishing point is fused by Kalman filtering with the motion-estimated position calculated from the IMU sensor; combining the motion information with the visual observation information improves the accuracy of the target vanishing point position.
As an alternative embodiment, the fusion of the (k + 1) th estimated position and the (k + 1) th observation position may also be implemented by weighted averaging. The specific weights may be chosen according to the actual situation: if the motion information is more reliable, the weight of the (k + 1) th estimated position may be set larger, e.g., 0.6 or 0.7, with the weight of the (k + 1) th observation position correspondingly smaller, e.g., 0.4 or 0.3. Conversely, if the visual observation is more reliable, the weight of the (k + 1) th observation position may be set larger and the weight of the (k + 1) th estimated position smaller. Both weights may also be set to 0.5, in which case the two positions are simply averaged. In this embodiment, the weights of the (k + 1) th estimated position and the (k + 1) th observation position can be set according to the actual situation through weighted averaging, and fusing the motion information with the visual observation information according to these weights improves the accuracy of the target vanishing point position.
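To make the fusion step concrete, the following is a minimal sketch — an illustration under assumed variable names, not the patent's reference implementation — of the two fusion options described above: a weighted average and a scalar Kalman update of the vanishing-point ordinate.

```python
def fuse_weighted(v_pred: float, v_obs: float, w_pred: float = 0.5) -> float:
    """Weighted average of the (k+1)-th estimated and observed positions."""
    return w_pred * v_pred + (1.0 - w_pred) * v_obs


def fuse_kalman_1d(v_pred: float, p_pred: float, v_obs: float, r_obs: float):
    """Scalar Kalman update: v_pred, p_pred are the IMU-propagated position
    and its variance; v_obs, r_obs are the lane-line observation and its
    noise variance. Returns the fused position and its updated variance."""
    k_gain = p_pred / (p_pred + r_obs)
    v_fused = v_pred + k_gain * (v_obs - v_pred)
    p_fused = (1.0 - k_gain) * p_pred
    return v_fused, p_fused
```

With w_pred = 0.5 the weighted variant reduces to the plain mean mentioned above, while the Kalman variant adapts the effective weight every frame through the variances.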
Optionally, the determining, according to the detected lane line, a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image includes: and under the condition that the detected lane lines comprise at least 2 parallel lane lines, performing fitting operation on the at least 2 parallel lane lines to obtain the (k + 1) th observation position.
As an alternative embodiment, the lane lines on the road are parallel in the real scene, but because of the imaging principle, the lane lines captured in the camera image intersect at a point, which is the vanishing point. In this embodiment, taking the (k + 1) th frame image containing two lane lines as an example, FIG. 5 is a schematic diagram of the vanishing point in the image according to an alternative embodiment of the invention; the figure includes a lane line 501 and a lane line 502, and the intersection of the extension lines of lane line 501 and lane line 502 is the (k + 1) th position of the target vanishing point in the (k + 1) th frame image. Lane lines 501 and 502 are only illustrative, and the number of lane lines may be determined according to the actual situation. In this embodiment, with at least 2 lane lines, the vanishing point is calculated by a least squares method, see the following formula:

$$v = \mathrm{LSF}(l_1, l_2, \ldots, l_n)$$

where $v$ is the vanishing point on the image, $l_i$ is a lane line on the image, and $\mathrm{LSF}$ is a least squares fit function.
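The patent does not spell out the least squares fit function; the sketch below shows one common realization (an assumption), representing each detected lane line as an image line a·u + b·v = c and solving for the point that minimizes the summed squared residuals.

```python
import numpy as np

def vanishing_point_lsq(lines):
    """lines: iterable of (a, b, c) with a*u + b*v = c for each lane line.
    Returns the least-squares intersection (u, v), i.e., the vanishing point."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    c = np.array([c for _, _, c in lines], dtype=float)
    vp, *_ = np.linalg.lstsq(A, c, rcond=None)
    return vp

# Example: two image lines that intersect at pixel (640, 360)
print(vanishing_point_lsq([(1.0, -1.0, 280.0), (1.0, 1.0, 1000.0)]))  # -> [640. 360.]
```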
Optionally, the determining, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera includes: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; and under the condition that the lane line meeting the second preset condition is not detected in the (k + 1) th frame image, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image as being equal to the (k + 1) th estimated position.
As an optional implementation manner, in many cases the lane line is worn and cannot be detected, so the (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image cannot be determined from the lane lines; alternatively, the lane lines in the actual scene may not be parallel, so the lane lines in the (k + 1) th frame image do not intersect at one point, and again the (k + 1) th observation position cannot be determined from them. In these cases, the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image can be estimated from the previous frame, i.e., the k-th frame image, and used as the position of the target vanishing point in the (k + 1) th frame image.
Optionally, determining a k +1 estimated position of the target vanishing point in a k +1 frame image captured by the target camera according to the position change value and a k position of the target vanishing point in the k frame image captured by the target camera, includes: determining the k +1 th predicted position as equal to the sum of the k-th position and the position change value.
As an alternative embodiment, as shown in FIG. 4, $v_k$ is the k-th position of the target vanishing point in the k-th frame image taken by the target camera, $v_{k+1}$ is the estimated position of the target vanishing point in the (k + 1) th frame image taken by the target camera, and $\Delta v$ is the position change value, so that $v_{k+1} = v_k + \Delta v$.
optionally, the method further comprises: under the condition that at least 2 parallel lane lines are detected in the (k + 1) th frame image, determining that the lane lines meeting the second preset condition are detected in the (k + 1) th frame image; and under the condition that at least 2 parallel lane lines are not detected in the (k + 1) th frame image, determining that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
As an alternative embodiment, the vanishing point is the intersection of the extensions of at least two non-parallel straight lines. Due to the imaging principle, two parallel straight lines in an actual scene are not parallel in an image, and the intersection point of the extension lines is the vanishing point. That is, if the position of the target vanishing point in the (k + 1) th frame image is determined by the lane lines, it needs to be proved that the lane lines are parallel in the actual scene. However, if the lane lines in the actual scene are parallel, the lane lines imaged in the (k + 1) th frame image are not parallel, as shown in fig. 5. It can be proved that the lane lines are parallel in the actual scene by converting the imaged lane lines in the (k + 1) th frame image into a physical coordinate system.
As a preferred embodiment, a deep learning neural network may be used to detect the lane lines on the (k + 1) th frame image. Suppose that each lane line $l_i$ consists of n pixel coordinates. The mapping relation H of the camera relative to the road surface can be calculated through extrinsic calibration of the camera, and if the road surface is flat, the physical position of the lane line in the current camera coordinate system can be calculated by the formula:

$$P_w = H \cdot K^{-1} \cdot p$$

where $P_w$ is the world physical coordinate point in the camera coordinate system corresponding to a pixel on the lane line, K is the camera intrinsic matrix, H is the mapping relation from the camera to the road surface, and $p$ is the homogeneous coordinate of the lane line pixel. After the world physical coordinates of each pixel of the lane line are calculated, each lane line is fitted with a straight line and the fitting error is calculated; if the fitting error is smaller than a threshold, the lane line is considered a straight line. The threshold may be determined according to the actual situation, for example 0.01 or 0.1. In this embodiment, whether at least 2 lane lines detected in the (k + 1) th frame image are parallel in the actual scene is checked through the above coordinate system conversion. If they are parallel, it is determined that the at least 2 parallel lane lines of the (k + 1) th frame image are lane lines meeting the second preset condition; otherwise, it is determined that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
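As an illustration of this parallelism check — a sketch under the stated plane assumption, with the back-projection formula taken as reconstructed above and all names assumed — lane pixels are mapped to road-plane coordinates, each lane is fitted with a line, and the slopes are compared:

```python
import numpy as np

def ground_line_fit(pixels_h, K_inv, H):
    """pixels_h: (n, 3) homogeneous pixel coordinates of one lane line.
    Projects them to road-plane points via H @ K_inv and fits X = m*Y + b.
    Returns (m, b) and the RMS fitting error."""
    pts = (H @ K_inv @ pixels_h.T).T
    pts = pts[:, :2] / pts[:, 2:3]                  # dehomogenize to (X, Y)
    A = np.stack([pts[:, 1], np.ones(len(pts))], axis=1)
    coef, res, *_ = np.linalg.lstsq(A, pts[:, 0], rcond=None)
    rms = float(np.sqrt(res[0] / len(pts))) if res.size else 0.0
    return coef, rms

def lanes_parallel(lane_a, lane_b, K_inv, H, err_thresh=0.1, slope_tol=0.05):
    """Both lanes must be straight (small fit error) and have similar slopes."""
    (ma, _), ea = ground_line_fit(lane_a, K_inv, H)
    (mb, _), eb = ground_line_fit(lane_b, K_inv, H)
    return ea < err_thresh and eb < err_thresh and abs(ma - mb) < slope_tol
```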
Optionally, the method further comprises: and under the condition that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image.
As an alternative embodiment, if there is no visual observation of vanishing points for a long time, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be determined according to the projection position of the object detected in the (k + 1) th frame image.
Optionally, the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image includes: and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image, the height of the target camera to the ground and the focal length of the target camera.
As an alternative implementation, FIG. 6 is a second schematic diagram of an alternative embodiment of the present invention, wherein $h$ is the height of the target camera above the ground, $f$ is the focal length of the target camera, and $\Delta y$ is the difference on the image between the pixel at which a ray parallel to the ground projects (the vanishing point) and the grounding point of the vehicle ahead: $\Delta y = y_b - y_v$, where $y_b$ is the projection position of the object detected in the (k + 1) th frame image and $y_v$ is the (k + 1) th position of the target vanishing point in the (k + 1) th frame image. The (k + 1) th position of the target vanishing point in the (k + 1) th frame image can therefore be obtained by combining the above parameters in FIG. 6.
Optionally, the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image, the height of the target camera to the ground and the focal length of the target camera includes: respectively determining the k +1 th estimated positions of the target vanishing point in the k +1 th frame image according to the projection positions of M objects in the N objects, the height from the target camera to the ground and the focal length of the target camera under the condition that the detected objects in the k +1 th frame image comprise N objects, and obtaining M k +1 th estimated positions in total, wherein N is 1 or a natural number greater than 1, and M is less than or equal to N; and determining the (k + 1) th position according to the (k + 1) th estimated positions.
As an alternative embodiment, a plurality of target objects may be present in front of the target vehicle, for example a plurality of vehicles, a plurality of pedestrians, or both vehicles and pedestrians. If no vanishing point has been observed for a long time, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be back-calculated by selecting, from the plurality of target objects in the (k + 1) th frame image, target distances with higher confidence. Specifically, a Random Sample Consensus (RANSAC) algorithm may be used to cyclically remove objects with low confidence, and the (k + 1) th position of the target vanishing point in the (k + 1) th frame image may be obtained from the position information of high-confidence objects. In this embodiment, assuming the objects detected in the (k + 1) th frame image include N objects, the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image may be determined from the parameter information of M of the N objects, using the relation in the above embodiment between the projection position of an object, the height of the target camera above the ground, and the focal length of the target camera. The M objects are the objects with higher confidence among the N objects. Finally, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image is determined by averaging the M (k + 1) th estimated positions or by Kalman filtering.
Optionally, the determining, according to the projection positions of M of the N objects, the heights of the target cameras to the ground, and the focal lengths of the target cameras, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image respectively to obtain M (k + 1) th estimated positions includes: acquiring first M objects with the confidence degrees ordered from high to low in the N objects; and respectively determining the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image according to the projection positions of the first M objects, the height from the target camera to the ground and the focal length of the target camera, and obtaining the (M) th estimated positions of the (k + 1) th frame image.
As an alternative embodiment, a Random sample consensus algorithm (RANSAC algorithm for short) may be used to circularly remove objects with low confidence. Specifically, the N objects may be sorted in the order from high confidence to low confidence, and the top M objects with the top confidence may be selected from the N objects. And then determining the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object, the height from the target camera to the ground and the focal length of the target camera in the embodiment, so as to obtain M (k + 1) th estimated positions. And then the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be obtained by means of averaging the (k + 1) th estimated positions or Kalman filtering.
Optionally, the determining, according to the projection positions of M objects in the N objects, the height of the target camera to the ground, and the focal length of the target camera, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame of image respectively includes: under the condition that the position change value of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, for each of the M objects, determining the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image by the following formula:

$$y_v^i = y_b^i - \frac{f \cdot h}{\hat{d}_i}$$

where $y_v^i$ is the (k + 1) th estimated longitudinal coordinate corresponding to the i-th object among the M objects, $y_b^i$ is the longitudinal coordinate of the projection of the grounding point of the i-th object on the longitudinal axis, $h$ is the height of the target camera above the ground, $f$ is the focal length of the target camera, and $\hat{d}_i$ is the estimated distance between the target camera and the i-th object.
As an optional implementation manner, the change of the position of the vanishing point in the image due to factors such as road bump is generally longitudinal. In this embodiment, the position change value of the target vanishing point between the kth frame and the (k + 1) th frame may be a change of the longitudinal coordinate, and the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image may be determined by the following formula:

$$y_v^i = y_b^i - \frac{f \cdot h}{\hat{d}_i}$$

with the symbols defined as above.
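A sketch of this recovery path follows (the variable names and the simple top-M selection are assumptions; the patent mentions a RANSAC loop, for which this is a simplified stand-in):

```python
import numpy as np

def vanishing_point_from_objects(objects, h_cam, f, m=3):
    """objects: list of (confidence, y_b, d_est) per detected target, where
    y_b is the grounding-point ordinate and d_est the estimated distance.
    Keeps the m most confident objects, back-computes y_v = y_b - f*h/d for
    each per the formula above, and averages the estimates."""
    top = sorted(objects, key=lambda o: o[0], reverse=True)[:m]
    estimates = [y_b - f * h_cam / d for _, y_b, d in top]
    return float(np.mean(estimates))
```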
Optionally, the determining the k +1 th position according to the M k +1 th predicted positions includes: determining the k +1 th position as being equal to an average of the M k +1 th predicted positions.
As an alternative implementation, the average value of the M k +1 th predicted positions may be used as the k +1 th position of the target vanishing point in the k +1 th frame image.
Optionally, the method further comprises: determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition under the condition that at least 2 parallel lane lines are not detected in the continuous P1 frame images shot by the target camera before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P1 is a preset natural number greater than 1; or under the condition that at least 2 parallel lane lines are not detected in the P2 frame images shot by the target camera within the preset time length before the (k + 1) th frame, determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P2 is a preset natural number greater than 1.
As an alternative embodiment, the first preset condition is described below, and the first preset condition may include two cases:
in case 1, at least 2 parallel lane lines are not detected in each of the consecutive P1 frame images captured before the k +1 th frame image captured by the subject camera, and it is determined that the first preset bar is not satisfied. Conversely, if there is a certain frame image in the continuous P1 frame images or at least 2 parallel lane lines are detected in a certain number of frame images, it is determined that the first preset condition is satisfied.
In case 2, the target camera may capture a P3 frame image within a preset time period before the k +1 th frame image captured by the target camera, wherein P3 may be greater than or equal to P2. Under the condition that P3 is larger than P2, if at least 2 parallel lane lines are not detected in the P2 frame images, determining that the first preset condition is not met, wherein the P2 frame images can be continuous images or interval images; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, wherein P2 is a preset natural number greater than 1.
In this embodiment, when the first preset condition is satisfied, the data of the previous k-th frame may be used to obtain the k +1 th estimated position of the target vanishing point in the k +1 th frame image captured by the target camera by using the IMU as the motion prediction of the vanishing point, and the k +1 th observed position of the target vanishing point in the k +1 th frame image obtained by the visual observation may be fused to obtain the k +1 th position of the target vanishing point in the k +1 th frame image. Under the condition that the first preset condition is not met, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be determined according to the projection position of the object detected in the (k + 1) th frame image.
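The bookkeeping for case 1 can be as simple as a counter of consecutive misses; the sketch below (an assumed structure, not from the patent text) decides which path to take for the (k + 1) th frame:

```python
class VanishingPointGate:
    """Tracks consecutive frames without >= 2 parallel lane lines (case 1)."""

    def __init__(self, p1: int):
        self.p1 = p1        # preset threshold P1 (frames)
        self.misses = 0     # consecutive frames without parallel lane lines

    def first_condition_met(self, parallel_lanes_found: bool) -> bool:
        """True: propagate the vanishing point with the IMU (and fuse with a
        visual observation if one exists). False: fall back to back-computing
        the vanishing point from detected objects."""
        self.misses = 0 if parallel_lanes_found else self.misses + 1
        return self.misses < self.p1
```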
Optionally, the determining an angle change value of the target camera between the kth frame and the (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor includes: when the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, determining the angle change value of the target camera between the kth frame and the (k + 1) th frame by the following formula:

$$\Delta\theta = \int_{t_k}^{t_{k+1}} \omega(t)\,dt \approx \frac{(\omega_k + \omega_{k+1})(t_{k+1} - t_k)}{2}$$

where $\omega_k$ is the kth frame angular velocity output by the target sensor and $\omega_{k+1}$ is the (k + 1) th frame angular velocity output by the target sensor.
As an alternative embodiment, the IMU may consist of a 3-axis gyroscope and a 3-axis accelerometer, where the gyroscope outputs angular velocity about 3 axes and the accelerometer outputs acceleration along 3 axes. The time interval between two adjacent image frames may be 20 ms to 50 ms. Typically, the IMU runs at a frequency between 100 Hz and 200 Hz, much higher than the image frequency. Over such a short time, the angular velocity output by the gyroscope can therefore be integrated to accurately calculate the angle change; specifically, for the 3 axes respectively:

$$\theta_x = \int_{t_k}^{t_{k+1}} \omega_x\,dt,\qquad \theta_y = \int_{t_k}^{t_{k+1}} \omega_y\,dt,\qquad \theta_z = \int_{t_k}^{t_{k+1}} \omega_z\,dt$$

where $t_k$ and $t_{k+1}$ are the times of the kth and (k + 1) th frame images respectively, $\omega$ is the angular velocity value output by the gyroscope, and $\theta$ is the angle integrated between the kth and (k + 1) th frame images. After the angle change of the camera between the two frames is calculated, the change of the vanishing point can be calculated through an approximate transformation.
Optionally, the determining, according to the angle change value and the focal length of the target camera, a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame includes: determining the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame by the following formula:

$$\Delta v = f \cdot \tan(\Delta\theta)$$

where $f$ is the focal length of the target camera and $\Delta\theta$ is the angle change value, as shown in FIG. 4 of this embodiment.
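Putting the two formulas together, here is a sketch (assuming pitch-rate samples in rad/s with timestamps covering [t_k, t_{k+1}]) of turning gyroscope output into a vanishing-point shift in pixels:

```python
import numpy as np

def vanishing_point_shift(omega_pitch, timestamps, f):
    """Integrates the gyroscope pitch rate over [t_k, t_{k+1}] (trapezoidal
    rule, matching the integral above) and converts the resulting angle
    change to a vertical pixel shift via dv = f * tan(dtheta)."""
    d_theta = np.trapz(omega_pitch, timestamps)   # angle change between frames
    return f * np.tan(d_theta)                    # position change in pixels
```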
Optionally, the determining a distance between a target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point includes: when the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, determining the distance d between the target object in the (k + 1) th frame image and the target camera by the following formula:

$$d = \frac{f \cdot h}{y_b - y_v}$$

where $y_b$ is the longitudinal coordinate of the projection of the grounding point of the target object on the longitudinal axis, $y_v$ is the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, $h$ is the height of the target camera above the ground, and $f$ is the focal length of the target camera.

As an alternative, referring to FIG. 6, the distance from the camera lens to the vehicle ahead follows from the parameters shown there as $d = \frac{f \cdot h}{\Delta y}$, where in this embodiment $\Delta y = y_b - y_v$, with $y_b$ the longitudinal coordinate of the grounding point of the vehicle ahead projected on the longitudinal axis, $y_v$ the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, $h$ the height of the target camera above the ground, and $f$ the focal length of the target camera.
Optionally, the method further comprises: determining a lateral distance x of the target object from the target camera in the (k + 1) th frame image by:

$$x = \frac{d \cdot (u_b - u_v)}{f}$$

where $u_v$ is the horizontal coordinate of the target vanishing point in the (k + 1) th frame image, $u_b$ is the lateral coordinate of the projection of the grounding point of the target object on the lateral axis, and d is the distance between the target object and the target camera in the (k + 1) th frame image.

As an alternative implementation, the lateral position information of the target object can also be estimated from the visual geometry. FIG. 7 is a top view of the target object lateral position estimate according to an alternative embodiment of the present invention, where x is the lateral distance of the target object in the camera coordinate system, $u_b$ is the lateral coordinate of the target object's grounding point projected on the image, and $u_v$ is the lateral coordinate of the target vanishing point. From the visual geometric relationship, $x = \frac{d \cdot (u_b - u_v)}{f}$, where d is the distance between the target object in the (k + 1) th frame image and the target camera, and $f$ is the focal length of the target camera.
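Combining the longitudinal and lateral formulas, the following sketch (names assumed) recovers a target's ground-plane position from its grounding point and the fused vanishing point:

```python
def target_position(u_b, y_b, u_v, y_v, h_cam, f):
    """Grounding point (u_b, y_b) and vanishing point (u_v, y_v) in pixels,
    camera height h_cam and focal length f. Returns (d, x) per
    d = f*h/(y_b - y_v) and x = d*(u_b - u_v)/f; requires y_b > y_v,
    i.e., the grounding point lies below the vanishing point on the image."""
    d = f * h_cam / (y_b - y_v)   # longitudinal distance
    x = d * (u_b - u_v) / f       # lateral offset in the camera frame
    return d, x
```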
Optionally, the method further comprises: determining the estimated position, the estimated speed and the estimated acceleration of the target object in the (k + 1) th frame of image according to the following formulas:

$$\hat{p}_{k+1} = p_k + v_k T + \tfrac{1}{2} a_k T^2,\qquad \hat{v}_{k+1} = v_k + a_k T,\qquad \hat{a}_{k+1} = a_k$$

where T is the time interval between the kth frame and the (k + 1) th frame, $\hat{p}_{k+1}$, $\hat{v}_{k+1}$ and $\hat{a}_{k+1}$ are respectively the estimated position, estimated speed and estimated acceleration of the target object in the (k + 1) th frame image, and $p_k$, $v_k$ and $a_k$ are respectively the position, velocity and acceleration of the target object in the k-th frame image.
As an optional implementation manner, in practical application, worn lane lines, lane lines without a parallel relationship, or a traveling road that does not satisfy the plane assumption can all introduce large errors into the position estimate. To reduce the fluctuation caused by such errors, this embodiment adopts a Kalman filtering method with a constant acceleration model whose prediction step is the formula above: $\hat{p}_{k+1}$, $\hat{v}_{k+1}$ and $\hat{a}_{k+1}$ are respectively the estimated position, speed and acceleration of the target object in the (k + 1) th frame image, and $p_k$, $v_k$ and $a_k$ are respectively the position, velocity and acceleration of the target object in the k-th frame image.
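Below is a sketch of the prediction step of this constant-acceleration Kalman filter (the process-noise model is an assumption; the measurement update would ingest the position computed from the visual geometry):

```python
import numpy as np

def ca_predict(state, P, T, q=1e-2):
    """state = [p, v, a] for one coordinate; P is its 3x3 covariance; T is
    the frame interval. Returns the predicted state and covariance."""
    F = np.array([[1.0, T, 0.5 * T * T],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])   # constant-acceleration transition
    Q = q * np.eye(3)                  # simplified process noise (assumed)
    return F @ state, F @ P @ F.T + Q
```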
Optionally, a position change value of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate.
According to the present application, the vanishing point can be estimated dynamically in real time, yielding obstacle ranging information of higher precision than static vanishing point estimation. Even if the lane lines cannot be detected, are not parallel, or are detected incorrectly, the vanishing point can be back-calculated from the target distances with the highest confidence. In addition, the present application provides vanishing point observation by combining monocular vision, estimates the position of the vanishing point on the image with high precision through Kalman filtering, calculates high-precision dynamic obstacle position information through the visual geometric relationship, and then calculates the position and speed information of the obstacle through Kalman filtering. The vanishing point estimation thus becomes more accurate and more robust.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided a target object ranging apparatus for implementing the above target object ranging method. As shown in fig. 8, the apparatus includes: a first determining module 82, configured to determine an angle change value of a target camera between a k frame and a k +1 frame according to a k frame angular velocity and a k +1 frame angular velocity output by a target sensor, where the target camera and the target sensor are disposed on a target vehicle, and k is a natural number; a second determining module 84, configured to determine, according to the angle change value and the focal length of the target camera, a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame; a third determining module 86, configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera when a position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition; a fourth determining module 88, configured to determine, according to the (k + 1) th position of the target vanishing point, a distance between the target object in the (k + 1) th frame of image and the target camera.
Optionally, the above apparatus is further configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera by: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; under the condition that a lane line meeting a second preset condition is detected in the (k + 1) th frame image, determining a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image according to the detected lane line; and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position.
Optionally, the above apparatus is further configured to determine, according to the (k + 1) th estimated position and the (k + 1) th observation position, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image by: filtering the (k + 1) th estimated position and the (k + 1) th observation position to obtain a (k + 1) th position; or carrying out weighted average processing on the (k + 1) th estimated position and the (k + 1) th observation position to obtain the (k + 1) th position.
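Both fusion options reduce to one-line scalar updates. A sketch follows, with the observation weight and the variances as assumed tuning parameters rather than values from the patent:

```python
def fuse_weighted(vy_pred: float, vy_obs: float, w_obs: float = 0.5) -> float:
    """Weighted average of the (k+1)th estimated and observed vanishing-point rows."""
    return (1.0 - w_obs) * vy_pred + w_obs * vy_obs

def fuse_kalman_1d(vy_pred: float, var_pred: float,
                   vy_obs: float, var_obs: float):
    """Scalar Kalman-style update: fuses prediction and observation according
    to their variances; returns the fused row and its variance."""
    gain = var_pred / (var_pred + var_obs)
    vy = vy_pred + gain * (vy_obs - vy_pred)
    return vy, (1.0 - gain) * var_pred
```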
Optionally, the apparatus is further configured to determine, according to the detected lane line, a k +1 th observation position of the target vanishing point in the k +1 th frame image by: and under the condition that the detected lane lines comprise at least 2 parallel lane lines, performing fitting operation on the at least 2 parallel lane lines to obtain the (k + 1) th observation position.
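One plausible realization of the fitting operation, assuming each detected lane line is available as an image-plane line a·u + b·v = c: lane lines that are parallel on the ground converge in the image, so a least-squares intersection of the detected lines gives the observed vanishing point.

```python
import numpy as np

def fit_vanishing_point(lines):
    """Least-squares intersection of >= 2 image-plane lane lines, each given
    as a tuple (a, b, c) with a*u + b*v = c in pixel coordinates. Returns the
    (k+1)th observation position (u, v) of the target vanishing point."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    c = np.array([c for _, _, c in lines], dtype=float)
    solution, *_ = np.linalg.lstsq(A, c, rcond=None)
    return float(solution[0]), float(solution[1])
```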
Optionally, the above apparatus is further configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera by: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; and under the condition that the lane line meeting the second preset condition is not detected in the (k + 1) th frame image, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image as being equal to the (k + 1) th estimated position.
Optionally, the apparatus is further configured to determine, according to the position change value and the kth position of the target vanishing point in the kth frame image captured by the target camera, the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image captured by the target camera, by: determining the (k + 1) th estimated position as equal to the sum of the kth position and the position change value.
Optionally, the apparatus is further configured to determine that a lane line meeting the second preset condition is detected in the (k + 1) th frame image when at least 2 parallel lane lines are detected in the (k + 1) th frame image; and under the condition that at least 2 parallel lane lines are not detected in the (k + 1) th frame image, determining that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
Optionally, the above apparatus is further configured to, when a position determination result of the target vanishing point before the (k + 1) th frame does not satisfy the first preset condition, determine the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to a projection position of an object detected in the (k + 1) th frame image.
Optionally, the above apparatus is further configured to determine, according to the projection position of the object detected in the (k + 1) th frame image, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image by: and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image, the height of the target camera to the ground and the focal length of the target camera.
Optionally, the above apparatus is further configured to determine the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image, the height of the target camera to the ground, and the focal length of the target camera, by: respectively determining the k +1 th estimated positions of the target vanishing point in the k +1 th frame image according to the projection positions of M objects in the N objects, the height from the target camera to the ground and the focal length of the target camera under the condition that the detected objects in the k +1 th frame image comprise N objects, and obtaining M k +1 th estimated positions in total, wherein N is 1 or a natural number greater than 1, and M is less than or equal to N; and determining the (k + 1) th position according to the (k + 1) th estimated positions.
Optionally, the above apparatus is further configured to determine, according to the projection positions of M objects in the N objects, the height from the target camera to the ground, and the focal length of the target camera, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image respectively, and obtain M (k + 1) th estimated positions in total, by: acquiring the first M objects of the N objects with the confidence degrees ordered from high to low; and respectively determining the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image according to the projection positions of the first M objects, the height from the target camera to the ground and the focal length of the target camera, obtaining M (k + 1) th estimated positions in total.
Optionally, the above apparatus is further configured to determine, according to the projection positions of M of the N objects, the height of the target camera to the ground, and the focal length of the target camera, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame of image respectively by: under the condition that the position change value of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, for each of the M objects, determining the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image by the following formula:

vy_i = y_i − f · H / d_i

wherein vy_i is the (k + 1) th estimated longitudinal coordinate corresponding to the ith object in the M objects, y_i is the longitudinal coordinate of the projection of the grounding point of the ith one of the M objects on the longitudinal axis, H is the height of the target camera to the ground, f is the focal length of the target camera, and d_i is the estimated distance between the target camera and the ith object.
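Assuming per-object distance estimates are available (e.g., carried over from tracking in earlier frames), the back-inference over the M most confident objects might look like the sketch below; the tuple layout of a detection is an illustrative assumption.

```python
def vp_row_from_objects(detections, cam_height: float, focal_px: float, m: int) -> float:
    """Back-infer the (k+1)th vanishing-point row from the M most confident
    detections. Each detection is (confidence, y_ground, est_distance), where
    y_ground is the row of the object's grounding point and est_distance the
    estimated camera-to-object distance. Applies vy_i = y_i - f*H/d_i per
    object and averages the M estimates."""
    top_m = sorted(detections, key=lambda det: det[0], reverse=True)[:m]
    estimates = [y_g - focal_px * cam_height / d for _, y_g, d in top_m]
    return sum(estimates) / len(estimates)
```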
Optionally, the apparatus is further configured to determine the (k + 1) th position according to the M (k + 1) th estimated positions by: determining the (k + 1) th position as equal to the average of the M (k + 1) th estimated positions.
Optionally, the apparatus is further configured to determine that the position determination result of the target vanishing point before the (k + 1) th frame does not satisfy the first preset condition when at least 2 parallel lane lines are not detected in any of the consecutive P1 frame images captured by the target camera before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P1 is a preset natural number greater than 1; or under the condition that at least 2 parallel lane lines are not detected in the P2 frame images shot by the target camera within the preset time length before the (k + 1) th frame, determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P2 is a preset natural number greater than 1.
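Expressed as a predicate, the consecutive-frame variant of the first preset condition is satisfied as soon as any of the last P1 frames contained at least 2 parallel lane lines; a sketch, with the per-frame flag history as an assumed input:

```python
def vp_history_valid(parallel_lane_flags, p1: int) -> bool:
    """First preset condition, consecutive-frame variant: True unless none of
    the last P1 frames before frame k+1 contained >= 2 parallel lane lines.
    parallel_lane_flags: per-frame booleans, most recent frame last."""
    return any(parallel_lane_flags[-p1:])
```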
Optionally, the above apparatus is further configured to determine an angle change value of the target camera between the k frame and the k +1 frame according to the k frame angular velocity and the k +1 frame angular velocity output by the target sensor, by: determining an angle change value of the target camera between the kth frame and the (k + 1) th frame by the following formula when a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate:
Δθ = (ω_k + ω_{k+1}) · T / 2

wherein ω_k is the kth frame angular velocity output by the target sensor, ω_{k+1} is the (k + 1) th frame angular velocity output by the target sensor, and T is the time interval between the kth frame and the (k + 1) th frame.
Optionally, the above apparatus is further configured to determine, according to the angle change value and the focal length of the target camera, a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame by: determining the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame by the following formula:

Δvy = f · tan(Δθ)

wherein f is the focal length of the target camera and Δθ is the angle change value.
Optionally, the above apparatus is further configured to determine a distance between a target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point by: determining the distance d between the target object in the (k + 1) th frame image and the target camera by the following formula, when the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate:

d = f · H / (y_b − vy_{k+1})

wherein y_b is the longitudinal coordinate of the projection of the grounding point of the target object on the longitudinal axis, vy_{k+1} is the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, H is the height of the target camera to the ground, and f is the focal length of the target camera.
Optionally, the above apparatus is further configured to determine a lateral distance x between the target object in the (k + 1) th frame image and the target camera by the following formula:

x = (x_b − vx_{k+1}) · d / f

wherein vx_{k+1} is the horizontal coordinate of the target vanishing point in the (k + 1) th frame image, x_b is the lateral coordinate of the projection of the grounding point of the target object on the lateral axis, d is the distance between the target object in the (k + 1) th frame image and the target camera, and f is the focal length of the target camera.
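Putting the two formulas together, module 88 reduces to a few lines; a sketch under the assumption that the grounding point lies below the vanishing point in the image (y_b > vy), as it does for objects on the ground ahead of the vehicle:

```python
def ground_point_range(y_b: float, x_b: float, vp_y: float, vp_x: float,
                       cam_height: float, focal_px: float):
    """Module 88 (sketch): longitudinal distance d and lateral distance x of
    a target whose grounding point projects to pixel (x_b, y_b), given the
    (k+1)th vanishing point (vp_x, vp_y). Implements d = f*H/(y_b - vp_y)
    and x = (x_b - vp_x)*d/f from the formulas above."""
    d = focal_px * cam_height / (y_b - vp_y)
    x = (x_b - vp_x) * d / focal_px
    return d, x
```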
Optionally, the above apparatus is further configured to determine an estimated position, an estimated speed and an estimated acceleration of the target object in the (k + 1) th frame of image by the following formulas:

p_{k+1} = p_k + v_k · T + (1/2) · a_k · T²
v_{k+1} = v_k + a_k · T
a_{k+1} = a_k

wherein T is the time interval between the kth frame and the (k + 1) th frame; p_{k+1}, v_{k+1} and a_{k+1} are respectively the estimated position, the estimated speed and the estimated acceleration of the target object in the (k + 1) th frame image; and p_k, v_k and a_k are respectively the position, the velocity and the acceleration of the target object in the kth frame image.
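These three formulas are the constant-acceleration prediction step of the obstacle-state Kalman filter mentioned earlier; a sketch using an explicit state-transition matrix, with the state layout [position, velocity, acceleration] as an assumption:

```python
import numpy as np

def predict_target_state(p_k: float, v_k: float, a_k: float, T: float):
    """Constant-acceleration propagation from frame k to k+1, matching the
    three formulas above. Returns (estimated position, speed, acceleration)."""
    F = np.array([[1.0, T, 0.5 * T * T],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    p1, v1, a1 = F @ np.array([p_k, v_k, a_k])
    return float(p1), float(v1), float(a1)
```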
Optionally, a position change value of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-described distance measuring method for a target object, which may be an in-vehicle apparatus mounted on a target vehicle shown in fig. 1. As shown in fig. 9, the electronic device comprises a memory 902 and a processor 904, the memory 902 having stored therein a computer program, the processor 904 being arranged to perform the steps of any of the above-described method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, determining an angle change value of the target camera between the kth frame and the (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor, wherein the target camera and the target sensor are arranged on the target vehicle, and k is a natural number;
s2, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
s3, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition;
s4, determining the distance between the target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point.
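Chaining the earlier sketches in the order S1–S4 gives an end-to-end usage example; all numeric values below are made-up placeholders, not values from the patent:

```python
# Hypothetical per-frame usage of the sketches above; values are placeholders.
state = VpState(vp_y=360.0)  # vanishing-point row in frame k, in pixels
vy = predict_vp(state, omega_k=0.01, omega_k1=0.012,
                dt=0.05, focal_px=1200.0)                     # S1-S3
d, x = ground_point_range(y_b=520.0, x_b=700.0, vp_y=vy, vp_x=640.0,
                          cam_height=1.5, focal_px=1200.0)    # S4
print(f"distance: {d:.1f} m, lateral: {x:.1f} m")
```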
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 9 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 9, or have a different configuration from that shown in fig. 9.
The memory 902 may be configured to store software programs and modules, such as program instructions/modules corresponding to the target object ranging method and apparatus in the embodiment of the present invention, and the processor 904 executes various functional applications and data processing by running the software programs and modules stored in the memory 902, that is, implements the above target object ranging method. The memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 902 may further include memory located remotely from the processor 904, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 902 may be, but is not limited to, used for storing information such as images. As an example, as shown in fig. 9, the memory 902 may include, but is not limited to, the first determining module 82, the second determining module 84, the third determining module 86, and the fourth determining module 88 of the above target object ranging apparatus. In addition, the memory 902 may further include, but is not limited to, other module units of the target object ranging apparatus, which are not described in detail in this example.
Optionally, the transmission device 906 is used for receiving or sending data via a network. Examples of the network may include wired and wireless networks. In one example, the transmission device 906 includes a network adapter (NIC) that can be connected with a router and other network devices via a network cable so as to communicate with the internet or a local area network. In one example, the transmission device 906 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 908 for displaying the distance between the target object and the target camera; and a connection bus 910 for connecting the respective module components in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through network communication. The nodes can form a peer-to-peer (P2P) network, and any type of computing device, such as a server, a terminal, or another electronic device, can become a node in the blockchain system by joining the peer-to-peer network.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above. Wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, determining an angle change value of the target camera between the kth frame and the (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor, wherein the target camera and the target sensor are arranged on the target vehicle, and k is a natural number;
s2, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
s3, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition;
s4, determining the distance between the target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.
Claims (15)
1. A method of ranging a target object, comprising:
determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number;
determining a position change value of the position of a target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in a (k + 1) th frame image shot by the target camera according to the position change value;
and determining the distance between a target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point.
2. The method of claim 1, wherein determining the k +1 th position of the target vanishing point in the k +1 th frame image captured by the target camera according to the position variation value comprises:
determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera;
under the condition that a lane line meeting a second preset condition is detected in the (k + 1) th frame image, determining a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image according to the detected lane line;
and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position.
3. The method according to claim 2, wherein the determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observed position comprises:
filtering the (k + 1) th estimated position and the (k + 1) th observation position to obtain a (k + 1) th position; or
And carrying out weighted average processing on the (k + 1) th estimated position and the (k + 1) th observation position to obtain the (k + 1) th position.
4. The method according to claim 2, wherein the determining the k +1 th observation position of the target vanishing point in the k +1 th frame image according to the detected lane line comprises:
and under the condition that the detected lane lines comprise at least 2 parallel lane lines, performing fitting operation on the at least 2 parallel lane lines to obtain the (k + 1) th observation position.
5. The method of claim 1, wherein determining the k +1 th position of the target vanishing point in the k +1 th frame image captured by the target camera according to the position variation value comprises:
determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera;
and under the condition that a lane line meeting a second preset condition is not detected in the (k + 1) th frame image, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image as being equal to the (k + 1) th estimated position.
6. The method according to claim 2 or 5, characterized in that the method further comprises:
under the condition that at least 2 parallel lane lines are detected in the (k + 1) th frame image, determining that the lane lines meeting the second preset condition are detected in the (k + 1) th frame image;
and under the condition that at least 2 parallel lane lines are not detected in the (k + 1) th frame image, determining that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
7. The method of claim 1, further comprising:
and under the condition that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image.
8. The method according to claim 7, wherein the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image comprises:
and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image, the height of the target camera to the ground and the focal length of the target camera.
9. The method of claim 8, wherein the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the detected object in the k +1 th frame image, the height of the target camera to the ground and the focal length of the target camera comprises:
respectively determining the k +1 th estimated positions of the target vanishing point in the k +1 th frame image according to the projection positions of M objects in the N objects, the height from the target camera to the ground and the focal length of the target camera under the condition that the detected objects in the k +1 th frame image comprise N objects, and obtaining M k +1 th estimated positions in total, wherein N is 1 or a natural number more than 1, and M is less than or equal to N;
and determining the (k + 1) th position according to the (k + 1) th estimated positions.
10. The method according to claim 9, wherein the determining k +1 th estimated positions of the vanishing point of the target in the k +1 th frame of image according to the projection positions of M of the N objects, the height of the target camera to the ground, and the focal length of the target camera respectively to obtain M k +1 th estimated positions comprises:
acquiring first M objects with the confidence degrees ordered from high to low in the N objects;
respectively determining the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image according to the projection positions of the first M objects, the height from the target camera to the ground and the focal length of the target camera, obtaining M (k + 1) th estimated positions in total.
11. The method according to claim 9, wherein the determining the k +1 th estimated position of the target vanishing point in the k +1 th frame image according to the projection positions of the M objects in the N objects, the height of the target camera to the ground and the focal length of the target camera respectively comprises:
under the condition that the position change value of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, determining the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image by the following formula for each of the M objects:
vy_i = y_i − f · H / d_i

wherein vy_i is the (k + 1) th estimated longitudinal coordinate corresponding to the ith object in the M objects, y_i is the longitudinal coordinate of the projection of the grounding point of the ith one of the M objects on the longitudinal axis, H is the height of the target camera to the ground, f is the focal length of the target camera, and d_i is the estimated distance between the target camera and the ith object.
12. The method of claim 1 or 7, further comprising:
determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition under the condition that at least 2 parallel lane lines are not detected in the continuous P1 frame images shot by the target camera before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P1 is a preset natural number greater than 1; or
Determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition under the condition that at least 2 parallel lane lines are not detected in the P2 frame images shot by the target camera within a preset time length before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P2 is a preset natural number greater than 1.
13. The method according to any one of claims 1 to 5 and 8 to 11, wherein the determining an angle change value of the target camera between the k frame and the k +1 frame according to the k frame angular velocity and the k +1 frame angular velocity output by the target sensor comprises:
determining an angle change value of the target camera between the kth frame and the (k + 1) th frame by the following formula under the condition that a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is a (k + 1) th longitudinal coordinate:

Δθ = (ω_k + ω_{k+1}) · T / 2

wherein ω_k is the kth frame angular velocity output by the target sensor, ω_{k+1} is the (k + 1) th frame angular velocity output by the target sensor, and T is the time interval between the kth frame and the (k + 1) th frame.
14. The method according to any one of claims 1 to 5 and 7 to 11, wherein the determining the distance between the target object and the target camera in the (k + 1) th frame image according to the (k + 1) th position of the target vanishing point comprises:
determining a distance d between a target object in the image of the (k + 1) th frame and the target camera by the following formula when a position change value of the position of the target vanishing point between the (k) th frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the (k) th frame and the (k + 1) th frame, and the (k + 1) th position is a (k + 1) th longitudinal coordinate:
d = f · H / (y_b − vy_{k+1})

wherein y_b is a longitudinal coordinate of the projection of the grounding point of the target object on the longitudinal axis, vy_{k+1} is the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, H is the height of the target camera to the ground, and f is the focal length of the target camera.
15. A range finder apparatus for a target object, comprising:
the first determination module is used for determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number;
a second determining module, configured to determine, according to the angle change value and the focal length of the target camera, a position change value of a position of a target vanishing point between the kth frame and the (k + 1) th frame;
a third determining module, configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera when a position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition;
a fourth determining module, configured to determine, according to the (k + 1) th position of the target vanishing point, a distance between the target object in the (k + 1) th frame of image and the target camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011333374.7A CN112146620B (en) | 2020-11-25 | 2020-11-25 | Target object ranging method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011333374.7A CN112146620B (en) | 2020-11-25 | 2020-11-25 | Target object ranging method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112146620A CN112146620A (en) | 2020-12-29 |
CN112146620B true CN112146620B (en) | 2021-03-16 |
Family
ID=73887308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011333374.7A Active CN112146620B (en) | 2020-11-25 | 2020-11-25 | Target object ranging method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112146620B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112907678B (en) * | 2021-01-25 | 2022-05-13 | 深圳佑驾创新科技有限公司 | Vehicle-mounted camera external parameter attitude dynamic estimation method and device and computer equipment |
CN114037977B (en) * | 2022-01-07 | 2022-04-26 | 深圳佑驾创新科技有限公司 | Road vanishing point detection method, device, equipment and storage medium |
CN114814803B (en) * | 2022-04-22 | 2025-03-14 | 福思(杭州)智能科技有限公司 | A method, device and storage medium for visual ranging of a vehicle |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5752728B2 (en) * | 2013-02-28 | 2015-07-22 | 富士フイルム株式会社 | Inter-vehicle distance calculation device and operation control method thereof |
JP6251099B2 (en) * | 2014-03-24 | 2017-12-20 | 国立大学法人東京工業大学 | Distance calculation device |
CN111104824B (en) * | 2018-10-26 | 2024-12-13 | 中兴通讯股份有限公司 | Lane departure detection method, electronic device and computer readable storage medium |
CN111191487A (en) * | 2018-11-14 | 2020-05-22 | 北京市商汤科技开发有限公司 | Lane line detection and driving control method and device and electronic equipment |
- 2020-11-25: application CN202011333374.7A filed with CN; granted as patent CN112146620B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN112146620A (en) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3517997B1 (en) | Method and system for detecting obstacles by autonomous vehicles in real-time | |
CN112146620B (en) | Target object ranging method and device | |
CN112785702A (en) | SLAM method based on tight coupling of 2D laser radar and binocular camera | |
EP2948927B1 (en) | A method of detecting structural parts of a scene | |
CN109975792A (en) | Method based on Multi-sensor Fusion correction multi-line laser radar point cloud motion distortion | |
CN110553648A (en) | method and system for indoor navigation | |
US8467612B2 (en) | System and methods for navigation using corresponding line features | |
US20180075614A1 (en) | Method of Depth Estimation Using a Camera and Inertial Sensor | |
CN113701750A (en) | Fusion positioning system of underground multi-sensor | |
CN113870367A (en) | Method, apparatus, device, storage medium and program product for generating camera external parameters | |
Trejo et al. | Depth map estimation methodology for detecting free-obstacle navigation areas | |
Badino et al. | Stereo-based free space computation in complex traffic scenarios | |
KR101319526B1 (en) | Method for providing location information of target using mobile robot | |
KR20160125803A (en) | Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest | |
CN113269857A (en) | Coordinate system relation obtaining method and device | |
CN108322698A (en) | The system and method merged based on multiple-camera and Inertial Measurement Unit | |
CN111553342A (en) | Visual positioning method and device, computer equipment and storage medium | |
JP5891802B2 (en) | Vehicle position calculation device | |
CN113379850B (en) | Mobile robot control method, device, mobile robot and storage medium | |
CN112115930B (en) | Method and device for determining pose information | |
Aliakbarpour et al. | Geometric exploration of virtual planes in a fusion-based 3D data registration framework | |
Yang et al. | Road detection by RANSAC on randomly sampled patches with slanted plane prior | |
JP4622889B2 (en) | Image processing apparatus and image processing method | |
Pełczyński et al. | Motion vector estimation of a stereovision camera with inertial sensors | |
Kanatani et al. | Detection of 3D points on moving objects from point cloud data for 3D modeling of outdoor environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||
TR01 | Transfer of patent right |
Effective date of registration: 20221021 Address after: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd. Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd. Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd. |