CN109254579B - Binocular vision camera hardware system, three-dimensional scene reconstruction system and method
- Publication number: CN109254579B
- Application number: CN201710576935.8A
- Authority
- CN
- China
- Prior art keywords
- dimensional scene
- vehicle
- camera
- scene reconstruction
- binocular vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
- G05D1/0278—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
Abstract
The invention discloses a binocular vision camera hardware system, a three-dimensional scene reconstruction system, and a three-dimensional scene reconstruction method. The reconstruction system comprises an ECU (Electronic Control Unit), the binocular vision camera hardware system, and a GPS (Global Positioning System) module. When the ECU determines, from navigation information sent back by the GPS module, that the distance between the vehicle and the turning position ahead has reached a preset distance, it controls the binocular vision camera to rotate back and forth synchronously within a preset rotation range and acquires image data over the 180° field of view in front of the vehicle. Based on the structure-from-motion (SFM) principle and the simultaneous localization and mapping (SLAM) principle, the ECU performs three-dimensional scene reconstruction on the image data acquired by the left camera and by the right camera, obtaining a three-dimensional scene reconstruction image over the 180° field of view in front of the vehicle. Because a single set of binocular vision cameras with a one-dimensional rotation function suffices to acquire images over the 180° viewing angle range in front of the vehicle, the invention reduces the hardware cost of three-dimensional scene reconstruction.
Description
Technical Field
The invention relates to the technical field of three-dimensional scene reconstruction, in particular to a binocular vision camera hardware system, a three-dimensional scene reconstruction system and a three-dimensional scene reconstruction method.
Background
An automatic driving automobile, also called an unmanned automobile, a computer-driven automobile, or a wheeled mobile robot, is an intelligent automobile that achieves unmanned driving through a computer system. With the continuous development of automatic driving technology, ever higher demands are placed on an autonomous vehicle's ability to perceive its surrounding environment.
At present, an autonomous vehicle mainly realizes its automatic turning function based on a three-dimensional scene reconstructed from the current environment. Technologies for three-dimensional scene reconstruction include high-precision real-time kinematic (RTK) positioning, high-cost laser radar, and machine vision. Since the machine-vision approach is relatively inexpensive, many automakers adopt it to reconstruct the three-dimensional scene of the vehicle's current environment so that the vehicle can turn automatically based on that scene.
Three-dimensional scene reconstruction based on machine vision mainly relies on a binocular vision camera. Because the viewing angle of a binocular vision camera is limited, generally between 40° and 60°, it cannot cover the 180° viewing angle range in front of a vehicle. To guarantee the reconstruction precision of the three-dimensional scene and realize the automatic turning function, a vehicle is therefore usually fitted with several sets of binocular vision cameras so as to acquire images over the 180° viewing angle range in front of the vehicle. However, configuring multiple sets of binocular vision cameras for a vehicle increases the hardware cost of three-dimensional scene reconstruction.
Disclosure of Invention
In view of the above, the invention discloses a binocular vision camera hardware system, a three-dimensional scene reconstruction system, and a three-dimensional scene reconstruction method, which solve the problem in the traditional scheme that acquiring images over the 180° viewing angle range in front of a vehicle requires configuring multiple sets of binocular vision cameras, increasing the hardware cost of three-dimensional scene reconstruction.
A binocular vision camera hardware system, comprising: binocular vision camera, reduction gearing, step motor and step motor controller, wherein, binocular vision camera includes: a left camera, a right camera, and a bracket to fix the left camera and the right camera;
the output end of the stepping motor controller is connected with the control end of the stepping motor, the motor output shaft of the stepping motor is connected with the reduction transmission device, and the central rotating shaft of the reduction transmission device is provided with the bracket for fixing the left camera and the right camera;
the stepping motor drives the reduction transmission device to rotate according to a control signal output by the stepping motor controller, and thereby drives the bracket mounted on the central rotating shaft of the reduction transmission device to rotate, so that the left camera and the right camera fixed on the bracket rotate back and forth synchronously within a preset rotation range.
Preferably, the rotation angle of the optical axis of the camera between two adjacent frames of the binocular vision camera is equal to a preset angle.
A three-dimensional scene reconstruction system, comprising: an ECU, a GPS module, and the binocular vision camera hardware system of claim 1, wherein the left camera and the right camera of the binocular vision camera are placed horizontally and symmetrically about the vehicle centerline;
the input end of the ECU is connected to the output end of the GPS module, the image output end of the left camera, and the image output end of the right camera, respectively, and the output end of the ECU is connected to the control end of the stepping motor controller in the binocular vision camera hardware system; the ECU is configured to: send a trigger instruction to the stepping motor controller when it determines, from navigation information returned by the GPS module, that the distance between the vehicle and the turning position ahead has reached a preset distance, so that the stepping motor controller controls the stepping motor to drive the left camera and the right camera, starting from the position parallel to the vehicle body and aligned with the vehicle centerline, to rotate back and forth synchronously within a preset rotation range at a preset motor rotation angular velocity; acquire the image data within the 180° field of view in front of the vehicle collected by the left camera and by the right camera; perform three-dimensional scene reconstruction, based on the structure-from-motion (SFM) principle, on the left camera's image data to obtain a first group of three-dimensional scene reconstruction images; perform three-dimensional scene reconstruction, based on the SFM principle, on the right camera's image data to obtain a second group of three-dimensional scene reconstruction images; perform three-dimensional scene reconstruction, based on the simultaneous localization and mapping (SLAM) principle, on the image data simultaneously acquired by the left and right cameras at different times to obtain a third group of three-dimensional scene reconstruction images; and stitch the first, second, and third groups of three-dimensional scene reconstruction images in three-dimensional space to obtain the three-dimensional scene reconstruction image over the 180° viewing angle range in front of the vehicle, wherein the preset rotation range is (-(90° - a/2), (90° - a/2)) and a is the horizontal field angle of the binocular vision camera.
Preferably, the preset motor rotation angular velocity satisfies formula (1), and the expression of formula (1) is as follows:
w=N*c (1)
where w is the motor rotation angular velocity, unit: °/s; N is the frame frequency of the binocular vision camera, unit: fps; and c is the preset camera optical-axis rotation angle between two adjacent frames of the binocular vision camera, unit: degrees.
A method of reconstructing a three-dimensional scene, comprising:
acquiring the distance between the current moment and the front turning position of the vehicle;
when it is determined that the distance reaches the preset distance, sending a trigger instruction to the stepping motor controller in the binocular vision camera hardware system, so that the stepping motor controller controls the stepping motor to drive the left camera and the right camera, starting from the position parallel to the vehicle body and aligned with the vehicle centerline, to rotate back and forth synchronously within the preset rotation range at the preset motor rotation angular velocity, wherein the preset rotation range is (-(90° - a/2), (90° - a/2)) and a is the horizontal field angle of the binocular vision camera;
acquiring image data in a 180-degree view field in front of the vehicle acquired by the left camera and image data in a 180-degree view field in front of the vehicle acquired by the right camera;
carrying out three-dimensional scene reconstruction, based on the structure-from-motion (SFM) principle, on the image data within the 180° field of view in front of the vehicle acquired by the left camera, to obtain a first group of three-dimensional scene reconstruction images;
carrying out three-dimensional scene reconstruction, based on the SFM principle, on the image data within the 180° field of view in front of the vehicle acquired by the right camera, to obtain a second group of three-dimensional scene reconstruction images;
carrying out three-dimensional scene reconstruction, based on the simultaneous localization and mapping (SLAM) principle, on the image data within the 180° field of view in front of the vehicle simultaneously acquired by the left camera and the right camera at different times, to obtain a third group of three-dimensional scene reconstruction images;
and unifying the coordinate systems of the first, second, and third groups of three-dimensional scene reconstruction images to obtain the three-dimensional scene reconstruction image within the 180° viewing angle range in front of the vehicle.
Preferably, after obtaining the three-dimensional scene reconstruction image within the viewing angle range of 180 ° in front of the vehicle, the method further includes:
carrying out obstacle identification, boundary identification on two sides of a road to be turned and passable area identification on the three-dimensional scene reconstruction image within the view angle range of 180 degrees in front of the vehicle to obtain an identification result;
selecting an optimal passing path which meets passing conditions from the identification result;
and controlling a turning execution mechanism of the vehicle to realize the turning of the vehicle along the optimal passing path.
Preferably, the performing obstacle identification, boundary identification on two sides of a road to be turned, and passable area identification on the three-dimensional scene reconstructed image within the view angle range of 180 degrees in front of the vehicle to obtain an identification result includes:
performing region segmentation, based on depth and gray level, on the three-dimensional scene reconstruction image within the 180° viewing angle range in front of the vehicle to obtain T segmented regions, wherein T is a positive integer;
performing the following operations for each of the divided regions:
determining the total pixel point number N in the current segmentation region and the pixel point number M which accords with a plane fitting equation model;
judging whether the ratio of the number M of the pixel points to the number N of the total pixel points is smaller than a threshold parameter or not;
if the ratio is not smaller than the threshold parameter, judging that the current segmentation area is a road area, and taking the outer boundary of the road area as the boundary of two sides of the road to be turned;
if the ratio is smaller than the threshold parameter, judging that the current segmentation area is an obstacle area;
after the area types of the T segmented regions are determined, obtaining a passable area L from all road areas and all obstacle areas among the T segmented regions according to formula (2) and formula (3), wherein the distance from the passable area L to any obstacle area is not less than a safety distance threshold; formula (2) and formula (3) are specifically as follows:
L∩A=L (2);
L∩B=∅ (3);
in the formulas, A is the union of all road regions among the T segmented regions, B is the union of all obstacle regions among the T segmented regions, and ∅ denotes the empty set.
Preferably, the method further comprises the following steps:
when it is determined that the steering wheel angle has been restored to the preset angle and the rotation speeds of the inner and outer front wheels are the same, sending a stop instruction to the stepping motor controller, which controls the stepping motor to drive the binocular vision camera back to the position parallel to the vehicle centerline.
From the above technical scheme, the invention discloses a binocular vision camera hardware system, a three-dimensional scene reconstruction system, and a method. The three-dimensional scene reconstruction system comprises an ECU together with a binocular vision camera hardware system and a GPS module connected to the ECU. When the ECU determines from the navigation information transmitted back by the GPS module that the distance between the vehicle and the turning position ahead has reached the preset distance, it controls the left and right cameras of the binocular vision camera to rotate back and forth synchronously within the preset rotation range, acquires image data over the 180° field of view in front of the vehicle, and, based on the structure-from-motion principle and the simultaneous localization and mapping principle, reconstructs the three-dimensional scene from the image data acquired by the left camera and by the right camera to obtain a three-dimensional scene reconstruction image over the 180° field of view in front of the vehicle. Compared with the traditional scheme, the invention achieves image acquisition over the 180° viewing angle range in front of the vehicle with only one set of binocular vision cameras having a one-dimensional rotation function, thereby enlarging the visual range of that camera set, enhancing the vehicle's perception of the environment ahead, and reducing the hardware cost of three-dimensional scene reconstruction.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the disclosed drawings without creative effort.
Fig. 1 is a schematic structural diagram of a binocular vision camera hardware system disclosed in an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a three-dimensional scene reconstruction system disclosed in an embodiment of the present invention;
FIG. 3 is a schematic view of the observation range of a binocular vision camera over one rotation period according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for reconstructing a three-dimensional scene according to an embodiment of the present invention;
FIG. 5 is a flowchart of a three-dimensional scene reconstruction disclosed in an embodiment of the present invention;
fig. 6 is a flowchart of a method for selecting an optimal traffic path for vehicle turning based on a three-dimensional scene reconstructed image according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a binocular vision camera, a three-dimensional scene reconstruction system and a three-dimensional scene reconstruction method, and aims to solve the problem that in the traditional scheme, when images in a view angle range of 180 degrees in front of a vehicle are collected, hardware cost for three-dimensional scene reconstruction is increased due to the fact that multiple sets of binocular vision cameras need to be configured on the vehicle.
Referring to fig. 1, a schematic structural diagram of a hardware system of a binocular vision camera disclosed in an embodiment of the present invention includes: binocular vision camera, reduction gearing 14, step motor 15 and step motor controller 16, wherein, binocular vision camera includes: a left camera 11, a right camera 12, and a bracket 13 that fixes the left camera 11 and the right camera 12;
wherein:
the output end of the stepping motor controller 16 is connected with the control end of the stepping motor 15, the motor output shaft of the stepping motor 15 is connected with the reduction transmission device 14, and the central rotating shaft of the reduction transmission device 14 is provided with a bracket 13 for fixing the left camera 11 and the right camera 12.
The working principle is as follows: the stepping motor 15 drives the reduction gear 14 to rotate according to a control signal output by the stepping motor controller 16, and drives the bracket 13 mounted on the central rotating shaft of the reduction gear 14 to rotate, so that the left camera 11 and the right camera 12 fixed on the bracket 13 can synchronously rotate in a reciprocating manner within a preset rotating range.
Optionally, the reduction transmission device 14 comprises a driving gear and a driven gear, wherein the driving gear is mounted on the motor output shaft of the stepping motor 15, the driving gear meshes with the driven gear, and the driven gear is mounted on the bracket 13. The stepping motor 15 turns the driving gear mounted on its output shaft according to the control signal output by the stepping motor controller 16; through the gear mesh, the driven gear rotates and drives the bracket 13 mounted on it, so that the left camera 11 and the right camera 12 fixed on the bracket 13 rotate back and forth synchronously within the preset rotation range.
In practical application, the stepping motor 15 is selected according to its step angle b, which must satisfy 0° ≤ b ≤ c, where b is the step angle in degrees and c is the preset camera optical-axis rotation angle between two adjacent frames, in degrees.
It should be noted that, the structure of the binocular vision camera hardware system includes, but is not limited to, the embodiment shown in fig. 1, and all structures capable of realizing the reciprocating synchronous rotation of the left camera 11 and the right camera 12 within the preset rotation range belong to the protection scope of the present invention.
In order to realize image acquisition by the binocular vision camera over the 180° viewing angle range in front of the vehicle, the invention specifies the parameters of the binocular vision camera as follows:
assuming that the horizontal field angle of the binocular vision camera is a (unit: °) and the frame frequency is N (fps), the rotation range of the stepping motor 15 must be (-(90° - a/2), (90° - a/2)) to achieve image acquisition over the 180° viewing angle range in front of the vehicle.
In order to further ensure that the binocular vision camera collects images over the 180° viewing angle range in front of the vehicle, in practical application the binocular vision camera is preferentially mounted on the front windshield of the vehicle, behind the rearview mirror. The left camera 11 and the right camera 12 of the binocular vision camera are placed horizontally and symmetrically about the vehicle centerline; the baseline length between the left camera 11 and the right camera 12 is adjustable over a range of 10 cm to 30 cm; the field angle of the binocular vision camera ranges from 40° to 60°; the pitch angle of the binocular vision camera relative to the vehicle body is 0°; and the stepping motor controller can be arranged in the rear cabin.
In practical application, the left camera 11 and the right camera 12 of the binocular vision camera are rotated by the stepping motor 15. Since the rotation range of the stepping motor 15 is (-(90° - a/2), (90° - a/2)), the left camera 11 and the right camera 12 correspondingly rotate back and forth synchronously within (-(90° - a/2), (90° - a/2)), starting from the position parallel to the vehicle body and aligned with the vehicle centerline.
The motor rotation angular velocity w of the stepping motor satisfies formula (1), and the expression of formula (1) is as follows:
w=N*c (1);
where w is the motor rotation angular velocity, unit: °/s; N is the frame frequency of the binocular vision camera, unit: fps; and c is the preset camera optical-axis rotation angle between two adjacent frames of the binocular vision camera, unit: degrees.
When the distance between the vehicle and the turning position ahead reaches the preset distance (e.g., 50 m to 100 m), the binocular vision camera should complete the image acquisition of one rotation period within one second. To further guarantee a high-precision three-dimensional reconstruction, the overlapping area between two adjacent frames acquired by the binocular vision camera should be as large as possible; to this end, the camera optical-axis rotation angle between two adjacent frames of the binocular vision camera is set equal to a preset angle, preferably 3°.
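As a quick plausibility check of formula (1) and the rotation range, the sketch below plugs in illustrative values: the field angle a = 60° and per-frame angle c = 3° fall within the ranges stated above, while the frame frequency N = 80 fps is an assumed value chosen so that one reciprocating rotation period completes within one second.

```python
# Worked example of formula (1); a and c follow the description above,
# N = 80 fps is an assumption (the patent does not fix the frame frequency).
a = 60.0  # horizontal field angle of the binocular vision camera, degrees
c = 3.0   # preset optical-axis rotation angle between adjacent frames, degrees
N = 80.0  # camera frame frequency, fps (assumed)

w = N * c                          # formula (1): motor angular velocity, deg/s
half_range = 90.0 - a / 2.0        # rotation limit on each side, degrees
period = 2 * (2 * half_range) / w  # back-and-forth rotation period, seconds

print(f"w = {w:.0f} deg/s")                                          # 240 deg/s
print(f"rotation range = (-{half_range:.0f}, {half_range:.0f}) deg")  # (-60, 60)
print(f"one reciprocating period = {period:.2f} s")                  # 1.00 s
# The stepping motor's step angle b must additionally satisfy 0 deg <= b <= c.
```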
In summary, the invention discloses a binocular vision camera with a one-dimensional rotation function, which greatly enlarges the field of view of a single set of binocular vision cameras, enhances the vehicle's perception of the environment ahead, reduces the hardware cost of three-dimensional scene reconstruction, reduces the potential safety risks caused by a narrow field of view, and thus helps guarantee the safety of automatic vehicle turning.
Referring to fig. 2, an embodiment of the present invention discloses a schematic structural diagram of a three-dimensional scene reconstruction system. The reconstruction system includes an ECU (Electronic Control Unit) 21, a GPS module 22, and the binocular vision camera hardware system 23 shown in fig. 1, wherein an input end of the ECU 21 is connected to the GPS module 22, the image output end of the left camera, and the image output end of the right camera, respectively, and an output end of the ECU 21 is connected to a control end of the stepping motor controller in the binocular vision camera hardware system 23;
the GPS module 22 is used for acquiring the current position information of the vehicle and sending the position information to the EPU21, so that the EPU21 determines the distance between the vehicle and the front turning position according to the position information.
The three-dimensional scene reconstruction system reconstructs the scene within the 180° viewing angle range in front of the vehicle as follows:
when determining that the distance between the vehicle and a turning position ahead (such as an intersection or a T-junction) has reached a preset distance (e.g., 50 m to 100 m), the ECU 21 sends a trigger instruction to the stepping motor controller 16 in the binocular vision camera hardware system 23; the stepping motor controller then controls the stepping motor 15 in the binocular vision camera hardware system 23 to drive the left camera and the right camera, starting from the position parallel to the vehicle body and aligned with the vehicle centerline, to rotate back and forth synchronously within the preset rotation range at the preset motor rotation angular velocity;
the ECU21 acquires image data in a 180-degree view field in front of the vehicle collected by the left camera and image data in a 180-degree view field in front of the vehicle collected by the right camera in real time;
the ECU21 carries out three-dimensional scene reconstruction on image data in a 180-degree view field in front of the vehicle collected by the left camera based on SFM principle to obtain a first group of three-dimensional scene reconstruction images, which are recorded as ML; carrying out three-dimensional scene reconstruction on image data in a field of view of 180 degrees in front of the vehicle acquired by the right camera based on an SFM principle to obtain a second group of three-dimensional scene reconstruction images, and recording the second group of three-dimensional scene reconstruction images as MR; based on the SLAM principle, three-dimensional scene reconstruction is carried out on image data in a 180-degree view field in front of the vehicle, which are acquired by the left camera and the right camera at different moments, so that a third group of three-dimensional scene reconstruction images, namely MM, are obtained; carrying out three-dimensional space scene splicing on the first group of three-dimensional scene reconstruction images ML, the second group of three-dimensional scene reconstruction images MR and the third group of three-dimensional scene reconstruction images MM to obtain three-dimensional scene reconstruction images within a view angle range of 180 degrees in front of the vehicle, wherein the preset rotation range is as follows: ((90 ° -a/2), (90 ° -a/2)), a is the horizontal field angle of the binocular vision camera.
It should be noted that when the binocular vision camera is aligned with the vehicle centerline, the rotation angle of the stepping motor is 0°, and accordingly the rotation angles of the left and right cameras of the binocular vision camera are 0°. Taking leftward rotation as negative and rightward rotation as positive, the rotation range of the stepping motor is (-(90° - a/2), (90° - a/2)), and accordingly the rotation ranges of the left and right cameras of the binocular vision camera are also (-(90° - a/2), (90° - a/2)); that is, the field angle coverage of the binocular vision camera spans (-(90° - a/2), (90° - a/2)). As shown in fig. 3, the two overlapping sector areas denoted by reference numeral 31 are the field angle range at the leftmost rotational position of the binocular vision camera, the two overlapping sector areas denoted by reference numeral 32 are the field angle range of the binocular vision camera placed horizontally in the non-turning (i.e., initial) state, and the two overlapping sector areas denoted by reference numeral 33 are the field angle range at the rightmost rotational position of the binocular vision camera.
Here, SFM (Structure from Motion) refers to recovering the three-dimensional structure of a scene from a sequence of two-dimensional images captured from different viewpoints. In this embodiment, three-dimensional scene reconstruction is performed, based on the SFM principle, separately on the image data within the 180° field of view in front of the vehicle acquired by the left camera and on the image data within the 180° field of view in front of the vehicle acquired by the right camera.
The working principle of SLAM (Simultaneous Localization and Mapping) is as follows: the robot (in this application, the autonomous vehicle) starts moving from an unknown position in an unknown environment, localizes itself during motion from position estimates and the map, and simultaneously builds an incremental map on the basis of this self-localization, thereby achieving autonomous localization and navigation. In this application, the SLAM principle is used to reconstruct the three-dimensional scene from the image data within the 180° field of view in front of the vehicle simultaneously acquired by the left and right cameras at different times.
It should be noted that the preset rotation range in the present embodiment is (-(90° - a/2), (90° - a/2)) as in the above embodiment, where a is the horizontal field angle of the binocular vision camera.
The preset motor rotation angular velocity is a motor rotation angular velocity satisfying the above formula (1); when several combinations of frame frequency and per-frame rotation angle satisfy formula (1), the motor rotation angular velocity may be selected according to actual needs.
In practical applications, the ECU 21 may calculate the distance between the vehicle and the turning position ahead according to the navigation information returned by the GPS (Global Positioning System) module of the autonomous vehicle, and determine whether to turn on the binocular vision camera hardware system 23 by comparing this distance with the preset distance.
In summary, the three-dimensional scene reconstruction system disclosed in the present invention includes the ECU 21, the GPS module 22, and the binocular vision camera hardware system 23. When the ECU 21 determines from the navigation information transmitted back by the GPS module 22 that the distance between the vehicle and the turning position ahead has reached the preset distance, it controls the left camera 11 and the right camera 12 of the binocular vision camera hardware system 23 to rotate back and forth synchronously within the preset rotation range, acquires image data over the 180° field of view in front of the vehicle, and, based on the SFM and SLAM principles, reconstructs the three-dimensional scene from the image data acquired by the left camera and by the right camera to obtain a three-dimensional scene reconstruction image over the 180° field of view in front of the vehicle. Compared with the traditional scheme, the invention achieves image acquisition over the 180° viewing angle range in front of the vehicle with only one set of binocular vision camera hardware 23 having a one-dimensional rotation function, thereby enlarging the visual range of one camera set, enhancing the vehicle's perception of the environment ahead, and reducing the hardware cost of three-dimensional scene reconstruction.
Corresponding to the system embodiment, the invention also discloses a three-dimensional scene reconstruction method.
Referring to fig. 4, a flowchart of a method for reconstructing a three-dimensional scene according to an embodiment of the present invention is applied to the ECU21 in the foregoing embodiment, and the method includes the steps of:
step S41, acquiring the distance between the current time and the front turning position of the vehicle;
specifically, after the autonomous driving vehicle is operated, the GPS module 22 starts to perform navigation positioning on the position information of the vehicle, and transmits the navigation information back to the ECU21, and the ECU21 calculates the distance between the current time and the front turning position of the vehicle.
Step S42, when the distance is determined to reach the preset distance, controlling the binocular vision camera hardware system 23 to synchronously rotate back and forth within the preset rotation range;
specifically, when ECU21 confirms when the distance reaches preset distance, send trigger command to step motor controller 16 in binocular vision camera hardware system 23, through step motor controller control step motor 15 in binocular vision camera hardware system 23 drives left camera and right camera at the position that is on a parallel with the automobile body and along the vehicle center line according to presetting motor rotation angular velocity, and reciprocal synchronous rotation is in presetting rotation range, wherein, it is to preset rotation range: ((90 ° -a/2), (90 ° -a/2)), a is the horizontal field angle of the binocular vision camera;
when the binocular vision camera is located in a position parallel to the center line of the vehicle, the rotation angle of the synchronous motor is 0 °, and accordingly, the rotation angles of the left and right cameras of the binocular vision camera are 0 °, and assuming that the synchronous motor turns negative to the left and positive to the right, the rotation range of the synchronous motor is (- (90 ° -a/2), (90 ° -a/2)), and accordingly, the rotation ranges of the left and right cameras of the binocular vision camera are also: ((90-a/2), (90-a/2)) as shown in FIG. 2.
Step S43, acquiring image data in a 180-degree view field in front of the vehicle acquired by the left camera and image data in a 180-degree view field in front of the vehicle acquired by the right camera;
the image data in the field of view of 180 degrees in front of the vehicle acquired by the left camera and the image data in the field of view of 180 degrees in front of the vehicle acquired by the right camera specifically refer to: and the left camera and the right camera reciprocate and synchronously move the acquired images at all times within a preset rotation range.
S44, carrying out three-dimensional scene reconstruction on image data in a field of view of 180 degrees in front of the vehicle, which is acquired by the left camera, based on an SFM principle to obtain a first group of three-dimensional scene reconstruction images;
s45, carrying out three-dimensional scene reconstruction on image data in a field of view of 180 degrees in front of the vehicle, which is acquired by the right camera, based on an SFM principle to obtain a second group of three-dimensional scene reconstruction images;
step S46, carrying out three-dimensional scene reconstruction on image data in a field of view of 180 degrees in front of the vehicle, which are simultaneously acquired by the left camera and the right camera at different moments, based on the SLAM principle to obtain a third group of three-dimensional scene reconstruction images;
it should be noted that, in the actual execution process, step S44, step S45, and step S46 have no fixed sequence, including but not limited to the sequence shown in fig. 4, and in actual application, the three steps may also be executed simultaneously.
And S47, carrying out three-dimensional space scene splicing on the first group of three-dimensional scene reconstruction images, the second group of three-dimensional scene reconstruction images and the third group of three-dimensional scene reconstruction images to obtain three-dimensional scene reconstruction images within a view angle range of 180 degrees in front of the vehicle.
To facilitate understanding of the three-dimensional scene reconstruction method, fig. 5 shows a three-dimensional scene reconstruction flowchart disclosed in another embodiment of the present invention. The image acquired by the left camera is called the left image and the image acquired by the right camera the right image, and the times at which the rotating binocular vision camera acquires images are t1 to tn. From time t1, the left camera and the right camera synchronously output the image data acquired at each time to the ECU 21. The ECU 21 performs three-dimensional scene reconstruction based on the SFM principle on the image data acquired by the left camera, namely the left images at times t1, t2, …, tn-1, tn, to obtain the first group of three-dimensional scene reconstruction images. The ECU 21 performs three-dimensional scene reconstruction based on the SFM principle on the image data acquired by the right camera, namely the right images at times t1, t2, …, tn-1, tn, to obtain the second group of three-dimensional scene reconstruction images. The ECU 21 also performs three-dimensional scene reconstruction based on the SLAM principle on the image data acquired simultaneously by the left camera and the right camera, namely the left and right images at times t1, t2, …, tn-1, tn, to obtain the third group of three-dimensional scene reconstruction images. Finally, the ECU 21 stitches the first, second, and third groups of three-dimensional scene reconstruction images in three-dimensional space to obtain the three-dimensional scene reconstruction image within the 180° viewing angle range in front of the vehicle.
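The flow of fig. 5 can be condensed into the following sketch; the reconstruction routines are passed in as callables standing for the SFM and SLAM steps detailed below, and all names are illustrative rather than taken from the patent.

```python
# Structural sketch of the fig. 5 pipeline, assuming each reconstruction
# routine returns an (N, 3) point array already expressed in the common
# world system w.
from typing import Callable, List
import numpy as np

def reconstruct_front_scene(
    left_seq: List[np.ndarray],    # left images at times t1..tn
    right_seq: List[np.ndarray],   # right images at times t1..tn
    sfm: Callable[[List[np.ndarray]], np.ndarray],
    slam: Callable[[List[np.ndarray], List[np.ndarray]], np.ndarray],
) -> np.ndarray:
    ml = sfm(left_seq)               # first group: SFM on the left sequence
    mr = sfm(right_seq)              # second group: SFM on the right sequence
    mm = slam(left_seq, right_seq)   # third group: SLAM on synchronized pairs
    return np.vstack([ml, mr, mm])   # stitched 180-degree scene point cloud
```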
With reference to fig. 5, a process of reconstructing a three-dimensional scene according to image data acquired by a left camera and image data acquired by a right camera in the above embodiment is illustrated as follows:
(1) the process that the ECU21 reconstructs a three-dimensional scene based on the SFM principle according to the image data acquired by the left camera to obtain a first group of reconstructed three-dimensional scene images specifically includes:
taking the tn-2 time left image, the tn-1 time left image and the tn time left camera image as an example for explanation, the process of reconstructing the three-dimensional scene of the left image at other times is similar.
Feature points are extracted from and matched between the left image at time tn-2 (not shown in fig. 5) and the left image at time tn-1, giving N matching pairs, and between the left image at time tn-1 and the left image at time tn, giving M matching pairs; based on the gray-level correlation principle, the set of same-name (homonymy) matching point pairs is determined as L = N ∩ M. With the vehicle displacement between times tn-2, tn-1, and tn provided by the vehicle's inertial navigation or wheel speed sensor, a series of spatial feature points can be described, based on the stereo vision matching principle, in the wtn system and the wtn-1 system, where each coordinate system is defined as follows: the left camera optical center is the origin, the left camera optical axis direction is the Z direction, the direction perpendicular to the left camera body is the Y direction, and the X direction follows the right-hand rule. From the same-name point relations, the spatial point coordinates described in the wtn-1 system can be converted into the wtn system based on the coordinate transformation principle; by recursion, the transformation relations among the wt1, wt2, …, wtn systems are obtained, and through these coordinate transformations all three-dimensional reconstruction results can be described in the wtn system.
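A minimal sketch of the same-name point step for three consecutive left frames follows; the patent does not name a feature detector, so the use of OpenCV ORB features with brute-force Hamming matching is an assumption.

```python
# Find keypoints of the middle frame matched both backward and forward,
# i.e. the same-name set L = N ∩ M described above.
import cv2

def homonymy_points(img_prev2, img_prev1, img_curr):
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    (kp0, d0), (kp1, d1), (kp2, d2) = (
        orb.detectAndCompute(img, None) for img in (img_prev2, img_prev1, img_curr)
    )
    n_pairs = matcher.match(d0, d1)  # tn-2 <-> tn-1 matches (N pairs)
    m_pairs = matcher.match(d1, d2)  # tn-1 <-> tn matches (M pairs)
    seen_in_prev2 = {m.trainIdx for m in n_pairs}  # middle-frame keypoints in N
    seen_in_curr = {m.queryIdx for m in m_pairs}   # middle-frame keypoints in M
    return seen_in_prev2 & seen_in_curr            # indices of same-name points
```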
(2) The process of reconstructing the three-dimensional scene by the ECU21 based on the SFM principle according to the image data acquired by the right camera to obtain the second group of three-dimensional scene reconstructed images is the same as above, and is not repeated here.
(3) The process by which the ECU 21 performs three-dimensional scene reconstruction, based on the simultaneous localization and mapping principle, on the image data within the 180° field of view in front of the vehicle simultaneously acquired by the left camera and the right camera at different times, to obtain the third group of three-dimensional scene reconstruction images, comprises the following steps:
defining a world coordinate system Wtn, where tn denotes the n-th time: the origin is the midpoint between the origins of the left camera coordinate system and the right camera coordinate system at time tn, the Z direction is parallel to the left camera optical axis direction, the Y direction is parallel to the Y direction of the left camera coordinate system, and the X direction follows the right-hand rule;
respectively carrying out feature extraction and matching on the left and right camera images at time tn-1 and at time tn, and assuming that N pairs and M pairs of correct matching point pairs exist, where the set of matched feature points from the left camera image at time tn-1 is recorded as Nn-1 and the set from the left camera image at time tn is recorded as Mn;
letting L = Nn-1 ∩ Mn, so that L represents the same-name points on the images acquired by the left camera at times tn-1 and tn. The three-dimensional coordinates of the same-name points in the world coordinate systems Wtn-1 and Wtn can be obtained based on the stereo vision matching principle, and from the correspondence of the same-name points across different times, the correspondence between the world coordinate systems Wtn-1 and Wtn can be obtained based on the coordinate transformation principle, so that the three-dimensional point coordinates obtained in the world coordinate systems at different times can be transformed to be described in one common world coordinate system, reconstructing the three-dimensional scenes of the image sequences at different times.
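The patent invokes the coordinate transformation principle without prescribing an estimator; one standard choice, sketched here as an assumption, is the SVD-based (Kabsch) solution for the rigid motion that maps the same-name 3D points from Wtn-1 into Wtn.

```python
# Estimate rotation r and translation t with x_curr ≈ r @ x_prev + t from
# K >= 3 same-name points expressed in the two world systems.
import numpy as np

def rigid_transform(p_prev: np.ndarray, p_curr: np.ndarray):
    mu_p, mu_q = p_prev.mean(axis=0), p_curr.mean(axis=0)
    h = (p_prev - mu_p).T @ (p_curr - mu_q)   # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # rotation Wtn-1 -> Wtn
    t = mu_q - r @ mu_p                       # translation Wtn-1 -> Wtn
    return r, t
```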
(4) The process that the ECU21 carries out three-dimensional space scene stitching on the first set of three-dimensional scene reconstruction images, the second set of three-dimensional scene reconstruction images and the third set of three-dimensional scene reconstruction images to obtain three-dimensional scene reconstruction images within a view angle range of 180 degrees in front of the vehicle includes:
assuming that the left and right cameras each acquire an n-frame image sequence from time t1 to time tn, define at time tn: the left camera coordinate system wl (the left camera optical center is the origin, the left camera optical axis direction is the Z direction, the direction perpendicular to the left camera body and pointing downward is the Y direction, and the X direction follows the right-hand rule); the right camera coordinate system wr (the right camera optical center is the origin, the right camera optical axis direction is the Z direction, the direction perpendicular to the right camera body and pointing downward is the Y direction, and the X direction follows the right-hand rule); and the world coordinate system w at time tn (the origin is the midpoint between the origins of the left and right camera coordinate systems at time tn, the Z direction is parallel to the left camera optical axis direction, the Y direction is parallel to the Y direction of the left camera coordinate system, and the X direction follows the right-hand rule). The relationships among the three coordinate systems wl, wr, and w are given by formulas (4) and (5), where B is the baseline length; according to formulas (4) and (5), the three-dimensional scene reconstruction results in the wl and wr coordinate systems can be uniformly converted into the w coordinate system, realizing the three-dimensional reconstruction of the scene over the 180° range.
[Xwl Ywl Zwl]^T = [Xw Yw Zw]^T + [B/2 0 0]^T (4);
[Xwr Ywr Zwr]^T = [Xw Yw Zw]^T + [-B/2 0 0]^T (5).
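A small sketch of this unification step, applying formulas (4) and (5) to shift point clouds from the wl and wr systems into the common world system w (variable names are illustrative):

```python
# Formula (4): Xw = Xwl - B/2; formula (5): Xw = Xwr + B/2 (Y, Z unchanged).
import numpy as np

def unify_to_world(points_wl: np.ndarray, points_wr: np.ndarray, baseline: float):
    offset = np.array([baseline / 2.0, 0.0, 0.0])
    pts_from_left = points_wl - offset    # wl -> w
    pts_from_right = points_wr + offset   # wr -> w
    return np.vstack([pts_from_left, pts_from_right])
```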
In the conventional scheme, when an automatic driving vehicle turns to run, obstacle identification, identification of boundaries at two sides of a road to be turned and identification of passable areas are usually realized based on a high-precision map so as to select an optimal passing path of the vehicle.
To reduce this dependence on high-precision maps, after the three-dimensional scene reconstruction image within the 180° viewing angle range in front of the vehicle is obtained, the optimal passing path for the vehicle's turn can be selected based on that three-dimensional scene image.
Referring to fig. 6, a flowchart of a method for selecting an optimal traffic path for vehicle turning based on a three-dimensional scene reconstructed image according to an embodiment of the present invention includes:
step S61, carrying out obstacle identification, boundary identification on two sides of a road to be turned and passable area identification on the three-dimensional scene reconstruction image within the visual angle range of 180 degrees in front of the vehicle to obtain an identification result;
specifically, a three-dimensional scene reconstruction image in a view angle range of 180 degrees in front of a vehicle is subjected to region segmentation based on depth and gray level to obtain T segmentation regions, wherein T is a positive integer;
performing the following operations for each of the divided regions:
determining the total number of pixel points N in the current segmented region and the number of pixel points M that conform to a plane fitting equation model; judging whether the ratio of M to N is smaller than a threshold parameter; if the ratio is not smaller than the threshold parameter, judging that the current segmented region is a road region and taking the outer boundary of the road region as the two-side boundary of the road to be turned into; if the ratio is smaller than the threshold parameter, judging that the current segmented region is an obstacle region; after the area types of the T segmented regions are determined, obtaining a passable area L from all road regions and all obstacle regions among the T segmented regions according to formula (2) and formula (3), wherein the distance from the passable area L to any obstacle region is not less than a safety distance threshold; formula (2) and formula (3) are specifically as follows:
L∩A=L (2);
L∩B=∅ (3);
in the formulas, A is the union of all road regions among the T segmented regions, B is the union of all obstacle regions among the T segmented regions, and ∅ denotes the empty set.
For example, suppose that in the T segmented regions obtained by segmentation, the current segmented region contains N pixel points whose three-dimensional coordinates are p1, p2, p3, …, pN, and that these points are tested against a plane fitting equation model. Let the number of points conforming to the plane fitting equation model be M. If M/N ≥ thre, where thre is a threshold parameter generally set to 0.8, the current segmented region is judged to be a road region, and the outer boundary of the road region is taken as the two-side boundary of the road to be turned into; if M/N < thre, the current segmented region is judged to be an obstacle region, and the obstacle type can be further determined through deep learning combined with prior information about moving targets such as vehicles and pedestrians.
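The following sketch illustrates the road/obstacle test for one segmented region; the plane model z = αx + βy + γ and the inlier tolerance are assumptions, since the patent leaves the exact plane fitting equation unspecified.

```python
# Fit a plane to the region's 3D points by least squares, count the inliers
# M, and compare the ratio M/N against the threshold thre (typically 0.8).
import numpy as np

def classify_region(points: np.ndarray, thre: float = 0.8, tol: float = 0.05):
    # points: (N, 3) array of the region's pixel coordinates p1..pN
    a = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(a, points[:, 2], rcond=None)
    residuals = np.abs(a @ coef - points[:, 2])
    m = int((residuals < tol).sum())   # points conforming to the plane model
    return "road" if m / len(points) >= thre else "obstacle"
```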
The passable area is identified on the basis of the identified road regions and obstacle regions; since obstacle regions and road regions are interleaved, a reasonable and feasible passable area must be determined by a path planning method.
The path planning criterion is as follows:
after the area types of the T segmented regions are determined, a passable area L is obtained from all road regions and all obstacle regions among the T segmented regions according to formula (2) and formula (3), where the distance from the passable area L to any obstacle region is not less than a safety distance threshold (generally 0.5 m); formula (2) and formula (3) are specifically as follows:
L ∩ A = L (2);
L ∩ B = ∅ (3);
in the formulas, A is the union of all road regions among the T segmented regions, with three-dimensional coordinate points a1, a2, a3, …, an, and B is the union of all obstacle regions among the T segmented regions, with three-dimensional coordinate points b1, b2, b3, …, bm.
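As an illustration, the two set conditions translate naturally onto a bird's-eye-view occupancy grid: L must consist of road cells only (L ∩ A = L), contain no obstacle cells (L ∩ B = ∅), and keep the safety margin. A hedged sketch, assuming a square grid of known resolution and using SciPy's Euclidean distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def passable_area(road_mask, obstacle_mask, cell_size=0.1, safe=0.5):
    """Grid version of L ∩ A = L and L ∩ B = ∅ with a safety margin.

    road_mask, obstacle_mask : boolean bird's-eye-view occupancy grids
    cell_size : grid resolution in metres per cell (assumed)
    safe      : safety distance threshold, generally 0.5 m
    """
    # Distance in metres from each cell to the nearest obstacle cell (B).
    dist_to_obstacle = distance_transform_edt(~obstacle_mask) * cell_size

    # L lies entirely inside the road set A, excludes every obstacle
    # cell, and keeps at least the safety distance from B.
    return road_mask & ~obstacle_mask & (dist_to_obstacle >= safe)
```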
Step S62: selecting, from the identification result, the optimal passing path that meets the passing conditions;
The criterion for determining the optimal passing path is as follows:
the connected component threshold value d is set based on the size of the own vehicle (the length and width are W and L respectively),all passable paths satisfying ai-bj > d; n pieces of vehicle route points are set as candidate passing routes, a center point of a passable boundary on the same longitudinal Distance is defined for each track point on each passing route, a point a is set as an arrival position point in front of the vehicle, a current vehicle position point is a point b, the vehicle running track points on the Nth planned route are Ni1, Ni2, Ni3, … and Nil, in all the candidate routes, | | Nik-bj | > safe and satisfy Distance ═ min (N1, N2, N3 and … NN), the corresponding planned route is an optimal passable route, wherein k represents the kth vehicle running track point, i represents a boundary point on the ith obstacle, j represents a boundary point on the jth obstacle, safe represents a safe Distance, and generally takes 0.5 m.
Step S63: controlling the turning actuator of the vehicle to turn the vehicle according to the optimal passing path.
In summary, when the ECU21 determines, according to the navigation information sent back by the GPS module 22, that the distance from the vehicle's current position to the turning position ahead has reached the preset distance, the ECU21 controls the left camera 11 and the right camera 12 of the binocular vision camera hardware system 23 to synchronously rotate back and forth within the preset rotation range and acquires image data within a 180° field of view in front of the vehicle. Based on the structure-from-motion principle and the simultaneous localization and mapping principle, three-dimensional scene reconstruction is performed on the image data acquired by the left camera and on the image data acquired by the right camera, yielding a three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle. Obstacle identification, identification of the boundaries on both sides of the road segment to be turned, and passable-area identification are then performed on this image, and the optimal passable path is determined from the identification result, thereby realizing a stable and reliable automatic turning function. According to the invention, image acquisition within the 180° view angle range in front of the vehicle is achieved with only a single binocular vision camera hardware system 23 having a one-dimensional rotation function, which enlarges the field of view obtainable from one set of binocular vision cameras, enhances the vehicle's perception of the environment ahead, reduces the hardware cost of three-dimensional scene reconstruction, and reduces the dependence on high-precision maps in conventional schemes.
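To make the camera sweep concrete, the sketch below generates the per-frame optical-axis angles for one reciprocating sweep over the preset rotation range (−(90° − a/2), (90° − a/2)) together with the motor angular velocity w = N·c of formula (1); the numeric defaults (60° field angle, 25 fps, 0.4° per frame) are assumptions for illustration, not values from the patent:

```python
import numpy as np

def sweep_schedule(a_fov=60.0, frame_rate=25, c_step=0.4):
    """Per-frame optical-axis angles for one reciprocating camera sweep.

    a_fov      : horizontal field angle a of the binocular camera, degrees
    frame_rate : camera frame frequency N, fps
    c_step     : preset optical-axis rotation c between adjacent frames, degrees
    """
    half_range = 90.0 - a_fov / 2.0          # range is (-(90 - a/2), 90 - a/2)
    forward = np.arange(-half_range, half_range + 1e-9, c_step)
    angles = np.concatenate([forward, forward[::-1]])  # sweep out and back
    w = frame_rate * c_step                  # formula (1): w = N * c, deg/s
    return angles, w
```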
It can be understood that whether the autonomous vehicle has completed a turn may be determined based on the content shown on the on-board display, the steering wheel angle, the wheel speed signals, and the turn signal. In practical application, the ECU21 may determine whether the vehicle has finished turning by judging whether the steering wheel angle has been restored to a preset angle and whether the rotation speeds of the inner and outer front wheels are the same; when both conditions hold, it sends a stop instruction to the stepping motor controller, which controls the stepping motor to drive the binocular vision camera hardware system 23 back to the position parallel to the vehicle center line.
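A minimal sketch of this turn-completion check; the preset angle and the two tolerances are illustrative assumptions, since the description only requires the steering angle to return to a preset value and the inner and outer front-wheel speeds to match:

```python
def turn_completed(steering_angle, inner_wheel_speed, outer_wheel_speed,
                   preset_angle=0.0, angle_tol=1.0, speed_tol=0.1):
    """Turn-completion condition checked before re-centring the cameras.

    preset_angle, angle_tol and speed_tol are illustrative assumptions;
    angles in degrees, wheel speeds in any common unit.
    """
    angle_restored = abs(steering_angle - preset_angle) <= angle_tol
    speeds_match = abs(inner_wheel_speed - outer_wheel_speed) <= speed_tol
    return angle_restored and speeds_match
```

When this returns True, the ECU21 would issue the stop instruction, and the stepping motor controller would drive the camera bracket back to the position parallel to the vehicle center line.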
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. A binocular vision camera hardware system, comprising: a binocular vision camera, a reduction transmission device, a stepping motor and a stepping motor controller, wherein the binocular vision camera comprises: a left camera, a right camera, and a bracket fixing the left camera and the right camera;
the output end of the stepping motor controller is connected with the control end of the stepping motor, the motor output shaft of the stepping motor is connected with the reduction transmission device, the bracket fixing the left camera and the right camera is mounted on the central rotating shaft of the reduction transmission device, and the rotation angle of the camera optical axis between two adjacent frames of the binocular vision camera is equal to a preset angle;
the stepping motor drives the reduction transmission device to rotate according to a control signal output by the stepping motor controller and thereby drives the bracket mounted on the central rotating shaft of the reduction transmission device to rotate, so that the left camera and the right camera fixed on the bracket synchronously rotate back and forth within a preset rotation range, and image data within a 180° field of view in front of the vehicle acquired by the left camera and image data within a 180° field of view in front of the vehicle acquired by the right camera are obtained; three-dimensional scene reconstruction is performed, based on the structure-from-motion principle, on the image data within the 180° field of view in front of the vehicle acquired by the left camera, to obtain a first group of three-dimensional scene reconstruction images; three-dimensional scene reconstruction is performed, based on the structure-from-motion principle, on the image data within the 180° field of view in front of the vehicle acquired by the right camera, to obtain a second group of three-dimensional scene reconstruction images; three-dimensional scene reconstruction is performed, based on the simultaneous localization and mapping principle, on the image data within the 180° field of view in front of the vehicle acquired synchronously by the left camera and the right camera at successive moments, to obtain a third group of three-dimensional scene reconstruction images; and three-dimensional spatial scene stitching is performed on the first, second and third groups of three-dimensional scene reconstruction images to obtain a three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle, wherein the preset rotation range is: (−(90° − a/2), (90° − a/2)), and a is the horizontal field angle of the binocular vision camera;
wherein the reduction transmission device comprises a driving gear and a driven gear, the driving gear is mounted on the motor output shaft and meshes with the driven gear, and the driven gear is mounted on the bracket.
2. A three-dimensional scene reconstruction system, comprising: the binocular vision camera hardware system of claim 1, a GPS module and an ECU, wherein the left camera and the right camera of the binocular vision camera are placed horizontally and symmetrically about the vehicle center line;
the input end of the ECU is connected respectively with the output end of the GPS module, the image output end of the left camera, and the image output end of the right camera, and the output end of the ECU is connected with the control end of the stepping motor controller in the binocular vision camera; the ECU is configured to send a trigger instruction to the stepping motor controller in the binocular vision camera when it determines, according to navigation information returned by the GPS module, that the distance between the vehicle and the turning position ahead has reached a preset distance, and to control, through the stepping motor controller, the stepping motor in the binocular vision camera to drive the left camera and the right camera, at a preset motor rotation angular velocity and starting from a position parallel to the vehicle body along the vehicle center line, to rotate synchronously back and forth within a preset rotation range; to acquire image data within a 180° field of view in front of the vehicle captured by the left camera and image data within a 180° field of view in front of the vehicle captured by the right camera; to perform three-dimensional scene reconstruction, based on the structure-from-motion principle, on the image data captured by the left camera, to obtain a first group of three-dimensional scene reconstruction images; to perform three-dimensional scene reconstruction, based on the structure-from-motion principle, on the image data captured by the right camera, to obtain a second group of three-dimensional scene reconstruction images; to perform three-dimensional scene reconstruction, based on the simultaneous localization and mapping principle, on the image data captured synchronously by the left camera and the right camera at successive moments, to obtain a third group of three-dimensional scene reconstruction images; and to perform three-dimensional spatial scene stitching on the first, second and third groups of three-dimensional scene reconstruction images to obtain a three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle, wherein the preset rotation range is: (−(90° − a/2), (90° − a/2)), and a is the horizontal field angle of the binocular vision camera.
3. The three-dimensional scene reconstruction system according to claim 2, wherein the preset motor rotation angular velocity satisfies formula (1), and the expression of formula (1) is as follows:
w = N × c (1)
where w is the motor rotation angular velocity, unit: °/s; N is the frame frequency of the binocular vision camera, unit: fps; and c is the preset rotation angle of the camera optical axis between two adjacent frames of the binocular vision camera, unit: degrees.
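For illustration (values assumed, not taken from the claim): with a frame frequency N = 25 fps and a preset per-frame optical-axis rotation c = 0.4°, formula (1) gives w = 25 × 0.4 = 10°/s.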
4. A method for reconstructing a three-dimensional scene, comprising:
acquiring the distance between the current position of the vehicle and the turning position ahead;
when it is determined that the distance reaches a preset distance, sending a trigger instruction to a stepping motor controller in the binocular vision camera, and controlling, through the stepping motor controller, a stepping motor in the binocular vision camera to drive a left camera and a right camera, at a preset motor rotation angular velocity and starting from a position parallel to the vehicle body along the vehicle center line, to rotate synchronously back and forth within a preset rotation range, wherein the preset rotation range is: (−(90° − a/2), (90° − a/2)), and a is the horizontal field angle of the binocular vision camera;
acquiring image data within a 180° field of view in front of the vehicle captured by the left camera and image data within a 180° field of view in front of the vehicle captured by the right camera;
performing three-dimensional scene reconstruction, based on the structure-from-motion principle, on the image data within the 180° field of view in front of the vehicle captured by the left camera, to obtain a first group of three-dimensional scene reconstruction images;
performing three-dimensional scene reconstruction, based on the structure-from-motion principle, on the image data within the 180° field of view in front of the vehicle captured by the right camera, to obtain a second group of three-dimensional scene reconstruction images;
performing three-dimensional scene reconstruction, based on the simultaneous localization and mapping principle, on the image data within the 180° field of view in front of the vehicle captured synchronously by the left camera and the right camera at successive moments, to obtain a third group of three-dimensional scene reconstruction images;
and converting the coordinate systems of the first, second and third groups of three-dimensional scene reconstruction images to obtain a three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle.
5. The method of claim 4, further comprising, after obtaining the three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle:
performing obstacle identification, identification of the boundaries on both sides of the road to be turned, and passable-area identification on the three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle, to obtain an identification result;
selecting, from the identification result, an optimal passing path that meets the passing conditions;
and controlling a turning actuator of the vehicle to turn the vehicle according to the optimal passing path.
6. The three-dimensional scene reconstruction method according to claim 5, wherein the performing obstacle identification, identification of the boundaries on both sides of the road to be turned, and passable-area identification on the three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle to obtain an identification result comprises:
performing region segmentation based on depth and gray level on the three-dimensional scene reconstruction image within the 180° view angle range in front of the vehicle, to obtain T segmented regions, where T is a positive integer;
performing the following operations for each segmented region:
determining the total number N of pixel points in the current segmented region and the number M of pixel points that fit a plane-fitting equation model;
judging whether the ratio of M to N is smaller than a threshold parameter;
if the ratio is not smaller than the threshold parameter, judging that the current segmented region is a road region, and taking the outer boundary of the road region as the boundaries on both sides of the road to be turned;
if the ratio is smaller than the threshold parameter, judging that the current segmented region is an obstacle region;
after the region types of all T segmented regions have been determined, obtaining a passable area L from all road regions and all obstacle regions among the T segmented regions according to formula (2) and formula (3), where the distance from the passable area L to any obstacle region is not less than a safety distance threshold, and formula (2) and formula (3) are as follows:
L ∩ A = L (2);
L ∩ B = ∅ (3);
in the formulas, A is the union of all road regions among the T segmented regions, B is the union of all obstacle regions among the T segmented regions, and ∅ denotes the empty set.
7. The method of reconstructing a three-dimensional scene of claim 4, further comprising:
and when it is determined that the steering wheel angle has been restored to a preset angle and the rotation speeds of the inner and outer front wheels are the same, sending a stop instruction to the stepping motor controller, and controlling, through the stepping motor controller, the stepping motor to drive the binocular vision camera back to a position parallel to the vehicle center line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710576935.8A CN109254579B (en) | 2017-07-14 | 2017-07-14 | Binocular vision camera hardware system, three-dimensional scene reconstruction system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710576935.8A CN109254579B (en) | 2017-07-14 | 2017-07-14 | Binocular vision camera hardware system, three-dimensional scene reconstruction system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109254579A CN109254579A (en) | 2019-01-22 |
CN109254579B true CN109254579B (en) | 2022-02-25 |
Family
ID=65051208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710576935.8A Active CN109254579B (en) | 2017-07-14 | 2017-07-14 | Binocular vision camera hardware system, three-dimensional scene reconstruction system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109254579B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110148216B (en) * | 2019-05-24 | 2023-03-24 | 中德(珠海)人工智能研究院有限公司 | Three-dimensional modeling method of double-dome camera |
CN111174765B (en) * | 2020-02-24 | 2021-08-13 | 北京航天飞行控制中心 | Target detection control method and device for planetary vehicle based on vision guidance |
CN111323425A (en) * | 2020-03-30 | 2020-06-23 | 上海应用技术大学 | A multi-camera visual inspection device and method |
CN113085745A (en) * | 2021-04-27 | 2021-07-09 | 重庆金康赛力斯新能源汽车设计院有限公司 | Display method and system for head-up display |
CN114199235B (en) * | 2021-11-29 | 2023-11-03 | 珠海一微半导体股份有限公司 | A positioning system and positioning method based on sector depth camera |
CN114119758B (en) * | 2022-01-27 | 2022-07-05 | 荣耀终端有限公司 | Method for acquiring vehicle pose, electronic device and computer-readable storage medium |
CN114739407B (en) * | 2022-03-21 | 2025-06-06 | 浙江理工大学 | A pitch motion search device and method for orchard navigation path visual information |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976460A (en) * | 2010-10-18 | 2011-02-16 | 胡振程 | Generating method of virtual view image of surveying system of vehicular multi-lens camera |
CN102592117A (en) * | 2011-12-30 | 2012-07-18 | 杭州士兰微电子股份有限公司 | Three-dimensional object identification method and system |
CN102609934A (en) * | 2011-12-22 | 2012-07-25 | 中国科学院自动化研究所 | Multi-target segmenting and tracking method based on depth image |
CN103048995A (en) * | 2011-10-13 | 2013-04-17 | 中国科学院合肥物质科学研究院 | Wide-angle binocular vision identifying and positioning device for service robot |
CN103197494A (en) * | 2013-03-18 | 2013-07-10 | 哈尔滨工业大学 | Binocular camera shooting device restoring scene three-dimensional information |
CN103390268A (en) * | 2012-05-11 | 2013-11-13 | 株式会社理光 | Object area segmentation method and device |
CN103577790A (en) * | 2012-07-26 | 2014-02-12 | 株式会社理光 | Road turning type detecting method and device |
CN106066645A (en) * | 2015-04-21 | 2016-11-02 | 赫克斯冈技术中心 | While operation bull-dozer, measure and draw method and the control system of landform |
CN106485233A (en) * | 2016-10-21 | 2017-03-08 | 深圳地平线机器人科技有限公司 | Drivable region detection method, device and electronic equipment |
CN106741265A (en) * | 2017-01-04 | 2017-05-31 | 芜湖德力自动化装备科技有限公司 | A kind of AGV platforms |
CN106873580A (en) * | 2015-11-05 | 2017-06-20 | 福特全球技术公司 | Based on perception data autonomous driving at the intersection |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2199983A1 (en) * | 2008-12-22 | 2010-06-23 | Nederlandse Centrale Organisatie Voor Toegepast Natuurwetenschappelijk Onderzoek TNO | A method of estimating a motion of a multiple camera system, a multiple camera system and a computer program product |
CN103247075B (en) * | 2013-05-13 | 2015-08-19 | 北京工业大学 | Based on the indoor environment three-dimensional rebuilding method of variation mechanism |
WO2015068249A1 (en) * | 2013-11-08 | 2015-05-14 | 株式会社日立製作所 | Autonomous driving vehicle and autonomous driving system |
US10037028B2 (en) * | 2015-07-24 | 2018-07-31 | The Trustees Of The University Of Pennsylvania | Systems, devices, and methods for on-board sensing and control of micro aerial vehicles |
JP6659317B2 (en) * | 2015-11-17 | 2020-03-04 | 株式会社東芝 | Position and orientation estimation device, position and orientation estimation program, and vacuum cleaner system |
CN106441151A (en) * | 2016-09-30 | 2017-02-22 | 中国科学院光电技术研究所 | Measuring system for three-dimensional target Euclidean space reconstruction based on vision and active optical fusion |
CN106940186B (en) * | 2017-02-16 | 2019-09-24 | 华中科技大学 | A kind of robot autonomous localization and navigation methods and systems |
Also Published As
Publication number | Publication date |
---|---|
CN109254579A (en) | 2019-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109254579B (en) | Binocular vision camera hardware system, three-dimensional scene reconstruction system and method | |
JP7073315B2 (en) | Vehicles, vehicle positioning systems, and vehicle positioning methods | |
JP7013434B2 (en) | Methods and devices for controlling the running of vehicles | |
US11107230B2 (en) | Systems and methods for depth estimation using monocular images | |
CN109583151B (en) | Vehicle trajectory prediction method and device | |
CN111788102B (en) | Odometer system and method for tracking traffic lights | |
KR102275310B1 (en) | Mtehod of detecting obstacle around vehicle | |
Schwesinger et al. | Automated valet parking and charging for e-mobility | |
JP2023134478A (en) | System and method for anonymizing navigation information | |
US10929995B2 (en) | Method and apparatus for predicting depth completion error-map for high-confidence dense point-cloud | |
EP3647734A1 (en) | Automatic generation of dimensionally reduced maps and spatiotemporal localization for navigation of a vehicle | |
CN109131317A (en) | Automatic vertical parking system and method based on multisection type planning and machine learning | |
CN112518739B (en) | Track-mounted chassis robot reconnaissance intelligent autonomous navigation method | |
US9734719B2 (en) | Method and apparatus for guiding a vehicle in the surroundings of an object | |
CN110914641A (en) | Fusion framework and batch alignment of navigation information for autonomous navigation | |
CN111522350A (en) | Sensing method, intelligent control equipment and automatic driving vehicle | |
CN110163963B (en) | Mapping device and mapping method based on SLAM | |
WO2015024407A1 (en) | Power robot based binocular vision navigation system and method based on | |
WO2021120202A1 (en) | Implementation of dynamic cost function of self-driving vehicles | |
CN106203341A (en) | A kind of Lane detection method and device of unmanned vehicle | |
CN114119896A (en) | A driving path planning method | |
CN113724525B (en) | Automatic passenger-replacing patrol type parking method and system based on big data platform and storage device | |
CN112837209B (en) | Novel method for generating distorted image for fish-eye lens | |
Behringer et al. | Results on visual road recognition for road vehicle guidance | |
CN112130576A (en) | Intelligent vehicle traveling method, computer readable storage medium and AGV |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||