CN118781569B - Preview aiming method, binocular camera, preview aiming system, automobile, controller and medium - Google Patents
- Publication number
- CN118781569B (application number CN202411259244.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- road surface
- region
- polar coordinate
- light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Abstract
The invention provides a pre-aiming method, a binocular camera, a pre-aiming system, an automobile, a controller and a medium. The method performs region-of-interest extraction and polar coordinate conversion, in sequence, on a first road surface scanning image and a second road surface scanning image acquired by the binocular camera to obtain a first polar coordinate region image and a second polar coordinate region image; obtains a feature point matching set from the registration result of the two polar coordinate region images; obtains a road surface region spatial position set from the pixel position coordinates of each matching feature point in the feature point matching set; and obtains a road surface three-dimensional reconstruction result from a road surface pre-aiming region information set derived from the spatial position set. Binocular image registration is carried out under the road surface matching constraint of the shared optical axis, which keeps the registration process simple and efficient while improving registration accuracy and effect, and provides a reliable basis for road surface pre-aiming reconstruction and intelligent driving strategy formulation.
Description
Technical Field
The invention relates to the technical field of vehicle vision system design, in particular to a pre-aiming method, a binocular camera, a pre-aiming system, an automobile, a controller and a medium.
Background
With the continuing popularization and application of intelligent driving technology, the requirements on vehicle vision systems are steadily increasing, and efficient, accurate pre-aiming detection of the drivable road surface has become an active research direction in the intelligent driving field.
Existing road surface pre-aiming systems for drivable roads are realized with binocular stereoscopic vision technology: road surface detection is carried out by a binocular camera, the flatness of the drivable road surface is identified from the detection results, and a guiding basis is provided for intelligent driving strategies. In practical application, however, this technology requires spatial recovery through multiple steps such as calibration, stereo rectification and registration (disparity estimation), so the computational load is excessive; moreover, the effective field of view of the camera (the field-of-view overlap region) is easily affected by the baseline distance, which places very high requirements on camera calibration accuracy, so the technology has considerable application limitations.
Disclosure of Invention
In order to solve the problems in the prior art, it is necessary to provide a pre-aiming method, a binocular camera, a pre-aiming system, an automobile, a controller and a medium.
In a first aspect, an embodiment of the present invention provides a pavement pre-aiming method, including the steps of:
Acquiring a first road surface scanning image and a second road surface scanning image by a binocular camera, wherein the first road surface scanning image and the second road surface scanning image are common optical axis images;
obtaining a first region of interest image based on the first road surface scanning image, and obtaining a second region of interest image based on the second road surface scanning image;
obtaining a first polar coordinate region image based on the first region of interest image and obtaining a second polar coordinate region image based on the second region of interest image through polar coordinate conversion;
Acquiring a registration result of the first polar coordinate area image and the second polar coordinate area image, and obtaining a feature point matching set according to the registration result;
obtaining a pavement area space position set according to the pixel position coordinates of each matched characteristic point pair in the characteristic point matching set;
Obtaining a pavement pre-aiming area information set based on the pavement area space position set through vehicle body coordinate system conversion;
And obtaining a road surface three-dimensional reconstruction result according to the road surface pre-aiming area information set.
In a second aspect, an embodiment of the present invention provides a binocular camera, including a first camera body, a second camera body, a first lens, a second lens, a half mirror, and a mirror;
The light transmitting end of the half mirror is aligned with the light entering end of the reflecting mirror, the light reflecting end of the half mirror is aligned with the light entering end of the second lens, and the half mirror is used for reflecting and transmitting received incident light so as to obtain corresponding first reflected light and transmitted light;
The light-emitting end of the second lens is aligned to the light-entering end of the second camera body, and the second lens is used for converging and injecting the received first reflected light into the second camera body;
The light emitting end of the reflecting mirror is aligned to the light entering end of the first lens, and the reflecting mirror is used for reflecting the received transmitted light to obtain second reflected light parallel to the first reflected light;
the light-emitting end of the first lens is aligned to the light-entering end of the first camera body, and the first lens is used for converging and injecting the received second reflected light into the first camera body.
In a third aspect, an embodiment of the present invention provides a pavement pre-aiming system, where the system includes an image acquisition module, an area extraction module, a polar coordinate conversion module, a stereo registration module, a spatial information acquisition module, a pre-aiming information acquisition module, and a three-dimensional reconstruction module;
the image acquisition module is used for acquiring a first road surface scanning image and a second road surface scanning image through a binocular camera, wherein the first road surface scanning image and the second road surface scanning image are common optical axis images;
the region extraction module is used for obtaining a first region-of-interest image based on the first road surface scanning image and obtaining a second region-of-interest image based on the second road surface scanning image;
The polar coordinate conversion module is used for obtaining a first polar coordinate area image based on the first region of interest image and obtaining a second polar coordinate area image based on the second region of interest image through polar coordinate conversion;
the three-dimensional registration module is used for acquiring registration results of the first polar coordinate area image and the second polar coordinate area image and obtaining a characteristic point matching set according to the registration results;
the space information acquisition module is used for acquiring a road surface area space position set according to the pixel position coordinates of each matched characteristic point pair in the characteristic point matching set;
the pre-aiming information acquisition module is used for acquiring a pavement pre-aiming area information set based on the pavement area space position set through vehicle body coordinate system conversion;
the three-dimensional reconstruction module is used for obtaining a road surface three-dimensional reconstruction result according to the road surface pre-aiming area information set.
In a fourth aspect, an embodiment of the invention also provides an automobile on which a computer device is deployed, the computer device comprising a memory, a processor, and a road surface pre-aiming program stored in the memory and executable on the processor, the road surface pre-aiming program being configured to implement the steps of the road surface pre-aiming method described above.
In a fifth aspect, an embodiment of the present invention further provides a controller, where the controller includes a processor and a memory that are coupled; the memory is used to store a computer program or instructions, and the processor is used to execute the computer program or instructions in the memory, so that the controller performs the steps of the road surface pre-aiming method described above.
In a sixth aspect, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method.
Compared with the prior art, on one hand, the road surface pre-aiming method of the embodiment of the invention performs binocular road surface stereo registration under the road surface matching constraint of the shared optical axis, which avoids excessive calculation, keeps the registration process simple and efficient, and effectively improves registration accuracy and effect, providing a reliable basis for three-dimensional reconstruction of the road surface and, in turn, a reliable guiding basis for intelligent driving strategy formulation. On the other hand, the binocular camera realizes a shared optical axis for two cameras of different focal lengths through the combined design of the half mirror and the reflecting mirror, which enlarges the field of view of the camera and effectively prevents the field of view from being affected by the baseline distance.
Drawings
FIG. 1 is a schematic flow chart of a road surface pre-aiming method in an embodiment of the invention;
FIG. 2 is a view angle diagram of a binocular camera in an embodiment of the present invention;
FIG. 3 is a schematic view of a region of interest of a pavement in an embodiment of the present invention;
FIG. 4 is a schematic diagram of converting a road surface scan image into a road surface bird's eye view in an embodiment of the invention;
FIG. 5 is a schematic illustration of a first region of interest image and a second region of interest image in an embodiment of the invention;
FIG. 6 is a schematic diagram of a principle of dual focal imaging in an embodiment of the present invention;
FIG. 7 is a schematic representation of a polar transformation in an embodiment of the invention;
FIG. 8 is a schematic representation of binocular image registration in Cartesian and polar coordinate systems in accordance with an embodiment of the present invention;
FIG. 9 is a feature point matching schematic diagram of binocular image registration in an embodiment of the present invention;
FIG. 10 is a schematic diagram of a binocular camera according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another structure of a binocular camera according to an embodiment of the present invention;
FIG. 12 is a layout diagram of a vehicle-mounted binocular camera in an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a road surface pre-aiming system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples, and it is apparent that the examples described below are part of the examples of the present application, which are provided for illustration only and are not intended to limit the scope of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In some embodiments, as shown in fig. 1, a pavement pre-sighting method is provided, the method comprising the steps of:
S11, acquiring a first road surface scanning image and a second road surface scanning image through a binocular camera, wherein the first road surface scanning image and the second road surface scanning image are common optical axis images;
In some embodiments, the first road surface scanning image and the second road surface scanning image can be understood as the imaging results at two different focal lengths acquired by a binocular camera whose two cameras share a common optical axis. If the focal length f1 of one camera is smaller than the focal length f2 of the other camera, f1 corresponds to a larger field of view (FOV), while the larger focal length f2 corresponds to a smaller field of view; in this way the first road surface scanning image and the second road surface scanning image shown in fig. 2 can be obtained. It should be noted that the size relationship between f1 and f2 in the present embodiment is only exemplary; in practical application the focal length relationship of the two cameras in the binocular camera can be adjusted, and f1 may instead be the larger and f2 the smaller, as long as the requirement that the focal lengths of the two cameras differ is met.
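As a reading aid, the following sketch illustrates the standard pinhole relation between focal length and horizontal field of view, FOV = 2·arctan(s / (2f)); the sensor width and the focal lengths f1 and f2 used below are placeholder values, not figures taken from the patent.

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Illustrative values only: a 5.76 mm-wide sensor behind a short lens f1 and a longer lens f2.
f1, f2 = 4.0, 12.0                     # mm, f1 < f2
print(horizontal_fov_deg(5.76, f1))    # ~71.5 deg -> wider field of view
print(horizontal_fov_deg(5.76, f2))    # ~27.0 deg -> narrower field of view
```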
In some embodiments, the binocular camera further comprises a first camera body, a second camera body, a first lens, a second lens, a half mirror, and a mirror;
The light transmitting end of the half mirror is aligned with the light entering end of the reflecting mirror, the light reflecting end of the half mirror is aligned with the light entering end of the second lens, and the half mirror is used for reflecting and transmitting received incident light so as to obtain corresponding first reflected light and transmitted light;
The light-emitting end of the second lens is aligned to the light-entering end of the second camera body, and the second lens is used for converging and injecting the received first reflected light into the second camera body;
The light emitting end of the reflecting mirror is aligned to the light entering end of the first lens, and the reflecting mirror is used for reflecting the received transmitted light to obtain second reflected light parallel to the first reflected light;
the light-emitting end of the first lens is aligned to the light-entering end of the first camera body, and the first lens is used for converging and injecting the received second reflected light into the first camera body.
The binocular camera realizes a shared optical axis for two cameras of different focal lengths through the combined design of the half mirror and the reflecting mirror. This not only enlarges the field of view of the camera and effectively prevents the field of view from being affected by the baseline distance, but also provides binocular road surface scanning images sharing an optical axis, giving reliable support for simple, efficient and accurate stereo registration of the binocular images under the same-optical-axis road surface matching constraint, and thereby effectively improving the efficiency and accuracy of road surface three-dimensional reconstruction.
S12, obtaining a first region-of-interest image based on the first road surface scanning image and obtaining a second region-of-interest image based on the second road surface scanning image. The first region-of-interest image and the second region-of-interest image can be understood as rectangular areas cut from the first and second road surface scanning images according to the image area, within a certain distance range in front of the vehicle, that needs attention in practical application; such a rectangular area is also called the road surface ROI (Region of Interest).
In practical application, due to the positional relationship between the binocular camera and the road surface, the acquired first and second road surface scanning images actually present the road region as a trapezoid, as shown by the red area in fig. 3, and cutting out and processing a trapezoidal area is cumbersome and consumes substantial computing resources. To reduce the computational load of subsequent image processing and to ensure efficient and reliable extraction and analysis of the road surface region of interest, some embodiments therefore first perform bird's-eye-view conversion on the acquired road surface scanning images, so that the required road surface region of interest becomes a rectangular area, and then extract the region-of-interest image. Specifically, the step of obtaining the first region-of-interest image based on the first road surface scanning image and obtaining the second region-of-interest image based on the second road surface scanning image includes:
Obtaining a first road surface bird's-eye view based on the first road surface scanning image and a second road surface bird's-eye view based on the second road surface scanning image through the affine transformation principle. The first and second road surface bird's-eye views can be understood as images obtained by performing a view transformation on the first and second road surface scanning images, with the corresponding affine transformation formula:

x' = a11·x + a12·y + t_x
y' = a21·x + a22·y + t_y

where (x', y') represents pixel coordinates in the road surface bird's-eye view; (x, y) represents pixel coordinates of the road surface scanning image; a11, a12, a21 and a22 are the rotation and scaling parameters of the corresponding coordinate axes; and t_x and t_y are the translation parameters. It should be noted that the parameter values in the specific affine transformation formula may be set according to actual requirements, which is not limited herein.
Through this transformation, the first road surface scanning image and the second road surface scanning image shown in diagram (a) of fig. 4 can be converted into the first road surface bird's-eye view and the second road surface bird's-eye view shown in diagram (b) of fig. 4; that is, the 4 vertices of the trapezoid shown in diagram (a) are mapped onto the 4 vertices of the rectangle shown in diagram (b), the camera view angle is changed into a bird's-eye view angle, and the road surface ROI areas in the first and second road surface bird's-eye views are guaranteed to be rectangular. Because the camera faces the road surface squarely under the bird's-eye view angle, the depth information of the road surface ROI is convenient to extract and estimate accurately; this also effectively avoids the problem that polar coordinate representation and calculation of a trapezoidal area are complex and hinder subsequent image-processing efficiency, providing a reliability guarantee for efficient stereo registration of the binocular images later.
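The four-point mapping described above can be sketched as follows. This is a minimal illustration using OpenCV's perspective warp, which covers (and generalizes) the affine mapping in the formula above; the trapezoid corner coordinates and output size are placeholder assumptions, not values from the patent.

```python
import cv2
import numpy as np

def to_birds_eye(road_image: np.ndarray,
                 trapezoid_px: np.ndarray,
                 out_size=(400, 600)) -> np.ndarray:
    """Map the 4 trapezoid corners of the road region onto a rectangle (bird's-eye view)."""
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # rectangle corners
    m = cv2.getPerspectiveTransform(np.float32(trapezoid_px), dst)
    return cv2.warpPerspective(road_image, m, (w, h))

# Placeholder corners (top-left, top-right, bottom-right, bottom-left), from calibration in practice.
trapezoid = np.float32([[550, 450], [730, 450], [1100, 720], [180, 720]])
# bev1 = to_birds_eye(first_road_scan, trapezoid)
# bev2 = to_birds_eye(second_road_scan, trapezoid)   # each image would use its own calibrated corners
```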
Obtaining a first region-of-interest image based on the first road surface bird's-eye view and a second region-of-interest image based on the second road surface bird's-eye view according to preset region-of-interest identification parameters. The preset region-of-interest identification parameters can be understood as the region position parameters used to identify and cut the region-of-interest image out of the road surface bird's-eye view; in principle they can be chosen according to actual application requirements. To ensure reliable interception of the region of interest in practical application, in some embodiments the preset region-of-interest identification parameters are further obtained by pre-calibration and then used for region-of-interest extraction on road surface scanning images acquired in real time. The specific acquisition steps of the preset region-of-interest identification parameters include:
Acquiring a first image to be marked and a second image to be marked, both of which contain preset area identification points. The preset area identification points can be understood as marker points laid out on the road surface within the field of view of the camera to identify the road surface region of interest in front of the vehicle that is to be analysed. In some embodiments, the identification range corresponding to the preset area identification points is set to a region 60 meters long and 2 meters wide on the road surface in front of the vehicle; after the identification points are placed within this region, the binocular camera is used to capture the first image to be marked and the second image to be marked, both containing the preset area identification points.
Obtaining a first marked image based on the first image to be marked and a second marked image based on the second image to be marked according to the preset area identification points. The first and second marked images can be understood as images in which the pixel positions corresponding to each identification point on the road surface have been annotated in the first and second images to be marked, respectively.
Obtaining the preset region-of-interest identification parameters according to the labeling results. This process can be understood as calculating, from the pixel positions of the identification points annotated in the first and second marked images relative to the whole scanned image, the positional relationship of the region of interest with respect to the scanned image (for example, a certain pixel interval range within the image), which is then used as the corresponding preset region-of-interest identification parameter.
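A minimal sketch of deriving the preset region-of-interest identification parameters from the annotated marker pixels might look as follows; the function names and the rectangular pixel-interval representation of the ROI are assumptions introduced here for illustration.

```python
import numpy as np

def roi_params_from_markers(marker_pixels: np.ndarray) -> tuple:
    """Derive a rectangular ROI (pixel interval) enclosing all labeled marker points.

    marker_pixels: (N, 2) array of (u, v) pixel coordinates of the identification
    points annotated in a bird's-eye image during calibration.
    """
    u_min, v_min = marker_pixels.min(axis=0)
    u_max, v_max = marker_pixels.max(axis=0)
    return int(u_min), int(v_min), int(u_max), int(v_max)

def crop_roi(birds_eye: np.ndarray, roi: tuple) -> np.ndarray:
    """Cut the region-of-interest image out of a bird's-eye view using the stored parameters."""
    u0, v0, u1, v1 = roi
    return birds_eye[v0:v1 + 1, u0:u1 + 1]
```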
In practical application, after the bird's-eye-view conversion, the images at the two focal lengths f1 and f2 are cropped to the road surface region of interest and the invalid regions in the bird's-eye view are removed; this both accelerates computation and simplifies the post-processing algorithm. Because f1 is smaller than f2, the field of view of the first region image is larger than that of the second region image, and the second region image is contained within the first region image. Meanwhile, since the first and second road surface scanning images share an optical axis, it follows from the bifocal imaging principle shown in fig. 6 that the first road surface matching constraint is a ray passing through the centre point of the first region image and the second road surface matching constraint is a ray passing through the centre point of the second region image; that is, the first and second region images satisfy the road surface matching constraint shown in diagram (b) of fig. 5, and corresponding feature points of the two region images lie on rays passing through the centre point of the region image. Feature point registration can therefore be carried out through this same-optical-axis road surface matching constraint, from which the road surface scanning information is calculated.
S13, obtaining a first polar coordinate region image based on the first region-of-interest image and a second polar coordinate region image based on the second region-of-interest image through polar coordinate conversion. Considering the road surface matching constraint (ray constraint) shown in diagram (b) of fig. 5, the first and second polar coordinate region images can be understood as images obtained by converting the first and second region-of-interest images, which lie in a Cartesian coordinate system, into a polar coordinate system for subsequent analysis and processing. The specific conversion process is as follows:
The first region-of-interest image and the second region-of-interest image are each expanded radially about the centre of the corresponding region image, with the horizontal axis representing the radius (maximum value R) and the vertical axis representing the angle (maximum value 360 degrees):

ρ = sqrt((x − x_c)^2 + (y − y_c)^2),  θ = arctan((y − y_c) / (x − x_c))

where (x_c, y_c) is the centre point of the region-of-interest image; (x, y) is the point to be transformed; ρ is the distance between the centre point and the point to be transformed; and θ is the angle between the road surface matching constraint ray and the horizontal line.
The effect of the polar coordinate transformation on the road surface is shown in fig. 8: fig. 8(a) is the overall area at focal length f1; fig. 8(b) is the road surface ROI at f1, scaled to the size of the road surface ROI at f2; fig. 8(c) is the road surface ROI at f2; fig. 8(d) is the polar coordinate region image of fig. 8(b); and fig. 8(e) is the polar coordinate region image of fig. 8(c). It can be seen that in the polar coordinate system the road surface matching constraint rays are converted into horizontal straight-line constraints (similar to the epipolar constraint), so registration can be performed with reference to existing stereo matching strategies based on the epipolar constraint.
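The radial expansion and its inverse can be sketched with plain NumPy resampling as below; the nearest-neighbour sampling, the angular resolution of 360 rows and the function names are assumptions made for illustration.

```python
import numpy as np

def to_polar(roi_image: np.ndarray, n_theta: int = 360) -> np.ndarray:
    """Resample an ROI image around its centre: rows = angle, columns = radius."""
    h, w = roi_image.shape[:2]
    xc, yc = (w - 1) / 2.0, (h - 1) / 2.0
    r_max = int(np.hypot(xc, yc))
    theta = np.deg2rad(np.arange(n_theta))[:, None]      # (n_theta, 1)
    rho = np.arange(r_max)[None, :]                      # (1, r_max)
    x = np.clip(np.round(xc + rho * np.cos(theta)).astype(int), 0, w - 1)
    y = np.clip(np.round(yc + rho * np.sin(theta)).astype(int), 0, h - 1)
    return roi_image[y, x]                               # (n_theta, r_max)

def from_polar_point(rho: float, theta_deg: float, center: tuple) -> tuple:
    """Inverse polar mapping of a single matched feature point back to Cartesian pixels."""
    xc, yc = center
    t = np.deg2rad(theta_deg)
    return xc + rho * np.cos(t), yc + rho * np.sin(t)
```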
S14, acquiring a registration result of the first polar coordinate area image and the second polar coordinate area image, and acquiring a feature point matching set according to the registration result, wherein the acquisition process of the feature point matching set can be understood as a process of registering all feature point pairs matched on the acquired first region-of-interest image and the second region-of-interest image by adopting the existing stereoscopic vision image matching strategy.
In some embodiments, the step of obtaining a registration result of the first polar coordinate region image and the second polar coordinate region image, and obtaining the feature point matching set according to the registration result includes:
Registering the first polar coordinate region image and the second polar coordinate region image by a preset stereo registration method to obtain the registration result, the registration result being a polar coordinate feature point matching set corresponding to the two polar coordinate region images. The preset stereo registration method is one of a feature-similarity matching method based on the horizontal straight-line constraint and a stereo registration method based on deep learning; that is, registration can be performed either with a traditional disparity-map matching algorithm or with a deep learning strategy. The feature-similarity matching method based on the horizontal straight-line constraint can be understood as a stereoscopic-vision-style disparity matching strategy whose matching principles are the horizontal straight-line constraint and feature point matching, as shown in fig. 9: at the same height (row) position of the two polar coordinate region images, the pair of points whose features are closest is found, namely a matching feature point pair. In fig. 9 the red frame is the corresponding-point frame, and corresponding points of the left and right images are searched along the direction of the black dotted line (the horizontal straight-line constraint). The specific registration process is as follows:
And respectively calculating the feature similarity of all pixel points on the horizontal straight line constraint in the first polar coordinate area image and the second polar coordinate area image according to a preset pixel sliding window, and obtaining the polar coordinate feature point matching set according to the feature similarity and the maximum similarity registration principle.
In practical application, assume the preset pixel sliding window W is a w×w neighbourhood. Using the SAD (Sum of Absolute Differences) algorithm, a matching cost can be computed for every pixel on a polar line of the left and right images (the first and second polar coordinate region images), where i and j are the pixel indices in the left and right images and take positive integer values:

C(i, j) = Σ_{(u, v) ∈ W} | I_L(x_i + u, y + v) − I_R(x_j + u, y + v) |

where I_L and I_R are the left and right polar coordinate region images and the sum runs over the preset pixel sliding window; the similarity is maximal where C(i, j) is minimal. According to the maximum-similarity registration principle, the best-matching right pixel is selected for each left pixel. Finally, the obtained registration point pairs are post-processed, for example by filtering, mainly using the geometric imaging constraint that a matching point lies closer to the centre of the polar line in the short-focal-length image and closer to the image edge in the long-focal-length image to judge whether a match is correct. In this way the registration points of all pixel points of the left image are obtained, yielding the required polar coordinate feature point matching set.
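A minimal sketch of the row-wise SAD matching under the horizontal straight-line constraint is given below; the window size, the search range and the function name are assumptions, and the filtering/post-processing step is not included.

```python
import numpy as np

def sad_match_row(left_img: np.ndarray, right_img: np.ndarray,
                  row: int, win: int = 3, search: int = 40):
    """For each pixel of `row` in the left polar image, find the column in the same row
    of the right polar image with the smallest SAD cost (largest similarity).

    Both inputs are 2-D grayscale polar region images of the same shape;
    `row` must lie at least win//2 pixels away from the top/bottom borders.
    """
    h, w = left_img.shape
    half = win // 2
    matches = []
    for x in range(half, w - half):
        patch_l = left_img[row - half:row + half + 1, x - half:x + half + 1].astype(np.int32)
        best_cost, best_x = None, None
        for xr in range(max(half, x - search), min(w - half, x + search + 1)):
            patch_r = right_img[row - half:row + half + 1, xr - half:xr + half + 1].astype(np.int32)
            cost = np.abs(patch_l - patch_r).sum()       # SAD over the sliding window
            if best_cost is None or cost < best_cost:
                best_cost, best_x = cost, xr
        matches.append((x, best_x))
    return matches
```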
The deep-learning-based stereo registration method can be understood as follows: feature extraction is performed on the first and second polar coordinate region images through a preset convolutional neural network to obtain a corresponding first feature map and second feature map; multiple cost volumes are constructed from the first and second feature maps using multiple cost-volume construction modes; cost aggregation is performed on all cost volumes with a preset three-dimensional convolution structure; and iterative disparity learning is carried out under a preset loss function to obtain the polar coordinate feature point matching set. It should be noted that the preset convolutional neural network and the preset three-dimensional convolution structure can be chosen according to practical application requirements.
In practical application, the deep-learning-based registration process comprises: a) extracting a first feature map F1 and a second feature map F2 with a convolutional neural network; b) constructing cost volumes from the feature maps through feature correlation and feature concatenation; c) performing cost aggregation with a three-dimensional convolution structure; and d) estimating registration points with the L1 loss function shown below:

L = (1/N) · Σ_{i=1}^{N} | d_i − d̂_i |

where N is the total number of pixels estimated; d_i is the true registration point for each pixel; and d̂_i is the predicted registration point for each pixel.
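A minimal sketch of the L1 loss above, assuming the true and predicted registration points are stored as NumPy arrays of equal shape:

```python
import numpy as np

def l1_registration_loss(d_pred: np.ndarray, d_true: np.ndarray) -> float:
    """L = (1/N) * sum_i |d_i - d_hat_i| over all N estimated pixels."""
    return float(np.mean(np.abs(d_pred - d_true)))
```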
The final result of the registration method is shown in fig. 8 (d) and fig. 8 (e), and the corresponding feature points P of the first polar coordinate region image and the second polar coordinate region image, that is, the matching polar coordinate feature point pairs in the polar coordinate feature point matching set, can be obtained through registration.
The feature point matching set is obtained from the polar coordinate feature point matching set through the inverse polar coordinate transformation. The feature point matching set can be understood as the result of converting each matching polar coordinate feature point pair in the polar coordinate system into a matching feature point pair in the Cartesian coordinate system of the original image, so that each matching feature point pair can conveniently be mapped into the three-dimensional world coordinate system to obtain the three-dimensional information of the reconstructed road surface. Specifically, the inverse polar coordinate transformation is expressed as:

x = x_c + ρ·cos θ,  y = y_c + ρ·sin θ
By the above-described inverse polar coordinate transformation, the feature point P shown in the graph (d) and the graph (e) in fig. 8 can be converted into the feature point P shown in the graph (b) and the graph (c) in fig. 8, and each matching feature point pair in the cartesian coordinate system can be obtained.
In the embodiment, the characteristic points are registered based on the road surface matching constraint of the same optical axis, so that the local characteristics of the characteristic points can be ensured to be more uniform, the matching efficiency and the matching precision are effectively improved, and the better matching effect is ensured.
S15, obtaining a road surface region spatial position set according to the pixel position coordinates of each matched feature point pair in the feature point matching set. The bifocal imaging principle is shown in fig. 6: based on the theoretical pinhole imaging model, a point P in space has two projections p1 and p2 on the imaging planes at the two different focal lengths f1 and f2, and simultaneous equations can be obtained from the similar-triangle theorem,

p1 = f1 · P / Z1,  p2 = f2 · P / Z2

where f1 and f2 are the two focal lengths, p1 and p2 are the two projections on the image planes at the corresponding focal lengths, Z1 and Z2 are the distances from the point to the respective optical centres along the shared optical axis, and P is the planar point position. Extending this planar imaging relation to the three-dimensional world coordinate system yields the calculation formula of the three-dimensional space, in which p1 = (x1, y1) and p2 = (x2, y2) are the projections of the space point P = (X, Y, Z) on the image planes; the spatial coordinate point P can then be calculated by solving the simultaneous equations.
In some embodiments, the step of obtaining the pavement area spatial position set according to the pixel position coordinates of each matching feature point pair in the feature point matching set includes:
Obtaining the road surface region spatial position coordinates based on the pixel position coordinates of each matched feature point pair in the feature point matching set through the bifocal imaging principle. The road surface region spatial position coordinates can be understood as the spatial positions of road surface points in the world coordinate system, obtained by applying a three-dimensional spatial conversion to the pixel position coordinates of each matched feature point pair using the lens focal lengths corresponding to the first and second road surface scanning images. In the corresponding preset spatial coordinate conversion formula, (x1_i, y1_i) and (x2_i, y2_i) denote the pixel position coordinates of the i-th matching feature point pair in the first and second region-of-interest images respectively; (X_i, Y_i, Z_i) denotes the road surface region spatial position coordinates corresponding to the i-th matching feature point pair; and f1 and f2 denote the focal lengths of the lenses used to acquire the first and second road surface scanning images.
By the method, the spatial positions of all pixel points on the whole road surface ROI can be calculated, so that an accurate road surface region spatial position set is obtained, and reliable data support is provided for subsequent road surface three-dimensional reconstruction.
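The closed-form conversion formula appears in the original filing as an image and is not reproduced above. As one plausible reading of the simultaneous pinhole equations, the sketch below assumes the two effective optical centres lie on the shared optical axis and are separated by a known axial distance d (an assumption introduced here for illustration; the text above does not state this parameter).

```python
import numpy as np

def triangulate_coaxial(p1, p2, f1: float, f2: float, d: float):
    """Recover (X, Y, Z) of a road point from its two projections on a shared optical axis.

    p1 = (x1, y1): projection at focal length f1, whose optical centre is assumed to
                   sit a distance d behind that of camera 2 along the shared axis.
    p2 = (x2, y2): projection at focal length f2; Z is measured from camera 2's centre.
    Image coordinates and focal lengths must use the same unit (e.g. mm or pixels).
    """
    x1, y1 = p1
    x2, y2 = p2
    a = np.hypot(x2, y2) / f2          # sqrt(X^2 + Y^2) / Z, seen by camera 2
    b = np.hypot(x1, y1) / f1          # same quantity seen by camera 1 (centre offset by d)
    if np.isclose(a, b):
        raise ValueError("degenerate configuration: projections give no axial parallax")
    Z = b * d / (a - b)                # from a*Z = b*(Z + d)
    X = (x2 / f2) * Z
    Y = (y2 / f2) * Z
    return X, Y, Z
```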
S16, obtaining a road surface pre-aiming region information set based on the road surface region spatial position set through vehicle body coordinate system conversion. This can be understood as follows: a coordinate-system conversion matrix is constructed from the positional relationship between the binocular camera used to acquire the first and second road surface scanning images and the vehicle body coordinate system, and each spatial position in the road surface region spatial position set is converted into position information in the vehicle body coordinate system, yielding the road surface pre-aiming region information set.
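A minimal sketch of the vehicle body coordinate system conversion, assuming the camera-to-body rotation R and translation t are available from extrinsic calibration; the placeholder values below are illustrative only.

```python
import numpy as np

def camera_to_body(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Convert Nx3 road-surface points from the camera frame to the vehicle-body frame.

    R (3x3) and t (3,) come from the extrinsic calibration of the binocular camera
    relative to the vehicle body; p_body = R @ p_cam + t for each point.
    """
    return points_cam @ R.T + t

# Placeholder extrinsics: camera assumed 1.4 m above the body origin, axes aligned.
R_demo = np.eye(3)
t_demo = np.array([0.0, 0.0, 1.4])
# body_points = camera_to_body(road_points_cam, R_demo, t_demo)
```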
S17, obtaining a road surface three-dimensional reconstruction result according to the road surface pre-aiming area information set, wherein the road surface three-dimensional reconstruction result can be understood as a result obtained by carrying out road surface three-dimensional reconstruction according to the road surface pre-aiming area information set, and the specific road surface three-dimensional reconstruction method can be realized by referring to the prior art and is not described in detail herein.
According to the present embodiment, region-of-interest extraction is performed on the first and second road surface scanning images acquired by the binocular camera to obtain a first region-of-interest image and a second region-of-interest image whose road surface matching constraints are rays through the centre point of the region image. Polar coordinate conversion, taking the centre of the corresponding region image as the pole, yields the corresponding first and second polar coordinate region images, in which the road surface matching constraint becomes a horizontal straight-line constraint. Feature point matching of the first and second polar coordinate region images then gives the corresponding feature point matching set, from which the corresponding road surface region spatial position set is obtained based on the bifocal imaging principle, and the corresponding road surface pre-aiming region information set is obtained by converting each spatial position in the road surface region spatial position set.
Although the steps in the flowcharts described above are shown in order as indicated by arrows, these steps are not necessarily executed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders.
In some embodiments, as shown in fig. 10, there is provided a binocular camera including a first camera body 11, a second camera body 12, a first lens 13, a second lens 14, a half mirror 15, and a mirror 16;
The translucent half mirror 15 has a light transmitting end aligned with the light entering end of the reflecting mirror 16, a light reflecting end aligned with the light entering end of the second lens 14, and the translucent half mirror 15 is configured to reflect and transmit the received incident light to obtain a corresponding first reflected light and a corresponding transmitted light;
The light-emitting end of the second lens 14 is aligned to the light-entering end of the second camera body 12, and the second lens 14 is configured to converge the received first reflected light beam and inject the first reflected light beam into the second camera body 12;
The light emitting end of the reflecting mirror 16 is aligned to the light entering end of the first lens 13, and the reflecting mirror 16 is configured to reflect the received transmitted light to obtain a second reflected light parallel to the first reflected light;
The light-emitting end of the first lens 13 is aligned with the light-entering end of the first camera body 11, and the first lens 13 is configured to converge and inject the received second reflected light into the first camera body 11.
In some embodiments, as shown in fig. 10, the first camera body 11 and the second camera body 12 are disposed on the same side, and the half mirror 15 is parallel to the mirror surface of the reflecting mirror 16. After the incident light enters the half mirror 15 (beam splitter) with a transmission-to-reflection ratio of 50:50, one beam is reflected to form the first reflected light, enters the second lens 14 (a lens or lens group) with focal length f2, and is converged by the second lens 14 into the second camera body 12 for imaging to generate the first road surface scanning image; meanwhile, the other beam is transmitted to the reflecting mirror 16 and reflected by it to produce the second reflected light, parallel to the first reflected light, which enters the first lens 13 (a lens or lens group) with focal length f1 and is converged by the first lens 13 into the first camera body 11 for imaging to generate the second road surface scanning image sharing a common optical axis with the first road surface scanning image. It should be noted that the specific types of the first camera body 11, the second camera body 12, the first lens 13, the second lens 14 and the reflecting mirror 16 may be selected according to practical application requirements; the reflecting mirror 16 preferably adopts a compact 45-degree mirror with high reflectivity and a wide reflection band, providing a reliable guarantee for the imaging quality of the first camera body 11.
In some embodiments, the binocular camera may also adopt the layout shown in fig. 11, in which the first camera body 11 and the second camera body 12 are arranged on opposite sides and the half mirror 15 is kept perpendicular to the mirror surface of the reflecting mirror 16. After the incident light enters the half mirror 15 (beam splitter) with a transmission-to-reflection ratio of 50:50, one beam is reflected to form the first reflected light, enters the second lens 14 (a lens or lens group) with focal length f2, and is converged by the second lens 14 into the second camera body 12 for imaging to form the first road surface scanning image; meanwhile, the other beam is transmitted to the reflecting mirror 16 and reflected by it to form the second reflected light, parallel to the first reflected light, which enters the first lens 13 (a lens or lens group) with focal length f1 and is converged by the first lens 13 into the first camera body 11 for imaging to form the second road surface scanning image sharing a common optical axis with the first road surface scanning image.
Based on the binocular camera layout structure shown in fig. 10 and 11, the first camera body 11, the second camera body 12, the first lens 13, the second lens 14, the half mirror 15 and the mirror 16 in the binocular camera can generate the first road surface scanning image and the second road surface scanning image with the common optical axis only by meeting the corresponding optical path design requirements. In some embodiments, the half mirror 15 is configured to receive an incident light ray, reflect and transmit the incident light ray to obtain a corresponding first reflected light ray and a corresponding transmitted light ray, the second lens 14 is configured to receive the first reflected light ray so that the first reflected light ray is converged into the second camera body 12, the mirror 16 is configured to receive the transmitted light ray, reflect and generate a corresponding second reflected light ray so that the second reflected light ray is parallel to the first reflected light ray, and the first lens 13 is configured to receive the second reflected light ray so that the second reflected light ray is converged into the first camera body 11.
In order to reduce imaging differences caused by factors other than the arrangement position of the binocular camera as much as possible in practical applications, in some embodiments, the target surface sizes and the pixel sizes of the first camera body 11 and the second camera body 12 are further set to be the same.
Meanwhile, in order to increase imaging efficiency as much as possible in consideration of the case where zooming adjusts the camera field of view to cause a problem of photographing delay, in some embodiments, the focal lengths of the first lens 13 and the second lens 14 are further set to be different. In addition, the focal lengths of the first lens 13 and the second lens 14 in some embodiments can be adjusted to adapt to the road surface pre-aiming requirements of different distance ranges, and the corresponding road surface pre-aiming precision can be effectively ensured, so that high-quality road surface scanning images can be obtained, and further a reliable analysis basis is provided for the follow-up road surface three-dimensional reconstruction processing.
The binocular camera provided by this embodiment realizes a shared optical axis for two cameras of different focal lengths through the combined design of the half mirror and the reflecting mirror, which enlarges the field of view of the camera, effectively prevents the field of view from being affected by the baseline distance, and provides binocular road surface scanning images sharing an optical axis, giving reliable support for simple, efficient and accurate stereo registration of the binocular images under the same-optical-axis road surface matching constraint and thereby effectively improving the efficiency and accuracy of road surface three-dimensional reconstruction.
In practical applications, the binocular camera provided in the above embodiments can be arranged directly on the vehicle body as the sensor with which the vehicle vision system collects road surface scanning images; the road surface scanning images of the road ahead, collected in real time, are then used for the corresponding road surface image analysis and detection. Two arrangement modes are possible: the binocular bifocal camera is mounted horizontally so that the optical axes of the camera sensors (the optical axes of the first lens and the second lens) are parallel to the road surface, as shown in diagram (a) of fig. 12, or the binocular bifocal camera is tilted downward so that the optical axes of the camera sensors intersect the road surface, as shown in diagram (b) of fig. 12. The parallel arrangement is more favourable for binocular perception but yields fewer road surface pixels in the image, whereas the intersecting arrangement is more favourable for road surface scanning, with more road surface pixels in the image and higher resolution. In some embodiments the optical axes of the first lens and the second lens are therefore set to intersect the road surface, so that images with more road surface pixels are obtained, which facilitates the subsequent road surface pre-aiming analysis. When the binocular bifocal camera is tilted downward, the specific intersection angle between the camera sensor optical axis and the road surface can be used to balance the range of the pre-scanned road surface (in particular the length of road ahead) and is related to the automatic driving binocular perception functions that actually need to be supported; no specific limitation is placed on the included-angle parameter.
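A hedged geometry sketch of how the mounting height, downward tilt and vertical field of view bound the length of road covered is given below; the relation follows from elementary trigonometry, and the numeric values are placeholders rather than parameters from the patent.

```python
import math

def road_coverage(height_m: float, tilt_down_deg: float, vfov_deg: float):
    """Near/far ground distances covered by a camera tilted toward the road.

    height_m: mounting height above the road; tilt_down_deg: angle of the optical
    axis below horizontal; vfov_deg: full vertical field of view.  Placeholder
    geometry for illustration only.
    """
    a = math.radians(tilt_down_deg)
    b = math.radians(vfov_deg) / 2.0
    near = height_m / math.tan(a + b)
    far = height_m / math.tan(a - b) if a > b else float("inf")   # axis nearly level -> horizon
    return near, far

print(road_coverage(1.5, 8.0, 12.0))   # e.g. ~ (6.0 m, 43.0 m) of road ahead
```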
The binocular camera provided by the above embodiments can be understood as a common-optical-axis binocular stereoscopic vision sensing system, based on the binocular ranging principle of pinhole imaging and realized by introducing the combined design of the half mirror and the reflecting mirror. It can be installed on a vehicle or other movable equipment for pre-aiming detection of the driving road surface; it enlarges the camera field of view, effectively avoids the influence of the baseline distance on the field of view, and provides binocular road surface scanning images sharing an optical axis, facilitating simple, efficient and accurate stereo registration of the binocular images based on the same-optical-axis road surface matching constraint.
In one embodiment, as shown in fig. 13, there is provided a pavement pre-aiming system, which comprises an image acquisition module 1, a region extraction module 2, a polar coordinate conversion module 3, a three-dimensional registration module 4, a spatial information acquisition module 5, a pre-aiming information acquisition module 6 and a three-dimensional reconstruction module 7;
The image acquisition module 1 is used for acquiring a first road surface scanning image and a second road surface scanning image through a binocular camera, wherein the first road surface scanning image and the second road surface scanning image are common optical axis images;
The region extraction module 2 is configured to obtain a first region of interest image based on the first road surface scanning image, and obtain a second region of interest image based on the second road surface scanning image;
The polar coordinate conversion module 3 is configured to obtain a first polar coordinate area image based on the first region of interest image and obtain a second polar coordinate area image based on the second region of interest image through polar coordinate conversion;
the stereo registration module 4 is configured to obtain a registration result of the first polar coordinate area image and the second polar coordinate area image, and obtain a feature point matching set according to the registration result;
The spatial information acquisition module 5 is configured to obtain a spatial position set of the road surface area according to the pixel position coordinates of each matching feature point pair in the feature point matching set;
The pre-aiming information acquisition module 6 is configured to obtain a road surface pre-aiming area information set based on the road surface area spatial position set through vehicle body coordinate system conversion;
The three-dimensional reconstruction module 7 is configured to obtain a three-dimensional road surface reconstruction result according to the road surface pre-aiming area information set. A minimal structural sketch of how these modules may be chained together is given below.
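The sketch below is purely structural and not part of the patent: class and method names are invented for illustration, and the concrete region extraction, polar conversion, registration and coordinate transforms are left as placeholders standing in for the steps of the method embodiments.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical chaining of the seven modules described above; placeholders only.

@dataclass
class RoadPreAimPipeline:
    camera: object  # binocular common-optical-axis camera (module 1)

    def acquire(self):
        """Module 1: grab the first and second road surface scanning images."""
        return self.camera.capture_pair()

    def extract_roi(self, img: np.ndarray) -> np.ndarray:
        """Module 2: keep only the road surface region of interest."""
        return img[img.shape[0] // 2 :, :]  # placeholder: lower half of the frame

    def to_polar(self, roi: np.ndarray) -> np.ndarray:
        """Module 3: resample the region of interest into polar coordinates."""
        raise NotImplementedError

    def register(self, polar_a: np.ndarray, polar_b: np.ndarray):
        """Module 4: register the two polar images and return matched feature point pairs."""
        raise NotImplementedError

    def to_space(self, matches):
        """Module 5: convert matched pixel coordinates into road surface space positions."""
        raise NotImplementedError

    def to_vehicle_frame(self, points):
        """Module 6: transform space positions into the vehicle body coordinate system."""
        raise NotImplementedError

    def reconstruct(self, preaim_info):
        """Module 7: build the three-dimensional road surface reconstruction result."""
        raise NotImplementedError

    def run(self):
        img_a, img_b = self.acquire()
        polar_a = self.to_polar(self.extract_roi(img_a))
        polar_b = self.to_polar(self.extract_roi(img_b))
        matches = self.register(polar_a, polar_b)
        points = self.to_space(matches)
        return self.reconstruct(self.to_vehicle_frame(points))
```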
For the specific limitations of the road surface pre-aiming system, reference may be made to the limitations of the road surface pre-aiming method above, and the corresponding technical effects are likewise obtained; details are not repeated here. Each module in the road surface pre-aiming system may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In some embodiments, an automobile is provided on which a computer device is arranged, the computer device comprising a memory, a processor, and a road surface pre-aiming program stored in the memory and executable on the processor, wherein the road surface pre-aiming program, when executed by the processor, implements the steps of the road surface pre-aiming method described above.
In some embodiments, a controller is provided, the controller comprising a processor and a memory coupled to each other, wherein the memory is configured to store a computer program or instructions, and the processor is configured to execute the computer program or instructions in the memory, so that the controller performs the steps of the road surface pre-aiming method described above.
In some embodiments, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the road surface pre-aiming method described above.
In summary, compared with the prior art, the road surface pre-aiming method, binocular camera, pre-aiming system, automobile, controller and medium provided by the embodiments of the invention perform binocular road surface stereo image registration through the road surface matching constraint based on the same optical axis. This not only avoids excessive calculation and ensures a simple and efficient registration process, but also effectively improves the registration precision and effect, providing a reliable basis for road surface three-dimensional reconstruction and, in turn, a reliable guiding basis for intelligent driving strategy formulation. The binocular camera realizes a common optical axis for the two lenses of different focal lengths through the combined design of the semi-transparent semi-reflective mirror and the reflecting mirror, which not only increases the field of view of the camera and effectively prevents the field of view from being affected by the baseline distance, but also provides reliable support for the simple, efficient and accurate road surface matching constraint based on the same optical axis, thereby further improving the efficiency and precision of road surface three-dimensional reconstruction.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding parts of the description of the method embodiments. It should be noted that the technical features of the foregoing embodiments may be combined in any manner; for brevity, not all possible combinations are described, but any combination of these technical features that involves no contradiction should be considered to fall within the scope of this description.
The foregoing examples represent only a few preferred embodiments of the present application, which are described in detail but are not to be construed as limiting the scope of the application. It should be noted that modifications and substitutions can be made by those skilled in the art without departing from the technical principles of the present application, and such modifications and substitutions should also be considered to fall within the scope of protection of the present application. Therefore, the protection scope of this patent is subject to the appended claims.