CN115511933A - Binocular image combination detection device and method and storage medium - Google Patents
Binocular image combination detection device and method and storage medium
- Publication number
- CN115511933A (Application No. CN202211213202.5A)
- Authority
- CN
- China
- Prior art keywords
- coordinate
- image
- test
- parallax
- monocular optical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention relates to the technical field of image processing and discloses a binocular fusion detection apparatus and method and a storage medium. The binocular fusion detection apparatus includes: a test camera, a test terminal and a loading jig. The loading jig carries the two groups of monocular optical modules and the test camera. The test camera photographs the display images of the two groups of monocular optical modules in turn while a first test chart is used as the input image, yielding a first captured image and a second captured image. The test terminal calculates, from the first and second captured images, the binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a single monocular optical module relative to the first test chart. Because a single test camera sequentially collects the display images of the two monocular optical modules, the number of test cameras is reduced, the tolerance introduced by the camera's own structure is reduced, the assembly tolerance between two test cameras is also reduced, and the measurement accuracy is improved.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a binocular fusion detection device and method applied to wearable display devices, and a storage medium.
Background
Wearable display devices such as virtual reality and augmented reality headsets create a stereoscopic impression by exploiting the principle of binocular vision, so the accuracy of binocular image fusion directly affects how users perceive 3D objects. At present, the optical mechanism of most wearable display devices consists of two monocular optical modules, each comprising a monocular optical lens and a screen. Because of structural and assembly tolerances, the binocular fusion accuracy of such optical mechanisms is difficult to bring within 0.5° and is generally around 1.5°, which significantly degrades the user experience of a head-mounted display device. For this reason, the binocular fusion accuracy of head-mounted display devices usually has to be measured to ensure the quality of shipped products. However, the prior art typically uses a binocular detection device with two cameras, each aimed at one of the monocular optical modules under test. Since the two cameras differ from each other and their mounting structure also has tolerances, the test results tend to be of low accuracy.
Disclosure of Invention
The object of the invention is to provide a binocular fusion detection device and method applied to wearable display devices, and a storage medium, so as to solve the problem of low test accuracy in the prior art.
To achieve this object, the invention adopts the following technical solution:
A binocular fusion detection apparatus is provided for measuring the binocular fusion parallax of a wearable display device that fuses images using two groups of monocular optical modules. The binocular fusion detection apparatus includes: a test camera, a test terminal and a loading jig;
the loading jig is provided with a first loading mechanism for carrying the two groups of monocular optical modules and a second loading mechanism for carrying the test camera;
the test camera is used for respectively capturing the actual display images of the two monocular optical modules while a first test chart is used as the input image, obtaining a first captured image and a second captured image;
the test terminal is used for calculating, from the first captured image and the second captured image, the binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a single monocular optical module relative to the first test chart, wherein the binocular fusion parallax and the deviation both comprise rotational parallax and vertical parallax, and the vertical parallax comprises a horizontal offset in the horizontal direction and a vertical offset in the vertical direction.
Optionally, the second loading mechanism includes: a horizontal adjustment mechanism for adjusting the horizontal position of the test camera, and a height adjustment mechanism for adjusting the height position of the test camera.
Optionally, in calculating the binocular fusion parallax between the two monocular optical modules from the first captured image and the second captured image, the test terminal is specifically configured to:
extract a first feature point from the first captured image to obtain a first coordinate, and extract a second feature point to obtain a second coordinate; extract the first feature point from the second captured image to obtain a third coordinate, and extract the second feature point to obtain a fourth coordinate;
calculate the vertical parallax between the two groups of monocular optical modules from the first coordinate and the third coordinate of the first feature point; and calculate the rotational parallax between the two groups of monocular optical modules from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point.
Optionally, in calculating the deviation of the actual display image of a single monocular optical module relative to the first test chart from the first captured image and the second captured image, the test terminal is specifically configured to:
extract a first feature point from the first test chart to obtain a first coordinate, and extract a second feature point to obtain a second coordinate; extract the first feature point from the first captured image (or the second captured image) to obtain a third coordinate, and extract the second feature point to obtain a fourth coordinate;
calculate, from the first coordinate and the third coordinate of the first feature point, the vertical parallax of the actual display image of the monocular optical module corresponding to the first captured image (or the second captured image) relative to the first test chart; and calculate, from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point, the rotational parallax of that actual display image relative to the first test chart.
Optionally, the test camera is further configured to photograph a second test chart while aligned with it, obtaining a third captured image;
and the test terminal is further configured to calculate the error of the test camera from the second test chart and the third captured image.
A binocular fusion detection method is provided for measuring the binocular fusion parallax of a wearable display device that fuses images using two groups of monocular optical modules, the method comprising:
respectively capturing, with the same test camera, the actual display images of the two monocular optical modules while a first test chart is used as the input image, obtaining a first captured image and a second captured image;
calculating, from the first captured image and the second captured image, the binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a single monocular optical module relative to the first test chart, wherein the binocular fusion parallax and the deviation both comprise rotational parallax and vertical parallax, and the vertical parallax comprises a horizontal offset in the horizontal direction and a vertical offset in the vertical direction.
Optionally, the method further includes: performing a corresponding rotation and/or vertical offset on the input image of a monocular optical module, according to the calculated binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a monocular optical module relative to the first test chart, together with the error of the test camera, so as to perform image correction.
Optionally, when the image correction is performed, the input image of one monocular optical module is taken as the reference and the input image of the other monocular optical module is adjusted; or the input images of the two monocular optical modules are adjusted relative to each other at the same time.
Optionally, the method of calculating the binocular fusion parallax between the two monocular optical modules from the first captured image and the second captured image includes:
extracting a first feature point from the first captured image to obtain a first coordinate, and extracting a second feature point to obtain a second coordinate; extracting the first feature point from the second captured image to obtain a third coordinate, and extracting the second feature point to obtain a fourth coordinate;
calculating the vertical parallax between the two groups of monocular optical modules from the first coordinate and the third coordinate of the first feature point; and calculating the rotational parallax between the two groups of monocular optical modules from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point.
Optionally, the method of calculating, from the first captured image and the second captured image, the deviation of the actual display image of a single monocular optical module relative to the first test chart includes:
extracting a first feature point from the first test chart to obtain a first coordinate, and extracting a second feature point to obtain a second coordinate; extracting the first feature point from the first captured image (or the second captured image) to obtain a third coordinate, and extracting the second feature point to obtain a fourth coordinate;
calculating, from the first coordinate and the third coordinate of the first feature point, the vertical parallax of the actual display image of the monocular optical module corresponding to the first captured image (or the second captured image) relative to the first test chart; and calculating, from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point, the rotational parallax of that actual display image relative to the first test chart.
A storage medium comprising instructions which, when executed by a processor, implement the steps of the binocular fusion detection method of any of the above.
Compared with the prior art, the invention has the beneficial effects that:
according to the embodiment of the invention, the single test camera is adopted to sequentially collect the display images of the two monocular optical modules in the binocular display equipment, instead of two test cameras are adopted to respectively collect the display images of the two monocular optical modules, so that the number of the test cameras is reduced, the tolerance introduced by the self structure of the test cameras is reduced, the assembly tolerance of the two test cameras is also reduced, and the measurement precision is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a contrast image with rotational parallax according to an embodiment of the present invention.
Fig. 2 is a comparative image with vertical parallax according to an embodiment of the present invention.
Fig. 3 is a structural view of the binocular image combining detection apparatus provided in the embodiment of the present invention.
Fig. 4 is a flowchart of a binocular fusion detection method according to an embodiment of the present invention.
Fig. 5 is a view of a test chart according to an embodiment of the present invention.
FIG. 6 is a diagram of another test card according to an embodiment of the present invention.
Fig. 7 is a schematic view of a rotation angle provided in the embodiment of the present invention.
Detailed Description
To make the objects, features and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
To eliminate errors between detection devices, the invention detects the parallax of a binocular display device with a single (monocular) camera. The camera is first translated to acquire, in turn, the display images of the two monocular optical modules of the binocular display device; the rotational parallax and vertical parallax of the binocular parallax are then calculated from calibrated feature points of the test chart; and once the binocular parallax is known, a computer controls the translation and rotation of the input image of the wearable display device. This effectively improves the binocular fusion accuracy and reduces the errors introduced by the assembly of the screen and the optical lens within a monocular optical module and by the mounting process of the monocular optical modules.
The binocular fusion parallax mainly includes rotational parallax and vertical parallax, as shown in fig. 1 and fig. 2. Rotational parallax means that the picture presented to the eye is rotated relative to the source image, and is usually expressed as an angle. Vertical parallax means that the picture presented to the eye is shifted up/down or left/right relative to the source image; it can be expressed in pixels or in length units and, when the virtual image distance is known, converted into an angle.
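As a rough illustration (not part of the patent), the conversion from a measured offset to an angle might look like the following Python sketch; the function name, pixel pitch and virtual image distance are assumptions chosen only for the example.

```python
import math

def offset_to_angle_deg(offset_px, pixel_pitch_mm, virtual_image_distance_mm):
    """Convert a vertical-parallax offset measured in pixels into an angle.

    offset_px: offset of the displayed picture relative to the source image, in pixels
    pixel_pitch_mm: physical size of one pixel on the virtual image plane (assumed known)
    virtual_image_distance_mm: distance to the virtual image (assumed known)
    """
    offset_mm = offset_px * pixel_pitch_mm
    return math.degrees(math.atan2(offset_mm, virtual_image_distance_mm))

# Example: a 3-pixel offset with an assumed 0.5 mm pitch at a 2 m virtual image distance
print(offset_to_angle_deg(3, 0.5, 2000))  # ≈ 0.043 degrees
```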
Both kinds of parallax are mainly caused by misalignment between the centers of the screen and the optical lens due to limited precision in the automatic screen-to-lens alignment process; they can also be caused by tolerances of structural parts and by mounting tolerances during installation of the monocular optical module.
Referring to fig. 3, in order to detect the binocular fusion tolerance accurately, an embodiment of the present invention provides a binocular fusion detection apparatus, which mainly includes: a test camera, a test terminal and a loading jig;
the loading jig is provided with a first loading mechanism for carrying the two groups of monocular optical modules of the wearable display device and a second loading mechanism for carrying the test camera;
the test camera is used for respectively capturing the actual display images of the two monocular optical modules while a first test chart is used as the input image, obtaining a first captured image and a second captured image;
and the test terminal is used for calculating, from the first captured image and the second captured image, the binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a single monocular optical module relative to the first test chart, wherein the binocular fusion parallax and the deviation both comprise rotational parallax and vertical parallax, and the vertical parallax comprises a horizontal offset in the horizontal direction and a vertical offset in the vertical direction.
In this embodiment, a single test camera collects the display images of the two monocular optical modules. Compared with collecting them with two test cameras, this reduces errors caused by the camera's own structure and by camera assembly tolerances, so the measurement accuracy is effectively improved.
Specifically, the test camera may use a lens with a front aperture stop and a field of view of 70–120°.
To allow the test camera to be moved, the second loading mechanism provided on the loading jig specifically includes: a horizontal adjustment mechanism (e.g., a guide rail) for adjusting the horizontal position of the test camera, and a height adjustment mechanism for adjusting the height position of the test camera. Preferably, both the horizontal and the height adjustment mechanisms are high-precision mechanisms accurate to the micrometer level.
It should be noted that, when the test camera collects an image, it needs to be aligned with the center of the monocular optical module. In practice, the test camera and the monocular optical module are each made to display a cross pattern, and the alignment can be performed before collection by matching the centers of the two crosses.
An embodiment of the present invention further provides a binocular fusion detection method for measuring the binocular fusion parallax of a wearable display device that fuses images using two groups of monocular optical modules. Referring to fig. 4, the method includes:
101, respectively capturing, with the same test camera, the actual display images of the two monocular optical modules while a first test chart is used as the input image, obtaining a first captured image and a second captured image;
102, calculating, from the first captured image and the second captured image, the binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a single monocular optical module relative to the first test chart, wherein the binocular fusion parallax and the deviation both comprise rotational parallax and vertical parallax, and the vertical parallax comprises a horizontal offset in the horizontal direction and a vertical offset in the vertical direction.
In step 102, the method of calculating the binocular fusion parallax between the two monocular optical modules from the first captured image and the second captured image includes:
extracting a first feature point from the first captured image to obtain a first coordinate, and extracting a second feature point to obtain a second coordinate; extracting the first feature point from the second captured image to obtain a third coordinate, and extracting the second feature point to obtain a fourth coordinate;
calculating the vertical parallax between the two groups of monocular optical modules from the first coordinate and the third coordinate of the first feature point; and calculating the rotational parallax between the two groups of monocular optical modules from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point.
The first test chart may be the checkerboard image shown in fig. 5 or the image shown in fig. 6. Both contain feature points that are easy to extract (e.g., the connection points between black squares), and the image of fig. 6 additionally numbers the black squares, which makes it easier to locate a target feature point.
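The patent does not prescribe a particular feature-extraction routine; as one possible sketch, OpenCV's chessboard-corner detector could locate the grid intersections in a captured image of a checkerboard chart such as fig. 5. The file name and board size below are assumptions, not values from the patent.

```python
import cv2

# Assumed inputs: a captured image of the checkerboard chart and the number of
# inner corners on that chart (both are assumptions for this sketch).
image = cv2.imread("first_captured_image.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
pattern_size = (9, 6)  # inner corners per row and column of the assumed chart

found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    # Refine to sub-pixel accuracy so that small parallaxes remain measurable.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # Each entry of `corners` is an (x, y) pixel coordinate of one grid intersection,
    # usable as the feature points A, B, ... described below.
```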
For ease of understanding, a specific implementation of step 102 is described below, taking the image shown in fig. 6 as the first test chart, the intersection A of squares 11 and 22 as the first feature point, and the intersection B of squares 31 and 42 as the second feature point:
firstly, acquiring an actual display image of a left monocular optical module of wearable display equipment to obtain a first shot image, extracting coordinates (x 1, y 1) of an intersection point A from the first shot image, and extracting coordinates (x 2, y 2) of an intersection point B; after the left monocular optical module is collected, the horizontal position of the test camera is moved through the guide rail, so that the display image of the right monocular optical module can be collected to obtain a second shot image after the center point of the right monocular optical module displayed by the test camera is coincided with the center point of the right monocular optical module, then the coordinates (x 1', y 1') of the intersection point A 'are extracted from the second shot image, and the coordinates (x 2', y2 ') of the intersection point B' are extracted. It should be noted that the intersection point a and the intersection point a ' actually belong to the same feature point, and the intersection point B ' actually belong to the same feature point, and different identifiers are used only for distinguishing different sources of coordinate information of the intersection point a and the intersection point B '.
The binocular fusion parallax between the left and right monocular optical modules is then calculated as follows:
the vertical parallax is divided into a horizontal direction and a vertical direction, the vertical parallax in the horizontal direction is X1= ((X1 '-X1) + (X2' -X2))/2, and the vertical parallax in the vertical direction may be calculated as Y1= ((Y1 '-Y1) + (Y2' -Y2))/2 from the ordinate.
The rotational parallax Δδ can be calculated with the following formulas:
tan δ1 = (y2 − y1) / (x2 − x1);
tan δ1' = (y2' − y1') / (x2' − x1');
Δδ = δ1' − δ1,
where δ1 and δ1' are the inclination angles of the line AB in the first captured image and the second captured image, respectively.
To improve the calculation accuracy, several pairs of feature points may be used and the results averaged, as in the sketch below.
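A minimal numerical sketch of these formulas (not taken from the patent; the function name, variable names and sample coordinates are illustrative only) could average the result over several feature-point pairs:

```python
import math

def binocular_fusion_parallax(points_left, points_right):
    """Compute vertical and rotational parallax between two captured images.

    points_left / points_right: lists of (x, y) coordinates of the same feature
    points extracted from the first and second captured images, in matching order.
    Returns (horizontal offset, vertical offset, rotation in degrees).
    """
    # Translational components: mean coordinate difference over all feature points.
    dx = sum(xr - xl for (xl, _), (xr, _) in zip(points_left, points_right)) / len(points_left)
    dy = sum(yr - yl for (_, yl), (_, yr) in zip(points_left, points_right)) / len(points_left)

    # Rotational component: difference of the inclination angles of the same line
    # (e.g. A-B) seen in the two images, averaged over consecutive point pairs.
    rotations = []
    for (a_l, b_l), (a_r, b_r) in zip(zip(points_left, points_left[1:]),
                                      zip(points_right, points_right[1:])):
        delta1 = math.atan2(b_l[1] - a_l[1], b_l[0] - a_l[0])
        delta1_prime = math.atan2(b_r[1] - a_r[1], b_r[0] - a_r[0])
        rotations.append(math.degrees(delta1_prime - delta1))
    rotation = sum(rotations) / len(rotations)
    return dx, dy, rotation

# Example with the two feature points A and B from the text (coordinates assumed):
left = [(100.0, 200.0), (300.0, 200.0)]   # (x1, y1), (x2, y2)
right = [(103.0, 204.0), (303.0, 205.0)]  # (x1', y1'), (x2', y2')
print(binocular_fusion_parallax(left, right))
```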
In addition, to further compensate for the test camera's own error, this error may be collected in advance, before step 102. Specifically: the test camera is aligned with and photographs a second test chart to obtain a third captured image; the camera error is then calculated from the coordinates of a plurality of feature points in the second test chart and in the third captured image.
Note that, to reduce the influence of lens distortion, the selected feature points should lie within 20° of the field of view of the test camera lens.
When the checkerboard image shown in fig. 5 is used as the second test chart, connection points between the black squares can be taken as feature points, for example two points A and B lying on the same horizontal grid line, with A at coordinates (u1, v1) and B at coordinates (u2, v2). If the camera has no rotational parallax, tan δ0 = (v2 − v1) / (u2 − u1) = 0; if rotational parallax exists, tan δ0 is non-zero. In other words, the rotational parallax of the test camera is calculated from the offset of the feature-point coordinates relative to their original positions on the test chart, and the resulting value can be treated as a known error. The error is signed, as in fig. 7: positive in the left diagram and negative in the right diagram.
The above collects the rotational-parallax data; vertical-parallax data must be collected as well. The vertical parallax here is the up/down and left/right offset between the center of the picture photographed by the camera and the center of the chart. Taking the chart center O as a feature point with coordinates (0, 0), let O' be the chart center as photographed by the test camera, with coordinates (u0, v0). The vertical parallax is then an offset of u0 in the U direction and v0 in the V direction. This completes the collection of the camera error.
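A sketch of how this camera error might be computed from the third captured image (not from the patent; the function name and sign convention are assumptions):

```python
import math

def camera_intrinsic_error(point_a, point_b, chart_center_in_image):
    """Estimate the test camera's own rotational and translational error.

    point_a, point_b: image coordinates (u1, v1), (u2, v2) of two feature points
    that lie on the same nominally horizontal grid line of the second test chart
    (within ~20 degrees of the field of view, as suggested above).
    chart_center_in_image: coordinates (u0, v0) of the chart center O' in the
    third captured image; the chart center O itself is taken as (0, 0).
    """
    (u1, v1), (u2, v2) = point_a, point_b
    # With no rotational error the line A-B stays horizontal and this angle is zero;
    # its sign follows the convention of fig. 7 (positive left, negative right).
    delta0 = math.degrees(math.atan2(v2 - v1, u2 - u1))
    u0, v0 = chart_center_in_image
    return delta0, u0, v0  # rotation error and U/V offsets of the camera

# Example with assumed measurements:
print(camera_intrinsic_error((120.0, 400.0), (320.0, 401.5), (2.0, -1.0)))
```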
In another alternative embodiment of step 102, the method of calculating, from the first captured image and the second captured image, the deviation of the actual display image of a single monocular optical module relative to the first test chart includes:
extracting a first feature point from the first test chart to obtain a first coordinate, and extracting a second feature point to obtain a second coordinate; extracting the first feature point from the first captured image (or the second captured image) to obtain a third coordinate, and extracting the second feature point to obtain a fourth coordinate;
calculating, from the first coordinate and the third coordinate of the first feature point, the vertical parallax of the actual display image of the monocular optical module corresponding to the first captured image (or the second captured image) relative to the first test chart; and calculating, from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point, the rotational parallax of that actual display image relative to the first test chart.
This approach takes the first test chart as the reference and calculates, for each monocular optical module separately, the deviation of its actual display image relative to the chart; it likewise yields accurate measurement results.
To improve the actual binocular fusion accuracy of the wearable display device, the input image of a monocular optical module can be rotated and/or offset vertically, based on the calculated binocular fusion parallax between the two groups of monocular optical modules (or the deviation of the actual display image of a single monocular optical module relative to the first test chart) together with the error of the test camera (which may be ignored if it is small), so as to correct the image and reduce the binocular fusion parallax of the device.
Specifically, when image correction is performed, the input image of one monocular optical module is taken as the reference and the input image of the other monocular optical module is adjusted so that the left and right pictures become consistent; alternatively, the input images of both monocular optical modules are adjusted relative to each other at the same time to achieve the same effect.
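One way such a correction could be applied in software is an affine warp of the input frame; this is a sketch only (the patent does not specify an implementation), and the OpenCV calls, function name and sign convention are assumptions.

```python
import cv2
import numpy as np

def correct_input_image(image, rotation_deg, dx_px, dy_px):
    """Rotate the input frame about its center and shift it to compensate the
    measured rotational and vertical parallax (signs follow an assumed convention)."""
    h, w = image.shape[:2]
    # Rotation about the image center, opposite in sign to the measured parallax.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), -rotation_deg, 1.0)
    # Add the compensating translation to the affine matrix.
    m[0, 2] -= dx_px
    m[1, 2] -= dy_px
    return cv2.warpAffine(image, m, (w, h))

# Example: compensate a 0.3 degree rotation and a (2, -1) pixel offset of one module
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
corrected = correct_input_image(frame, 0.3, 2.0, -1.0)
```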
Based on the same concept, an embodiment of the invention provides a storage medium storing at least one instruction which, when loaded and executed by a processor, implements the binocular fusion detection method provided by the embodiments of the invention. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (11)
1. A binocular fusion detection apparatus for measuring the binocular fusion parallax of a wearable display device that fuses images using two groups of monocular optical modules, characterized in that the binocular fusion detection apparatus includes: a test camera, a test terminal and a loading jig;
the loading jig is provided with a first loading mechanism for carrying the two groups of monocular optical modules and a second loading mechanism for carrying the test camera;
the test camera is used for respectively capturing the actual display images of the two monocular optical modules while a first test chart is used as the input image, obtaining a first captured image and a second captured image;
the test terminal is used for calculating, from the first captured image and the second captured image, the binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a single monocular optical module relative to the first test chart, wherein the binocular fusion parallax comprises rotational parallax and vertical parallax, and the vertical parallax comprises a horizontal offset in the horizontal direction and a vertical offset in the vertical direction.
2. The binocular fusion detection apparatus of claim 1, wherein the second loading mechanism includes: a horizontal adjustment mechanism for adjusting the horizontal position of the test camera, and a height adjustment mechanism for adjusting the height position of the test camera.
3. The binocular fusion detection apparatus of claim 1, wherein, in calculating the binocular fusion parallax between the two monocular optical modules from the first captured image and the second captured image, the test terminal is specifically configured to:
extract a first feature point from the first captured image to obtain a first coordinate, and extract a second feature point to obtain a second coordinate; extract the first feature point from the second captured image to obtain a third coordinate, and extract the second feature point to obtain a fourth coordinate;
calculate the vertical parallax between the two groups of monocular optical modules from the first coordinate and the third coordinate of the first feature point; and calculate the rotational parallax between the two groups of monocular optical modules from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point.
4. The binocular fusion detection apparatus of claim 1, wherein, in calculating the deviation of the actual display image of a single monocular optical module relative to the first test chart from the first captured image and the second captured image, the test terminal is specifically configured to:
extract a first feature point from the first test chart to obtain a first coordinate, and extract a second feature point to obtain a second coordinate; extract the first feature point from the first captured image (or the second captured image) to obtain a third coordinate, and extract the second feature point to obtain a fourth coordinate;
calculate, from the first coordinate and the third coordinate of the first feature point, the vertical parallax of the actual display image of the monocular optical module corresponding to the first captured image (or the second captured image) relative to the first test chart; and calculate, from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point, the rotational parallax of that actual display image relative to the first test chart.
5. The binocular fusion detection apparatus of claim 1, wherein the test camera is further configured to photograph a second test chart while aligned with it, obtaining a third captured image;
and the test terminal is further configured to calculate the error of the test camera from the second test chart and the third captured image.
6. A binocular fusion detection method for measuring the binocular fusion parallax of a wearable display device that fuses images using two groups of monocular optical modules, characterized by comprising:
respectively capturing, with the same test camera, the actual display images of the two monocular optical modules while a first test chart is used as the input image, obtaining a first captured image and a second captured image;
calculating, from the first captured image and the second captured image, the binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a single monocular optical module relative to the first test chart, wherein the binocular fusion parallax and the deviation both comprise rotational parallax and vertical parallax, and the vertical parallax comprises a horizontal offset in the horizontal direction and a vertical offset in the vertical direction.
7. The binocular fusion detection method of claim 6, further comprising: performing a corresponding rotation and/or vertical offset on the input image of a monocular optical module, according to the calculated binocular fusion parallax between the two groups of monocular optical modules or the deviation of the actual display image of a monocular optical module relative to the first test chart, together with the error of the test camera, so as to perform image correction.
8. The binocular fusion detection method of claim 7, wherein, when performing the image correction, the input image of one of the monocular optical modules is used as the reference and the input image of the other monocular optical module is adjusted; or the input images of the two monocular optical modules are adjusted relative to each other at the same time.
9. The binocular fusion detection method of claim 6, wherein the method of calculating the binocular fusion parallax between the two monocular optical modules from the first captured image and the second captured image comprises:
extracting a first feature point from the first captured image to obtain a first coordinate, and extracting a second feature point to obtain a second coordinate; extracting the first feature point from the second captured image to obtain a third coordinate, and extracting the second feature point to obtain a fourth coordinate;
calculating the vertical parallax between the two groups of monocular optical modules from the first coordinate and the third coordinate of the first feature point; and calculating the rotational parallax between the two groups of monocular optical modules from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point.
10. The binocular fusion detection method of claim 6, wherein the method of calculating, from the first captured image and the second captured image, the deviation of the actual display image of a single monocular optical module relative to the first test chart comprises:
extracting a first feature point from the first test chart to obtain a first coordinate, and extracting a second feature point to obtain a second coordinate; extracting the first feature point from the first captured image (or the second captured image) to obtain a third coordinate, and extracting the second feature point to obtain a fourth coordinate;
calculating, from the first coordinate and the third coordinate of the first feature point, the vertical parallax of the actual display image of the monocular optical module corresponding to the first captured image (or the second captured image) relative to the first test chart; and calculating, from the first and third coordinates of the first feature point together with the second and fourth coordinates of the second feature point, the rotational parallax of that actual display image relative to the first test chart.
11. A storage medium comprising instructions which, when executed by a processor, carry out the steps of the method of binocular fusion detection according to any one of claims 6 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211213202.5A CN115511933A (en) | 2022-09-30 | 2022-09-30 | Binocular image combination detection device and method and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211213202.5A CN115511933A (en) | 2022-09-30 | 2022-09-30 | Binocular image combination detection device and method and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115511933A true CN115511933A (en) | 2022-12-23 |
Family
ID=84508052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211213202.5A Pending CN115511933A (en) | 2022-09-30 | 2022-09-30 | Binocular image combination detection device and method and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115511933A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116312299A (en) * | 2023-02-06 | 2023-06-23 | 北京灵犀微光科技有限公司 | AR (augmented reality) glasses test calibration method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107666606B (en) | Method and device for acquiring binocular panoramic image | |
CN110809786B (en) | Calibration device, calibration chart, chart pattern generation device, and calibration method | |
WO2018209968A1 (en) | Camera calibration method and system | |
CN111210468B (en) | A method and device for acquiring image depth information | |
US20100328437A1 (en) | Distance measuring apparatus having dual stereo camera | |
CN108230397A (en) | Multi-lens camera is demarcated and bearing calibration and device, equipment, program and medium | |
WO2022037633A1 (en) | Calibration method and apparatus for binocular camera, image correction method and apparatus for binocular camera, storage medium, terminal and intelligent device | |
CN114714356A (en) | Method for accurately detecting calibration error of hand eye of industrial robot based on binocular vision | |
CN107589069B (en) | Non-contact type measuring method for object collision recovery coefficient | |
CN109584263B (en) | Testing method and system of wearable device | |
CN111609995A (en) | A kind of optical module assembly adjustment test method and device | |
CN108108021A (en) | The outer parameter correction gauge of tracing of human eye system and bearing calibration | |
CN113298886B (en) | Calibration method of projector | |
CN210322247U (en) | Optical module assembly and debugging testing device | |
CN110505468A (en) | A kind of augmented reality shows the test calibration and deviation correction method of equipment | |
CN111044262A (en) | Near-to-eye display optical-mechanical module detection device | |
CN115684012A (en) | Visual inspection system, calibration method, device and readable storage medium | |
CN111833394A (en) | Camera calibration method, measurement method based on binocular measurement device | |
CN115511933A (en) | Binocular image combination detection device and method and storage medium | |
CN115393311A (en) | Binocular vision distance measurement method based on baseline distance | |
CN110363818A (en) | The method for detecting abnormality and device of binocular vision system | |
TWI712310B (en) | Detection method and detection system for calibration quality of stereo camera | |
US20130011010A1 (en) | Three-dimensional image processing device and three dimensional image processing method | |
CN112712566A (en) | Binocular stereo vision sensor measuring method based on structure parameter online correction | |
CN115082555B (en) | A high-precision real-time displacement measurement system and method for RGBD monocular camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |