
US10142612B2 - One method of binocular depth perception based on active structured light - Google Patents

One method of binocular depth perception based on active structured light

Info

Publication number
US10142612B2
US10142612B2 (application US14/591,083; US201514591083A)
Authority
US
United States
Prior art keywords
camera
input image
coded pattern
block
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/591,083
Other versions
US20150229911A1 (en)
Inventor
Chenyang Ge
Nanning ZHENG
Huimin YAO
Yanhui Zhou
Jiankun LUN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Assigned to GE, Chenyang and ZHENG, Nanning. Assignment of assignors' interest (see document for details). Assignors: GE, Chenyang; LUN, Jiankun; YAO, Huimin; ZHENG, Nanning; ZHOU, Yanhui
Publication of US20150229911A1 publication Critical patent/US20150229911A1/en
Application granted granted Critical
Publication of US10142612B2 publication Critical patent/US10142612B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/254 Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 2013/0074 Stereoscopic image analysis
    • H04N 2013/0081 Depth or disparity estimation from stereoscopic image signals


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a method of binocular depth perception based on active structured light. A coded pattern projector projects a coded pattern to apply structured-light coding (active characteristic calibration) to the projective space or target object. The coded pattern is then captured by two cameras that lie on the same baseline as the projector and are located symmetrically on its two sides. After preprocessing and projection shadow detection, block matching motion estimation is carried out in two modes on the image blocks (binocular block matching and matching against the respective reference coded patterns) to obtain the offset of the optimal matching block. Finally, the depth value is computed according to the formula for depth calculation and the projection shadows are depth-compensated, generating high-resolution and high-precision depth information.

Description

FIELD OF THE INVENTION
The present invention belongs to the fields of image processing, human-computer interaction and machine vision, and specifically relates to a method of binocular depth perception based on active structured light.
BACKGROUND OF THE INVENTION
Vision is the most direct and important way for humans to observe and recognize the world. We live in a three-dimensional world, and human vision can perceive not only the brightness, color, texture and movement of an object's surface, but also its shape and spatial position (depth and distance). A central difficulty in machine vision research is obtaining high-precision 3D depth information in real time and thereby raising the intelligence level of machines.
In industry, depth perception technology and devices provide high-resolution, high-precision 3D depth information, which is widely demanded in automotive driving assistance, high-speed machine-tool processing, industrial modeling, 3D printing, medical imaging and 3D visual perception in the Internet of Things (IoT). In consumer electronics, depth perception technology and devices raise the intelligence and interaction capability of electronic products, bringing users brand-new human-machine interaction experiences and enabling innovative applications in smart TVs, smartphones, household appliances, tablet PCs, and so on.
Depth perception technologies can be roughly divided into passive and active solutions. Traditional binocular stereoscopic ranging is a passive method: it is strongly affected by ambient light and requires a complex stereo matching process. Active ranging methods mainly comprise structured-light coding and time-of-flight (ToF). Among them, the active visual mode based on structured-light coding obtains image depth information more accurately, is not affected by ambient light and uses a simpler stereo matching process; examples include Microsoft's Kinect somatosensory interaction device, the "Depth Perception Device and System" patent application filed by Apple in 2013, and the depth camera released by Intel in 2014, all of which actively project a laser pattern to calculate depth. The depth perception devices currently developed by Microsoft, Apple and Intel all receive the pattern with a single camera, so they are mainly suitable for consumer electronics and cannot satisfy the requirements of automotive driving assistance, industrial applications, 3D printing and other fields in terms of depth-image resolution, precision and scope of application. At the same time, the stereo matching process is seriously affected by factors such as illumination, texture and occlusion, and suffers from more errors, a larger amount of computation and greater difficulty in generating real-time depth images.
SUMMARY OF THE INVENTION
In view of the above, the present invention provides a method of binocular depth perception based on active structured light, an active visual mode based on structured-light coding: first, a coded pattern projector (a laser pattern projector or any other projection device) projects a structured-light coded pattern onto the projective space or target object; the coded pattern is then captured simultaneously by two cameras fixed on the same baseline as the coded pattern projector and located symmetrically at equal distances on its two sides; next, two kinds of block matching methods are used to calculate the motion vectors; finally, depth calculation and depth compensation are completed to generate high-resolution, high-precision image depth information (distance).
According to the present invention, a method of binocular depth perception based on active structured light comprises the following steps:
Step 1: adopt an active visual mode of structured light coding, use a coded pattern projector to project a coded pattern and carry out structured light coding of the projective space, namely, carry out characteristics calibration in an active manner;
Step 2: the binocular cameras acquire and fix their respective reference coded patterns Rl and Rr;
Step 3: the binocular cameras respectively acquire their input images Il and Ir containing the coded patterns, and the input images Il and Ir are preprocessed;
Step 4: use the preprocessed input images Il and Ir to detect the projection shadows of the target object, marked as Al and Ar respectively;
Step 5: use two modes of block matching motion estimation to generate the offsets, namely the motion vectors: binocular block matching between the input images Il and Ir yields the X-axis offset Δxl,r or the Y-axis offset Δyl,r; block matching between the input images Il and Ir and their corresponding reference coded patterns Rl and Rr yields the X-axis offsets Δxl and Δxr or the Y-axis offsets Δyl and Δyr;
Step 6: carry out depth calculation, including:
(6a) Select the offset Δxl,r or Δyl,r and, combining the focal length f of the camera image sensor, the baseline distance 2S between the two cameras and the dot pitch parameter μ of the camera image sensor, calculate the depth information dl,r of the central point o of the projection image block blockm×n according to the formula for depth calculation;
(6b) Select the offsets Δxl and Δxr or Δyl and Δyr and, combining the given distance parameter d of the reference coded pattern, the focal length f of the camera image sensor, the baseline distance S between each camera and the coded pattern projector, and the dot pitch parameter μ of the camera image sensor, calculate the depth information dl and dr of the central point o of the projection image block blockm×n at the same position in the input images Il and Ir according to the formula for depth calculation;
Step 7: depth compensation—use the depth information dl and dr, combine the projection shadow areas Al and Ar detected in Step 4 to compensate and correct the depth information dl,r, and output the final depth value dout of the central point o on the projection image block blockm×n;
Step 8: move the central point o of the projection image block to the next pixel in the same line, repeat the steps 5-7 to calculate the depth value corresponding to the next pixel, then follow such calculation sequence from left to right and from top to bottom line by line to obtain the depth information of the whole image based on point-by-point calculation.
The present invention combines the advantages of the binocular stereoscopic ranging and active structured light coding to achieve a substantial increase in precision and spatial resolution of the depth ranging. Moreover, the beneficial effects based on the technical solution of the present invention will be concretely demonstrated by further explanation in the following implementation examples.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates the flow chart for the binocular depth perception method of an active structured light in the implementation example of the present invention;
FIG. 2 illustrates the structure schematic for binocular cameras in the implementation example of the present invention;
FIG. 3 illustrates the schematic diagram for the coded image projector, binocular cameras' field of view and projection shadows in the implementation example of the present invention;
FIG. 4 illustrates the structure of the calculation module for depth perception of the binocular cameras in the implementation example of the present invention;
FIG. 5 illustrates the schematic of the input image block and the search for an optimal matching block in the implementation example of the present invention;
FIG. 6 illustrates the schematic for calculations of binocular block matching depth in the implementation example of the present invention;
FIG. 7 illustrates the schematic for FOV (field-of-view) integration of the binocular cameras in the implementation example of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Further details of the present invention are explained below with reference to concrete implementation examples.
In general, the binocular depth perception method based on active structured light in the implementation example of the present invention is an active visual mode based on structured-light coding: a coded pattern projector (a laser pattern projector or any other projection device) projects a structured-light coded pattern onto the projective space or target object; the coded pattern is then captured simultaneously by two cameras fixed on the same baseline as the coded pattern projector and located symmetrically at equal distances on its two sides; two kinds of block matching methods are used to calculate the motion vectors; finally, depth calculation and depth compensation are completed to generate high-resolution, high-precision image depth information (distance).
FIG. 1 illustrates the overall process of the binocular depth perception method based on active structured light in the implementation example of the present invention. For clarity, the method is described below with reference to FIGS. 2, 3, 4, 5, 6 and 7.
Step 1: the coded pattern projector carries out spatial coding. It adopts an active visual mode of structured light coding, using a coded pattern projector (laser pattern projector or any other projection device) to project a coded pattern so as to carry out structured light coding of the projective space or target object, namely, carrying out characteristic calibration in an active manner;
The above-mentioned coded pattern projector can be a laser speckle projector, a laser character projector or any other projection device. Preferably, the laser speckle projector projects coherent laser beams (infrared, visible, ultraviolet or other invisible light) that form speckle images composed of scattered spots through interference and diffuse reflection from the object surface; the laser character projector projects patterns made up of regular characters or symbols; a general projection device can likewise project controllable coded patterns. The pattern projected by the coded pattern projector is usually fixed, or it can be changed under a certain control strategy after being synchronized with the image-receiving sensor of the camera; within a certain horizontal or vertical range, the features of the pattern do not repeat and are distributed randomly. The field of view (FoV) of the projector (both horizontal and vertical) is generally larger than that of the receiving camera.
Step 2: the binocular cameras acquire and fix their respective reference coded patterns Rl and Rr;
Preferably, the binocular cameras are two independent cameras with identical performance indexes (the same optical lens and image sensor), arranged symmetrically at the same distance on the left and right sides of the coded pattern projector. Their optical axes are parallel to that of the coded pattern projector and lie on the same baseline, and they receive the coded pattern within a certain range of wavelengths, as shown in FIG. 2. The focal length of the camera image sensor is f, the baseline distance from each camera to the coded pattern projector is S, and the dot pitch parameter of the camera image sensor is μ.
In practical applications, the baselines of the two cameras can be adjusted according to different needs, or two cameras of different models or focal lengths can be used to meet different functional requirements. Generally, the binocular cameras only receive patterns projected within a certain range of wavelengths, in order to minimize interference from other light sources or beams and thus receive the patterns projected by the coded pattern projector clearly and stably.
Before being put into operation, the binocular cameras must first acquire and fix their respective reference coded patterns as the benchmark for matching comparison. The reference coded patterns are acquired as follows: the coded pattern is projected onto a plane perpendicular to the optical center axis (Z-axis) of the projector at a perpendicular distance d from the projector (this plane, which can be formed by a projection cloth, a panel or the like capable of presenting clear and stable images, is called the reference benchmark plane); the cameras acquire static images of this plane, which, after preprocessing, are stored and fixed in memory as the standard patterns serving as the matching benchmark for depth perception calculation. Preferably, the reference coded pattern of a laser speckle projector is a standard speckle pattern composed of multiple scattered spots at the known distance d. The reference coded patterns can be obtained by the above method, which is given only to illustrate and not to limit the implementation example; those skilled in the art can also obtain such reference patterns in other ways.
Step 3: the binocular cameras respectively acquire the input images Il and Ir containing the coded patterns, and the input images Il and Ir are preprocessed;
The input images containing the coded patterns received by the binocular cameras may contain a target object whose depth is unknown but which lies within the effective range of the coded pattern projector and the cameras.
The above-mentioned image preprocessing refers to adaptive and consistent processing of input images with different characteristics, so as to make the patterns clearer, reduce false matching and noise interference, and assist the depth perception calculation of the present invention. Preferably, the preprocessing methods include video format conversion (e.g., Bayer, ITU601 and ITU656 video decoding, or MIPI interface format conversion), color space conversion (for example, from RGB to YUV), and adaptive denoising and enhancement of the grey image; the enhancement methods include histogram enhancement, grey linear enhancement and binarization, among others, but are not limited to these traditional enhancement methods. The reference coded patterns and the real-time input images acquired by the binocular cameras all pass through the same image preprocessing module.
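As an illustration only (not part of the disclosed method), the following is a minimal preprocessing sketch in Python using the OpenCV library; the chosen functions and parameter values are assumptions:

import cv2

def preprocess(frame_bgr):
    # Color space conversion: keep only the luminance (grey) channel.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Adaptive denoising of the grey image (filter strength h is illustrative).
    denoised = cv2.fastNlMeansDenoising(gray, None, h=10)
    # Histogram enhancement followed by binarization (Otsu threshold).
    enhanced = cv2.equalizeHist(denoised)
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary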
Step 4: use the preprocessed input images Il and Ir to detect the projection shadows of the target object, marked as Al and Ar respectively;
The above-mentioned projection shadow area is an area at the edge of the target object that carries no coded pattern, because during projection the coded pattern is occluded by the edge of the target object as seen by the camera. FIG. 3 schematically illustrates the projection shadow areas Al and Ar caused by such occlusion of the target object when the left and right cameras receive their input images.
Preferably, the projection shadow areas are detected as follows: count the feature points contained in an input image block of a certain size; if the count is smaller than a predetermined threshold, the area of this input image block is judged to be a projection shadow area. Taking a laser speckle projector as an example, the projected coded patterns are speckle images composed of scattered spots; the number of scattered spots in an input image block of a certain size is counted, and if it is smaller than the predetermined threshold, the block is judged to be a projection shadow area. The projection shadow detection of the present invention is not limited to laser speckle images and can also be applied to the characteristic information of other coded patterns. Generally, areas outside the effective range of the coded pattern projector and the cameras can also be treated as projection shadow areas.
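As an illustration of this block-wise counting rule, a sketch in Python follows; the block size and threshold are assumptions, and speckle pixels are counted as a simple proxy for speckle points:

import numpy as np

def detect_projection_shadow(binary_img, block=16, min_points=8):
    # binary_img: preprocessed image in which speckle points are non-zero pixels.
    # A block containing fewer speckle pixels than min_points is marked as shadow.
    h, w = binary_img.shape
    shadow = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = binary_img[y:y + block, x:x + block]
            if np.count_nonzero(patch) < min_points:
                shadow[y:y + block, x:x + block] = True
    return shadow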
Step 5: use the two modes of block matching motion estimation to generate the offsets, namely the motion vectors: binocular block matching between the input images Il and Ir yields the X-axis offset Δxl,r or the Y-axis offset Δyl,r; block matching between the input images Il and Ir and their corresponding reference coded patterns Rl and Rr yields the X-axis offsets Δxl and Δxr or the Y-axis offsets Δyl and Δyr;
The binocular cameras first acquire the input image sequence and send it to the depth perception calculation module shown in FIG. 4; after the preprocessing of Step 3, the images are sent to the block matching motion estimation module, where matching is calculated according to the two modes of block matching motion estimation.
The first mode is the binocular block matching calculation between the input images Il and Ir, specifically as follows:
In the input image Il, extract an input image block B′, blockm×n, of a certain size centered at point o; in the input image Ir, extract a matching search window MatchM×N of a certain size corresponding to the central point o of the input image block (the size of MatchM×N is M×N; M and N are integers, equal or unequal; generally M≥N, M>m, N≥n); then, within the matching search window MatchM×N, extract all matching blocks matchk of the same size m×n as the input image block, with matching central points ok, where k is an integer indexing the matching blocks.
Then calculate the similarity value match_valuek between the input image block B′ blockm×n and the k-th matching block matchk; these values serve as the index measuring the similarity of the image blocks.
Finally, find the minimum among all similarity values match_valuek; the matching block matchk corresponding to this minimum is the optimal matching block B that the image block B′ blockm×n seeks, and the position corresponding to this minimum gives the offset (Δxl,r, Δyl,r) of the central point o of the image block blockm×n, i.e., the motion vector of the input image block B′. As shown in FIG. 5, the input image block is the grey area in the input image Il, and the optimal matching block is the slashed area in the matching search window of the input image Ir. The optimal offset (Δxl,r, Δyl,r) between the central point ok of the slashed area and the central point o of the matching search window MatchM×N (this central point o corresponds to that of the input image block) indicates the displacement in the X and Y directions respectively. The offset value is obtained by subtracting the coordinates (x′, y′) of the central point of the optimal matching block from the coordinates (x, y) of the central point o of the matching search window, in the X and Y directions respectively, taking the absolute value, and expressing the result in the number of pixels.
Another mode is the block matching calculation between the input images Il, Ir and their corresponding reference coded patterns Rl, Rr. Specifically, as shown in FIG. 5: extract the input image block B′ in the input image Il and search its reference coded pattern Rl for the image block B that best matches B′ (i.e., has the highest similarity); likewise, extract the input image block B′ in the input image Ir and search its reference coded pattern Rr for the best-matching image block B. The method of searching for the optimal matching block is the same as in the binocular block matching motion estimation above, yielding the optimal offset (Δxl, Δyl) between the input image block of the input image Il and its optimal matching block, and the optimal offset (Δxr, Δyr) between the input image block of the input image Ir and its optimal matching block. The offset value is obtained by subtracting the coordinates (x′, y′) of the central point of the optimal matching block from the coordinates (x, y) of the central point o of the matching search window corresponding to the input image block, in the X and Y directions respectively, expressed in the number of pixels; the sign indicates whether the point lies farther from or nearer to the reference pattern plane in space.
Preferably, the size of the input image block is chosen according to the relative uniqueness of the block within a certain horizontal or vertical range, that is, the block differs in characteristics from other blocks of the same size, so that it can be distinguished from them.
Preferably, the similarity value is the sum of absolute differences (SAD) between corresponding pixels of the input image block and the matching block, but it is not limited to this measure.
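For illustration, a minimal full-search SAD block matching sketch in Python; the sizes m, n, M, N and the border handling are assumptions, and the offset is returned as the search-window centre minus the best-match centre, following the convention above:

import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def match_block(img_a, img_b, ox, oy, m=16, n=16, M=64, N=16):
    # Search img_b for the m x n block of img_a centred at (ox, oy),
    # inside an M x N search window of img_b centred at the same point.
    # Assumes the block and the window lie fully inside both images.
    hm, hn = m // 2, n // 2
    block = img_a[oy - hn:oy + hn, ox - hm:ox + hm]
    best_sad, best_xy = None, (ox, oy)
    for cy in range(oy - N // 2 + hn, oy + N // 2 - hn + 1):
        for cx in range(ox - M // 2 + hm, ox + M // 2 - hm + 1):
            cand = img_b[cy - hn:cy + hn, cx - hm:cx + hm]
            s = sad(block, cand)
            if best_sad is None or s < best_sad:
                best_sad, best_xy = s, (cx, cy)
    # Offset of the optimal matching block: (x - x', y - y') in pixels.
    return ox - best_xy[0], oy - best_xy[1]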
Step 6: carry out depth calculation, including:
(6a) Select the offset Δxl,r or Δyl,r and, combining the focal length f of the camera image sensor, the baseline distance 2S between the two cameras and the dot pitch parameter μ of the camera image sensor, calculate the depth information dl,r of the central point o of the projection image block blockm×n according to the formula for depth calculation; FIG. 6 shows the geometry of the binocular depth calculation;
Therein, if the binocular camera is arranged horizontally to the coded pattern projector, then select the offset Δxl,r; if the binocular camera is arranged vertically to the coded pattern projector, then select the offset Δyl,r.
In this implementation example, dl,r is calculated according to the following formula for depth calculation, here taking the horizontal offset Δxl,r as the input parameter:
d_{l,r} = \frac{2fS}{\Delta x_{l,r}\,\mu}   (1)
where the horizontal offset Δxl,r is the optimal X-axis offset between the input image block B′ of the input image Il and its optimal matching block B in the input image Ir, that is, the x coordinate of the central point of the matching search window in the input image Ir minus the x′ coordinate of the central point of the optimal matching block B found in that window by the input image block B′, taken as an absolute value and expressed in the number of pixels.
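A small numerical illustration of formula (1) follows; the parameter values (f, S, μ) are assumptions chosen only to show the arithmetic:

def depth_from_binocular_offset(delta_x_lr, f=4.0, S=60.0, mu=0.003):
    # Formula (1): d_lr = 2*f*S / (delta_x_lr * mu).
    # f, S and mu in millimetres, delta_x_lr in pixels; result in millimetres.
    return 2.0 * f * S / (delta_x_lr * mu)

# Example (assumed values): f = 4 mm, S = 60 mm (camera baseline 2S = 120 mm),
# mu = 0.003 mm/pixel, disparity 40 pixels -> depth = 4000 mm.
print(depth_from_binocular_offset(40))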
(6b) Select the offsets Δxl and Δxr or Δyl and Δyr and, combining the given distance parameter d of the reference coded pattern, the focal length f of the camera image sensor, the baseline distance S between each camera and the coded pattern projector, and the dot pitch parameter μ of the camera image sensor, calculate the depth information dl and dr of the central point o of the projection image block blockm×n at the same position in the input images Il and Ir according to the formula for depth calculation;
Therein, if the binocular camera is arranged horizontally to the coded pattern projector, then select the offset Δxl, Δxr; if the binocular camera is arranged vertically to the coded pattern projector, then select the offset Δyl, Δyr.
In this implementation example, dl and dr are calculated according to the following formula for depth calculation, here taking the horizontal offsets Δxl and Δxr as input parameters:
d_l = \frac{fSd}{fS + \Delta x_l\,\mu d}, \quad d_r = \frac{fSd}{fS + \Delta x_r\,\mu d}   (2)
where Δxl and Δxr are the optimal X-axis offsets from the input image blocks of the input images Il and Ir to their corresponding optimal matching blocks, expressed in the number of pixels.
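Correspondingly, a sketch of formula (2) with the same assumed parameters; here the offset keeps its sign, since positive and negative offsets lie on opposite sides of the reference plane at distance d:

def depth_from_reference_offset(delta_x, d_ref, f=4.0, S=60.0, mu=0.003):
    # Formula (2): d = f*S*d_ref / (f*S + delta_x*mu*d_ref).
    # delta_x in pixels (signed); d_ref, f, S and mu in millimetres.
    return f * S * d_ref / (f * S + delta_x * mu * d_ref)

# Example (assumed values): with d_ref = 2000 mm, an offset of +10 pixels gives
# depth_from_reference_offset(10, 2000) = 1600 mm, and -10 pixels gives about 2667 mm.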
Step 7: depth compensation—use the depth information dl and dr and combine the projection shadow areas Al and Ar detected in Step 4 to compensate and correct the depth information dl,r, and output the final depth value dout of the central point o on the projection image block blockm×n.
According to the schematic diagram of the FOV integration of the binocular cameras shown in FIG. 7, the specific method of depth compensation is as follows: if the central point o of the projection image block falls within the non-overlapping area ① seen only in the left view, select dl as the output dout; if it falls within the non-overlapping area ③ seen only in the right view, select dr as the output dout; if it falls within the overlapping area ② of the left and right views, then for non-projection-shadow areas, if |dl − dr| ≤ th1 and |dl,r − (dl + dr)/2| > th2 (that is, if dl and dr differ little while dl,r differs greatly from their average, dl,r is judged to be an erroneous depth value, th1 and th2 being thresholds), select dl or dr as the output dout, otherwise select dl,r as the output; within the overlapping area, for the projection shadow area Al select dr as the output dout, and for the projection shadow area Ar select dl as the output dout.
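A minimal sketch of this selection rule follows; the argument names, the region encoding and the choice of dl in the ambiguous branch are assumptions (dl or dr may equally be chosen there):

def compensate_depth(region, in_shadow_al, in_shadow_ar,
                     d_l, d_r, d_lr, th1, th2):
    # region: 1 = left-only area, 3 = right-only area, 2 = overlapping area,
    # following the numbering of FIG. 7.
    if region == 1:
        return d_l
    if region == 3:
        return d_r
    # Overlapping area (2):
    if in_shadow_al:          # projection shadow Al: take the right-camera depth
        return d_r
    if in_shadow_ar:          # projection shadow Ar: take the left-camera depth
        return d_l
    # Non-shadow area: reject d_lr when d_l and d_r agree but d_lr deviates.
    if abs(d_l - d_r) <= th1 and abs(d_lr - (d_l + d_r) / 2.0) > th2:
        return d_l
    return d_lr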
The above example is only a specific method for depth compensation, but not limited to such method.
Step 8: move the central point o of the projection image block to the next pixel in the same line, repeat Steps 5-7 to calculate the depth value corresponding to the next pixel, and follow this calculation sequence from left to right and from top to bottom, line by line, to obtain the depth information (distance) of the whole image by point-by-point calculation. In the same way, the method can be used to calculate the depth information of an input image sequence.
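Putting the steps together, a sketch of the point-by-point scan is shown below; it reuses the illustrative helpers sketched above, and the margins, reference distance and the assumption that every pixel lies in the overlapping area ② are all simplifications:

import numpy as np

def depth_map(I_l, I_r, R_l, R_r, A_l, A_r, th1, th2, margin=32):
    # Scans the image left to right, top to bottom and fills in one depth
    # value per pixel, as in Steps 5-8 above.
    h, w = I_l.shape
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            dx_lr, _ = match_block(I_l, I_r, x, y)   # binocular matching
            dx_l, _ = match_block(I_l, R_l, x, y)    # left image vs. its reference
            dx_r, _ = match_block(I_r, R_r, x, y)    # right image vs. its reference
            d_lr = depth_from_binocular_offset(abs(dx_lr)) if dx_lr != 0 else float("inf")
            d_l = depth_from_reference_offset(dx_l, d_ref=2000.0)
            d_r = depth_from_reference_offset(dx_r, d_ref=2000.0)
            depth[y, x] = compensate_depth(2, A_l[y, x], A_r[y, x],
                                           d_l, d_r, d_lr, th1, th2)
    return depth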
As an example, the binocular camera pair in the present invention adopts two independent cameras identical in performance indexes (the same optical lens and image sensors), arranged symmetrically at the same distance on the left and right sides of the coded pattern projector, with their optical axes parallel to that of the coded pattern projector and kept on the same baseline; however, the baselines of the two cameras can be adjusted according to different requirements, or two cameras of different focal lengths or models can be adopted.
As an example, the detection of the projection shadow area in the present invention is not limited to the method adopted in this example; the matching-block search strategy in the present invention uses conventional full-search block matching, but other improved search strategies can also be used; the similarity value is calculated as the sum of absolute differences (SAD), but is not limited to this method; the depth-compensation method is likewise not limited to that adopted in this example. All methods similar in flow to that of the present invention should fall within the scope of the claims of the present invention.
In the present invention, the input images may include a series of images captured while the object moves; the movement of the object within the target area can then be tracked according to the estimated positions.
As mentioned above, the preprocessed images of the two cameras can also be stitched together before the depth calculation. However, stitching introduces a large amount of redundant matching computation, and the details of this approach are not described in the implementation example; it nevertheless does not depart from the spirit or scope of the present invention and should be included in the scope of the claims mentioned above.
The above implementation example is realized on a specific system, but it does not restrict the present invention, which can be applied to similar coded-pattern projection and image-sensor systems. The present invention not only supports structured-light modes from different laser sources, such as infrared, visible, ultraviolet and other invisible light, but also applies to projection solutions with different patterns, such as round dots, blocks, cross shapes and stripe patterns. Therefore, any modification and refinement within the spirit and scope of the present invention should be included in the scope of the claims mentioned above.

Claims (8)

What is claimed is:
1. A method of binocular depth perception based on active structured light, comprising the following steps of:
Step 1: projecting coherent laser beams, by a coded pattern projector, with a coded pattern to carry out structured light coding for a target object with an unknown depth;
Step 2: arranging a first camera and a second camera symmetrically at the same distances on the left side and right side of the coded pattern projector to acquire and fix their respective reference coded pattern Rl and reference coded pattern Rr, the first camera and the second camera being two separate and distinct components and each having the same or substantially the same optical lens and image sensor, and sharing the same baseline with the coded pattern projector and receiving the coded pattern within the range of a wavelength;
Step 3: acquiring input image Il, by the first camera, and acquiring input image Ir, by the second camera, each of the input image Il and the input image Ir containing the coded pattern and the target object and preprocessing the input images Il and Ir, wherein the preprocessing includes video format conversion, color space conversion, and grey image adaptive denoising and enhancement;
Step 4: using the input image Il and the input image Ir after being preprocessed to detect projection shadow areas Al and Ar of the target object respectively, wherein the projection shadow area Ar located behind the left side of the target object is detected in the input image Il and the projection shadow area Al located behind the right side of the target object is detected in the input image Ir;
Step 5: performing two block matching motion estimations: a first block matching motion estimation based on the symmetric arrangements and equal distances of the first camera and the second camera from the coded pattern projector and a second block matching motion estimation, to generate the offsets respectively, wherein the first block matching motion estimation is to perform a binocular block matching calculation between a first input image block of the input image Il and a corresponding matching image block of the input image Ir based on the symmetric arrangements and equal distances of the first camera and the second camera from the projector and get an X-axis offset Δxl,r or a Y-axis offset Δyl,r; and the second block matching motion estimation is to perform (1) a first block matching calculation between the first input image block of the input image Il and a corresponding matching image block with the reference coded pattern Rl to get an X-axis offset Δxl and a Y-axis offset Δyl and (2) a second block matching calculation between a second input image block of the input image Ir and a corresponding matching image block with the reference coded pattern Rr to get an X-axis offset Δxr or a Y-axis offset Δyr, wherein the block matching motion estimation is based on similarity values between input images and corresponding matching images;
Step 6: carrying out depth calculation, including:
(6a) selecting the X-axis offset Δxl,r or the Y-axis offset Δyl,r and combining the focal length f of the image sensor, the baseline distance S between the first camera and the second camera, and a dot pitch parameter μ of the image sensor to obtain depth information dl,r for a central point o of an image blockm×n;
(6b) selecting the X-axis offset Δxl and Δxr or the Y-axis offset Δyl and Δyr and combining a given distance parameter d of the reference coded pattern Rl and reference coded pattern Rr, the focal length f of the image sensor, the baseline distance S between the first camera and the coded pattern projector, as well as the dot pitch parameter μ of the image sensor to obtain depth information dl and dr respectively for the central point o of the image blockm×n corresponding to the same position in each of the input image Il and the input image Ir;
Step 7: performing depth compensation, including using the depth information dl and dr, combining the projection shadow areas Al and Ar detected in Step 4 to compensate and correct the depth information dl,r, and outputting a final depth value dout of the central point o on the image blockm×n;
Step 8: moving the central point o of the image blockm×n to a next pixel in the same line, repeating the steps 5-7 to calculate a depth value corresponding to the next pixel and following such calculation sequence from left to right and from top to bottom line by line to obtain the depth information of the input image Il and the input image Ir each comprising the target object based on point-by-point calculation.
2. The method according to claim 1, wherein the reference coded pattern Rl and the reference coded pattern Rr, as mentioned in Step 2, are standard patterns stored and fixed in the memory for matching benchmark and depth perception calculation after preprocessing of static images acquired by the first camera and the second camera when the coded pattern projector projects the coded pattern onto a plane perpendicular to the optical center axis of the coded pattern projector at the distance d from the coded pattern projector.
3. The method according to claim 1, wherein the projection shadow area detection as mentioned in Step 4 is determined by detecting the number of the feature points contained in the image blockm×n.
4. The method according to claim 1, wherein in Step (6a), if the first camera and the second camera are arranged horizontally to the coded pattern projector, then selecting the X-axis offset Δxl,r; if the first camera and the second camera are arranged vertically to the coded pattern projector, then selecting the offset Δyl,r.
5. The method according to claim 1, wherein in Step (6a), if Δxl,r is selected, then the formula for depth calculation is as follows:
d_{l,r} = \frac{2fS}{\Delta x_{l,r}\,\mu}   (1)
wherein the X-axis offset Δxl,r is an X-axis optimal offset of an optimal matching block B on the input image Ir corresponding to an input image block B′ of the input image Il, and wherein the X-axis optimal offset is the x coordinate value of the central point in the matching search window of the input image Ir subtracted by the x′ coordinate value of the central point of the optimal matching block B searched in the matching search window by the input image block B′, and wherein the X-axis optimal offset is changed into a value expressed in the number of pixels.
6. The method according to claim 1, wherein in Step (6b), if the first camera and the second camera are arranged horizontally to the coded pattern projector, then selecting the X-axis offset Δxl and Δxr; if the first camera and the second camera are arranged vertically to the coded pattern projector, then selecting the Y-axis offset Δyl and Δyr.
7. The method according to claim 1, wherein in Step (6b), if Δxl and Δxr are selected, then the formula for depth calculation is as follows:
d_l = \frac{fSd}{fS + \Delta x_l\,\mu d}, \quad d_r = \frac{fSd}{fS + \Delta x_r\,\mu d}   (2)
wherein Δxl, Δxr indicate X-axis optimal offsets respectively for input image blocks of the input images Il and Ir to corresponding optimal matching blocks, expressed in the number of pixels.
8. The method according to claim 1, wherein in Step (7), the specific method for depth compensation is as follows:
if the central point o of the projection image block falls within a non-cross area of the left view, then selecting dl as the output dout;
if the central point o of the projection image block falls within a non-cross area of the right view, then selecting dr as the output dout;
if the central point o of the image block falls within the cross area, then for non-projection shadow areas, when the difference between dl and dr is no greater than th1 and the difference between dl,r and the average of dl and dr is greater than th2,
then selecting dl or dr as the output dout, otherwise selecting dl,r as the output; and
if the central point o of the image block falls within the cross area,
for the projection shadow area Al, selecting dr as the output dout;
for the projection shadow area Ar, then selecting dl as the output dout, wherein th1 is a first threshold and th2 is a second threshold.
US14/591,083 2014-02-13 2015-01-07 One method of binocular depth perception based on active structured light Active 2036-05-12 US10142612B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201410050675.7A CN103796004B (en) 2014-02-13 2014-02-13 A kind of binocular depth cognitive method of initiating structure light
CN201410050675 2014-02-13
CN201410050675.7 2014-02-13

Publications (2)

Publication Number Publication Date
US20150229911A1 US20150229911A1 (en) 2015-08-13
US10142612B2 (en) 2018-11-27

Family

ID=50671229

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/591,083 Active 2036-05-12 US10142612B2 (en) 2014-02-13 2015-01-07 One method of binocular depth perception based on active structured light

Country Status (2)

Country Link
US (1) US10142612B2 (en)
CN (1) CN103796004B (en)


Families Citing this family (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8866912B2 (en) 2013-03-10 2014-10-21 Pelican Imaging Corporation System and methods for calibration of an array camera using a single captured image
US10250871B2 (en) * 2014-09-29 2019-04-02 Fotonation Limited Systems and methods for dynamic calibration of array cameras
US12259231B2 (en) 2015-01-18 2025-03-25 Dentlytec G.P.L. Ltd. Intraoral scanner
US10966614B2 (en) 2015-01-18 2021-04-06 Dentlytec G.P.L. Ltd. Intraoral scanner
US9948920B2 (en) 2015-02-27 2018-04-17 Qualcomm Incorporated Systems and methods for error correction in structured light
US10068338B2 (en) 2015-03-12 2018-09-04 Qualcomm Incorporated Active sensing spatial resolution improvement through multiple receivers and code reuse
DE112015006245B4 (en) * 2015-03-30 2019-05-23 Fujifilm Corporation Distance image detection device and distance image detection method
CN104802710B (en) * 2015-04-17 2017-07-11 浙江大学 A kind of intelligent automobile reversing aid system and householder method
CN104749803A (en) * 2015-04-21 2015-07-01 京东方科技集团股份有限公司 Imaging system of 3D printing device, imaging method and 3D printing device
CN104952074B (en) * 2015-06-16 2017-09-12 宁波盈芯信息科技有限公司 Storage controlling method and device that a kind of depth perception is calculated
KR102056015B1 (en) 2015-06-19 2019-12-13 상하이 퍼시피오 테크놀로지 리미티드 Depth Data Detection Device and Monitoring Device
CN105004282B (en) * 2015-06-19 2018-01-16 上海图漾信息科技有限公司 Depth data detection means
EP3301913A4 (en) * 2015-06-23 2018-05-23 Huawei Technologies Co., Ltd. Photographing device and method for acquiring depth information
EP3326153A1 (en) * 2015-07-17 2018-05-30 Koninklijke Philips N.V. Device and method for determining a position of a mobile device in relation to a subject
US9635339B2 (en) 2015-08-14 2017-04-25 Qualcomm Incorporated Memory-efficient coded light error correction
CN105120257B (en) * 2015-08-18 2017-12-15 宁波盈芯信息科技有限公司 A kind of vertical depth sensing device based on structure light coding
US9846943B2 (en) 2015-08-31 2017-12-19 Qualcomm Incorporated Code domain power control for structured light
CN105430297B (en) * 2015-12-11 2018-03-30 中国航空工业集团公司西安航空计算技术研究所 The automatic control system that more video formats are changed to IIDC protocol videos form
CN107025830A (en) * 2016-01-29 2017-08-08 北京新唐思创教育科技有限公司 Simulation method and device for teaching experiment
CN105931240B (en) 2016-04-21 2018-10-19 西安交通大学 Three dimensional depth sensing device and method
WO2017220598A1 (en) * 2016-06-20 2017-12-28 Cognex Corporation Method for the three dimensional measurement of moving objects during a known movement
US10609359B2 (en) * 2016-06-22 2020-03-31 Intel Corporation Depth image provision apparatus and method
CN106170086B (en) * 2016-08-19 2019-03-15 深圳奥比中光科技有限公司 Method and device thereof, the system of drawing three-dimensional image
US12285188B2 (en) 2016-09-10 2025-04-29 Ark Surgical Ltd. Laparoscopic workspace device
US11690604B2 (en) 2016-09-10 2023-07-04 Ark Surgical Ltd. Laparoscopic workspace device
CN106791763B (en) * 2016-11-24 2019-02-22 深圳奥比中光科技有限公司 A kind of application specific processor for 3D display and 3D interaction
CN108693538A (en) * 2017-04-07 2018-10-23 北京雷动云合智能技术有限公司 Accurate confidence level depth camera range unit based on binocular structure light and method
CN107144257B (en) * 2017-05-16 2019-03-26 江苏省电力试验研究院有限公司 A kind of binocular distance measurement method and device of charged electric power apparatus detection
EP4154845A1 (en) 2017-07-04 2023-03-29 Dentlytec G.P.L. Ltd. Dental device with probe
US11690701B2 (en) * 2017-07-26 2023-07-04 Dentlytec G.P.L. Ltd. Intraoral scanner
CN107343122A (en) * 2017-08-02 2017-11-10 深圳奥比中光科技有限公司 3D imaging devices
CN109870126A (en) * 2017-12-05 2019-06-11 宁波盈芯信息科技有限公司 A kind of area computation method and a kind of mobile phone for being able to carry out areal calculation
CN109903719A (en) 2017-12-08 2019-06-18 宁波盈芯信息科技有限公司 A kind of the structure light coding method for generating pattern and device of space-time code
CN110177266B (en) * 2017-12-18 2021-02-26 西安交通大学 Self-correcting method and device of structured light 3D depth camera
CN107917701A (en) * 2017-12-28 2018-04-17 人加智能机器人技术(北京)有限公司 Measuring method and RGBD camera systems based on active binocular stereo vision
CN108234874B (en) * 2018-01-10 2020-07-21 南京华捷艾米软件科技有限公司 Method and device for adjusting imaging precision of somatosensory camera
CN110326028A (en) * 2018-02-08 2019-10-11 深圳市大疆创新科技有限公司 Method, apparatus, computer system and the movable equipment of image procossing
CN108495113B (en) * 2018-03-27 2020-10-27 百度在线网络技术(北京)有限公司 Control method and device for binocular vision system
US11017540B2 (en) 2018-04-23 2021-05-25 Cognex Corporation Systems and methods for improved 3-d data reconstruction from stereo-temporal image sequences
CN108645353B (en) * 2018-05-14 2020-09-01 四川川大智胜软件股份有限公司 Three-dimensional data acquisition system and method based on multi-frame random binary coding light field
CN109191562B (en) * 2018-07-15 2022-10-25 黑龙江科技大学 3D reconstruction method based on color pseudorandom coding structured light
CN109190484A (en) * 2018-08-06 2019-01-11 北京旷视科技有限公司 Image processing method, device and image processing equipment
CN112101338B (en) * 2018-08-14 2021-04-30 成都佳诚弘毅科技股份有限公司 Image restoration method based on VIN image acquisition device
CN109194780B (en) * 2018-08-15 2020-08-25 信利光电股份有限公司 Rotation correction method and device of structured light module and readable storage medium
CN109194947A (en) * 2018-09-13 2019-01-11 广东光阵光电科技有限公司 Binocular camera module and mobile terminal
CN109270546A (en) * 2018-10-17 2019-01-25 郑州雷动智能技术有限公司 Distance measuring device based on structured light and double image sensors and distance measuring method thereof
CN109756660B (en) * 2019-01-04 2021-07-23 Oppo广东移动通信有限公司 Electronic Devices and Mobile Platforms
CN109887022A (en) * 2019-02-25 2019-06-14 北京超维度计算科技有限公司 A kind of characteristic point matching method of binocular depth camera
CN111739111B (en) * 2019-03-20 2023-05-30 上海交通大学 Method and system for intra-block offset optimization of point cloud projection encoding
CN111901502A (en) * 2019-05-06 2020-11-06 三赢科技(深圳)有限公司 Camera module
WO2020230921A1 (en) * 2019-05-14 2020-11-19 엘지전자 주식회사 Method for extracting features from image using laser pattern, and identification device and robot using same
CN110853086A (en) * 2019-10-21 2020-02-28 北京清微智能科技有限公司 Depth image generation method and system based on speckle projection
CN110784706B (en) * 2019-11-06 2021-08-31 Oppo广东移动通信有限公司 Information processing method, encoding device, decoding device, system, and storage medium
CN112926367B (en) * 2019-12-06 2024-06-21 杭州海康威视数字技术股份有限公司 Living body detection equipment and method
WO2021120217A1 (en) * 2019-12-20 2021-06-24 深圳市汇顶科技股份有限公司 Image acquisition apparatus, image acquisition method and acquisition chip
CN111307069B (en) * 2020-04-11 2023-06-02 武汉玄景科技有限公司 Compact parallel line structured light three-dimensional scanning method and system
CN111336950B (en) * 2020-04-11 2023-06-02 武汉玄景科技有限公司 Single frame measurement method and system combining space coding and line structured light
CN113763295B (en) * 2020-06-01 2023-08-25 杭州海康威视数字技术股份有限公司 Image fusion method, method and device for determining image offset
CN111783877B (en) * 2020-06-30 2023-08-01 西安电子科技大学 Depth information measurement method based on single-frame grid composite coding template structured light
CN112129262B (en) * 2020-09-01 2023-01-06 珠海一微半导体股份有限公司 Visual ranging method and visual navigation chip of multi-camera group
CN112562008B (en) * 2020-11-30 2022-04-08 成都飞机工业(集团)有限责任公司 Target point matching method in local binocular vision measurement
CN112542248B (en) * 2020-11-30 2022-11-08 清华大学 Helmet and augmented reality projection method
CN113155417B (en) * 2021-04-25 2022-10-18 歌尔股份有限公司 Offset state test method, test equipment and storage medium
CN113251951B (en) * 2021-04-26 2024-03-01 湖北汽车工业学院 Calibration method of line structured light vision measurement system based on single calibration surface mapping
CN113538548B (en) * 2021-06-24 2024-09-06 七海测量技术(深圳)有限公司 3D detection system and method for semiconductor tin ball
CN113932736B (en) * 2021-09-23 2022-12-02 华中科技大学 A 3D measurement method and system based on structured light
CN113936049A (en) * 2021-10-21 2022-01-14 北京的卢深视科技有限公司 Monocular structured light speckle image depth recovery method, electronic device and storage medium
CN113973167A (en) * 2021-10-28 2022-01-25 维沃移动通信有限公司 Camera assembly, electronic device and image generation method
CN114445793A (en) * 2021-12-20 2022-05-06 桂林电子科技大学 An intelligent driving assistance system based on artificial intelligence and computer vision
CN114263352B (en) * 2022-01-07 2023-10-03 中国建筑第八工程局有限公司 Reinforcement binding robot and identification method of reinforcement intersection
CN114404084B (en) * 2022-01-21 2024-08-02 北京大学口腔医学院 Scanning device and scanning method
CN114440834B (en) * 2022-01-27 2023-05-02 中国人民解放军战略支援部队信息工程大学 A Matching Method of Object-Space and Image-Space for Non-coded Signs
CN114723828B (en) * 2022-06-07 2022-11-01 杭州灵西机器人智能科技有限公司 Multi-line laser scanning method and system based on binocular vision
CN115082621B (en) * 2022-06-21 2023-01-31 中国科学院半导体研究所 Three-dimensional imaging method, device and system, electronic equipment and storage medium
CN114877826B (en) * 2022-07-11 2022-10-14 南京信息工程大学 Binocular stereo matching three-dimensional measurement method, system and storage medium
CN115184378B (en) * 2022-09-15 2024-03-29 北京思莫特科技有限公司 Concrete structure disease detection system and method based on mobile equipment
CN116105600B (en) * 2023-02-10 2023-06-13 深圳市中图仪器股份有限公司 Aiming target method based on binocular camera, processing device and laser tracker
CN116300039A (en) * 2023-03-09 2023-06-23 成都弘照科技有限公司 Digital image stereo microscope system and method for baseline-variable phase angle photography
CN116977403B (en) * 2023-09-20 2023-12-22 山东科技大学 Binocular vision-based film production breadth detection and control method
CN117558106B (en) * 2023-11-24 2024-05-03 中国地质科学院探矿工艺研究所 Non-contact type surface deformation quantitative monitoring and early warning method and monitoring system
CN117764864B (en) * 2024-02-22 2024-04-26 济南科汛智能科技有限公司 Nuclear magnetic resonance tumor visual detection method based on image denoising


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130027548A1 (en) * 2011-07-28 2013-01-31 Apple Inc. Depth perception device and system
CN102970548B (en) * 2012-11-27 2015-01-21 西安交通大学 Image depth sensing device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6262743B1 (en) * 1995-06-22 2001-07-17 Pierre Allio Autostereoscopic image acquisition method and system
US20110043613A1 (en) * 2008-01-04 2011-02-24 Janos Rohaly Three-dimensional model refinement
US20120253201A1 (en) * 2011-03-29 2012-10-04 Reinhold Ralph R System and methods for monitoring and assessing mobility

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10451714B2 (en) 2016-12-06 2019-10-22 Sony Corporation Optical micromesh for computerized devices
US20180160094A1 (en) * 2016-12-07 2018-06-07 Sony Corporation Color noise reduction in 3d depth map
US10536684B2 (en) * 2016-12-07 2020-01-14 Sony Corporation Color noise reduction in 3D depth map
US10495735B2 (en) 2017-02-14 2019-12-03 Sony Corporation Using micro mirrors to improve the field of view of a 3D depth map
US10795022B2 (en) 2017-03-02 2020-10-06 Sony Corporation 3D depth map
US10979687B2 (en) 2017-04-03 2021-04-13 Sony Corporation Using super imposition to render a 3D depth map
US10484667B2 (en) 2017-10-31 2019-11-19 Sony Corporation Generating 3D depth map using parallax
US10979695B2 (en) 2017-10-31 2021-04-13 Sony Corporation Generating 3D depth map using parallax
US10549186B2 (en) 2018-06-26 2020-02-04 Sony Interactive Entertainment Inc. Multipoint SLAM capture
US11590416B2 (en) 2018-06-26 2023-02-28 Sony Interactive Entertainment Inc. Multipoint SLAM capture
US20220222843A1 (en) * 2019-05-22 2022-07-14 Omron Corporation Three-dimensional measurement system and three-dimensional measurement method
US12159425B2 (en) * 2019-05-22 2024-12-03 Omron Corporation Three-dimensional measurement system and three-dimensional measurement method

Also Published As

Publication number Publication date
CN103796004B (en) 2015-09-30
US20150229911A1 (en) 2015-08-13
CN103796004A (en) 2014-05-14

Similar Documents

Publication Publication Date Title
US10142612B2 (en) One method of binocular depth perception based on active structured light
US9454821B2 (en) One method of depth perception based on binary laser speckle images
CN103824318B (en) A kind of depth perception method of multi-cam array
US10194135B2 (en) Three-dimensional depth perception apparatus and method
EP3788403B1 (en) Field calibration of a structured light range-sensor
US10194138B2 (en) Structured light encoding-based vertical depth perception apparatus
US9829309B2 (en) Depth sensing method, device and system based on symbols array plane structured light
CN109751973B (en) Three-dimensional measuring device, three-dimensional measuring method, and storage medium
Lee et al. Low-cost 3D motion capture system using passive optical markers and monocular vision
KR101278430B1 (en) Method and circuit arrangement for recognising and tracking eyes of several observers in real time
US20180204329A1 (en) Generating a Distance Map Based on Captured Images of a Scene
US20150256813A1 (en) System and method for 3d reconstruction using multiple multi-channel cameras
US20120134537A1 (en) System and method for extracting three-dimensional coordinates
CN104537657A (en) Laser speckle image depth perception method implemented through parallel search GPU acceleration
US20200081249A1 (en) Internal edge verification
US11803982B2 (en) Image processing device and three-dimensional measuring system
CN103841406B (en) A kind of depth camera device of plug and play
KR20230065978A (en) Systems, methods and media for directly repairing planar surfaces in a scene using structured light
JP7184203B2 (en) Image processing device, three-dimensional measurement system, image processing method
KR20200046789A (en) Method and apparatus for generating 3-dimensional data of moving object
Um et al. Three-dimensional scene reconstruction using multiview images and depth camera
CN106352847B (en) Phase difference-based distance measurement device and distance measurement method
CN109661683B (en) Structured light projection method, depth detection method and structured light projection device based on image content
CN110853086A (en) Depth image generation method and system based on speckle projection
EP3001141A1 (en) Information processing system and information processing method

Legal Events

Date Code Title Description
AS Assignment

Owner name: GE, CHENYANG, CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GE, CHENYANG;ZHENG, NANNING;YAO, HUIMIN;AND OTHERS;REEL/FRAME:034656/0239

Effective date: 20141219

Owner name: ZHENG, NANNING, CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GE, CHENYANG;ZHENG, NANNING;YAO, HUIMIN;AND OTHERS;REEL/FRAME:034656/0239

Effective date: 20141219

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4