
CN101866056A - Three-dimensional imaging method and system based on LED array common lens TOF depth measurement - Google Patents

Three-dimensional imaging method and system based on LED array common lens TOF depth measurement

Info

Publication number
CN101866056A
CN101866056A, CN201010190028A
Authority
CN
China
Prior art keywords
dimensional
target
led
depth
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010190028
Other languages
Chinese (zh)
Inventor
王焕钦
徐军
何德勇
赵天鹏
明海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN 201010190028 priority Critical patent/CN101866056A/en
Publication of CN101866056A publication Critical patent/CN101866056A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention discloses a three-dimensional imaging method and system based on LED-array common-lens TOF depth measurement. A two-dimensional LED array is used as the illumination source, with only one LED lit at a time. The modulated light emitted by the lit LED is projected onto the surface of the target through a projection lens; a photoelectric receiver receives the light scattered from the target surface, the round-trip time of flight from the light source to the target is measured, and the depth pixel value of the lit LED is derived from it, completing the measurement of a single LED depth pixel value. The entire two-dimensional LED array is then scanned in time division, the single-pixel measurement is repeated, and all LED depth pixel values are obtained and combined into a depth image of the target. At the same time, the light scattered from the target surface passes through a two-dimensional imaging lens and is captured by a two-dimensional image sensor to obtain a two-dimensional image of the target. The projection lens and the two-dimensional imaging lens are the same lens. The two-dimensional image and the depth image are fused to generate a three-dimensional image of the target. The invention achieves fast depth-image acquisition and high depth-measurement resolution.

Description

Three-dimensional imaging method and system based on LED-array common-lens TOF depth measurement

Technical Field

The invention relates to the technical field of range-finding imaging and three-dimensional imaging, and in particular to a three-dimensional imaging method and system based on LED-array common-lens TOF depth measurement.

Background Art

Existing two-dimensional (2D) imaging technology based on CCD/CMOS photography and digital image processing has made great progress and is widely used. For the three-dimensional real world, however, the two-dimensional images obtained with conventional 2D imaging technology cannot fully express all the information, which limits its application in many fields. Three-dimensional (3D) imaging technology emerged to solve this problem. Three-dimensional imaging records a three-dimensional image of the objective world by some special method and then, through processing, compression, transmission and display, finally reproduces an objective, realistic image in the human brain. Compared with two-dimensional imaging in the usual sense, three-dimensional imaging contains third-dimension distance or depth information and can describe the position and motion of objects in a real three-dimensional scene much more fully. It therefore has many outstanding advantages and broad application prospects in machine vision, object profiling, industrial inspection, biomedicine, reverse engineering, virtual reality and other fields.

Three-dimensional imaging based on optical ranging has gradually become a research hotspot at home and abroad because of its high resolution and non-contact operation. Most optical three-dimensional imaging systems currently under study measure distance based on the triangulation or time-of-flight (TOF) principle. Three-dimensional imaging systems based on triangulation, including passive triangulation (e.g., stereo vision) and active triangulation (e.g., projected structured light), must all deal with the "shadow" effect or the "blurring" of projected fringes, which strictly limits their range of application. For example, the stereo vision method can generally only be used for recognizing and analyzing targets in high-contrast three-dimensional scenes, because determining the third-dimension distance information requires matching feature points between two or more images acquired from different viewing directions, which in turn requires complex signal processing and a large amount of time-consuming computation. Moreover, in practical applications the target often lacks characteristic structural information, or the reflectivity of points on the target shows no obvious differences; the matching computation then becomes very difficult or even erroneous, and the depth-measurement accuracy is seriously degraded.

Compared with triangulation, ranging based on time of flight (TOF) places the transmitting unit and the receiving unit on the same line, so it does not produce incomplete data and has no "shadow" effect, which gives TOF-based ranging a much wider range of applications. However, a conventional TOF-based optical three-dimensional imaging system, such as a laser imaging radar, can in practice only measure the distance to a single point (one-dimensional ranging). To obtain three-dimensional information, a precise, bulky and expensive mechanical scanning device must sweep the laser beam across the measured scene in the other two directions, so depth-image acquisition is slow and real-time performance is poor. Because the mechanical scanning device itself ages and wears, the alignment accuracy between the depth image and the two-dimensional image obtained with this method is poor. In addition, it is difficult for such a system to achieve breakthrough improvements in vibration resistance, volume, weight and cost.

Summary of the Invention

The purpose of the present invention is to address the shortcomings of existing three-dimensional imaging methods and systems, such as slow depth-image acquisition and poor alignment between the depth image and the two-dimensional image, by proposing a three-dimensional imaging method and system based on LED-array common-lens TOF depth measurement that realizes fast, high-precision three-dimensional imaging and meets the urgent demand for high-performance three-dimensional imaging in many fields.

The technical solution adopted by the present invention to solve this technical problem is as follows:

The three-dimensional imaging method based on LED-array common-lens TOF depth measurement of the present invention is characterized as follows:

A two-dimensional LED array with modulated optical power is used as the illumination source, and only one LED in the two-dimensional LED array is lit at a time. The modulated light emitted by that LED is projected onto the surface of the target through a projection lens; a photoelectric receiver receives the light scattered from the target surface; the round-trip time of flight (TOF) from the light source to the target is measured; and the depth pixel value of the lit LED is calculated from the round-trip TOF, completing the measurement of a single LED depth pixel value.

The entire two-dimensional LED array is scanned in time division and the measurement of a single LED depth pixel value is repeated, so that all LED depth pixel values are obtained and combined to generate a depth image of the target. At the same time, the light scattered from the target surface passes through a two-dimensional imaging lens and is received by a two-dimensional CCD/CMOS image sensor to obtain a two-dimensional image of the target. The projection lens and the two-dimensional imaging lens are the same lens, so that the depth image and the two-dimensional image are aligned in real time.

The two-dimensional image and the depth image are fused to generate a three-dimensional image of the target.
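
For illustration only (not part of the claimed method), the measurement and fusion flow described above can be sketched in Python; every hardware-facing function name here (light_led, measure_round_trip_phase, capture_2d_image, fuse) is a hypothetical placeholder.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def acquire_depth_image(n_rows, n_cols, f_mod, light_led, measure_round_trip_phase):
    """Time-division scan of the LED array: one LED lit at a time,
    one depth pixel per LED (hypothetical hardware callbacks)."""
    depth = np.zeros((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            light_led(i, j)                    # only this LED is on
            dphi = measure_round_trip_phase()  # phase shift of the modulated light
            depth[i, j] = C * dphi / (4 * np.pi * f_mod)  # phase-shift TOF relation
            # LED (i, j) is turned off when the next one is lit
    return depth

def acquire_3d_image(depth_image, capture_2d_image, fuse):
    """Depth and 2D images share the same lens, so they are aligned;
    fusing them yields the 3D image."""
    image_2d = capture_2d_image()
    return fuse(image_2d, depth_image)
```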

A further feature of the three-dimensional imaging method based on LED-array common-lens TOF depth measurement of the present invention is that a beam splitter is provided: the modulated light emitted by the LED light source is transmitted through the beam splitter to the projection lens, and the light scattered from the target surface, after passing through the two-dimensional imaging lens, is reflected by the beam splitter onto the two-dimensional CCD/CMOS image sensor.

The three-dimensional imaging system based on LED-array common-lens TOF depth measurement of the present invention is characterized as follows:

A photoelectric modulation and scanning circuit modulates and time-division scans the output optical power of an N×M LED array, so that at any moment only one LED in the entire array is lit. The modulated light emitted by the LED reaches a beam splitter and is split into transmitted light and reflected light; the transmitted light is projected onto the surface of the target through the lens, and the light scattered from the target surface is received by photoelectric receiver PD1; the reflected light is received directly by photoelectric receiver PD2.

A TOF depth measurement circuit (8) is provided to process the photoelectric signals output by photoelectric receiver PD1 and photoelectric receiver PD2 and to calculate each LED depth pixel value in turn; the LED depth pixel values are combined in a PC to generate a depth image of the target. At the same time, the light scattered from the target surface passes through the lens, is reflected by the beam splitter, is received by the two-dimensional CCD/CMOS image sensor, and a two-dimensional image of the target aligned in real time with the depth image is obtained via a two-dimensional image signal processing circuit.

The depth image and the two-dimensional image are fused in the PC to generate a three-dimensional image of the target.

A further feature of the system of the present invention is that the TOF depth measurement circuit includes a feedback automatic gain control (AGC) circuit. The feedback AGC circuit uses a square-amplitude detection circuit to detect the amplitude of the input signal; its output, after passing through the second fixed-gain amplifier circuit, is sent to an inductor-resistor (LR) low-pass filter to remove the high-order harmonics produced, and the DC level output by the filter is used to control the gain of a variable-gain amplifier circuit. The input signal, after being amplified by the variable-gain amplifier circuit and the first fixed-gain amplifier circuit, yields the output signal of the feedback AGC circuit.

Compared with the prior art, the beneficial effects of the present invention are as follows:

1. The present invention uses a rapidly electronically scanned two-dimensional LED array as the illumination source, so imaging requires no mechanically moving or rotating parts.

2. The present invention builds a feedback AGC circuit with high gain-control accuracy and fast response by combining a high-precision, fast square-amplitude detection circuit with an LR low-pass filter of short response time, and uses it for fast TOF depth measurement; the imaging system therefore acquires depth images quickly and with high depth-measurement resolution.

3. In the present invention the two-dimensional image and the depth image are formed through the same lens, which overcomes the inherent aberration problems of imaging with different lenses and enables real-time, high-precision alignment between the depth image and the two-dimensional image.

4. A high-density two-dimensional LED array can be obtained by densely packaging the LED dies, so the imaging system of the present invention has a large number of depth-image pixels and high spatial resolution.

5. The imaging system of the present invention uses AC coupling through a small capacitor to remove the DC component produced by ambient light from the received photoelectric signal, so the system has a strong ability to resist ambient-light interference.

6. The imaging system of the present invention has a simple structure, small volume and low cost.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the structural principle of the present invention.

Fig. 2 is a block diagram of the feedback AGC circuit.

Fig. 3 shows a specific embodiment of depth-image acquisition.

Detailed Description of the Embodiments

In this embodiment, the three-dimensional imaging method based on LED-array common-lens TOF depth measurement proceeds as follows:

1. A two-dimensional LED array with modulated optical power is used as the illumination source, with only one LED in the array lit at a time. The modulated light emitted by the LED is projected onto the surface of target 1 through the projection lens; photoelectric receiver 6 receives the light scattered from the surface of target 1; the round-trip time of flight (TOF) from the light source to target 1 is measured; and the depth pixel value of the lit LED is calculated from the round-trip TOF, completing the measurement of a single LED depth pixel value.

2. The entire two-dimensional LED array is scanned in time division and the single-LED measurement is repeated, so that all LED depth pixel values are obtained and combined to generate a depth image of target 1. At the same time, the light scattered from the surface of target 1 passes through the two-dimensional imaging lens and is received by two-dimensional CCD/CMOS image sensor 9 to obtain a two-dimensional image of target 1. The projection lens and the two-dimensional imaging lens are the same lens 2, so that the depth image and the two-dimensional image are aligned in real time.

3. The two-dimensional image and the depth image are fused to generate a three-dimensional image of target 1.

In a specific implementation, beam splitter 3 is provided: the modulated light emitted by the LED light source is transmitted through beam splitter 3 to the projection lens, and the light scattered from the surface of target 1, after passing through the two-dimensional imaging lens, is reflected by beam splitter 3 onto two-dimensional CCD/CMOS image sensor 9.

In this embodiment, the three-dimensional imaging system based on LED-array common-lens TOF depth measurement is implemented as follows:

Referring to Fig. 1, photoelectric modulation and scanning circuit 5 modulates and time-division scans the output optical power of N×M LED array 4, so that at any moment only one LED in the entire array 4 is lit. The modulated light emitted by the LED reaches beam splitter 3 and is split into transmitted light and reflected light; the transmitted light is projected onto the surface of target 1 through lens 2, and the light scattered from the surface of target 1 is received by photoelectric receiver PD1 6; the reflected light is received directly by photoelectric receiver PD2 7. TOF depth measurement circuit 8 processes the photoelectric signals output by PD1 6 and PD2 7 and calculates each LED depth pixel value in turn; the LED depth pixel values are combined in PC 11 to generate a depth image of target 1. At the same time, the light scattered from the surface of target 1 passes through lens 2, is reflected by beam splitter 3, is received by two-dimensional CCD/CMOS image sensor 9, and a two-dimensional image of target 1 aligned in real time with the depth image is obtained via two-dimensional image signal processing circuit 10. The depth image and the two-dimensional image are fused in the PC to generate a three-dimensional image of target 1.

In this embodiment, TOF depth measurement circuit 8 includes a feedback automatic gain control (AGC) circuit. The feedback AGC circuit uses square-amplitude detection circuit 15 to detect the amplitude of the input signal; its output, after passing through second fixed-gain amplifier circuit 16, is sent to inductor-resistor (LR) low-pass filter 14 to remove the high-order harmonics produced, and the DC level output by the filter is used to control the gain of variable-gain amplifier circuit 12. The input signal, after being amplified by variable-gain amplifier circuit 12 and first fixed-gain amplifier circuit 13, finally yields the output signal of feedback AGC circuit 22.

TOF ranging obtains the measured distance by using the constant speed of light c and measuring the flight time t of the light. The depth value of each LED pixel in the LED array can be obtained with a photoelectric phase-shift TOF ranging method: the output optical power of the LEDs in the array is modulated with a continuous sine wave of frequency f_m, and the direct measurement of the round-trip flight time t is converted into an indirect measurement of the phase shift ΔΦ of the corresponding modulated electrical signal, from which the measured distance d is obtained:

d = (1/2)·c·t = (c/2)·(1/f_m)·(ΔΦ/2π) = c·ΔΦ/(4π·f_m)        (1)

Because light travels so fast, this method avoids the difficulty of directly measuring the extremely short flight time of light while still achieving very high ranging resolution.
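
As a quick numerical illustration of formula (1) (a sketch, not part of the patent; the modulation frequency of 20 MHz and the phase value are assumed for the example only):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def distance_from_phase(delta_phi_rad: float, f_mod_hz: float) -> float:
    """Formula (1): d = c * ΔΦ / (4 * π * f_m)."""
    return C * delta_phi_rad / (4 * math.pi * f_mod_hz)

# Example with assumed values: a phase shift of π/2 at f_m = 20 MHz
d = distance_from_phase(math.pi / 2, 20e6)
print(f"measured distance ≈ {d:.3f} m")  # ≈ 1.874 m
```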

Fig. 3 shows a specific embodiment of depth-image acquisition and is also an embodiment of TOF depth measurement circuit 8. First, the continuous sine wave generated by modulation source 17 at frequency f_m, after passing through driver 18, modulates the output optical power of the LEDs in N×M LED array 4, and the output modulated light is projected onto target 1 through lens 2. Photoelectric receiver PD1 then receives the scattered light signal from the surface of target 1 and converts it into a corresponding electrical signal; AC coupling through a small capacitor removes the DC component of this signal, which is produced mainly by ambient light and degrades the depth-measurement resolution. Low-noise amplifier 19 then amplifies the AC component of the signal, which is mixed with local oscillator 21 at frequency f_o to obtain the low intermediate-frequency (IF) signal f_I. Fast feedback AGC circuit 22 applies automatic gain control and sends the signal to band-pass filter 23 for filtering, and finally phase-shift detection module 24 applies the "four-point" phase algorithm to the resulting high-SNR IF signal f_I to detect its phase.
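
The patent does not spell out the "four-point" phase algorithm; a common form of it (sampling the IF signal four times per period, 90° apart) is sketched below for illustration, under the assumption that this is the intended variant:

```python
import math
from typing import Sequence

def four_point_phase(samples: Sequence[float]) -> float:
    """Phase of a sinusoid from four samples taken 90° apart within one period.

    For s(t) = A*sin(2*pi*f*t + phi) sampled at 0°, 90°, 180°, 270°:
        s0 - s2 = 2*A*sin(phi),  s1 - s3 = 2*A*cos(phi)
    so phi = atan2(s0 - s2, s1 - s3).
    """
    s0, s1, s2, s3 = samples
    return math.atan2(s0 - s2, s1 - s3)

def phase_shift(samples_pd1, samples_pd2) -> float:
    """ΔΦ used in formula (1): measurement channel (PD1) minus reference channel (PD2)."""
    dphi = four_point_phase(samples_pd1) - four_point_phase(samples_pd2)
    return dphi % (2 * math.pi)  # wrap into [0, 2*pi)
```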

The other receiving and processing channel, PD2, serves as a reference channel and directly receives the LED light signal that has not been projected through the lens. Its signal-processing chain is identical to that of the first channel; its purpose is to reduce the error that the temperature drift of the circuit introduces into the phase-shift measurement. In a specific implementation, if the error introduced by circuit temperature drift is small, photoelectric receiver PD2 and its receiving/processing channel can be omitted, and the sinusoidal modulation signal generated by modulation source 17 can be used directly as the reference signal for the phase measurement.

In addition, both PD receiving channels process the weak photoelectric signals in a fully differential manner in order to suppress common-mode interference, improve the signal-to-noise ratio of the imaging system and raise the final depth-measurement resolution.

After the phases of the two IF signals have been measured, phase-shift detection module 24 takes their difference to obtain the phase shift ΔΦ corresponding to the flight time of the light. The phase shift ΔΦ is then substituted into formula (1), the processing-unit circuit yields the final measured distance d, and the result is sent to PC 11. Once the distance for a single LED depth pixel has been obtained, the whole depth image is acquired by time-division scanning of the two-dimensional LED array.

Time-division scanning control flow:

When the imaging system is operating, the LEDs in the array are lit one after another at fixed time intervals until the entire LED array has been traversed. At any moment only one LED in the array is lit while all others are off, and the time for which each LED stays lit is fixed; this fixed duration is determined by the number of depth-image pixels and the requirements of the application. For example, with N = M = 10, i.e. an LED array of 10×10 depth pixels, if the application requires a dynamic measurement rate of 10 frames per second, then each LED in the array is lit for a fixed duration of 1 ms. Within this 1 ms the imaging system must complete the fast TOF distance measurement for one LED depth pixel and send the result; the system then turns that LED off, lights the next adjacent LED, and enters the next 1 ms cycle of depth-pixel measurement and result transmission. This continues until the whole LED array has been traversed, completing the measurement of one depth-image frame, after which the cyclic measurement of the next frame begins.
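
A minimal sketch of this scan timing (illustrative only; the hardware calls select_led, measure_pixel and send_result are hypothetical placeholders):

```python
import time

N, M = 10, 10          # depth pixels (example from the description)
FRAME_RATE = 10        # frames per second
T_PIXEL = 1.0 / (FRAME_RATE * N * M)   # 1 ms per LED for a 10x10 array

def scan_one_frame(select_led, measure_pixel, send_result):
    """Light each LED for T_PIXEL seconds, measure its depth pixel, send the result."""
    for i in range(N):
        for j in range(M):
            t0 = time.monotonic()
            select_led(i, j)          # only LED (i, j) is on now
            d = measure_pixel()       # fast TOF measurement for this pixel
            send_result(i, j, d)
            # wait out the remainder of the fixed per-pixel slot
            dt = T_PIXEL - (time.monotonic() - t0)
            if dt > 0:
                time.sleep(dt)
```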

In TOF depth measurement circuit 8, feedback AGC circuit 22 processes the mixed-down IF signal and prevents differences in the amplitude of the photoelectric signal received by PD1 from introducing phase-shift or distance-measurement errors. Because automatic gain control must be applied to the IF signal in a very short time, the AGC circuit must have an extremely short response time. For example, for an LED array of 10×10 depth pixels and a required dynamic measurement rate of 10 frames per second, the measurement time for each LED depth pixel is t_S = 1 ms; to guarantee high depth-measurement resolution, the response time t_R of the AGC circuit should be much smaller than t_S, for example t_R = t_S/10 = 0.1 ms. Conventional feedback AGC circuits, however, often need a long response time (generally longer than a second), for example AGC circuits using diode detection and RC filtering, and therefore do not meet the requirements here; existing feedforward AGC circuits have a short response time, but their gain-control accuracy is too poor for the high-precision phase-shift or distance detection required here.

Fig. 2 is a block diagram of the feedback AGC circuit, which combines high gain-control accuracy with fast response; the amplitude and frequency of the input IF signal are approximately 0.5–20 mVp-p and 10 kHz–1 MHz. Square-amplitude detection circuit 15, which detects the amplitude of the input signal, can be a double-balanced analog multiplier with a very short response time, while variable-gain amplifier circuit 12 can be a differential current-mode gain-control circuit; first fixed-gain amplifier circuit 13 and second fixed-gain amplifier circuit 16 are ordinary differential amplifiers. The signal output by second fixed-gain amplifier circuit 16 is sent to LR low-pass filter 14 to remove the high-order harmonics produced; because the inductive reactance of an inductor increases with the frequency of the input signal, this filter combines a good low-pass characteristic with a very short response time, clearly better than 0.1 ms. The DC level output by LR low-pass filter 14 controls the gain of variable-gain amplifier circuit 12. The input signal, after being amplified by variable-gain amplifier circuit 12 and first fixed-gain amplifier circuit 13, finally yields the output signal of feedback AGC circuit 22.
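
For illustration, the feedback loop of Fig. 2 can be modeled in discrete time roughly as follows (a sketch under assumed parameter values, not the circuit itself): the square-amplitude detector is modeled by squaring, the LR low-pass filter by a first-order IIR filter, and the detected level steers the variable gain toward a target output amplitude.

```python
import numpy as np

def feedback_agc(x, fs, target_amp=1.0, fc=10_000.0, k_fixed1=10.0, k_fixed2=1.0):
    """Discrete-time sketch of a feedback AGC loop (assumed parameters).

    x  : input IF samples (e.g. a 0.5-20 mVp-p sinusoid)
    fs : sample rate (Hz); fc : cutoff of the low-pass stage standing in for the LR filter
    """
    alpha = 1.0 - np.exp(-2 * np.pi * fc / fs)   # first-order low-pass coefficient
    level = 0.0       # filtered amplitude estimate (DC control level)
    gain = 1.0        # variable-gain stage
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        v = gain * xn * k_fixed1             # variable gain + first fixed gain
        y[n] = v
        detected = (v * k_fixed2) ** 2       # square-amplitude detection branch
        level += alpha * (detected - level)  # low-pass -> DC control level
        # steer the gain so the mean-square output approaches target_amp^2 / 2
        gain *= 1.0 + 0.01 * (target_amp**2 / 2 - level)
        gain = float(np.clip(gain, 1e-3, 1e3))
    return y
```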

Two-dimensional CCD/CMOS image sensor 9 can be the mature two-dimensional image sensor MI360, and two-dimensional image signal processing circuit 10 can use the existing dedicated image-processing chip ZC0301; together they acquire the real-time, high-pixel-count two-dimensional image.

By using beam splitter 3 to transmit the light projected by N×M LED array 4 and to reflect the light scattered from the surface of measured three-dimensional target 1, lens 2 serves not only as the projection lens for LED-array depth measurement but also collects the light scattered from the surface of target 1 as the two-dimensional imaging lens of target 1. The depth image and the two-dimensional image are thus formed through the same lens 2, which overcomes the inherent aberration problems of imaging with different lenses and achieves real-time, high-precision alignment between the two-dimensional image and the depth image; the detailed optical path is shown in Fig. 1.

Fig. 1 shows one optical-path arrangement in which the depth image and the two-dimensional image share the same lens 2. In a specific implementation there is at least one alternative arrangement: keeping the position and angle of beam splitter 3 in Fig. 1 unchanged, the positions of N×M LED array 4 and two-dimensional CCD/CMOS image sensor 9 are interchanged, so that the modulated light emitted by the LED light source is reflected by beam splitter 3 to lens 2, while the light scattered from the surface of target 1 passes through lens 2 and is then transmitted through beam splitter 3 to two-dimensional CCD/CMOS image sensor 9.

To meet the requirements of shared-lens imaging, beam splitter 3 must, once installed, provide N×M LED array 4 and two-dimensional CCD/CMOS image sensor 9 with the same field of view, and the area of N×M LED array 4 must equal that of two-dimensional CCD/CMOS image sensor 9. Mature microelectronic processes can be used to densely package small LED dies into an LED-array chip with many pixels in a small area; this is not only an effective way to reduce the area of the LED array but also an effective way to increase the number of LED depth-image pixels and raise the spatial resolution of the imaging system. A densely packaged small-area LED array also ensures that all modulated, emitting LED dies in the array share the same temperature environment during depth measurement, reducing the influence of temperature effects on depth-measurement resolution, and it allows the diameter of projection/imaging lens 2 to be reduced, shrinking the whole optical imaging system. Furthermore, if simple LED-array row/column gating and driving circuits are integrated during the dense packaging of the LED dies, the level of integration can be raised further and an even smaller, more compact three-dimensional imaging system can be built.

In the PC, the acquired two-dimensional image and depth-image data are "fused" with mature image-processing algorithms to obtain the final high-performance, real-time three-dimensional image. In addition, if the number of pixels in the depth image obtained by the actual imaging system is smaller than that of the two-dimensional image, the depth image can first be interpolated and the two data sets then fused.
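
A minimal sketch of this last step (illustrative only; the patent does not prescribe a particular interpolation or fusion algorithm, so bilinear upsampling and a simple RGB-D stack are assumed here):

```python
import numpy as np

def upsample_depth(depth, out_shape):
    """Bilinear interpolation of a coarse depth image (e.g. 10x10)
    to the resolution of the 2D image (assumed choice of interpolation)."""
    h, w = depth.shape
    H, W = out_shape
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    d00 = depth[np.ix_(y0, x0)]; d01 = depth[np.ix_(y0, x1)]
    d10 = depth[np.ix_(y1, x0)]; d11 = depth[np.ix_(y1, x1)]
    return (d00 * (1 - wy) * (1 - wx) + d01 * (1 - wy) * wx
            + d10 * wy * (1 - wx) + d11 * wy * wx)

def fuse_rgbd(image_2d, depth):
    """Stack the aligned 2D image and the upsampled depth into one RGB-D array."""
    depth_hr = upsample_depth(depth, image_2d.shape[:2])
    return np.dstack([image_2d.astype(np.float32), depth_hr[..., None]])
```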

Claims (4)

1. A three-dimensional imaging method based on LED-array common-lens TOF depth measurement, characterized in that: a two-dimensional LED array with modulated optical power is used as the illumination source, and only one LED in the array is lit at a time; the modulated light emitted by that LED is projected onto the surface of the target (1) through a projection lens; a photoelectric receiver (6) receives the light scattered from the surface of the target (1); the round-trip time of flight TOF from the light source to the target (1) is measured, and the depth pixel value of the lit LED is calculated from the round-trip TOF, completing the measurement of a single LED depth pixel value; the entire two-dimensional LED array is scanned in time division, the measurement of a single LED depth pixel value is repeated, and all LED depth pixel values are obtained and combined to generate a depth image of the target (1); at the same time, the light scattered from the surface of the target (1) passes through a two-dimensional imaging lens and is received by a two-dimensional CCD/CMOS image sensor (9) to obtain a two-dimensional image of the target (1); the projection lens and the two-dimensional imaging lens are set to be the same lens (2), so that the depth image and the two-dimensional image are aligned in real time; the two-dimensional image and the depth image are fused to generate a three-dimensional image of the target (1).

2. The three-dimensional imaging method based on LED-array common-lens TOF depth measurement according to claim 1, characterized in that a beam splitter (3) is provided; the modulated light emitted by the LED light source is transmitted through the beam splitter (3) to the projection lens; the light scattered from the surface of the target (1) passes through the two-dimensional imaging lens and is then reflected by the beam splitter (3) onto the two-dimensional CCD/CMOS image sensor (9).

3. A three-dimensional imaging system based on LED-array common-lens TOF depth measurement, characterized in that: a photoelectric modulation and scanning circuit (5) modulates and time-division scans the output optical power of an N×M LED array (4), so that at any moment only one LED in the entire LED array (4) is lit; the modulated light emitted by the LED reaches a beam splitter (3) and is split into transmitted light and reflected light; the transmitted light is projected onto the surface of the target (1) through the lens (2), and the light scattered from the surface of the target (1) is received by photoelectric receiver PD1 (6); the reflected light is received directly by photoelectric receiver PD2 (7); a TOF depth measurement circuit (8) is provided to process the photoelectric signals output by photoelectric receiver PD1 (6) and photoelectric receiver PD2 (7) and to calculate each LED depth pixel value in turn; the LED depth pixel values are combined in a PC (11) to generate a depth image of the target (1); at the same time, the light scattered from the surface of the target (1) passes through the lens (2), is reflected by the beam splitter (3), is received by the two-dimensional CCD/CMOS image sensor (9), and a two-dimensional image of the target (1) aligned in real time with the depth image is obtained via a two-dimensional image signal processing circuit (10); the depth image and the two-dimensional image are fused in the PC (11) to generate a three-dimensional image of the target (1).

4. The three-dimensional imaging system based on LED-array common-lens TOF depth measurement according to claim 3, characterized in that the TOF depth measurement circuit (8) includes a feedback automatic gain control AGC circuit (22); the feedback AGC circuit (22) uses a square-amplitude detection circuit (15) to detect the amplitude of the input signal; its output signal, after passing through the second fixed-gain amplifier circuit (16), is sent to an inductor-resistor LR low-pass filter (14) to remove the high-order harmonics produced, and the DC level output by the filter is used to control the gain of the variable-gain amplifier circuit (12); the input signal, after being amplified by the variable-gain amplifier circuit (12) and the first fixed-gain amplifier circuit (13), yields the output signal of the feedback AGC circuit (22).
CN 201010190028 2010-05-28 2010-05-28 Three-dimensional imaging method and system based on LED array common lens TOF depth measurement Pending CN101866056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010190028 CN101866056A (en) 2010-05-28 2010-05-28 Three-dimensional imaging method and system based on LED array common lens TOF depth measurement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010190028 CN101866056A (en) 2010-05-28 2010-05-28 Three-dimensional imaging method and system based on LED array common lens TOF depth measurement

Publications (1)

Publication Number Publication Date
CN101866056A true CN101866056A (en) 2010-10-20

Family

ID=42957845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010190028 Pending CN101866056A (en) 2010-05-28 2010-05-28 Three-dimensional imaging method and system based on LED array common lens TOF depth measurement

Country Status (1)

Country Link
CN (1) CN101866056A (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538758A (en) * 2010-12-20 2012-07-04 微软公司 Plural detector time-of-flight depth mapping
CN102984446A (en) * 2011-09-05 2013-03-20 联想(北京)有限公司 Image acquisition device and image acquisition method
CN103079085A (en) * 2011-10-25 2013-05-01 三星电子株式会社 3D image acquisition apparatus and method of calculating depth information in the 3D image acquisition apparatus
CN103809742A (en) * 2012-06-19 2014-05-21 英飞凌科技股份有限公司 Dynamic adaptation of imaging parameters
CN105094005A (en) * 2014-05-19 2015-11-25 洛克威尔自动控制技术股份有限公司 Integration of optical area monitoring with industrial machine control
CN106125063A (en) * 2015-05-07 2016-11-16 通用汽车环球科技运作有限责任公司 Multi-wavelength array laser radar
WO2017063435A1 (en) * 2015-10-15 2017-04-20 杭州海康威视数字技术股份有限公司 Method for obtaining combined depth image, and depth camera
CN107515402A (en) * 2017-08-21 2017-12-26 东莞市迈科新能源有限公司 TOF three-dimensional ranging system
CN107515403A (en) * 2017-08-21 2017-12-26 东莞市迈科新能源有限公司 A TOF three-dimensional ranging sensor
CN107564051A (en) * 2017-09-05 2018-01-09 歌尔股份有限公司 A kind of depth information acquisition method and system
CN107656284A (en) * 2017-09-26 2018-02-02 艾普柯微电子(上海)有限公司 Range unit and distance-finding method
CN107710741A (en) * 2016-04-21 2018-02-16 华为技术有限公司 A kind of method and camera device for obtaining depth information
CN107823877A (en) * 2016-09-16 2018-03-23 天津思博科科技发展有限公司 The fantasy sport game device realized using three-dimensional localization sensor
CN109085603A (en) * 2017-06-14 2018-12-25 浙江舜宇智能光学技术有限公司 Optical 3-dimensional imaging system and color three dimensional image imaging method
CN109375237A (en) * 2018-12-12 2019-02-22 北京华科博创科技有限公司 A kind of all solid state face array three-dimensional imaging laser radar system
CN109726611A (en) * 2017-10-27 2019-05-07 北京小米移动软件有限公司 Biological feather recognition method and device, readable storage medium storing program for executing and electronic equipment
CN109726614A (en) * 2017-10-27 2019-05-07 北京小米移动软件有限公司 3D stereoscopic imaging method and device, readable storage medium, and electronic device
JP2019533324A (en) * 2016-09-09 2019-11-14 グーグル エルエルシー 3D telepresence system
CN111487648A (en) * 2020-04-16 2020-08-04 北京深测科技有限公司 Non-visual field imaging method and system based on flight time
CN111856433A (en) * 2020-07-25 2020-10-30 深圳奥锐达科技有限公司 Distance measuring system and measuring method
CN112180397A (en) * 2014-01-29 2021-01-05 Lg伊诺特有限公司 Apparatus and method for extracting depth information
CN112255174A (en) * 2020-10-28 2021-01-22 江苏善果缘智能科技有限公司 Co-frequency confocal LED illumination light source structure for detecting three-dimensional defects on surface of product
CN112771410A (en) * 2018-08-16 2021-05-07 感觉光子公司 Integrated lidar image sensor apparatus and systems and related methods of operation
CN112867962A (en) * 2018-09-11 2021-05-28 恩耐公司 Electro-optic modulator and method of use and manufacture thereof for three-dimensional imaging
WO2021136098A1 (en) * 2020-01-03 2021-07-08 华为技术有限公司 Tof depth sensing module and image generation method
CN114363490A (en) * 2021-12-30 2022-04-15 西安交通大学 A TOF camera suitable for power line inspection and its working method
WO2022166583A1 (en) * 2021-02-08 2022-08-11 深圳市灵明光子科技有限公司 Projection device, three-dimensional imaging system, three-dimensional imaging method, and electronic product
CN119064668A (en) * 2024-11-06 2024-12-03 浙江老鹰半导体技术有限公司 Single-channel power test method and system for multi-channel VCSEL chip

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101619962A (en) * 2009-07-30 2010-01-06 浙江工业大学 Active three-dimensional panoramic view vision sensor based on full color panoramic view LED light source
CN201707438U (en) * 2010-05-28 2011-01-12 中国科学院合肥物质科学研究院 Three-dimensional imaging system based on LED array co-lens TOF (Time of Flight) depth measurement

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101619962A (en) * 2009-07-30 2010-01-06 浙江工业大学 Active three-dimensional panoramic view vision sensor based on full color panoramic view LED light source
CN201707438U (en) * 2010-05-28 2011-01-12 中国科学院合肥物质科学研究院 Three-dimensional imaging system based on LED array co-lens TOF (Time of Flight) depth measurement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Huanqin, "Research on Novel Photoelectric Ranging and Three-Dimensional Imaging Technology," China Doctoral Dissertations Full-text Database – Information Science and Technology (Monthly), No. 09, 2009-09-15, pp. 83-112; relevant to claims 1-4. 2 *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538758A (en) * 2010-12-20 2012-07-04 微软公司 Plural detector time-of-flight depth mapping
US8803952B2 (en) 2010-12-20 2014-08-12 Microsoft Corporation Plural detector time-of-flight depth mapping
CN102538758B (en) * 2010-12-20 2015-04-01 微软公司 Plural detector time-of-flight depth mapping
CN102984446A (en) * 2011-09-05 2013-03-20 联想(北京)有限公司 Image acquisition device and image acquisition method
CN102984446B (en) * 2011-09-05 2016-01-13 联想(北京)有限公司 Image collecting device and image-pickup method
CN103079085A (en) * 2011-10-25 2013-05-01 三星电子株式会社 3D image acquisition apparatus and method of calculating depth information in the 3D image acquisition apparatus
CN103079085B (en) * 2011-10-25 2016-12-21 三星电子株式会社 Acquiring three-dimensional images device and the method calculating depth information in this device
CN103809742A (en) * 2012-06-19 2014-05-21 英飞凌科技股份有限公司 Dynamic adaptation of imaging parameters
CN112180397B (en) * 2014-01-29 2023-07-25 Lg伊诺特有限公司 Apparatus and method for extracting depth information
CN112180397A (en) * 2014-01-29 2021-01-05 Lg伊诺特有限公司 Apparatus and method for extracting depth information
CN105094005A (en) * 2014-05-19 2015-11-25 洛克威尔自动控制技术股份有限公司 Integration of optical area monitoring with industrial machine control
CN106125063A (en) * 2015-05-07 2016-11-16 通用汽车环球科技运作有限责任公司 Multi-wavelength array laser radar
CN106612387B (en) * 2015-10-15 2019-05-21 杭州海康威视数字技术股份有限公司 A kind of combined depth figure preparation method and depth camera
US10713804B2 (en) 2015-10-15 2020-07-14 Hangzhou Hikvision Digital Technology Co., Ltd. Method for obtaining combined depth image, and depth camera
WO2017063435A1 (en) * 2015-10-15 2017-04-20 杭州海康威视数字技术股份有限公司 Method for obtaining combined depth image, and depth camera
CN106612387A (en) * 2015-10-15 2017-05-03 杭州海康威视数字技术股份有限公司 Combined depth map acquisition method and depth camera
CN107710741A (en) * 2016-04-21 2018-02-16 华为技术有限公司 A kind of method and camera device for obtaining depth information
JP7001675B2 (en) 2016-09-09 2022-01-19 グーグル エルエルシー 3D telepresence system
JP7443314B2 (en) 2016-09-09 2024-03-05 グーグル エルエルシー 3D telepresence system
JP2022009242A (en) * 2016-09-09 2022-01-14 グーグル エルエルシー Three-dimensional telepresence system
JP2019533324A (en) * 2016-09-09 2019-11-14 グーグル エルエルシー 3D telepresence system
CN107823877A (en) * 2016-09-16 2018-03-23 天津思博科科技发展有限公司 The fantasy sport game device realized using three-dimensional localization sensor
CN109085603A (en) * 2017-06-14 2018-12-25 浙江舜宇智能光学技术有限公司 Optical 3-dimensional imaging system and color three dimensional image imaging method
CN107515403A (en) * 2017-08-21 2017-12-26 东莞市迈科新能源有限公司 A TOF three-dimensional ranging sensor
CN107515402A (en) * 2017-08-21 2017-12-26 东莞市迈科新能源有限公司 TOF three-dimensional ranging system
CN107564051A (en) * 2017-09-05 2018-01-09 歌尔股份有限公司 A kind of depth information acquisition method and system
CN107564051B (en) * 2017-09-05 2020-06-02 歌尔股份有限公司 A kind of depth information collection method and system
CN107656284B (en) * 2017-09-26 2022-11-18 艾普柯微电子(江苏)有限公司 Distance measuring device and distance measuring method
CN107656284A (en) * 2017-09-26 2018-02-02 艾普柯微电子(上海)有限公司 Range unit and distance-finding method
CN109726611B (en) * 2017-10-27 2021-07-23 北京小米移动软件有限公司 Biological feature recognition method and device, readable storage medium and electronic equipment
CN109726614A (en) * 2017-10-27 2019-05-07 北京小米移动软件有限公司 3D stereoscopic imaging method and device, readable storage medium, and electronic device
CN109726611A (en) * 2017-10-27 2019-05-07 北京小米移动软件有限公司 Biological feather recognition method and device, readable storage medium storing program for executing and electronic equipment
CN112771410A (en) * 2018-08-16 2021-05-07 感觉光子公司 Integrated lidar image sensor apparatus and systems and related methods of operation
CN112867962A (en) * 2018-09-11 2021-05-28 恩耐公司 Electro-optic modulator and method of use and manufacture thereof for three-dimensional imaging
US12066546B2 (en) 2018-09-11 2024-08-20 Nlight, Inc. Electro-optic modulator and methods of using and manufacturing same for three-dimensional imaging
CN109375237A (en) * 2018-12-12 2019-02-22 北京华科博创科技有限公司 A kind of all solid state face array three-dimensional imaging laser radar system
CN109375237B (en) * 2018-12-12 2019-11-19 北京华科博创科技有限公司 A kind of all solid state face array three-dimensional imaging laser radar system
CN113156459A (en) * 2020-01-03 2021-07-23 华为技术有限公司 TOF depth sensing module and image generation method
WO2021136098A1 (en) * 2020-01-03 2021-07-08 华为技术有限公司 Tof depth sensing module and image generation method
CN113156459B (en) * 2020-01-03 2023-10-13 华为技术有限公司 A TOF depth sensing module and image generation method
CN111487648A (en) * 2020-04-16 2020-08-04 北京深测科技有限公司 Non-visual field imaging method and system based on flight time
CN111856433A (en) * 2020-07-25 2020-10-30 深圳奥锐达科技有限公司 Distance measuring system and measuring method
WO2022021797A1 (en) * 2020-07-25 2022-02-03 深圳奥锐达科技有限公司 Distance measurement system and distance measurement method
CN112255174A (en) * 2020-10-28 2021-01-22 江苏善果缘智能科技有限公司 Co-frequency confocal LED illumination light source structure for detecting three-dimensional defects on surface of product
WO2022166583A1 (en) * 2021-02-08 2022-08-11 深圳市灵明光子科技有限公司 Projection device, three-dimensional imaging system, three-dimensional imaging method, and electronic product
CN114363490A (en) * 2021-12-30 2022-04-15 西安交通大学 A TOF camera suitable for power line inspection and its working method
CN119064668A (en) * 2024-11-06 2024-12-03 浙江老鹰半导体技术有限公司 Single-channel power test method and system for multi-channel VCSEL chip

Similar Documents

Publication Publication Date Title
CN101866056A (en) Three-dimensional imaging method and system based on LED array common lens TOF depth measurement
CN201707438U (en) Three-dimensional imaging system based on LED array co-lens TOF (Time of Flight) depth measurement
Gokturk et al. A time-of-flight depth sensor-system description, issues and solutions
CN110596721B (en) Flight time distance measuring system and method of double-shared TDC circuit
CN207354504U (en) A kind of frequency is with exposing adjustable flight time three-dimensional image forming apparatus
US10677923B2 (en) Optoelectronic modules for distance measurements and/or multi-dimensional imaging
US10712432B2 (en) Time-of-light-based systems using reduced illumination duty cycles
JP4405154B2 (en) Imaging system and method for acquiring an image of an object
US6600168B1 (en) High speed laser three-dimensional imager
US10302424B2 (en) Motion contrast depth scanning
US20100046802A1 (en) Distance estimation apparatus, distance estimation method, storage medium storing program, integrated circuit, and camera
US20130148102A1 (en) Method to Compensate for Errors in Time-of-Flight Range Cameras Caused by Multiple Reflections
CN102508259A (en) Miniaturization lens-free laser three-dimensional imaging system based on micro-electromechanical system (MEMS) scanning micro-mirror and imaging method thereof
CN111123289A (en) Depth measuring device and measuring method
WO2021228235A1 (en) Photoelectric detection and collection system and centroid detection method based on single pixel detector
CN107607960A (en) A kind of anallatic method and device
CN103438832A (en) Three-dimensional image measuring method based on line-structured light
CN103064087A (en) Three-dimensional imaging radar system and method based on multiple integral
CN209676383U (en) Depth camera mould group, depth camera, mobile terminal and imaging device
CN106970024A (en) Gauge detection distance-finding method and system based on camera and controllable stroboscopic light source
CN104931974A (en) Light source modulation and demodulation-based ICMOS high-speed 3D imaging laser radar
CN103983981A (en) Three-dimensional compressed imaging method and device based on phase position distance measurement principle
CN211148917U (en) Distance measuring system
CN112255639B (en) Depth perception sensor and depth perception sensing module for region of interest
CN113465545A (en) Three-dimensional measurement system based on high-speed LED array and measurement method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20101020