
CN104374374B - 3D environment replication system and 3D panoramic display rendering method based on active panoramic vision - Google Patents


Info

Publication number
CN104374374B
CN104374374B (application CN201410632152.3A)
Authority
CN
China
Prior art keywords
laser
display
panoramic
viewpoint
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410632152.3A
Other languages
Chinese (zh)
Other versions
CN104374374A (en)
Inventor
汤平
汤一平
周伟敏
鲁少辉
韩国栋
吴挺
陈麒
韩旺明
胡克钢
王伟羊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201410632152.3A priority Critical patent/CN104374374B/en
Publication of CN104374374A publication Critical patent/CN104374374A/en
Application granted granted Critical
Publication of CN104374374B publication Critical patent/CN104374374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a 3D environment replication system based on active panoramic vision, comprising an omnidirectional vision sensor, a moving volumetric laser source, and a microprocessor that performs 3D panoramic reconstruction and 3D panoramic rendering output on the omnidirectional images. The microprocessor comprises three parts: calibration, 3D reconstruction, and 3D panoramic display rendering. The 3D panoramic display rendering part mainly includes an object-centered panoramic rendering module, a human-centered perspective-view display rendering module, a panoramic perspective-view cyclic display rendering module, a stereoscopic perspective-view display rendering module, a panoramic stereogram cyclic display rendering module, and an observation-distance-varying stereoscopic perspective-view display rendering module. The invention also discloses a 3D panoramic display rendering method based on active panoramic vision. The invention achieves a unified combination of geometric accuracy and realism in 3D panoramic model reconstruction, immersive panoramic 3D scene rendering and display, and automation of the reconstruction process.

Description

3D environment replication system and 3D panoramic display rendering method based on active panoramic vision

Technical Field

The invention relates to applications of laser sources, omnidirectional vision sensors, and computer vision technology in stereoscopic vision measurement and 3D rendering, and in particular to a 3D environment replication system and a 3D panoramic display rendering method based on active panoramic vision.

Background Art

3D reconstruction technology, covering both 3D measurement and stereo reconstruction, is an emerging applied technology with great development potential and practical value. Reconstruction of 3D models mainly involves three aspects: 1) geometric accuracy; 2) realism; 3) automation of the reconstruction process. The data required for 3D model reconstruction mainly comprise depth image data from laser scanning and image data collected by image sensors.

Current 3D laser scanners still leave much room for improvement. 1) Their precise hardware construction requires high-quality integration of CCD technology, laser technology, and precision mechanical sensing technology, which makes such instruments expensive to manufacture and maintain. 2) Existing 3D laser scanning is a surface-scanning imaging technology, so a single scanned point cloud cannot capture the full extent of a building, especially its interior; point clouds obtained from different scanning stations (viewpoints) each use their own local coordinate systems and must therefore be registered into a unified coordinate system, and the repeated transformations between multiple coordinate systems during registration introduce various errors and burden computation speed and resources. 3) Considerable interference is introduced during point cloud acquisition, so the point cloud data must be preprocessed. 4) The point cloud data produced by the software bundled with different manufacturers' 3D laser scanners lack a unified data standard, making data sharing difficult, a problem that is particularly prominent in digital city construction. 5) Two different devices are used to acquire the geometric and color information of spatial object points, and the quality of the registration between the geometric and color data from the two devices directly affects texture mapping and texture synthesis. 6) The 3D modeling process requires repeated manual intervention, so modeling efficiency is low, operators need considerable expertise, and the degree of automation suffers.

Chinese invention patent application No. 201210137201.7 discloses an omnidirectional 3D modeling system based on an active panoramic vision sensor. The system mainly comprises an omnidirectional vision sensor, a moving volumetric laser source, and a microprocessor for performing 3D panoramic reconstruction on omnidirectional images. Each vertical scan of the moving volumetric laser source yields slice point cloud data at different heights; these data are stored indexed by the height of the moving volumetric laser source, accumulated in the order in which the slice point cloud data are generated, and finally assembled into a panoramic 3D model carrying both geometric and color information. However, that technical solution has two main problems. First, the point cloud data obtained by scanning with the moving volumetric laser source cannot capture planes perpendicular to the laser source, such as desktops, floors, and indoor ceilings. Second, current 3D reconstruction software cannot provide human-centered 3D rendering and display: existing 3D software mainly targets object-centered rendering, whereas human-centered 3D display requires giving the observer an immersive 3D reconstruction of the scene. For example, in the 3D digitization of the Longmen Grottoes, the ultimate goal is to let anyone tour the grottoes remotely over the Internet and appreciate their artistic charm from multiple angles in an all-round experience. In ergonomics, visual displays and related components are generally designed on the basis of the static field of view; to give people an immersive visual sense when touring displayed scenes over the Internet, ergonomic methods must be used to realize a human-centered 3D stereoscopic rendering display.

Summary of the Invention

To overcome the deficiencies of existing passive panoramic stereo vision measurement devices, namely high computer resource consumption, poor real-time performance, limited practicality, and low robustness, and the susceptibility to ambient light interference of active 3D panoramic vision measurement devices based on full-color panoramic LED light sources, the present invention provides an omnidirectional 3D modeling system based on an active panoramic vision sensor that directly acquires the geometric position and color information of 3D points in space, thereby reducing computer resource consumption, completing measurement quickly, and offering good real-time performance, strong practicality, and high robustness. The system adopts a rendering and display mode that fuses object-centered 3D macroscopic vision with human-centered 3D mesoscopic vision to enhance the user's immersive experience.

To realize the above, several core problems must be solved: (1) a moving volumetric laser source that can cover the entire reconstructed scene; (2) an active panoramic vision sensor that can rapidly acquire the depth information of real objects; (3) a method for rapidly fusing laser-scanned spatial data points with the corresponding pixels in the panoramic image; (4) a highly automated 3D scene reconstruction method based on regular point cloud data; (5) a human-centered 3D panoramic rendering and display technique; (6) a rendering and display technique fusing object-centered 3D macroscopic vision with human-centered 3D mesoscopic vision; (7) automation of the 3D reconstruction process to reduce manual intervention, so that scanning, processing, generation, and rendering display are completed in one pass.

The technical solution adopted by the present invention to solve the technical problem is as follows:

A 3D environment replication system based on active panoramic vision comprises an omnidirectional vision sensor, a moving volumetric laser source, and a microprocessor for performing 3D panoramic reconstruction and 3D panoramic rendering output on omnidirectional images, the omnidirectional vision sensor being mounted on the guide support rod of the moving volumetric laser source.

The moving volumetric laser source further comprises a volumetric laser source that moves up and down along the guide support rod. The volumetric laser source emits a first omnidirectional laser plane perpendicular to the guide support rod, a second omnidirectional laser plane inclined at θc to the axis of the guide support rod, and a third omnidirectional laser plane inclined at θa to the axis of the guide support rod.

The microprocessor is divided into a calibration part, a 3D reconstruction part, and a 3D panoramic display rendering part.

The calibration part determines the calibration parameters of the omnidirectional vision sensor and extracts, from the panoramic images captured by the omnidirectional vision sensor, the laser projection information corresponding to the first, second, and third omnidirectional laser planes.

The 3D reconstruction part computes the point cloud geometry of the moving laser plane from the position of the moving volumetric laser source and the pixel coordinates of the laser projection information, fuses this geometry with the color information at each omnidirectional laser plane, and constructs a panoramic 3D model.

The 3D panoramic display rendering part comprises:

a human-centered perspective-view display rendering module, which renders a human-centered perspective view from the panoramic 3D model and from the observer's viewing angle and field of view within the 3D reconstructed environment.

The 3D panoramic display rendering part further comprises:

a panoramic perspective-view cyclic display rendering module, which renders a human-centered cyclically displayed panoramic perspective view from the panoramic 3D model and from the cyclic change of the observer's viewing angle and field of view within the 3D reconstructed environment;

a stereoscopic perspective-view display rendering module, which generates right-viewpoint, left-viewpoint, and left-right viewpoint images from the perspective view to render a human-centered stereoscopic perspective view;

a panoramic stereogram cyclic display rendering module, which, from the panoramic 3D model and the cyclic change of the observer's viewing angle and field of view within the 3D reconstructed environment, continuously varies the azimuth angle β and generates the left-right stereo pair at each β to render a human-centered cyclically displayed panoramic stereoscopic perspective view;

an observation-distance-varying stereoscopic perspective-view display rendering module, which renders a human-centered panoramic stereoscopic perspective view under continuously changing observation distance, from the panoramic 3D model and from the change of the observer's observation distance and field of view within the 3D reconstructed environment.

A 3D panoramic display rendering method based on the above 3D environment replication system comprises the steps of:

1) capturing, with the omnidirectional vision sensor, panoramic images formed by the projections of the moving volumetric laser source;

2) determining the calibration parameters of the omnidirectional vision sensor from the panoramic images, and extracting the laser projection information corresponding to the first, second, and third omnidirectional laser planes;

3) computing the point cloud geometry of the moving laser plane from the position of the moving volumetric laser source and the pixel coordinates of the laser projection information, fusing this geometry with the color information at each omnidirectional laser plane, and constructing a panoramic 3D model;

4) rendering a human-centered perspective view from the panoramic 3D model and from the observer's viewing angle and field of view within the 3D reconstructed environment; the specific steps are as follows:

STEP 1) establish a three-dimensional cylindrical coordinate system with the single viewpoint of the omnidirectional vision sensor as the origin Om(0,0,0);

STEP 2) determine the size of the perspective window according to the visual range of the human eye, take the azimuth angle β and the height h as the variables of the perspective window, and obtain the point cloud data (h, β, r) corresponding to the perspective window;

STEP 3) generate a data matrix from the step size of the azimuth angle β, the range of the height h, and the point cloud data (h, β, r);

STEP 4) connect all three-dimensional coordinates in the data matrix with triangular patches, the color of each connecting line being the average of the colors of the two connected points;

STEP 5) output and display all connected triangular patches, completing the human-centered perspective-view rendering.
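As an illustration of STEPs 3) to 5), the following Python sketch builds the perspective-window data matrix and meshes it with triangular patches. It is a minimal sketch, not the patent's implementation; query_cloud, the window dimensions, and the dummy geometry and colors are assumptions standing in for lookups into the reconstructed panoramic 3D model.

```python
import numpy as np

def query_cloud(h, beta):
    """Hypothetical lookup: radial distance r (mm) and RGB color at (h, beta)."""
    r = 2000.0 + 100.0 * np.sin(np.radians(beta))   # dummy geometry
    color = np.array([128.0, 128.0, 128.0])         # dummy gray
    return r, color

def build_window_matrix(beta0, n_beta=300, d_beta=0.36,
                        h_min=800.0, h_max=1500.0, d_h=1.0):
    """STEP 3: one row per height h, one column per azimuth step."""
    heights = np.arange(h_min, h_max + d_h, d_h)
    betas = (beta0 + d_beta * np.arange(n_beta)) % 360.0
    pts = np.empty((len(heights), len(betas), 3))    # (h, beta, r) per vertex
    cols = np.empty((len(heights), len(betas), 3))   # RGB per vertex
    for i, h in enumerate(heights):
        for j, b in enumerate(betas):
            r, c = query_cloud(h, b)
            pts[i, j] = (h, b, r)
            cols[i, j] = c
    return pts, cols

def triangulate(pts, cols):
    """STEPs 4-5: link neighbours into triangle patches; each edge takes the
    average color of its two endpoints."""
    tris = []
    for i in range(pts.shape[0] - 1):
        for j in range(pts.shape[1] - 1):
            edge_color = 0.5 * (cols[i, j] + cols[i + 1, j + 1])
            tris.append((pts[i, j], pts[i, j + 1], pts[i + 1, j + 1], edge_color))
            tris.append((pts[i, j], pts[i + 1, j], pts[i + 1, j + 1], edge_color))
    return tris
```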

The 3D panoramic display rendering method further comprises rendering a human-centered cyclically displayed panoramic perspective view: the azimuth angle β is changed continuously and the display data matrix at each β is generated, completing the rendering of the panoramic perspective cyclic display.
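A minimal sketch of that cyclic display loop under the same assumptions; it reuses build_window_matrix and triangulate from the previous sketch, and render stands for any display backend.

```python
def cycle_panorama(render, d_beta=0.36, steps=1000):
    """Sweep the azimuth through 1000 x 0.36 deg = one full turn."""
    beta = 0.0
    for _ in range(steps):
        pts, cols = build_window_matrix(beta)   # display matrix at current beta
        render(triangulate(pts, cols))          # hand the mesh to the display
        beta = (beta + d_beta) % 360.0
```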

The 3D panoramic display rendering method further comprises rendering a human-centered stereoscopic perspective view; the specific rendering algorithm is as follows:

5.1: determine the initial azimuth angle β1 and read the minimum distance hmin and maximum distance hmax; determine the viewpoint: central eye = 0, left viewpoint = 1, right viewpoint = 2;

5.2: h = hmin;

5.3: read the data at distance value h from the initial azimuth angle β1 up to β1+300; if β1+300 ≥ 1000, then wrap it to β1+300−1000. According to the selected viewpoint: with the central eye (0), compute the cylindrical coordinates of the spatial object points for both the left and right eyes; with the left viewpoint (1), compute the cylindrical coordinates of the spatial object points for the right eye and take the original coordinate data as the left-eye cylindrical coordinates; with the right viewpoint (2), compute the cylindrical coordinates for the left eye and take the original coordinate data as the right-eye cylindrical coordinates. Append these values as a new row of the left-viewpoint matrix and of the right-viewpoint matrix respectively; h = h + Δh;

5.4: test whether h ≥ hmax; if not satisfied, return to 5.3, until the display matrices of the left and right viewpoints are generated;

5.5: connect all three-dimensional coordinates in each display matrix with triangular patches: first connect the data of each row with straight lines, then connect the data of each column with straight lines, and finally connect the point cloud data at (i, j) and (i+1, j+1) with straight lines; the color of each connecting line is the average of the colors of the two connected points;

5.6: display the stereo pair generated by the above processing binocularly, completing the human-centered stereoscopic perspective-view rendering.
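The following Python sketch illustrates steps 5.1 to 5.4 only and is not the patent's implementation. to_eye_coords, the interocular baseline, and the cloud layout (one row of 1000 azimuth samples of (h, β, r) per height) are assumptions.

```python
import numpy as np

CENTRAL, LEFT_VP, RIGHT_VP = 0, 1, 2
N_AZ = 1000      # 360 deg / 0.36 deg azimuth samples
WINDOW = 300     # 300 steps, roughly a 108 deg horizontal window

def to_eye_coords(row, eye, baseline=32.5):
    """Hypothetical reprojection: shift each (h, beta, r) sample toward one eye."""
    shifted = row.copy()
    sign = -1.0 if eye == "left" else 1.0
    shifted[:, 1] += sign * np.degrees(baseline / np.maximum(row[:, 2], 1.0))
    return shifted

def build_stereo_matrices(cloud, beta1, h_min, h_max, dh=1.0, viewpoint=CENTRAL):
    """cloud has shape (n_heights, N_AZ, 3); returns left/right display matrices."""
    left_rows, right_rows = [], []
    h = h_min
    while h < h_max:                                  # the 5.2-5.4 loop
        idx = (beta1 + np.arange(WINDOW)) % N_AZ      # wrap past index 1000
        row = cloud[int((h - h_min) / dh)][idx]
        if viewpoint == CENTRAL:                      # reproject for both eyes
            left_rows.append(to_eye_coords(row, "left"))
            right_rows.append(to_eye_coords(row, "right"))
        elif viewpoint == LEFT_VP:                    # original data = left eye
            left_rows.append(row)
            right_rows.append(to_eye_coords(row, "right"))
        else:                                         # original data = right eye
            right_rows.append(row)
            left_rows.append(to_eye_coords(row, "left"))
        h += dh
    return np.array(left_rows), np.array(right_rows)
```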

The 3D panoramic display rendering method further comprises rendering a human-centered cyclically displayed panoramic stereoscopic perspective view: the azimuth angle β is changed continuously, the display data matrix at the current β is saved, the left-viewpoint matrix and the right-viewpoint matrix are each connected with triangular patches, and the resulting stereo pair is output for stereoscopic display.

The 3D panoramic display rendering method further comprises rendering a human-centered observation-distance-varying stereoscopic perspective view: as the observer continuously changes his own spatial position in the 3D environment, the field of view is determined from an ergonomic standpoint and the panoramic stereogram is rendered accordingly.

The first, second, and third omnidirectional laser planes are a blue line laser, a red line laser, and a green line laser respectively; the blue and green line lasers are mounted above and below the red line laser, and the axes of all the line lasers intersect at one point on the axis of the guide support rod.

The 3D panoramic display rendering method further comprises object-centered panoramic rendering and display: from the point cloud data of the 3D panoramic model, whose coordinate origin is the single viewpoint Om of the omnidirectional vision sensor, the scan slices produced while the moving volumetric laser source sweeps the panoramic scene are used to extract the point cloud data generated by the blue, red, and green omnidirectional laser plane projections, and a point cloud data matrix is generated, realizing object-centered panoramic rendering and display.

The beneficial effects of the present invention are mainly:

(1) a brand-new stereoscopic vision acquisition method is provided, in which the characteristics of omnidirectional laser scanning and omnidirectional vision give the reconstructed 3D model both high accuracy and good texture information;

(2) computer resource consumption is effectively reduced, with good real-time performance, strong practicality, high robustness, and a high degree of automation; the entire 3D reconstruction requires no manual intervention;

(3) omnidirectional laser detection guarantees geometric accuracy, and high-resolution panoramic image acquisition gives every pixel of the panoramic image both geometric and color information, guaranteeing the realism of the 3D reconstruction; the whole process is scanned, parsed, and computed automatically, free of the ill-conditioned computations of conventional 3D reconstruction, so the reconstruction process is automated; geometric accuracy, realism, and automation of 3D panoramic model reconstruction are thus unified;

(4) a human-centered 3D panoramic rendering and display technique is realized, fusing object-centered 3D vision with human-centered 3D vision so that the rendered panoramic 3D scene is more immersive.

Brief Description of the Drawings

Fig. 1 is a structural diagram of the omnidirectional vision sensor;

Fig. 2 is the imaging model of the single-viewpoint catadioptric omnidirectional vision sensor: Fig. 2(a) the perspective imaging process, Fig. 2(b) the sensor plane, Fig. 2(c) the image plane;

Fig. 3 is a schematic structural diagram of the moving volumetric laser source;

Fig. 4 is an explanatory diagram of the calibration of the active panoramic vision sensor;

Fig. 5 is the hardware structure diagram of the omnidirectional 3D modeling system based on the active panoramic vision sensor;

Fig. 6 is a structural diagram of the omnidirectional laser generator component: Fig. 6(a) front view, Fig. 6(b) top view;

Fig. 7 is the imaging principle diagram of the omnidirectional vision sensor;

Fig. 8 is the perspective imaging principle diagram;

Fig. 9 is the principle diagram of binocular stereo imaging under a changing observation distance: Fig. 9(a) before the change, Fig. 9(b) after the change;

Fig. 10 is the software architecture diagram of the omnidirectional 3D modeling system based on the active panoramic vision sensor and of the 3D panoramic rendering;

Fig. 11 is an explanatory diagram of the computation of point cloud spatial geometric information in the omnidirectional 3D modeling system based on the active panoramic vision sensor;

Fig. 12 is a schematic diagram of the sliced panoramic images obtained when the omnidirectional 3D modeling system based on the active panoramic vision sensor acquires 3D point cloud data;

Fig. 13 is an explanatory diagram of the process of computing point cloud spatial geometric information from the panoramic image;

Fig. 14 is an explanatory diagram of the horizontal field of view of the human eye;

Fig. 15 is an explanatory diagram of the vertical field of view of the human eye;

Fig. 16 is an explanatory diagram of the extraction of the green, red, and blue laser projection lines from a parsed panoramic slice image.

Detailed Description

Embodiment 1

Referring to Figs. 1-16, a 3D environment replication system based on active panoramic vision and a 3D panoramic display rendering method comprise an omnidirectional vision sensor, a moving volumetric laser source, and a microprocessor for performing 3D panoramic reconstruction and 3D panoramic rendering output on omnidirectional images.

The center of the omnidirectional vision sensor and the center of the moving volumetric laser source are arranged on the same axis. As shown in Fig. 1, the omnidirectional vision sensor comprises a hyperboloid mirror 2, an upper cover 1, a transparent semicircular outer cover 3, a lower fixing seat 4, a camera unit holder 5, a camera unit 6, a connecting unit 7, and an upper housing 8. The hyperboloid mirror 2 is fixed on the upper cover 1; the connecting unit 7 joins the lower fixing seat 4 and the transparent semicircular outer cover 3 into one body; the transparent semicircular outer cover 3, the upper cover 1, and the upper housing 8 are fixed together by screws; the camera unit 6 is screwed onto the camera unit holder 5, which is in turn screwed onto the lower fixing seat 4; the output of the camera unit 6 of the omnidirectional vision sensor is connected to the microprocessor.

As shown in Fig. 3, the moving volumetric laser source, which produces the three-dimensional structured projection light, comprises: a guide support rod 2-1, a laser generation combination unit 2-2, a chassis 2-3, a linear motor moving rod 2-4, a linear motor assembly 2-5, blue line laser generating units 2-6, red line laser generating units 2-7, and green line laser generating units 2-8.

Twelve holes are drilled in the laser generation combination unit 2-2, grouped in fours into a blue, a red, and a green line laser generating unit mounting hole group. The axes of the four red mounting holes are orthogonal to the axis of the cylinder of the laser generation combination unit 2-2; the axes of the four blue mounting holes are inclined at θc to the cylinder axis; and the axes of the four green mounting holes are inclined at θa to the cylinder axis. The twelve holes are evenly distributed at 90° intervals around the circumference of the cylinder, which guarantees that their axes intersect at a single point on the cylinder axis of the laser generation combination unit 2-2, as shown in Fig. 6.

The blue line laser generating units 2-6 are fixed in the holes of the blue mounting hole group of the laser generation combination unit 2-2; combined in this way, as shown in Fig. 6, the blue line lasers form an omnidirectional laser plane emitting blue light. The red line laser generating units 2-7, fixed in the holes of the red mounting hole group, likewise form an omnidirectional laser plane emitting red light, and the green line laser generating units 2-8, fixed in the holes of the green mounting hole group, form an omnidirectional laser plane emitting green light. Once the corresponding line laser generating units are fixed in all twelve holes of the laser generation combination unit 2-2, a volumetric laser source is formed; this source projects blue, red, and green omnidirectional laser planes, of which the red plane is perpendicular to the guide support rod, the blue plane is inclined at θc to the axis of the guide support rod, and the green plane is inclined at θa to the axis of the guide support rod.

The moving volumetric laser source is assembled by sliding the volumetric laser source onto the guide support rod 2-1 to form a sliding pair. The guide support rod 2-1 is fixed vertically on the chassis 2-3, and the linear motor assembly 2-5 is fixed on the chassis 2-3. The upper end of the linear motor moving rod 2-4 is rigidly connected to the volumetric laser source; driving the linear motor assembly 2-5 moves the rod 2-4 up and down, carrying the volumetric laser source up and down under the guidance of the support rod 2-1 and forming a moving volumetric laser source, so that the whole panorama to be reconstructed is scanned. The linear motor assembly 2-5 is a miniature AC linear reciprocating geared motor, model 4IK25GNCMZ15S500, with a reciprocating range of 700 mm, a linear reciprocating speed of 15 mm/s, and a maximum thrust of 625 N.

The omnidirectional vision sensor is mounted via a connecting plate on the guide support rod 2-1 of the moving volumetric laser source, forming an active omnidirectional vision sensor, as shown in Fig. 4; the omnidirectional vision sensor is connected to the microprocessor through a USB interface.

The application software of the microprocessor consists mainly of three parts: calibration, 3D reconstruction, and 3D panoramic display rendering. The calibration part mainly comprises a video image reading module, an omnidirectional vision sensor calibration module, an omnidirectional laser plane information parsing module, and a joint calibration module. The 3D reconstruction part mainly comprises a video image reading module, a position estimation module for the linear motor of the moving volumetric laser source, an omnidirectional laser plane information parsing module, a module for computing the point cloud geometry of the moving laser plane, a module for fusing the geometric and color information of the point cloud, a module for constructing the panoramic 3D model from the positions of the moving laser plane, a 3D panoramic model generation module, and a storage unit. The 3D panoramic display rendering part mainly comprises an object-centered panoramic rendering module, a human-centered perspective-view display rendering module, a panoramic perspective-view cyclic display rendering module, a stereoscopic perspective-view display rendering module, a panoramic stereogram cyclic display rendering module, and an observation-distance-varying stereoscopic perspective-view display rendering module.

The video image reading module reads the video images of the omnidirectional vision sensor and stores them in the storage unit; its output is connected to the omnidirectional vision sensor calibration module and the omnidirectional laser plane information parsing module.

The omnidirectional vision sensor calibration module determines the parameters of the mapping between points in 3D space and 2D image points on the camera imaging plane, as shown in Fig. 2. The specific calibration procedure is to move a calibration board around the omnidirectional vision sensor through a full circle, capture several groups of panoramic images, set up a system of equations relating the spatial points to the pixel points in the imaging plane, and solve for the optimum with an optimization algorithm; the results, listed in Table 1, are the calibration parameters of the omnidirectional vision sensor used in the present invention.

Table 1 Calibration results of the ODVS

After the intrinsic and extrinsic parameters of the omnidirectional vision sensor have been calibrated, a correspondence between each pixel of the imaging plane and its incident ray, i.e. its incident angle, can be established, as expressed by formula (1):

$$\tan\alpha=\frac{a_0+a_1\|u''\|+a_2\|u''\|^2+\cdots+a_N\|u''\|^N}{\|u''\|}\qquad(1)$$

where α is the incident angle of the point cloud, ‖u″‖ is the distance from a point on the imaging plane to the center of that plane, and a0, a1, a2, ..., aN are the calibrated intrinsic and extrinsic parameters of the omnidirectional vision sensor. Formula (1) is used to build a lookup table relating every pixel of the imaging plane to its incident angle.
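A sketch of the lookup table implied by formula (1); the polynomial coefficients are placeholders, since the calibrated values of Table 1 are not reproduced here.

```python
import math

A = [-75.0, 0.0, 4.2e-4, 1.1e-7]   # placeholder coefficients a0..a3, not Table 1 values

def incident_angle(rho):
    """Incident angle (rad) for a pixel at distance rho from the image center."""
    f = sum(a * rho ** k for k, a in enumerate(A))
    return math.atan2(f, rho)       # tan(alpha) = f(rho) / rho, per formula (1)

# Precompute one entry per integer radius out to the image border, so parsing
# a laser point costs a table lookup instead of evaluating the polynomial.
ANGLE_LUT = [incident_angle(rho) for rho in range(1, 1081)]
```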

After calibration of the omnidirectional vision sensor adopted in the present invention, the relationship between a point ‖u″‖ on the imaging plane and the incident angle α of the point cloud is given by formula (2), i.e. formula (1) instantiated with the calibrated parameters of Table 1.

The two limit positions of the moving volumetric laser source are determined by the maximum stroke of its linear motor assembly and by the projection angles of the volumetric laser source. The upper limit position is referenced to the eye height of an adult standing and looking straight ahead, with an initial value of 1500 mm; the lower limit position is referenced to the eye height of an adult squatting and looking straight ahead, with an initial value of 800 mm. The maximum stroke of the linear motor assembly is 700 mm; at the upper limit position there remains a 30° upward viewing angle, and at the lower limit position a 30° downward viewing angle. The omnidirectional vision sensor adopted in the present invention has a 28° upward viewing angle and a 65° downward viewing angle, covering nearly a 93° vertical field of view and a 360° horizontal field of view. According to the design of the present invention, the distance between the moving volumetric laser source and the single viewpoint Om of the omnidirectional vision sensor is computed with formula (3):

$$h(z)=h_{O_m}-h_{up\,limit}+h_{LaserMD}\qquad(3)$$

where $h_{O_m}$ is the height of the single viewpoint Om of the omnidirectional vision sensor above the ground, h up limit is the upper limit position of the moving volumetric laser source, h LaserMD is the distance traveled by the moving volumetric laser source, and h(z) is the distance between the moving volumetric laser source and the single viewpoint Om, as shown in Fig. 4.
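A one-line sketch of formula (3) under the assumed sign convention that the single viewpoint sits above the laser unit; the viewpoint height used here is a placeholder, not a value from the patent.

```python
def laser_to_viewpoint_distance(h_laser_md, h_om=1780.0, h_up_limit=1500.0):
    """h(z) in mm: grows as the laser unit travels down from the upper limit."""
    return h_om - h_up_limit + h_laser_md
```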

The image acquisition rate of the omnidirectional vision sensor is specified as 15 frames/s, and the vertical linear reciprocating speed of the moving volumetric laser source is set in the present invention to 15 mm/s, so the source travels 1 mm vertically between two frames. Since the distance between the two limit positions is 700 mm, one vertical scan takes about 47 s and produces 700 panoramic slice images. During one vertical scan 700 frames are processed, each frame containing the three projection lines of the blue, red, and green lasers; frames 1 and 700 are the scanned panoramic slice images at the two limit positions.

The omnidirectional laser plane information parsing module extracts the laser projection information from the panoramic image. The blue, red, and green laser projection points on the panorama are parsed by exploiting the fact that their pixels are brighter than the average brightness of the imaging plane. First the RGB color space of the panorama is converted to the HSI color space, and 1.2 times the average brightness of the imaging plane is taken as the threshold for extracting the blue, green, and red laser projection points. The extracted points must then be told apart; in the present invention this is judged from the hue value H of the HSI color space: a point is classified as a blue laser projection point if H lies in (225, 255), as a red laser projection point if H lies in (0, 30), and as a green laser projection point if H lies in (105, 135); all other pixels are treated as interference. To obtain the accurate position of each laser projection line, the present invention uses a Gaussian approximation to extract its center position; the algorithm is implemented as follows:

Step 1: set the initial azimuth angle β = 0;

Step 2: on the panoramic image, search along azimuth β, starting from the image center, for the green, red, and blue laser projection points. Along azimuth β there are several consecutive pixels projected by a laser of a given color; taking the I component of the HSI color space, the three consecutive pixels whose brightness values are closest to the maximum are used to estimate the center position of the laser projection line by Gaussian approximation, as given by formula (4):

$$d=\frac{1}{2}\cdot\frac{\ln f(i-1)-\ln f(i+1)}{\ln f(i-1)-2\ln f(i)+\ln f(i+1)}\qquad(4)$$

where f(i−1), f(i), and f(i+1) are the brightness values of the three adjacent pixels nearest the peak brightness, d is the sub-pixel correction, and i is the index of the i-th pixel counted from the image center. The estimated center position of the laser projection line of that color is therefore (i + d), which corresponds to ‖u″‖ in formula (1). Since the color laser projections appear on the panoramic image in the order green, red, blue, this ordering is used to exclude the influence of other projection noise on the parsing of the laser information;

Step 3: change the azimuth angle and continue searching for laser projection points, i.e. β = β + Δβ with Δβ = 0.36;

Step 4: test whether β = 360; if so, the search ends; otherwise go to Step 2.
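The following sketch illustrates Steps 1 to 4 under stated assumptions: radial_profile is a hypothetical sampler returning intensity and hue arrays outward from the image center along one azimuth, and the hue intervals are those given above.

```python
import math

def classify_hue(h):
    """Map an HSI hue value to a laser color, or None for interference."""
    if 225 < h < 255:
        return "blue"
    if 0 < h < 30:
        return "red"
    if 105 < h < 135:
        return "green"
    return None

def gaussian_center(intensity, i):
    """Formula (4): sub-pixel peak around index i from three neighbours."""
    lm, l0, lp = (math.log(intensity[i + k]) for k in (-1, 0, 1))
    d = 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)
    return i + d        # corresponds to ||u''|| in formula (1)

def scan_azimuths(radial_profile, n_steps=1000):
    """Steps 1-4: visit all 1000 azimuths (delta beta = 0.36 deg)."""
    centers = {}
    for step in range(n_steps):
        beta = step * 0.36
        intensity, hue = radial_profile(beta)
        i = max(range(1, len(intensity) - 1), key=intensity.__getitem__)
        color = classify_hue(hue[i])
        if color is not None:
            centers.setdefault(color, {})[beta] = gaussian_center(intensity, i)
    return centers
```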

Another omnidirectional laser plane parsing method is a laser projection point extraction algorithm based on inter-frame differences, which obtains the laser projection points by differencing the panoramic slice images captured at two adjacent heights. As the moving laser plane sweeps up and down, fairly obvious differences appear between frames in the vertical direction, i.e. on different slices; the two frames are subtracted, the absolute value of the brightness difference is obtained, and it is compared with a threshold to extract the laser projection points in the panoramic slice image. The green, red, and blue laser projections at a given azimuth β are then identified from the order in which the color laser projections appear on the panoramic image.
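A minimal sketch of the inter-frame difference method; the frames are assumed to be 8-bit grayscale NumPy arrays and the threshold is an arbitrary placeholder.

```python
import numpy as np

def laser_mask(frame_prev, frame_curr, threshold=25):
    """Pixels whose brightness changed by more than `threshold` between the
    slices at two adjacent laser heights are laser projection candidates."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold
```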

Joint calibration is used to calibrate the active omnidirectional vision sensor. Since various assembly errors are unavoidable when the omnidirectional vision sensor and the moving volumetric laser source are assembled, joint calibration reduces these errors to a minimum. The procedure is as follows. First, the active omnidirectional vision sensor is placed inside a hollow cylinder with a diameter of 1000 mm, with the axis of the sensor coinciding with the axis of the cylinder, as shown in Fig. 4. Next, the moving volumetric laser source is switched ON so that it projects its laser planes, moved to the upper limit position h up limit, and a panoramic image is captured; whether the centers of the blue, red, and green rings of light on the panoramic image coincide with the center of the panoramic image is checked, along with the roundness of the blue, red, and green rings; if the centers do not coincide or the roundness is unsatisfactory, the connection between the omnidirectional vision sensor and the moving volumetric laser source must be adjusted. The moving volumetric laser source is then moved to the lower limit position h down limit, and the same capture, concentricity, and roundness checks are repeated, with the connection adjusted again if necessary. Finally, the upper limit position h up limit, the lower limit position h down limit, the maximum travel h LaserMD of the moving volumetric laser source, and the calibration parameters of the omnidirectional vision sensor are stored in the joint calibration database for use during 3D reconstruction.

In the present invention the omnidirectional vision sensor uses a high-definition imaging chip with 4096×2160 resolution. The moving volumetric laser source advances in steps of 1 mm over a vertical scanning range of 700 mm, so the slice resolution produced by the moving volumetric laser source is 700; a single vertical scan therefore suffices to sample and fuse the geometric and color information of every pixel of the panoramic image, through 3D reconstruction and on to rendering and display output, as shown in Fig. 10.

The processing flow of the 3D reconstruction part is:

Step A: read the panoramic video image through the video image reading module;

Step B: estimate the position of the linear motor of the moving volumetric laser source from the motor's moving speed and the times at which the two limit points are reached;

Step C: parse the omnidirectional laser plane information from the panoramic image and compute the point cloud geometry of the moving laser plane;

Step D: read from memory the panoramic video image captured without laser projection, and fuse the geometric and color information of the moving laser plane according to the results of Step C;

Step E: incrementally construct the panoramic 3D model;

Step F: test whether a limit position has been reached; if so go to Step G, otherwise go to Step A;

Step G: switch the moving volumetric laser source OFF, read the panoramic video image without laser projection and save it in the memory unit, output the 3D panoramic model and save it to the storage unit, switch the moving volumetric laser source back ON, and go to Step A.
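A control-flow sketch of Steps A to G; every class and callable here is a hypothetical stand-in for the modules described below, reduced to the loop structure.

```python
class Scanner:
    """Hypothetical stand-in for the sensor plus moving volumetric laser source."""
    def __init__(self, n_slices=700):
        self.n_slices = n_slices
        self.idx = 0
    def read_frame(self):
        return {"slice": self.idx}        # placeholder panoramic frame
    def step(self):
        self.idx += 1                     # laser advances 1 mm per frame
    def at_limit(self):
        return self.idx >= self.n_slices

def reconstruct_one_sweep(scanner, parse_lasers, fuse_color):
    """Steps A-F accumulate the model; Step G grabs the laser-off texture frame."""
    model, texture_frame = [], None
    while not scanner.at_limit():
        frame = scanner.read_frame()                      # Step A
        points = parse_lasers(frame)                      # Steps B-C
        model.append(fuse_color(points, texture_frame))   # Steps D-E
        scanner.step()                                    # Step F loops
    texture_frame = scanner.read_frame()                  # Step G: laser OFF frame
    return model, texture_frame
```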

The processing flow of the 3D reconstruction is detailed below. In Step A a dedicated thread reads the panoramic video images at a rate of 15 frames/s; each captured panoramic image is stored in a memory unit for subsequent processing.

Step B mainly estimates the current position of the moving volumetric laser source. At the start of reconstruction the initial position of the moving volumetric laser source is set to the upper limit position h up limit, the initial step control value is z move(j) = 0, and the step of the moving volumetric laser source between two adjacent frames is Δz, so the following relation holds:

$$z_{move}(j+1)=z_{move}(j)+\Delta z\qquad(5)$$

where z move(j) is the step control value at frame j, z move(j+1) is the step control value at frame j+1, and Δz is the moving step of the moving volumetric laser source; here Δz = 1 mm when moving downward from the upper limit position h up limit, and Δz = −1 mm when moving upward from the lower limit position h down limit. In the program implementation the direction is decided by the following relation:

$$\Delta z=\begin{cases}+1\,\mathrm{mm}, & z_{move}(j)=0\ \text{(start of a downward sweep)}\\[2pt]-1\,\mathrm{mm}, & z_{move}(j)=h_{LaserMD}\ \text{(start of an upward sweep)}\end{cases}\qquad(6)$$

Substituting the result z move(j+1) of formula (5) for h LaserMD in formula (3) gives the distance h(z) between the moving volumetric laser source and the single viewpoint Om of the omnidirectional vision sensor.
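A sketch combining formulas (5) and (6): the step control value is advanced each frame and the step direction reverses at the two limit positions; it feeds h(z) via laser_to_viewpoint_distance from the formula (3) sketch above.

```python
def update_position(z_move, dz, travel=700.0):
    """One Step B update: formula (5) plus the direction flip of formula (6)."""
    z_move += dz                               # formula (5)
    if z_move >= travel or z_move <= 0.0:      # formula (6): reverse at a limit
        dz = -dz
    return z_move, dz
```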

In Step C the panoramic image in the memory unit is read, the omnidirectional laser plane information parsing module extracts the omnidirectional laser plane information from it, and the point cloud geometry of the moving laser plane is then computed.

If the spatial position of the point cloud is expressed in Gaussian coordinates, the spatial coordinates of each point relative to the single viewpoint Om of the omnidirectional vision sensor, taken as the Gaussian origin, are determined by three values (R, α, β): R is the distance from the point to the single viewpoint Om, α is the incident angle of the point at Om, and β is the azimuth angle of the point at Om. For the laser projection points of Fig. 13, the point cloud data in Gaussian coordinates are computed by formulas (7), (8), and (9), in which the incident angles αa, αb, and αc follow from formula (1) applied to ‖u″‖(β)green, ‖u″‖(β)red, and ‖u″‖(β)blue, and the distances follow from the triangle formed by Om, the laser source, and the object point:

$$R_a=\frac{h(z)\,\sin\theta_G}{\cos(\alpha_a+\theta_G)}\qquad(7)$$

$$R_b=\frac{h(z)}{\sin\alpha_b}\qquad(8)$$

$$R_c=\frac{h(z)\,\sin\theta_B}{\cos(\alpha_c-\theta_B)}\qquad(9)$$

where (β)green, (β)red, and (β)blue are the azimuth angles of the green, red, and blue laser projection points relative to the single viewpoint Om of the omnidirectional vision sensor; θB is the angle between the blue projection line and the Z axis; θG is the angle between the green projection line and the Z axis; h(z) is the distance from the moving volumetric laser source to the single viewpoint Om; αa, αb, and αc are the incident angles of the green, red, and blue laser projection points at Om; Ra, Rb, and Rc are the distances of the green, red, and blue laser projection points from Om; and ‖u″‖(β)green, ‖u″‖(β)red, and ‖u″‖(β)blue are the distances from the images of the green, red, and blue projection points, respectively, to the center of the panoramic imaging plane.

If the point-cloud points are expressed in a Cartesian coordinate system, referring to Figure 11, they are computed by formulas (10), (11) and (12),

where Ra, Rb and Rc are the distances from the green, red and blue laser projection points to the single viewpoint Om of the omnidirectional vision sensor; αa, αb and αc are the incidence angles of the green, red and blue laser projection points at Om; and (β)green, (β)red and (β)blue are the azimuths of the green, red and blue laser projection points at Om.
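To make the coordinate conventions concrete, a minimal C++ sketch of this conversion follows. It assumes that formulas (10)-(12), which are not reproduced in the text, take the usual spherical form x = R·cosα·cosβ, y = R·cosα·sinβ, z = R·sinα with α measured from the horizontal plane through Om; the patent's own sign convention for depression and elevation angles may differ.

```cpp
#include <cmath>

// One point-cloud sample in Gaussian coordinates around the single viewpoint Om.
struct GaussianPoint  { double R, alpha, beta; };  // distance, incidence angle, azimuth (radians)
struct CartesianPoint { double x, y, z; };

// Assumed spherical form of formulas (10)-(12): Gaussian -> Cartesian with
// Om as origin; alpha is treated as elevation above the horizontal plane.
CartesianPoint gaussianToCartesian(const GaussianPoint& p) {
    return { p.R * std::cos(p.alpha) * std::cos(p.beta),
             p.R * std::cos(p.alpha) * std::sin(p.beta),
             p.R * std::sin(p.alpha) };
}
```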

The StepC computation traverses the point-cloud data produced by the blue, red and green omnidirectional plane laser projections over the full 360°. Since a high-definition imaging chip is used in the present invention, the azimuth is traversed with a step of Δβ = 0.36° so as to match the vertical scanning accuracy. Figure 16 is a panorama of the scanning result with the moving-body laser light source at one height position: the green short dashed line is the point-cloud data produced by the green omnidirectional plane laser projection, the red long dashed line that of the red projection, and the blue short dashed line that of the blue projection. The traversal algorithm is described below;

Step I: set the initial azimuth β = 0;

Step II: using the omnidirectional plane laser information parsing module, search along the ray direction to obtain the three points ||u″||(β)green, ||u″||(β)red and ||u″||(β)blue on the imaging plane that correspond to the point-cloud data; compute the distance Ra and incidence angle αa of the green point with formula (7), the distance Rb and incidence angle αb of the red point with formula (8), and the distance Rc and incidence angle αc of the blue point with formula (9); then compute the Cartesian coordinates of the three points with formulas (10), (11) and (12), substituting the current traversal azimuth β for (β)green, (β)red and (β)blue; save the computed data in the memory unit;

Step III: β ← β + Δβ, Δβ = 0.36°; judge whether β = 360° holds; if so, end the computation, otherwise go to Step II. A code sketch of this traversal loop is given below.
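The shape of this loop can be sketched in C++ as follows. retrieveImageRadius and computeRAlpha are hypothetical stand-ins for the omnidirectional plane laser information parsing module and for formulas (7)-(9); the Cartesian conversion again assumes the spherical form stated above.

```cpp
#include <cmath>
#include <vector>

struct Point3D { double x, y, z; };

// Hypothetical stand-in: image radius ||u''||(beta) of one laser color
// (0 = green, 1 = red, 2 = blue) at azimuth betaDeg, read from the panorama.
double retrieveImageRadius(double betaDeg, int color) { return 100.0 + color; }

// Hypothetical stand-in for formulas (7)-(9): distance R and incidence
// angle alpha of a projection point, derived from its image radius u.
void computeRAlpha(double u, int color, double& R, double& alpha) {
    R = 0.01 * u;   // placeholder mapping
    alpha = 0.0;    // placeholder
}

// Steps I-III: traverse the full 360° azimuth in 0.36° steps, giving
// 1000 columns per slice with three colors per column.
std::vector<Point3D> traverseSlice() {
    const double kPi = 3.14159265358979323846;
    std::vector<Point3D> cloud;
    for (int i = 0; i < 1000; ++i) {               // 1000 x 0.36° = 360°
        double beta = i * 0.36;
        double b = beta * kPi / 180.0;
        for (int color = 0; color < 3; ++color) {
            double R, alpha;
            computeRAlpha(retrieveImageRadius(beta, color), color, R, alpha);
            cloud.push_back({ R * std::cos(alpha) * std::cos(b),
                              R * std::cos(alpha) * std::sin(b),
                              R * std::sin(alpha) });
        }
    }
    return cloud;   // 3 x 1000 points for this slice
}
```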

In StepD, the panoramic video image captured without laser projection is first read from memory, and the geometric information of the point cloud obtained in StepC is fused with its color information; each fused point then carries both geometric and color information, expressed as (R, α, β, r, g, b). The fusion algorithm is described below,

Step ①: set the initial azimuth β = 0;

Step ②: using the azimuth β and the two points ||u″||(β)red and ||u″||(β)green that correspond to the point-cloud data on the sensor plane, read the (r, g, b) color values of the corresponding pixels in the laser-free panoramic video image and fuse them with the corresponding (R, α, β) obtained in StepC, yielding the combined geometric and color information (R, α, β, r, g, b) of the point;

Step ③: β ← β + Δβ, Δβ = 0.36°; judge whether β = 360° holds; if so, end the computation and save the results in the storage unit; otherwise go to Step ②. A code sketch of this fusion loop follows.
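A minimal C++ sketch of this fusion step; panoramaColorAt is a hypothetical accessor that looks up the laser-free panorama pixel addressed by the azimuth and by the image radius recorded in StepC.

```cpp
#include <cstdint>
#include <vector>

struct GeomPoint  { double R, alpha, beta, u; };   // u: image radius from StepC
struct RGB        { std::uint8_t r, g, b; };
struct FusedPoint { double R, alpha, beta; std::uint8_t r, g, b; };

// Hypothetical accessor: color of the laser-free panoramic image at the
// pixel corresponding to azimuth betaDeg and image radius u. A real
// implementation would index the panoramic image buffer.
RGB panoramaColorAt(double betaDeg, double u) { return {0, 0, 0}; }

// Steps 1-3: attach the sampled color to every geometric point, yielding
// the fused representation (R, alpha, beta, r, g, b).
std::vector<FusedPoint> fuseColor(const std::vector<GeomPoint>& geom) {
    std::vector<FusedPoint> fused;
    fused.reserve(geom.size());
    for (const GeomPoint& g : geom) {
        RGB c = panoramaColorAt(g.beta, g.u);
        fused.push_back({ g.R, g.alpha, g.beta, c.r, c.g, c.b });
    }
    return fused;
}
```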

In StepE, the panoramic 3D model is built up incrementally from the StepD results. In the present invention, the moving-body laser light source completes one vertical scan, i.e. travels from one limit position to the other, to complete the construction of the panoramic 3D model. Each movement step during the scan produces the slice point-cloud data at one height, as shown in Figure 12; these data are indexed by the height value of the moving-body laser light source, so the slices can be accumulated in the order in which they are produced to finally build a panoramic 3D model carrying both geometric and color information. As described above, the invention has two modes: downward panoramic 3D reconstruction and upward panoramic 3D reconstruction;

In StepF, it is judged whether the moving-body laser light source has reached a limit position, i.e. whether zmove(j) = 0 or zmove(j) = hLaserMD holds; if so, go to StepG, otherwise go to StepA.

In StepG, the main work is to output the reconstruction result and prepare for the next reconstruction. Specifically: first set the moving-body laser light source to OFF, read the panoramic video image without laser projection, and save it in the memory unit; then output the reconstructed 3D panoramic model and save it to the storage unit. Because the present invention uses high-resolution acquisition both for generating the slice point-cloud data and for generating the omnidirectional point-cloud data on each slice, every pixel on the imaging plane carries the geometric and color information of its corresponding real point, which effectively avoids the correspondence, tiling and branching problems of three-dimensional reconstruction. Finally, set the moving-body laser light source to ON and go to StepA for the reconstruction of a new 3D panoramic model.

The above processing yields the point-cloud data of the 3D panoramic model with the single viewpoint Om of the omnidirectional vision sensor as coordinate origin. Scanning the panoramic scene with the moving panoramic plane laser projection source produces one scan slice after another, as shown in Figure 12; the blue parts of the figure are the point-cloud data produced by the blue omnidirectional plane laser projection, the red parts those of the red projection, and the green parts those of the green projection.

These scan slice images form naturally as the panoramic plane laser projection source moves during the scan. Each time a panoramic slice image is obtained, the whole image is traversed over the azimuth to extract the point-cloud data produced by the blue, red and green omnidirectional plane laser projections. The point-cloud data are stored as a matrix: rows 1-700 of the matrix hold the data of the blue omnidirectional plane laser projection, rows 701-1400 those of the red projection, and rows 1401-2100 those of the green projection; the columns represent the azimuth scan from 0° to 359.64°, 1000 columns in total. The point-cloud storage matrix is therefore a 2100×1000 matrix. Each point carries the six attributes (x, y, z, R, G, B), which forms an ordered point-cloud data set; the advantage of an ordered set is that neighborhood operations become more efficient once the relationships between adjacent points are known in advance.
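As an illustration of this layout, an accessor for the ordered 2100×1000 storage might look as follows in C++; the 700 slices per color block is an assumption consistent with the stated row counts.

```cpp
#include <vector>

// One fused sample carrying the six attributes (x, y, z, R, G, B).
struct CloudPoint { float x, y, z; unsigned char r, g, b; };

// Ordered storage: rows 0-699 blue, 700-1399 red, 1400-2099 green
// (0-based here); the 1000 columns cover the azimuth scan 0° .. 359.64°.
class OrderedCloud {
public:
    enum Color { Blue = 0, Red = 1, Green = 2 };
    OrderedCloud() : data_(2100 * 1000) {}

    // slice: vertical scan index within a color block (assumed 0..699);
    // betaDeg: azimuth, quantized to the 0.36° column grid.
    CloudPoint& at(Color c, int slice, double betaDeg) {
        int row = static_cast<int>(c) * 700 + slice;
        int col = static_cast<int>(betaDeg / 0.36 + 0.5) % 1000;
        return data_[row * 1000 + col];
    }

private:
    std::vector<CloudPoint> data_;
};
```

With this ordering, the azimuth neighbors of a point are simply the adjacent columns of the same row (with wraparound at the seam), which is what makes neighborhood operations on the ordered set efficient.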

The object-centered panorama rendering module reads the data in the point-cloud storage matrix and then calls the pcl_visualization library of PCL, with which the acquired point-cloud data can be rapidly modeled in 3D and displayed, realizing object-centered panoramic rendering and display.
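A minimal sketch of such a display with PCL's pcl_visualization library follows; the copy-in from the 2100×1000 storage matrix is stubbed with a single dummy point.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

int main() {
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);

    // In the real system every cell of the point-cloud storage matrix would
    // be copied in here; one dummy point stands in for that data.
    pcl::PointXYZRGB pt;
    pt.x = 1.0f; pt.y = 0.0f; pt.z = 0.5f;
    pt.r = 200;  pt.g = 50;  pt.b = 50;
    cloud->push_back(pt);

    pcl::visualization::PCLVisualizer viewer("object-centered panorama");
    viewer.addPointCloud<pcl::PointXYZRGB>(cloud, "cloud");
    viewer.setPointCloudRenderingProperties(
        pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, "cloud");
    while (!viewer.wasStopped())
        viewer.spinOnce(100);   // render until the window is closed
    return 0;
}
```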

The human-centered perspective-view rendering module draws a 3D perspective view according to the observer's viewing angle and field of view in the 3D reconstructed environment; the steps of its rendering algorithm are as follows:

STEP 1) Take the single viewpoint of the panoramic vision sensor as the coordinate origin Om(0,0,0) and establish a three-dimensional cylindrical coordinate system;

STEP 2) Determine the size of the perspective window based on the visual range of the human eye: about 108° in the width direction, with the azimuth β as the variable; the height direction uses h as the variable;

STEP 3) The detection result of the active panoramic vision sensor provides point-cloud data (h, β, r) with the single viewpoint of the panoramic vision sensor as coordinate origin Om(0,0,0). The range in the height direction is bounded by the minimum distance hmin and the maximum distance hmax, giving N rows of data. In the width direction the azimuth β advances continuously in steps of 0.36°, so each slice scan produces 1000 point-cloud data; taking the left edge of the perspective window as the initial azimuth β1, the right edge is at β1 + 300, giving M = 300 columns of data;

STEP 4) Determine the first datum from the chosen initial azimuth β1. For example, with β1 = 36°, the 100th through 400th data form the row at the minimum distance hmin, the 1100th through 1400th data form the next row at hmin + Δh, and so on;

STEP 5) Processing of the display data matrix of the perspective window; for example, with the minimum distance hmin and maximum distance hmax currently obtained there are 2100 rows of data in total; the specific algorithm is as follows:

STEP 51: determine the initial azimuth β1 and read the minimum distance hmin and maximum distance hmax data;

STEP 52: h = hmin;

STEP 53: read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, then β1 + 300 = β1 + 300 − 1000; take these values as a new row of the matrix; h = h + Δh;

STEP 54: judge whether h ≥ hmax; if not, go to STEP 53;

STEP 55: end.

The above algorithm yields an 80×300 matrix; a code sketch of the window extraction follows.
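In C++, the window extraction of STEP 51-55, including the wraparound at the 0°/360° seam, can be sketched as follows; the container layout is an assumption, with one row of 1000 samples per height step.

```cpp
#include <vector>

// One stored sample per (height row, azimuth column); 1000 columns = 360°.
struct Sample { double h, beta, r; unsigned char red, grn, blu; };

// Cut a 300-column (108°) perspective window out of the 1000-column
// cylindrical point set; beta1Col is the column index of the window's
// left edge, and indexing wraps past column 999.
std::vector<std::vector<Sample>>
extractWindow(const std::vector<std::vector<Sample>>& rows, int beta1Col) {
    const int cols = 1000, width = 300;            // 108° / 0.36° = 300
    std::vector<std::vector<Sample>> window;
    window.reserve(rows.size());                   // e.g. 80 height rows
    for (const auto& row : rows) {
        std::vector<Sample> out;
        out.reserve(width);
        for (int j = 0; j < width; ++j)
            out.push_back(row[(beta1Col + j) % cols]);
        window.push_back(out);
    }
    return window;                                 // e.g. an 80x300 matrix
}
```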

STEP 6) Connect all the three-dimensional coordinates in the matrix with triangular patches: first connect the data of each row with straight lines, then connect the data of each column with straight lines, and finally connect the point-cloud data at (i, j) and (i+1, j+1) with straight lines; the color of each connecting line is the average of the colors of the two connected points;

STEP 7) Display all the connected triangular patches on the output device.

The working principle of the omnidirectional vision sensor is as follows: light aimed at the center of the hyperboloid mirror is reflected toward its virtual focus according to the specular properties of the hyperboloid. The scene is reflected by the hyperboloid mirror into the condenser lens and imaged; a point P(x, y) on the imaging plane corresponds to the coordinates A(X, Y, Z) of a scene point in space.

In Figure 7: 2, hyperboloid mirror; 12, incident ray; 13, real focus Om(0,0,c) of the hyperboloid mirror; 14, virtual focus of the hyperboloid mirror, i.e. the center Oc(0,0,−c) of camera unit 6; 15, reflected ray; 16, imaging plane; 17, spatial coordinates A(X,Y,Z) of the scene point; 18, spatial coordinates of the image point incident on the hyperboloid mirror; 19, point P(x,y) projected on the imaging plane.

The optical system formed by the hyperboloid mirror shown in Figure 7 can be expressed by the following five equations;

(X² + Y²)/a² − (Z − c)²/b² = −1  when Z > 0  (20)

β = tan⁻¹(Y/X)  (22)

α = tan⁻¹[((b² + c²)·sinγ − 2bc) / ((b² + c²)·cosγ)]  (23)

where X, Y and Z are spatial coordinates; c is the focal distance of the hyperboloid mirror and 2c the distance between its two foci; a and b are the lengths of the real and imaginary axes of the hyperboloid mirror; β is the angle between the incident ray's projection on the XY plane and the X axis, i.e. the azimuth; α is the angle between the incident ray's projection on the XZ plane and the X axis, here called the incidence angle (a depression angle when α ≥ 0, an elevation angle when α < 0); f is the distance from the imaging plane to the virtual focus of the hyperboloid mirror; γ is the angle between the catadioptric ray and the Z axis; and x, y denote a point on the imaging plane.
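For illustration, formula (23) as printed translates directly into a small C++ helper; for a hyperboloid the focal distance satisfies c² = a² + b².

```cpp
#include <cmath>

// Formula (23) as printed: the incidence angle alpha recovered from the
// angle gamma between the catadioptric ray and the Z axis, for a
// hyperboloid mirror with semi-axes a, b and focal distance c.
double incidenceAngle(double gamma, double b, double c) {
    double num = (b * b + c * c) * std::sin(gamma) - 2.0 * b * c;
    double den = (b * b + c * c) * std::cos(gamma);
    return std::atan2(num, den);   // atan2 keeps the sign (depression vs. elevation)
}
```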

Embodiment 2

The other structures and working processes of this embodiment are the same as those of Embodiment 1, the difference being that, to meet the needs of different reconstruction scenes, the hardware module of the moving-body laser light source is replaced, i.e. the projection angle θa of the green laser line and the projection angle θc of the blue laser line in Figure 4 are changed so as to change the vertical scanning range.

Embodiment 3

The rest is the same as Embodiment 1. To meet different 3D scene rendering and display needs, the human-centered panoramic perspective cyclic display rendering module draws 3D perspective views as the observer continuously changes the viewing angle in the 3D reconstructed environment, with the field of view determined from an ergonomic standpoint. Its core is to change the azimuth β slowly and continuously in sequence and to generate the display data matrix at each azimuth β; the key algorithm is as follows:

STEP 1: determine the initial azimuth β1 and read the minimum distance hmin and maximum distance hmax data;

STEP 2: β = β1; set the number of display cycles to N; n = 0;

STEP 3: h = hmin;

STEP 4: read the data at distance h from the initial azimuth β to β + 300; if β + 300 ≥ 1000, then β + 300 = β + 300 − 1000 (wrapping past the 0°/360° seam); take these values as a new row of the matrix; h = h + Δh;

STEP 5: judge whether h ≥ hmax; if not, go to STEP 4;

STEP 6: save the display data matrix of the current β, connect the display data matrix with triangular patches, and display the output; β = β + Δβ;

STEP 7: judge whether β ≥ 3600;

STEP 8: if so, β = β − 3600 and n = n + 1;

STEP 9: judge whether n ≥ N; if not, go to STEP 3;

STEP 10: end.

Embodiment 4

The other structures and working processes of this embodiment are the same as those of Embodiment 1, the difference being that, to meet different 3D scene rendering and display needs, the human-centered stereoscopic perspective rendering module draws 3D stereoscopic perspective views as the observer continuously changes the viewing angle in the 3D reconstructed environment, with the field of view determined from an ergonomic standpoint. That is: when the perspective view generated by the human-centered perspective rendering module serves as the left-viewpoint image, a right-viewpoint image is generated; when it serves as the right-viewpoint image, a left-viewpoint image is generated; and when it serves as the central-eye image, both left and right viewpoint images are generated, thereby producing a stereo pair. This requires computing the coordinates of the new viewpoint from the geometric relationship between the single-viewpoint coordinate Om(0,0,0) of the ODVS and the spatial object point P(h, β, r);

Taking the single-viewpoint coordinate Om(0,0,0) of the ODVS as the coordinates of the right-eye viewpoint, the coordinates of the spatial point P at the left-eye viewpoint are computed with formula (13);

where hR is the height of the spatial point P at the right-eye viewpoint, hL its height at the left-eye viewpoint, rR its distance at the right-eye viewpoint, rL its distance at the left-eye viewpoint, βL its azimuth at the left-eye viewpoint, and βR its azimuth at the right-eye viewpoint;

Taking the single-viewpoint coordinate Om(0,0,0) of the ODVS as the coordinates of the left-eye viewpoint, the coordinates of the spatial point P at the right-eye viewpoint are computed with formula (14);

where hR is the height of the spatial point P at the right-eye viewpoint, hL its height at the left-eye viewpoint, rR its distance at the right-eye viewpoint, rL its distance at the left-eye viewpoint, βL its azimuth at the left-eye viewpoint, and βR its azimuth at the right-eye viewpoint;

Taking the single-viewpoint coordinate Om(0,0,0) of the ODVS as the coordinates of the central eye, the coordinates of the spatial point P at the left-eye and right-eye viewpoints are computed with formula (15);

where hR is the height of the spatial point P at the right-eye viewpoint, hL its height at the left-eye viewpoint, rR its distance at the right-eye viewpoint, rL its distance at the left-eye viewpoint, βL its azimuth at the left-eye viewpoint, and βR its azimuth at the right-eye viewpoint;

B is the distance between the two eyes: for women it is 56-64 mm and for men 60-70 mm; a distance acceptable for both, 60 mm, is adopted here, i.e. B = 60. A code sketch of the central-eye case follows;
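A C++ sketch of the central-eye case, formula (15) as printed, is given below; r is in the same units as B, angles are in radians, and the arcsin argument is clamped defensively against rounding (std::clamp requires C++17).

```cpp
#include <algorithm>
#include <cmath>

struct Cyl { double h, beta, r; };   // cylindrical coordinates (height, azimuth, range)

// Formula (15) as printed: with the single viewpoint taken as the central
// eye and interocular distance B (here 60 mm), compute the cylindrical
// coordinates of a spatial point P for the left-eye and right-eye viewpoints.
void centralEyeToStereo(const Cyl& p, double B, Cyl& left, Cyl& right) {
    left.h = right.h = p.h;
    left.r  = std::sqrt((B / 2) * (B / 2) + p.r * p.r - B * p.r * std::cos(p.beta));
    right.r = std::sqrt((B / 2) * (B / 2) + p.r * p.r + B * p.r * std::cos(p.beta));
    // The clamp guards against |argument| marginally exceeding 1 through rounding.
    left.beta  = std::asin(std::clamp(left.r  / p.r * std::sin(p.beta), -1.0, 1.0));
    right.beta = std::asin(std::clamp(right.r / p.r * std::sin(p.beta), -1.0, 1.0));
}
```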

The stereoscopic perspective display algorithm is as follows:

STEP 1: determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax data, and select the viewpoint: central eye = 0, left viewpoint = 1, right viewpoint = 2;

STEP 2: h = hmin;

STEP 3: read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, then β1 + 300 = β1 + 300 − 1000. According to the selected viewpoint: if 0 (central eye) is selected, compute the cylindrical coordinates of each spatial object point for the left and right eyes with formula (15); if 1 (left viewpoint) is selected, compute the right-eye cylindrical coordinates with formula (14) and take the original coordinate data as the left-eye cylindrical coordinates; if 2 (right viewpoint) is selected, compute the left-eye cylindrical coordinates with formula (13) and take the original coordinate data as the right-eye cylindrical coordinates. Append these values as new rows of the left-viewpoint matrix and the right-viewpoint matrix respectively; h = h + Δh;

STEP 4: judge whether h ≥ hmax; if not, go to STEP 3;

STEP 5: end;

The above algorithm yields two 80×300 matrices, the display matrices of the left and right viewpoints.

Connect all the three-dimensional coordinates in each matrix with triangular patches: first connect the data of each row with straight lines, then connect the data of each column with straight lines, and finally connect the point-cloud data at (i, j) and (i+1, j+1) with straight lines; the color of each connecting line is the average of the colors of the two connected points.

The stereo pair generated by the above processing is then used for binocular stereoscopic display.

For graphics cards that support binocular stereoscopic display: in an OpenGL environment with stereo support, for example, the OpenGL stereoscopic display mode is enabled at the stage of creating the device handle, and the generated stereo pair is sent to the left and right buffers respectively to realize stereoscopic display.
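A sketch of this setup with freeglut as an assumed windowing layer follows; drawLeftView and drawRightView, which would render the triangulated viewpoint matrices, are stubs.

```cpp
#include <GL/glut.h>

// Stubs: these would render the triangulated left/right viewpoint matrices.
void drawLeftView()  { /* draw the left-viewpoint triangle patches */ }
void drawRightView() { /* draw the right-viewpoint triangle patches */ }

// Quad-buffered stereo: each image of the stereo pair is routed to its
// own back buffer before the swap.
void display() {
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawLeftView();
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawRightView();
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    // GLUT_STEREO requests a quad-buffered visual when the context (device
    // handle) is created; creation fails on cards without stereo support.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("stereo perspective view");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```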

On graphics cards that do not support stereoscopic display, the stereo pair is combined into a single red-green complementary-color stereoscopic image: the red channel is extracted from one image of the pair and the green and blue channels from the other, and the extracted channels are merged to form a complementary-color stereoscopic image.
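A sketch of this channel mixing with OpenCV, an assumed image library here, follows; note that OpenCV stores channels in B, G, R order.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Red-green complementary-color synthesis: the red channel comes from the
// left image, the green and blue channels from the right image.
cv::Mat makeAnaglyph(const cv::Mat& left, const cv::Mat& right) {
    std::vector<cv::Mat> lch, rch;
    cv::split(left, lch);    // index 0 = blue, 1 = green, 2 = red
    cv::split(right, rch);
    std::vector<cv::Mat> mixed = { rch[0], rch[1], lch[2] };  // B, G from right; R from left
    cv::Mat anaglyph;
    cv::merge(mixed, anaglyph);
    return anaglyph;
}
```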

Embodiment 5

The other structures and working processes of this embodiment are the same as those of Embodiment 1, the difference being that, to meet different 3D scene rendering and display needs, the human-centered panoramic stereogram cyclic display rendering module draws panoramic stereograms as the observer continuously changes the viewing angle in the 3D reconstructed environment, with the field of view determined from an ergonomic standpoint. Its core is to change the azimuth β continuously and to generate the left and right stereo pair at each azimuth β; the key algorithm is as follows:

STEP 1: determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax data, and select the viewpoint: central eye = 0, left viewpoint = 1, right viewpoint = 2;

STEP 2: β = β1; set the number of display cycles to N; n = 0;

STEP 3: h = hmin;

STEP 4: read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, then β1 + 300 = β1 + 300 − 1000. According to the selected viewpoint: if 0 (central eye) is selected, compute the cylindrical coordinates of each spatial object point for the left and right eyes with formula (15); if 1 (left viewpoint) is selected, compute the right-eye cylindrical coordinates with formula (14) and take the original coordinate data as the left-eye cylindrical coordinates; if 2 (right viewpoint) is selected, compute the left-eye cylindrical coordinates with formula (13) and take the original coordinate data as the right-eye cylindrical coordinates. Append these values as new rows of the left-viewpoint matrix and the right-viewpoint matrix respectively; h = h + Δh;

STEP 5: judge whether h ≥ hmax; if not, go to STEP 4;

STEP 6: save the display data matrices of the current β, connect the left-viewpoint matrix and the right-viewpoint matrix with triangular patches respectively, generate the stereo pair, and output the stereoscopic display; β = β + Δβ;

STEP 7: judge whether β ≥ 3600;

STEP 8: if so, β = β − 3600 and n = n + 1;

STEP 9: judge whether n ≥ N; if not, go to STEP 3;

STEP 10: end.

Embodiment 6

The other structures and working processes of this embodiment are the same as those of Embodiment 1, the difference being that, to meet different 3D scene rendering and display needs, the human-centered viewing-distance-varying stereoscopic perspective rendering technique draws panoramic stereograms as the observer continuously changes his own spatial position in the 3D environment, with the field of view determined from an ergonomic standpoint. As the observer roams the scene from far to near or from near to far, his position relative to the reconstructed space changes; this involves changes of the point-cloud coordinates and multi-level display techniques, and the amount of point-cloud data used for display changes accordingly.

First, consider how the spatial positions of the point cloud change as the observer roams the scene from far to near. When considering the point-cloud positions, the central-eye convention is still used here, i.e. the midpoint between the two eyes is taken as the observer's viewpoint;

When the observer roams the scene from far to near, the spatial point P(h, β, r) becomes P(hD, βD, rD) relative to the observer's viewpoint. Since performing this computation directly in the cylindrical coordinate system is cumbersome, the viewing direction is taken as the Y axis; roaming the scene from far to near then simply moves the viewpoint a distance D along the Y axis. The cylindrical-coordinate point cloud P(h, β, r) is therefore first converted with formula (16);

into the Cartesian-coordinate point cloud P(x, y, z). Without loss of generality, the new viewpoint O′m(0,0,0) is displaced from the original viewpoint Om(0,0,0) by the distance D, which is expressed by formulas (17) and (18),

The coordinates P(x′, y′, z′) of all point-cloud points in the coordinate system of the new viewpoint O′m(0,0,0) are computed with formula (19), and the Cartesian-coordinate points P(x′, y′, z′) are then converted back into Gaussian-coordinate points P(h′, β′, r′). For an approach along the Y axis this reduces to the simple operation y′ = y + y0. Taking the moved viewpoint O′m(0,0,0) as the coordinates of the central eye, the coordinates of the spatial point P(h′, β′, r′) at the left-eye and right-eye viewpoints are computed with formula (19). For the moved viewpoint O′m(0,0,0), the field of view changes relative to the original viewpoint Om(0,0,0); as specified above, its horizontal extent is 108° and its vertical extent 66°. The field of view at the new viewpoint O′m(0,0,0) must therefore be determined from the result of formula (19) through the incidence angle α′ and azimuth β′ at the new viewpoint, after which the stereoscopic perspective view at the new viewpoint O′m(0,0,0) is drawn with the ASODVS-based human-centered stereoscopic perspective rendering technique; the human-centered panoramic stereoscopic cyclic display at the new viewpoint O′m(0,0,0) uses the ASODVS-based human-centered panoramic stereogram cyclic display rendering technique;

where h′L is the height of the spatial point P at the left-eye viewpoint in the coordinate system of the new viewpoint O′m(0,0,0), r′L its distance at the left-eye viewpoint, β′L its azimuth at the left-eye viewpoint, h′R its height at the right-eye viewpoint, r′R its distance at the right-eye viewpoint, and β′R its azimuth at the right-eye viewpoint.
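A C++ sketch of this round trip under assumed axis conventions follows: the viewing direction is taken as the Y axis with β measured from it, and the translation sign encodes roaming toward the scene; formulas (16)-(19) themselves are not reproduced in the text.

```cpp
#include <cmath>

struct Cyl  { double h, beta, r; };   // cylindrical: height, azimuth, range
struct Cart { double x, y, z; };

// Assumed form of formula (16): cylindrical -> Cartesian, with the viewing
// direction as the Y axis and beta measured from it.
Cart toCart(const Cyl& p) {
    return { p.r * std::sin(p.beta), p.r * std::cos(p.beta), p.h };
}

// Move the viewpoint a distance D along the viewing (Y) axis and express
// the point in the new viewpoint's coordinates (formulas (17)-(19) in spirit).
Cyl translateViewpoint(const Cyl& p, double D) {
    Cart c = toCart(p);
    c.y -= D;                          // the observer approaches the scene by D
    Cyl out;
    out.h    = c.z;
    out.r    = std::sqrt(c.x * c.x + c.y * c.y);
    out.beta = std::atan2(c.x, c.y);   // back to cylindrical coordinates
    return out;
}
```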

When displaying from a new viewing angle, the distance from the new viewpoint to the root node is judged first; when the distance is small, i.e. when the point-cloud data available for display fall below a certain threshold, lower-level point-cloud data are generated by interpolating from the data of the adjacent root nodes. Generating lower-level point-cloud data, i.e. interpolation, is a basic operation in volume rendering; because of its heavy computational load it is implemented with a fast volume-rendering interpolation algorithm.

Claims (9)

1. A 3D environment replication system based on active panoramic vision, comprising an omnidirectional vision sensor, a moving-body laser light source, and a microprocessor for performing 3D panoramic reconstruction and 3D panoramic rendering output on the omnidirectional images, the omnidirectional vision sensor being arranged on the guiding support bar of the moving-body laser light source, characterized in that:
the moving-body laser light source comprises a volumetric laser source that moves up and down along the guiding support bar, the volumetric laser source having a first omnidirectional plane laser perpendicular to the guiding support bar, a second omnidirectional plane laser inclined at an angle θc to the axis of the guiding support bar, and a third omnidirectional plane laser inclined at an angle θa to the axis of the guiding support bar;
the microprocessor is divided into a calibration part, a 3D reconstruction part and a 3D panoramic display rendering part;
the calibration part determines the calibration parameters of the omnidirectional vision sensor and parses, from the panoramic image captured by the omnidirectional vision sensor, the laser projection information corresponding to the first, second and third omnidirectional plane lasers;
the 3D reconstruction part computes the point-cloud geometric information of the moving plane from the position of the moving-body laser light source and the pixel coordinates associated with the laser projection information, fuses the point-cloud geometric information of the moving plane with the color information of each omnidirectional plane laser, and builds the panoramic 3D model;
the 3D panoramic display rendering part comprises:
a human-centered perspective-view rendering module, which draws a human-centered perspective view from the panoramic 3D model according to the observer's viewing angle and field of view in the 3D reconstructed environment.
2. The 3D environment replication system based on active panoramic vision according to claim 1, characterized in that the 3D panoramic display rendering part further comprises:
a panoramic perspective cyclic display rendering module, which draws a human-centered cyclic display of panoramic perspective views from the panoramic 3D model according to the cyclic variation of the observer's viewing angle and the field of view in the 3D reconstructed environment;
a stereoscopic perspective-view rendering module, which generates right-viewpoint, left-viewpoint and left-and-right-viewpoint images from the human-centered perspective view to draw a human-centered stereoscopic perspective view;
a panoramic stereogram cyclic display rendering module, which, from the panoramic 3D model and the cyclic variation of the observer's viewing angle and field of view in the 3D reconstructed environment, generates the left and right stereo pair at each azimuth β by continuously changing β, to draw a human-centered cyclic display of panoramic stereoscopic perspective views;
a viewing-distance-varying stereoscopic perspective rendering module, which, from the panoramic 3D model and the changes of the observer's viewing distance and field of view in the 3D reconstructed environment, continuously changes the viewing distance to draw a human-centered panoramic stereoscopic display.
3. A 3D panoramic display rendering method based on the 3D environment replication system of claim 1 or 2, characterized by comprising the steps of:
1) capturing, with the omnidirectional vision sensor, the panoramic image formed by the projections of the moving-body laser light source;
2) determining the calibration parameters of the omnidirectional vision sensor from the panoramic image, and parsing the laser projection information corresponding to the first, second and third omnidirectional plane lasers;
3) computing the point-cloud geometric information of the moving plane from the position of the moving-body laser light source and the pixel coordinates associated with the laser projection information, fusing the point-cloud geometric information of the moving plane with the color information of each omnidirectional plane laser, and building the panoramic 3D model;
4) drawing a human-centered perspective view from the panoramic 3D model according to the observer's viewing angle and field of view in the 3D reconstructed environment, with the following concrete steps:
STEP 1) take the single viewpoint of the omnidirectional vision sensor as the coordinate origin Om(0,0,0) and establish a three-dimensional cylindrical coordinate system;
STEP 2) determine the size of the perspective window according to the visual range of the human eye, with the azimuth β and the height h as the window variables, and obtain the point-cloud data (h, β, r) corresponding to the perspective window, r being the distance from the corresponding spatial point to the single viewpoint;
STEP 3) generate the data matrix from the step size and range of the azimuth β and the height h, and from the point-cloud data (h, β, r);
STEP 4) connect all the three-dimensional coordinates in the data matrix with triangular patches, the color of each connecting line being the average of the colors of the two connected points;
STEP 5) display all the connected triangular patches on the output device, completing the drawing of the human-centered perspective view.
4. The 3D panoramic display rendering method according to claim 3, characterized in that it further comprises drawing a human-centered cyclic display of panoramic perspective views: by continuously changing the azimuth β, the display data matrix at each azimuth β is generated, completing the drawing of the cyclically displayed panoramic perspective views; the algorithm for the display data matrix is as follows:
4.1) determine the initial azimuth β1, and read the minimum distance hmin and maximum distance hmax data;
4.2) assign β = β1, set the number of display cycles to N, and initialize n to 0;
4.3) assign h = hmin;
4.4) read the data at distance h from the initial azimuth β to β + 300; if β + 300 ≥ 1000, assign β + 300 = β + 300 − 1000; take these values as a new row of the matrix, and assign h = h + Δh;
4.5) judge whether h ≥ hmax; if not, go to 4.4);
4.6) save the display data matrix of the current β, connect the display data matrix with triangular patches, and display the output; assign β = β + Δβ;
4.7) judge whether β ≥ 3600;
4.8) if so, assign β = β − 3600 and n = n + 1;
4.9) judge whether n ≥ N; if not, go to 4.3).
5. The 3D panoramic display rendering method according to claim 3, characterized in that it further comprises drawing a human-centered stereoscopic perspective view, the concrete rendering algorithm being as follows:
5.1: determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax data, and select the viewpoint;
5.2: assign h = hmin;
5.3: read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, assign β1 + 300 = β1 + 300 − 1000; according to the selected viewpoint: if the central eye is selected, compute the cylindrical coordinates of each spatial object point for the left and right eyes; if the left viewpoint is selected, compute the right-eye cylindrical coordinates of each spatial object point and take the original coordinate data as the left-eye cylindrical coordinates; if the right viewpoint is selected, compute the left-eye cylindrical coordinates of each spatial object point and take the original coordinate data as the right-eye cylindrical coordinates; append these values as new rows of the left-viewpoint matrix and right-viewpoint matrix respectively, and assign h = h + Δh;
5.4: judge whether h ≥ hmax; if not, go to 5.3, until the display matrices of the left and right viewpoints are generated;
5.5: connect all the three-dimensional coordinates in each display matrix with triangular patches: first connect the data of each row with straight lines, then connect the data of each column with straight lines, and finally connect the point-cloud data at (i, j) and (i+1, j+1) with straight lines; the color of each connecting line is the average of the colors of the two connected points;
5.6: perform binocular stereoscopic display with the stereo pair generated by the above processing, completing the drawing of the human-centered stereoscopic perspective view.
6. The 3D panoramic display rendering method according to claim 3, characterized in that it further comprises drawing a human-centered cyclic display of panoramic stereoscopic perspective views, generating the left and right stereo pair at each azimuth β by continuously changing β, the algorithm being as follows:
6.1: determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax data, and select the viewpoint from among the central eye, the left viewpoint and the right viewpoint;
6.2: assign β = β1, set the number of display cycles to N, and initialize n to 0;
6.3: assign h = hmin;
6.4: read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, assign β1 + 300 = β1 + 300 − 1000; according to the selected viewpoint: if the central eye is selected, compute the cylindrical coordinates of each spatial object point for the left and right eyes; if the left viewpoint is selected, compute the right-eye cylindrical coordinates and take the original coordinate data as the left-eye cylindrical coordinates; if the right viewpoint is selected, compute the left-eye cylindrical coordinates and take the original coordinate data as the right-eye cylindrical coordinates; append these values as new rows of the left-viewpoint matrix and right-viewpoint matrix respectively, and assign h = h + Δh;
6.5: judge whether h ≥ hmax; if not, go to 6.4;
6.6: save the display data matrices of the current β, connect the left-viewpoint matrix and the right-viewpoint matrix with triangular patches respectively, generate the stereo pair, and output the stereoscopic display; assign β = β + Δβ;
6.7: judge whether β ≥ 3600;
6.8: if β ≥ 3600, assign β = β − 3600 and n = n + 1;
6.9: judge whether n ≥ N; if not, go to 6.3.
7. The 3D panoramic display rendering method according to claim 5 or 6, characterized in that computing the coordinates of a new viewpoint from the geometric relationship between the single-viewpoint coordinate Om(0,0,0) and the spatial object point P(h, β, r) comprises:
taking the single-viewpoint coordinate Om(0,0,0) as the coordinates of the right-eye viewpoint, computing the coordinates of the spatial point P at the left-eye viewpoint with formula (13);

hR = hL,  rR = √(B² + rL² + 2·B·rL·cos βL),  βR = arcsin((rR/rL)·sin βL)  (13)

where hR is the height of the spatial point P at the right-eye viewpoint, hL its height at the left-eye viewpoint, rR its distance at the right-eye viewpoint, rL its distance at the left-eye viewpoint, βL its azimuth at the left-eye viewpoint, and βR its azimuth at the right-eye viewpoint;
taking the single-viewpoint coordinate Om(0,0,0) as the coordinates of the left-eye viewpoint, computing the coordinates of the spatial point P at the right-eye viewpoint with formula (14);

hL = hR,  rL = √(B² + rR² + 2·B·rR·cos βR),  βL = arcsin((rL/rR)·sin βR)  (14)

taking the single-viewpoint coordinate Om(0,0,0) as the coordinates of the central eye, computing the coordinates of the spatial point P at the left-eye and right-eye viewpoints with formula (15);

hL = h,  rL = √((B/2)² + r² − B·r·cos β),  βL = arcsin((rL/r)·sin β)
hR = h,  rR = √((B/2)² + r² + B·r·cos β),  βR = arcsin((rR/r)·sin β)  (15)

where B is the distance between the two eyes.
8. The 3D panoramic display rendering method according to claim 3, characterized in that the first, second and third omnidirectional plane lasers are respectively a blue line laser, a red line laser and a green line laser; the blue line laser and the green line laser are mounted above and below the red line laser, and the axes of all the line lasers intersect at one point on the axis of the guiding support bar.
9. The 3D panoramic display rendering method according to claim 8, characterized in that it further comprises object-centered panoramic rendering and display: from the point-cloud data of the 3D panoramic model whose coordinate origin is the single viewpoint Om of the omnidirectional vision sensor, and from the scan slices produced when the moving-body laser light source scans the panoramic scene, the point-cloud data produced by the blue, red and green omnidirectional plane laser projections are extracted and the point-cloud data matrix is generated, realizing object-centered panoramic rendering and display.
GR01 Patent grant