
WO2016037486A1 - Three-dimensional human body imaging method and system - Google Patents

Three-dimensional human body imaging method and system

Info

Publication number
WO2016037486A1
WO2016037486A1 (PCT/CN2015/076869)
Authority
WO
WIPO (PCT)
Prior art keywords
image
human body
dimensional
camera
base station
Prior art date
Application number
PCT/CN2015/076869
Other languages
English (en)
French (fr)
Inventor
刘晓利
何懂
彭翔
李阿蒙
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学
Publication of WO2016037486A1
Priority to US15/181,398 (granted as US10019838B2)

Classifications

    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/2513: Measuring contours or curvatures by projecting a pattern with several lines in more than one direction, e.g. grids
    • G01B 21/042: Calibration or calibration artifacts
    • G06T 15/04: Texture mapping
    • G06T 7/521: Depth or shape recovery from laser ranging or the projection of structured light
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H04N 13/243: Image signal generators using stereoscopic image cameras with three or more 2D image sensors
    • G06T 2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/30196: Human being; person
    • G06T 2207/30244: Camera pose

Definitions

  • The invention belongs to the technical field of three-dimensional imaging, and particularly relates to a three-dimensional imaging method and system for the human body realized through distributed network control, and to a method for simultaneously calibrating multiple control base stations.
  • Phase-assisted three-dimensional imaging has the advantages of non-contact, high speed, high precision, and high data density. It is widely used in industrial production fields such as reverse engineering, quality control, and defect detection.
  • In typical industrial applications, a single three-dimensional sensing system achieves all-round three-dimensional imaging by moving the object or the sensing system, which is often difficult to automate and consumes too much time.
  • Such a solution is therefore difficult to apply to three-dimensional imaging of subjects such as the human body.
  • On the one hand, the human body has a large surface area, so three-dimensional data must be acquired from multiple angles; on the other hand, the human body can hardly remain motionless, so three-dimensional data acquisition must be completed quickly.
  • Existing three-dimensional human body imaging systems mainly comprise laser-based handheld scanners and scanning systems based on mechanical motion devices.
  • A handheld imaging system achieves complete data acquisition by moving the device through multiple angles by hand, which usually takes several minutes, during which the human body can hardly stay still; imaging systems based on mechanical motion devices require the body to rotate or use large mechanical motion control devices, and are comparatively large and cumbersome.
  • the first technical problem to be solved by the present invention is to provide a three-dimensional imaging method for a human body, which aims to quickly and completely acquire high-precision, high-density three-dimensional data and high-realistic color data of the human body.
  • a three-dimensional imaging method of a human body includes the following steps:
  • Step A: After receiving the acquisition instruction from the control center, the plurality of control base stations project coding patterns onto the human body and simultaneously collect the human body image information observed from their respective perspectives; each control base station then performs depth calculation on its own human body image information to obtain three-dimensional geometric depth data in its respective local coordinate system;
  • the plurality of control base stations are interconnected, and a measurement space completely covering the human body is formed around the human body.
  • Each control base station includes two longitudinally arranged three-dimensional sensors, which are respectively used to acquire, from the perspective of that control base station, the three-dimensional geometric depth data and texture data of the upper and lower parts of the human body;
  • Step B Each control base station transforms the three-dimensional geometric depth data from a respective local coordinate system to a global coordinate system;
  • Step C: Each control base station performs white light projection on the human body, then collects the color texture data of the human body observed from its perspective, and sends the texture data to the control center together with the three-dimensional geometric depth data transformed in Step B;
  • Step D: The control center receives the three-dimensional geometric depth data in the global coordinate system and the corresponding surface texture data transmitted by each control base station. The control center first splices the three-dimensional geometric depth data collected by all control base stations into a three-dimensional human body model, then removes redundancy from the whole-body model to obtain the fused three-dimensional human body model; it then applies weighted fusion to the texture data of the overlapping portions of the human body colors collected by the control base stations to obtain the fused texture data; finally, it associates the fused three-dimensional human body model with the fused texture data in a one-to-one correspondence.
  • a second technical problem to be solved by the present invention is to provide a three-dimensional imaging system for a human body, including:
  • a plurality of control base stations, configured to construct an anthropometric measurement space and, after receiving the acquisition instruction from the control center, to project coding patterns onto the human body located in the measurement space while simultaneously collecting the human body image information observed from their respective perspectives; each control base station then performs depth calculation on its own human body image information to obtain three-dimensional geometric depth data in its local coordinate system, transforms that data from the local coordinate system to the global coordinate system, performs white light projection on the human body, collects the color texture data of the human body observed from its perspective, and sends the texture data together with the transformed three-dimensional geometric depth data; the plurality of control base stations are interconnected and arranged around the human body to form the anthropometric measurement space, and each control base station includes two longitudinally arranged three-dimensional sensors, respectively used to acquire the human body image information and texture data of the upper and lower parts of the human body from the perspective of that control base station;
  • a control center, which receives the three-dimensional geometric depth data in the global coordinate system and the corresponding surface texture data transmitted by each control base station; the control center first splices the three-dimensional geometric depth data collected by all control base stations into a three-dimensional human body model, then removes redundancy from the whole-body model to obtain the fused three-dimensional human body model; it then applies weighted fusion to the texture data of the overlapping portions of the human body colors collected by the control base stations to obtain the fused texture data; finally, it associates the fused three-dimensional human body model with the fused texture data in a one-to-one correspondence.
  • a third technical problem to be solved by the present invention is to provide a method for realizing simultaneous calibration of a multi-control base station in a three-dimensional imaging system as described above, comprising the steps of:
  • Step A01: Use a high-resolution digital camera to photograph a stereoscopic target from a plurality of different viewing angles to obtain target images; the stereoscopic target can cover the measurement space and has a plurality of coded marker points on its surface, each marker point carrying a different coding band as its unique identifier;
  • Step A02: Perform center localization and decoding on the coded marker points in the target images, and obtain the point correspondences and image coordinates across the images of different perspectives according to the distinct code value of each coded point;
  • Step A03: Using the bundle adjustment method, compute the reprojected image coordinates of each differently coded world point X j under shooting angle i, and optimize the reprojection error, i.e. minimize the sum over all views i and points j of the squared distances between the observed marker centers and their reprojections, as shown in the following equation:
  • Step A04: Place the corrected stereoscopic target in the measurement space, rotate the target multiple times, and have the control base stations collect images of the target after each rotation; for a node i, the structural parameters of the binocular sensor and the external parameters of the node are taken as the parameter vector to be optimized, and the optimization objective function is constructed as follows:
  • the subscript s represents the sth shooting position of the system, and t represents the tth landmark point in the target.
  • the parameter vector to be optimized for sensor node i comprises the internal parameters and distortion coefficients of the first and second cameras of sensor i, respectively.
  • Step A05: By minimizing the objective function, the optimal estimate of the system parameters is obtained, yielding the structural parameters of the node. The internal parameters are used to implement the depth reconstruction of sensor i; r i , t i express the relationship between sensor i and the global world coordinate system.
  • The invention provides a large-scale three-dimensional human body scanning system composed of a sensor network of multiple control base stations using distributed computing, with the following advantages. First, calibration of the structural parameters of all sensors and their global matching parameters is achieved even though the effective fields of view of the distributed sensors have no spatial overlap. Second, the corresponding-point search method combining phase shift and random structured light reduces the image acquisition time of single-viewpoint depth data. Third, time multiplexing compresses the overall data acquisition time of the system, while the distributed-computing design enhances the computing power of the entire system and increases the data acquisition speed. Fourth, depth data from different sensors are matched automatically according to their calibrated global matching parameters. Fifth, the system of this embodiment is easy to debug and extend, highly automated, and simple to operate in three-dimensional scanning.
  • FIG. 1 is a schematic diagram of a multi-control base station arrangement provided by the present invention.
  • FIG. 2 is a schematic block diagram of a three-dimensional sensor provided by the present invention.
  • FIG. 3 is a diagram showing the internal structure of a control base station provided by the present invention.
  • FIG. 4 is a control schematic diagram of the control center provided by the present invention.
  • FIG. 5 is a schematic diagram of time multiplexing of a control base station provided by the present invention.
  • Figure 6 is a schematic view of a calibration reference provided by the present invention.
  • Figure 7 is a calibration schematic diagram provided by the present invention.
  • Figure 8 is a schematic diagram of a sequence of projection patterns provided by the present invention.
  • Figure 9 is a diagram of reconstructed single-viewpoint three-dimensional geometric depth data provided by the present invention.
  • Figure 10 is a diagram of color information mapped onto the depth data provided by the present invention.
  • Figure 11 is a diagram of a matched three-dimensional model provided by the present invention.
  • a host is used to control two vertically arranged three-dimensional sensors to form a scanning control base station, so as to realize data acquisition and three-dimensional reconstruction of the upper and lower parts of the human body at an angle.
  • In order to obtain relatively complete and detailed three-dimensional data of the human body surface, five such scanning control base stations constitute the human body scanning system.
  • the three-dimensional imaging process of the human body provided by the embodiments of the present invention can be divided into four phases, namely, configuration and spatial arrangement of the control base station, calibration of the multi-control base station of the scanning system, single-view depth data reconstruction, and automatic matching of all different viewpoint depth data.
  • the main principles are as follows:
  • First, the number of control base stations to be arranged in the entire scanning system is determined. Generally, under the premise of satisfying the fineness and integrity requirements of the data, the number of control base stations is minimized; usually it is 3 to 6. In this embodiment, five control base stations are used.
  • The control base stations of the present embodiment have a working distance of about 1.2 m, which is equivalent to five stations uniformly arranged on a circumference of radius 1.2 m whose center is the center of the measurement space of the system.
  • Each control base station is equipped with two sets of vertically arranged three-dimensional depth sensors. For a scanning target no taller than about 1.2 m, a single set of depth sensors per control base station is sufficient.
  • the coded marker points are pasted on the surface of an object covering the measurement space of the system, and the spatial three-dimensional coordinates of the marker points are reconstructed by using the close-range measurement method as a reference for each three-dimensional sensor calibration.
  • the edge of the marker point is obtained according to the sub-pixel edge extraction algorithm, and the image coordinates of the center are obtained by fitting.
  • the first and second inter-camera marker point correspondences and the three-dimensional coordinates of each marker point are obtained by uniquely identifying the marker points by means of the encoded information.
  • the host of the control base station sends a serial port signal to the control board, and the control board gives the projection module a coding pattern according to the serial port signal, and simultaneously triggers the first and second camera acquisitions.
  • The projected image sequence consists of multi-step phase shift images together with a pseudo-random coded image, or with Gray-coded images, or with temporal phase unwrapping images. The more steps the phase shift has, the higher the precision, but increasing the number of steps reduces the projection speed.
  • The present embodiment uses 4-step phase shift images and a pseudo-random code, which is described below by way of example.
  • the upper and lower three-dimensional geometric depth data obtained in each control base station is transformed into the global coordinate system, and then transmitted to the control center through the Gigabit switch.
  • After receiving the depth data from all control base stations, the control center refines the matching result using the iterative closest point method to obtain complete data of the human body surface.
  • A plurality of control base stations are configured to construct an anthropometric measurement space and, after receiving the acquisition instruction from the control center, to project coding patterns onto the human body located in the measurement space while simultaneously collecting the human body image information observed from their respective viewing angles. Each control base station further performs depth calculation on its own human body image information to obtain three-dimensional geometric depth data in its local coordinate system, and then transforms that data from the local coordinate system to the global coordinate system. Each control base station also performs white light projection on the human body, collects the color texture data of the human body observed from its viewing angle, and sends the texture data together with the transformed three-dimensional geometric depth data. The plurality of control base stations are interconnected and arranged around the human body to form the anthropometric space; each control base station includes two longitudinally arranged three-dimensional sensors, respectively used to acquire the human body image information and texture data of the upper and lower parts of the human body from the perspective of that control base station.
  • After receiving the three-dimensional geometric depth data in the global coordinate system and the corresponding surface texture data transmitted by each control base station, the control center first splices the three-dimensional geometric depth data collected by all control base stations into a three-dimensional human body model, and then removes redundancy from the whole-body model to obtain the fused three-dimensional human body model. It then applies weighted fusion to the texture data of the overlapping portions of the human body colors collected by the control base stations to obtain the fused texture data. Finally, the fused three-dimensional human body model and the fused texture data are associated one by one according to the coordinate correspondence. To save cost, one of the control base stations can double as the control center.
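The weighted fusion of overlapping texture data described above can be sketched as follows. This is a minimal illustration, not the patent's exact formulation: it assumes each surface point is seen by k stations, each contributing one color sample and one weight (for example derived from viewing angle), and normalizes the weights per point. The function name `blend_textures` is illustrative.

```python
import numpy as np

def blend_textures(textures, weights):
    """Weighted fusion of overlapping color samples.

    textures: (k, n, 3) array -- k stations, n surface points, RGB
    weights:  (k, n) array    -- per-station, per-point weights
    Returns the (n, 3) fused colors, with weights normalized per point.
    """
    textures = np.asarray(textures, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum(axis=0, keepdims=True)   # normalize across stations
    return (textures * w[..., None]).sum(axis=0)
```

For one point seen by two stations with weights 1 and 3, the fused color is one quarter of the first sample plus three quarters of the second.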
  • FIG. 1 is a schematic diagram of a specific spatial position of five control base stations in the embodiment.
  • Five control base stations are evenly arranged on a circle with a radius of about 1.2 m, with the imaging directions of the three-dimensional sensors of each control base station facing the center of the circular measurement space. It should be understood that, in specific implementations, the fineness and integrity requirements of the human body data differ: the number of control base stations is generally 3 to 6 sets, not necessarily 5, and the working distance of the control base stations must suit different imaging lens focal lengths and is not limited to about 1.2 m.
  • FIG. 3 is a schematic diagram of the internal structure of each control base station.
  • Two three-dimensional sensors of binocular structure are vertically arranged in a column, and one host controls the image acquisition and completes the reconstruction calculation of the depth data; the station can thus acquire the depth data of the upper and lower parts of the human body from one angle and complete the matching of the depth data to global coordinates.
  • 101 is a CCD camera
  • 102 is a CCD camera
  • 105 is a CCD camera
  • 106 is a CCD camera
  • 103 is a projector
  • 107 is a projector
  • 104 is a control board
  • 108 is a control board
  • 109 is a host
  • 110 is the subject.
  • 101, 102 and 103 constitute the upper three-dimensional sensor 1; triggered by the signal of the control board 104, this sensor acquires a sequence of images.
  • the sensor 2 corresponding to 105, 106, 107 has the same working mechanism as the sensor 1.
  • the host 109 is connected to the control boards 104 and 108 through different serial ports, and realizes controlled projection acquisition of the sensors 1 and 2 through different COM ports.
  • A three-dimensional target capable of covering the human body measurement space is prepared; its cross section is approximately a regular hexagon with a side length of 0.4 m, its height is 2 m, and 750 coded marker points are pasted on its surface, each marker point carrying a different coding band as its unique identifier, as shown in Figure 6(a).
  • the three-dimensional coordinates of the center of each marker point are corrected by the method of close-range photogrammetry to obtain the precise spatial three-dimensional coordinates. It specifically includes the following steps:
  • Step 1 Use a high-resolution digital camera to capture the target image at 58 different angles of view, as shown in Figure 6(b) (image taken in partial view).
  • Step 2 Centering and decoding the coded marker points in the target image, and obtaining the correspondence between the different images and the image coordinates according to the different code values of each code point.
  • Gaussian filtering removes image noise
  • the edge detection operator (Canny operator) performs pixel-level coarse positioning on the elliptical edge
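The patent extracts sub-pixel edges and fits the marker center; as a simplified stand-in for that ellipse fit, the sketch below recovers a center from edge points with an algebraic least-squares circle fit (the Kasa method). The function name is illustrative, and a real implementation would fit a general ellipse to the sub-pixel Canny edges instead.

```python
import numpy as np

def fit_circle_center(xs, ys):
    """Algebraic least-squares circle fit (Kasa method).

    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) in the
    least-squares sense; the center is (-D/2, -E/2). A simplified
    stand-in for the ellipse fit applied to marker edge points.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return -D / 2.0, -E / 2.0
```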
  • Step 3: Using the bundle adjustment method, compute the reprojected image coordinates of each differently coded world point X j under shooting angle i, and optimize the reprojection error, i.e. minimize the sum over all views i and points j of the squared distances between the observed marker centers and their reprojections, as shown in equation (1).
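The quantity minimized by equation (1) can be sketched as follows. This is a hedged illustration, not the patent's exact model: it assumes a distortion-free pinhole camera with a single focal length f and principal point c, and the names `project` and `reprojection_error` are illustrative.

```python
import numpy as np

def project(X, R, t, f, c):
    """Pinhole projection of world point X under pose (R, t),
    with focal length f and principal point c (distortion omitted)."""
    Xc = R @ X + t
    return f * Xc[:2] / Xc[2] + c

def reprojection_error(obs, points, poses, f, c):
    """Sum of squared distances between observed marker centers m_ij
    and the reprojections of world points X_j under view i -- the
    objective that bundle adjustment minimizes."""
    err = 0.0
    for (i, j), m in obs.items():
        R, t = poses[i]
        err += np.sum((project(points[j], R, t, f, c) - m) ** 2)
    return err
```

A solver such as Levenberg-Marquardt would then adjust the poses and points to drive this error down; for perfectly consistent observations the error is zero.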
  • the present embodiment is composed of five control base stations, wherein each control base station has two three-dimensional sensors of a binocular system, and a total of ten sets of three-dimensional sensors have the same function and status in the system.
  • FIG. 7 shows the three-dimensional sensor calibration model, where r i , t i are vector representations of the rotation-translation transformation between the first and second cameras of the sensor, and a further parameter pair represents the transformation from the global coordinate system to the sensor's local coordinate system.
  • The target is rotated by approximately 45 degrees each time; after each rotation, every camera in the control system captures a target image, giving a total of 8 sets of images of the target at different positions.
  • the structural parameters of the binocular sensor and the external parameters of the node are used as parameter vectors to be optimized, and an optimization objective function is constructed, such as equation (2).
  • the subscript s represents the sth shooting position of the system
  • t represents the tth landmark point in the target.
  • the parameter vector to be optimized for sensor node i comprises the internal parameters and distortion coefficients of the first and second cameras of sensor i, respectively.
  • r i , t i are the transformation relations of the first and second cameras of the sensor node i.
  • for the specific mathematical model, see Computer Vision (Ma Songde, Zhang Zhengyou, Science Press, 1998).
  • the optimal estimation of the system parameters is realized, and the structural parameters of the node are obtained.
  • The internal parameters are used to realize the depth reconstruction of sensor i; r i , t i express the relationship between sensor i and the global world coordinate system. The parameters of the other sensors can be obtained by the same method.
  • the key to the reconstruction of the 3D geometric depth data is the corresponding point search of the first and second cameras in the sensor.
  • the present invention uses a phase shift and random structure light combination method to search for corresponding points.
  • The method can shorten an image sequence of more than 10 images to 5 frames; the projected image sequence is shown in FIG. 8.
  • the corresponding point search method can be divided into the following steps:
  • Step 1: Perform epipolar rectification on the first- and second-camera images according to the pre-calibrated parameters of the two cameras, so that rows of equal ordinate in the two images correspond along epipolar lines;
  • Step 2: According to the 4-step phase shift algorithm, use the epipolar-rectified phase shift images to calculate the folding (wrapped) phase of the first and second cameras;
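The folding-phase computation of Step 2 can be sketched in a few lines. This assumes the standard 4-step scheme with fringe patterns shifted by 0, pi/2, pi and 3*pi/2; the function name is illustrative.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped ("folding") phase from a 4-step phase shift.

    i1..i4 are images of the scene under sinusoidal fringe patterns
    shifted by 0, pi/2, pi and 3*pi/2. With I_k = A + B*cos(phi + k*pi/2),
    i4 - i2 = 2B*sin(phi) and i1 - i3 = 2B*cos(phi), so atan2 recovers
    phi modulo 2*pi, independently of ambient level A and contrast B.
    """
    return np.arctan2(i4 - i2, i1 - i3)
```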
  • Step 3: Let the folding phase value at pixel position p l (i, j) of the first camera be w(i, j); traverse the jth row of the second camera, and in each folding cycle a pixel position p r (ir, j) with minimal phase difference can be found; these pixel positions p r are regarded as candidate corresponding points of the first-camera pixel position p l ;
  • Step 4: Treat the pseudo-random coded images of the first camera and the second camera as the target image and the image to be matched. Consider a matching window of size (2w+1) × (2w+1) centered on p l in the target image; the gray value of any pixel p l in the window is recorded as p l (u l , v l ), and the gray value of the corresponding point of the candidate p r on the image to be matched is p r (u r , v r ).
  • The NCC (normalized cross-correlation) measure function is used, where the mean gray level of each window is subtracted, u and v are the coordinates within the selected matching window, p l (i+u, j+v) is the gray value of the first camera at position (i+u, j+v) in the window, and p r (ir+u, j+v) is the gray value of the second camera at position (ir+u, j+v). After epipolar rectification, all candidate points share the same ordinate j; traversing the correlation of every candidate point (ir, j) of the second camera against position (i, j) of the first camera yields the correlation measure values, among which only the true corresponding point has a high correlation value, so the pixel-level correspondence between the first and second cameras can be obtained from the multiple candidates by setting a threshold.
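Steps 3 and 4 together amount to scoring each same-row candidate with NCC and keeping the best one above a threshold. The sketch below assumes grayscale NumPy images indexed as `img[row, col]` and a precomputed candidate list from the phase comparison; the function names and the threshold value are illustrative.

```python
import numpy as np

def ncc(patch_l, patch_r):
    """Normalized cross-correlation of two equal-size windows."""
    a = patch_l - patch_l.mean()
    b = patch_r - patch_r.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_candidate(img_l, img_r, i, j, candidates, w=5, thresh=0.8):
    """Among candidate columns on the same epipolar row j, pick the
    one whose (2w+1)x(2w+1) window best correlates with the window
    around column i of the left image; reject weak matches."""
    tgt = img_l[j - w:j + w + 1, i - w:i + w + 1]
    best, best_ir = -1.0, None
    for ir in candidates:
        cand = img_r[j - w:j + w + 1, ir - w:ir + w + 1]
        score = ncc(tgt, cand)
        if score > best:
            best, best_ir = score, ir
    return best_ir if best >= thresh else None
```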
  • Step 5 After obtaining the pixel-level correspondence, the corresponding relationship of the sub-pixels is obtained according to the difference of the folding phase values of the first and second cameras, and then the three-dimensional geometric depth data is reconstructed according to the internal parameters and structural parameters of the first and second cameras that are calibrated.
  • a depth image of the viewpoint can be obtained, and then the depth image of the viewpoint can be matched into the global coordinate system by the coordinate transformation of the equation (4).
  • X i is the coordinate representation of the three-dimensional point in the local coordinate system of the i-th three-dimensional sensor
  • R i and T i are the transformation matrix of the i-th three-dimensional sensor local coordinate system to the global coordinate system
  • X wi is the representation of the point X i after transformation into the global coordinate system
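The transformation of equation (4), X wi = R i X i + T i, applied to a whole point cloud can be sketched as below; the function name is illustrative.

```python
import numpy as np

def to_global(points_local, R_i, T_i):
    """Transform an (N, 3) array of points from the local coordinate
    system of sensor i into the global frame: X_wi = R_i @ X_i + T_i
    applied row-wise."""
    return points_local @ R_i.T + T_i
```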
  • Each control base station transmits the depth data to the control center through the Gigabit network; the control center then checks the matching accuracy and, if the precision is insufficient, carries out further fine matching.
  • the control center checks the matching accuracy. If the precision is not good, the fine matching can be further carried out.
  • in a practical scan, the field arrangement shown in Figure 1 is used: five control base stations are provided, each with two sets of three-dimensional sensors. The entire system is calibrated using the three-dimensional reference points shown in FIG. 6. Following the control flow shown in FIG. 5, the two sets of three-dimensional sensors in each control base station are triggered to acquire an image sequence and reconstruct the three-dimensional geometric depth data of their viewpoints, as shown in FIG. 9; the color information is then mapped onto the depth data, as shown in FIG. 10. Each control base station then matches its depth data into the unified coordinate system according to the calibration parameters and transmits it to the control center, which performs further fine matching and data fusion; FIG. 11 shows the fused depth-image output.
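The scanline search summarized above (phase-based candidate pruning followed by NCC verification against a threshold) can be sketched as follows. This is an illustrative sketch only, assuming epipolar-rectified grayscale images stored as NumPy arrays; the function names, window half-width and the 0.8 threshold are assumptions, not taken from the specification:

```python
import numpy as np

def ncc(w_l, w_r):
    # Normalized cross-correlation (equation (3)) of two gray-level windows.
    a = w_l - w_l.mean()
    b = w_r - w_r.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_on_scanline(img_l, img_r, i, j, candidates, w=3, thresh=0.8):
    """Pick the pixel-level correspondence for left pixel (column i, row j)
    among the phase-selected candidate columns ir on the same rectified row;
    return None if no candidate exceeds the correlation threshold."""
    ref = img_l[j - w:j + w + 1, i - w:i + w + 1]
    best_ir, best = None, thresh
    for ir in candidates:
        score = ncc(ref, img_r[j - w:j + w + 1, ir - w:ir + w + 1])
        if score > best:
            best_ir, best = ir, score
    return best_ir
```

In the actual system the candidate columns ir would come from the wrapped-phase comparison of Step 3, one per fringe period.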

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

本发明适用于三维成像技术领域,提供了一种人体的三维成像方法及系统、多控制基站同时标定方法。其在各分布式传感器的有效视场无空间重叠的条件下,实现了所有传感器结构参数与其全局匹配参数的标定。还利用相移与随机结构光结合的对应点查找方法,减少了单视点深度数据获取的图像采集时间。还利用时间复用的思想,压缩了系统整体的数据采集时间,同时分布计算的设计增强整个系统的计算能力,提高数据获取速度。根据标定各不同传感器的全局匹配参数,实现不同传感器深度数据的自动匹配。该系统调试和扩展方便,自动化程度高,三维扫描过程简易。

Description

人体三维成像方法及系统 技术领域
本发明属于三维成像技术领域,尤其涉及一种通过分布式网络控制实现人体的三维成像方法及系统、多控制基站同时标定方法。
背景技术
相位辅助三维成像具有非接触、速度快、精度高、数据密度大等优点,在逆向工程、质量控制、缺陷检测等工业生产领域均有广泛的应用。通常工业应用中,采用单个三维传感系统,通过移动物体或传感系统实现物体的全方位三维成像,这往往难以自动化,并且时间消耗过长。该方案往往难以适用于人体等对象的三维成像,一方面,这些对象表面积较大,需要从多个角度获取三维数据;另一方面,以人体为例,人体很难长时间保持静止不动,必须快速完成三维数据采集。目前,已有的人体三维成像系统主要有:基于激光的手持式以及机械运动装置的扫描系统,手持式成像系统通过手持移动多角度实现完整数据采集,通常需要几分钟时间,期间人体很难保持静止不动;而基于机械运动装置的成像系统,往往需要人体转动或大型机械运动控制装置,较庞大笨重。
发明内容
本发明所要解决的第一个技术问题在于提供一种人体的三维成像方法,旨在快速、完整的获取人体高精度、高密度的三维数据及高真实感的颜色数据。
本发明是这样实现的,一种人体的三维成像方法,包括下述步骤:
步骤A,若干控制基站在接收到控制中心的采集指令后对人体进行编码图案投影,并同时采集从各自视角所观察到的人体图像信息,各控制基站再对各自的人体图像信息进行深度计算,得到在各自局部坐标系下的三维几何深度数据;
其中,所述若干控制基站之间互联,且围绕人体布设形成一个完全覆盖人体的测量空间,每个控制基站均包括纵向设置的两个三维传感器,该两个三维传感器分别用于从该控制基站的视角获取人体上下两部分的三维几何深度数据和纹理数据;
步骤B,各控制基站将三维几何深度数据从各自的局部坐标系下变换到全局坐标系;
步骤C,各控制基站再对人体进行白光投影,然后采集从各自视角所观察到的人体颜色的纹理数据,将纹理数据连同步骤B变换后的三维几何深度数据一并发送至控制中心;
步骤D,控制中心接收各控制基站传输的全局坐标系下的三维几何深度数据与其相对应的表面纹理数据,控制中心首先将各控制基站所采集的三维几何深度数据进行拼接得到人体三维模型,再对所述人体整体模型去除冗余,得到融合后的人体三维模型;控制中心然后对各个控制基站所采集的所有人体颜色的相重叠部分的纹理数据加权运算,得到融合后的纹理数据;控制中心最后将融合后的人体三维模型与融合后的纹理数据进行一一对应关联。
本发明所要解决的第二个技术问题在于提供一种人体的三维成像系统,包括:
若干控制基站,用于构造人体测量空间,在接收到控制中心的采集指令后对位于人体测量空间的人体进行编码图案投影,并同时采集从各自视角所观察到的人体图像信息,各控制基站再对各自的人体图像信息进行深度计算,得到在各自局部坐标系下的三维几何深度数据;各控制基站然后将三维几何深度数据从各自的局部坐标系下变换到全局坐标系;各控制基站再对人体进行白光投影,然后采集从各自视角所观察到的人体颜色的纹理数据,将纹理数据连同变换后的三维几何深度数据一并发送;其中,所述若干控制基站之间互联,且围绕人体布设形成一个人体测量空间,每个控制基站均包括纵向设置的两个三维传感器,该两个三维传感器分别用于从该控制基站的视角获取人体上下两部分的人体图像信息和纹理数据;
控制中心,控制中心接收各控制基站传输的全局坐标系下的三维几何深度数据与其相对应的表面纹理数据,控制中心首先将各控制基站所采集的三维几何深度数据进行拼接得到人体三维模型,再对所述人体整体模型去除冗余,得到融合后的人体三维模型;控制中心然后对各个控制基站所采集的所有人体颜色的相重叠部分的纹理数据加权运算,得到融合后的纹理数据;控制中心最后将融合后的人体三维模型与融合后的纹理数据进行一一对应关联。
本发明所要解决的第三个技术问题在于提供一种在如上所述的三维成像系统中实现多控制基站同时标定的方法,包括下述步骤:
步骤A01,用高分辨率数码相机在多个不同的视角拍摄立体标靶获得标靶图像;所述立体标靶可覆盖测量空间且表面粘贴有若干编码标志点,每个标志点都有不同的编码带作为其唯一性标识;
步骤A02,对标靶图像中的编码标志点进行中心定位和解码,根据每个编码点不同的编码值获得其在不同视角图像之间的对应关系和图像坐标;
步骤A03,利用捆绑调整方法,每个不同编码的世界坐标Xj在拍摄视角i下的重投影的图像坐标为
Figure PCTCN2015076869-appb-000001
优化该重投影误差,如下式所示:
Figure PCTCN2015076869-appb-000002
其中,(K,θ)为相机的内部结构参数,(ri,ti)为拍摄的位姿,
Figure PCTCN2015076869-appb-000003
为该点圆心图像坐标,由此可得不同编码点的世界坐标Xj,完成标靶校正;
步骤A04,将完成校正的立体标靶置于测量空间中,控制立体标靶多次转动,并在每次转动后由控制基站采集立体标靶图像,对于某一节点i而言,将双目传感器的结构参数和该节点的外参作为待优化的参数向量,构造优化目标函数如下式:
Figure PCTCN2015076869-appb-000004
其中上标s表示系统的第s个拍摄姿态,t表示标靶中第t个标志点,
Figure PCTCN2015076869-appb-000005
为传感器节点i待优化的参数向量,
Figure PCTCN2015076869-appb-000006
分别为传感器i第一、第二相机的内参和畸变,
Figure PCTCN2015076869-appb-000007
为第一、第二相机中基准点的图像坐标,
Figure PCTCN2015076869-appb-000008
为重投影图像坐标;
步骤A05,通过最小化目标函数式,实现对系统参数的最优化估计,得到该节点的结构参数
Figure PCTCN2015076869-appb-000009
和内参
Figure PCTCN2015076869-appb-000010
用于式实现该传感器i的深度重建;ri,ti则表示为传感器i与全局世界坐标系变换关系。
本发明提供了利用分布式计算的多控制基站传感器网络组成的大型人体三维扫描系统,其具有优点如下:第一、在各分布式传感器的有效视场无空间重叠的条件下,实现了所有传感器结构参数与其全局匹配参数的标定。第二、利用相移与随机结构光结合的对应点查找方法,减少了单视点深度数据获取的图像采集时间。第三、利用时间复用的思想,压缩了系统整体的数据采集时间,同时分布计算的设计增强整个系统的计算能力,提高数据获取速度。第四、根据标定各不同传感器的全局匹配参数,实现不同传感器深度数据的自动匹配。第五、该实施例的系统调试和扩展方便,自动化程度高,三维扫描过程简易。
附图说明
图1是本发明提供的多控制基站布置示意图;
图2是本发明提供的三维传感器原理框图;
图3是本发明提供的控制基站内部构造图;
图4是本发明提供的控制中心对控制基站的控制示意图;
图5是本发明提供的控制基站的时间复用示意图;
图6是本发明提供的标定参照物示意图;
图7是本发明提供的标定原理图;
图8是本发明提供的投影图案序列示意图;
图9是本发明提供的各控制基站三维传感器采集的深度数据图;
图10是本发明提供的纹理映射的深度数据;
图11是本发明提供的匹配后三维模型图。
具体实施方式
为了使本发明的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
本发明实施例中,使用一个主机控制两个竖向排列的三维传感器构成扫描控制基站,实现人体一个角度上下两部分数据的采集和三维重建。为了获得较为完整和精细的人体表面三维数据,由五个这样的控制基站构成人体扫描系统。
本发明实施例提供的人体三维成像过程可分为4阶段,即控制基站的配置及空间布置、扫描系统多控制基站的标定、单视点深度数据重建及所有不同视点深度数据的自动匹配。主要原理如下:
1.控制基站空间布置及其配置
1.1根据对人体扫描数据精细程度和数据完整性,确定出整个扫描系统所需要布置的控制基站的数目,一般的在满足数据的精细度和完整性要求的前提下,尽量使用控制基站的数目最少,通常控制基站的数目为3-6个。本实施例中使用了5个控制基站。
1.2考虑系统结构光投影装置和摄像机采用的成像镜头和有效视场,调试系统中控制基站的成像工作距和两两控制基站的合适角度。本实施例的控制基站工作距约为1.2m,相当于五个控制基站均匀布置于半径为1.2m的圆周上,圆心位置为系统的测量空间中心。
1.3本实施例适用于成年人,扫描高度不超过2.2m,每个控制基站分别配置竖向排列的两套三维深度传感器。如果是专门面向身高不超过1.2m孩童的扫描系统,每个控制基站只需一套深度传感器即可满足要求。
2.扫描系统多控制基站的标定
2.1制作立体标靶。在一能覆盖系统测量空间的物体表面粘贴编码标志点,结合近景测量方法重建标志点的空间三维坐标,将其作为每个三维传感器标定的参照物。
2.2将立体标靶放置于测量空间中,每旋转大约45度,就控制系统中每个三维传感器的第一、第二相机拍摄一次标定图像;这样在标靶八个不同的位置,每个相机依次采集8幅标定图像。
2.3处理每个相机所得的图像的编码标志点。利用图像处理和自动识别技术,根据亚像素边缘提取算法得到标志点边缘,并拟合得到中心的图像坐标。借助编码信息唯一识别标志点,获取每个三维传感器第一、第二相机间标志点对应关系和每个标志点三维坐标。
2.4利用标志点图像坐标与其相对应的空间三维坐标,标定每个相机的内参和外参,同时标定每个三维传感器的结构参数和所有传感器与全局坐标的变换关系。
3.单视点深度数据重建
3.1采集图像序列:由控制基站的主机给控制板发送串口信号,控制板根据串口信号将编码图案发送给投影模块,同时触发第一、第二相机采集。其投影采集的图像序列由多步相移图像和1幅伪随机编码图像组成、或由多步相移图像和Grey编码图像组成、或由多步相移图像和时间相位展开图像组成。其中,相移图像的步数越多,精度越高,但是步数的增加又会影响投影速度。作为本发明的一个优选实施例,本发明选用4步相移图像和1幅伪随机编码图像来实现,下文也以此为例进行说明。
3.2利用相移和伪随机编码相结合的对应点查找方案建立第一、第二相机的点对应关系,结合三维传感器的结构参数重建该传感器的三维深度像。
4.不同视点深度数据的自动匹配
4.1利用标定得到的每个不同传感器与全局坐标系的变换关系,将每个控制基站中所得的上下两部分三维几何深度数据变换至全局坐标系中,然后通过千兆交换机传输至控制中心。
4.2控制中心接收到所有控制基站的深度数据之后,利用最近点迭代方法精炼匹配结果,获得人体表面的完整数据。
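上述最近点迭代(ICP)精匹配可用如下最小示意表示;这只是一个在若干假设下的示意(假设两片点云已粗匹配、以N×3的NumPy数组存储,最近点用暴力搜索),并非本发明的具体实现:

```python
import numpy as np

def icp_refine(src, dst, iters=30):
    # 把点云 src 刚性精对齐到 dst,返回累计的旋转 R 与平移 t,
    # 使 src 经 R、t 变换后逼近 dst(最近点对应 + Kabsch 闭式解)。
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # 暴力最近点搜索(实际系统中可用 kd 树加速)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        q = dst[np.argmin(d2, axis=1)]
        mu_p, mu_q = cur.mean(0), q.mean(0)
        H = (cur - mu_p).T @ (q - mu_q)       # 去质心后的互协方差
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:             # 排除反射解
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_q - Ri @ mu_p
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti            # 累计变换
    return R, t
```

该示意仅说明"最近点对应—求解刚体变换—迭代"的基本流程,系统实际采用的精匹配细节见文中引用文献。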
根据以上所述的四个实施步骤,结合实施例的附图进行更进一步的详细说明。
具体地,本发明中,若干控制基站用于构造人体测量空间,在接收到控制中心的采集指令后对位于在人体测量空间的人体进行编码图案投影,并同时采集从各自视角所观察到的人体图像信息,各控制基站再对各自的人体图像信息进行深度计算,得到在各自局部坐标系下的三维几何深度数据;各控制基站然后将三维几何深度数据从各自的局部坐标系下变换到全局坐标系;各控制基站再对人体进行白光投影,然后采集从各自视角所观察到的人体颜色的纹理数据,将纹理数据连同变换后的三维几何深度数据一并发送;其中,所述若干控制基站之间互联,且围绕人体布设形成一个人体测量空间,每个控制基站均包括纵向设置的两个三维传感器,该两个三维传感器分别用于从该控制基站的视角获取人体上下两部分的人体图像信息和纹理数据。
控制中心在接收各控制基站传输的全局坐标系下的三维几何深度数据与其相对应的表面纹理数据后,首先将各控制基站所采集的三维几何深度数据进行拼接得到人体三维模型,再对所述人体整体模型去除冗余,得到融合后的人体三维模型;然后对各个控制基站所采集的所有人体颜色的相重叠部分的纹理数据加权运算,得到融合后的纹理数据;最后将融合后的人体三维模型与融合后的纹理数据按照坐标对应关系进行一一关联。为节省成本,控制中心可以复用其中一个控制基站来实现。
图1是本实施例中五个控制基站的具体空间位置示意图,五个控制基站均匀排布于半径约为1.2m的圆周中,每个控制基站三维传感器的成像方向都正对着圆心位置的测量空间。应当理解,具体实施时,对人体数据的精细度和完整性要求不一样,控制基站数目一般为3到6套,不一定都是5套,控制基站的工作距离需要适应不同的成像镜头焦距,也并非限定为1.2m左右。
图3为每个控制基站的内部结构工作示意图。有两个双目结构的三维传感器竖向排成一列,由一台主机控制图像的采集和完成深度数据的重建计算,能够获取人体在一个角度的上下两部分的深度数据,并完成深度数据到全局坐标的匹配。在图3中,101、102、105、106是CCD摄像机;103、107是投影机;104、108是控制板;109是主机;110为被测者。其中101、102与103构成上部分的三维传感器1,传感器1接收控制板104的信号,同步进行投影与图像序列采集。由105、106、107组成的传感器2,其工作机理与传感器1相同。主机109通过不同的串口与控制板104、108连接,通过不同COM口实现对传感器1、2的投影采集控制。
在本实施例中,制作了能覆盖人体测量空间的立体标靶,其截面近似为正六边形,边长为0.4m,高度为2m,表面粘贴了750个编码标志点,每个标志点都有不同的编码带作为其唯一性标识,如图6(a)所示。利用近景摄影测量的方法对每个标志点中心的三维坐标进行校正,得到其精确的空间三维坐标。其具体包括如下步骤:
Step1、用高分辨率数码相机在58个不同的视角拍摄获得标靶图像,如图6(b)所示(部分视角拍摄的图像)。
Step 2、对标靶图像中的编码标志点进行中心定位和解码,根据每个编码点不同的编码值获得其在不同图像之间的对应关系和图像坐标。
2.1,高斯滤波去除图像噪声;
2.2,边缘检测算子(Canny算子)对椭圆边缘进行像素级粗定位;
2.3,标志点的自动识别(满足以下两个条件的被认为是标志点:即标志点轮廓所包含的像素数在一定范围内波动;而且标志点轮廓是闭合的);
2.4,椭圆边缘的亚像素级精定位(此处对像素级边缘的每个像素的5×5邻域进行三次多项式曲面拟合,求取曲面一阶导数局部极值的位置,即亚像素位置);
2.5,对椭圆边缘点进行最小二乘拟合,得到椭圆中心的亚像素坐标,视为标志点中心图像坐标。
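上述2.5中"由边缘点最小二乘拟合求椭圆中心"的一步可示意如下;这里假设椭圆边缘点已提取为坐标数组,采用一般圆锥曲线的最小二乘拟合求中心(仅为示意,非本发明的具体实现):

```python
import numpy as np

def ellipse_center(xs, ys):
    # 对椭圆边缘点做一般圆锥曲线 Ax^2+Bxy+Cy^2+Dx+Ey+F=0 的最小二乘拟合,
    # 取设计矩阵最小奇异值对应的奇异向量为圆锥系数,再由梯度为零解出中心。
    D = np.column_stack([xs * xs, xs * ys, ys * ys, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    A, B, C, Dc, E, _ = Vt[-1]
    # 中心满足 [2A B; B 2C][x; y] = [-D; -E]
    M = np.array([[2 * A, B], [B, 2 * C]])
    return np.linalg.solve(M, np.array([-Dc, -E]))
```

所得中心即为标志点中心的亚像素图像坐标。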
Step 3、利用捆绑调整方法,每个不同编码的世界坐标Xj在拍摄视角i下的重投影的图像坐标为
Figure PCTCN2015076869-appb-000011
优化该重投影误差,如式(1)所示
Figure PCTCN2015076869-appb-000012
其中,(K,θ)为相机的内部结构参数,其中,
Figure PCTCN2015076869-appb-000013
{f,u0,v0,α}分别为相机的焦距、主点横坐标、主点纵坐标、倾斜因子,θ={k1,k2,k3,p1,p2},(k1,k2,k3)为镜头径向畸变系数,(p1,p2)为镜头切向畸变系数,M为标志点的个数,N为图像个数,(ri,ti)为拍摄的位姿,
Figure PCTCN2015076869-appb-000014
为该点圆心图像坐标,由此可得不同编码点的世界坐标Xj,完成标靶校正,如图6(c)所示。该非线性最优化问题的详细解法可参见《相位辅助光学三维测量系统的标定方法》,殷永凯,2012,天津大学博士论文。
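式(1)中单点的重投影可按文中给出的内参{f,u0,v0,α}与畸变θ={k1,k2,k3,p1,p2}示意如下;其中倾斜因子α的耦合方式按一种常见约定书写,具体以文中引用的标定文献为准(仅为示意):

```python
import numpy as np

def project(X, R, t, f, u0, v0, alpha, k1, k2, k3, p1, p2):
    # 世界点 X 经位姿 (R,t) 变换到相机坐标,归一化后加径向(k1,k2,k3)
    # 与切向(p1,p2)畸变,再按内参 {f,u0,v0,alpha} 映射为图像坐标。
    Xc = R @ X + t
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = f * (xd + alpha * yd) + u0    # 倾斜因子 alpha 的耦合方式为一种常见约定
    v = f * yd + v0
    return np.array([u, v])
```

捆绑调整即把所有标志点在所有视角下按此模型的重投影残差平方和交给非线性最小二乘求解器最小化。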
立体标靶校正完成后,将其置于系统的测量空间中标定系统中三维传感器的系统参数和匹配参数。本实施例由五个控制基站组成,其中每个控制基站有两个双目系统的三维传感器,一共有10套三维传感器,它们在系统中的作用和地位都是相同的。图7为三维传感器标定模型,其中ri,ti为系统第一、第二相机旋转平移变换的向量表示,
Figure PCTCN2015076869-appb-000015
表示全局坐标系到该传感器局部坐标系变换关系。每次转动标靶约为45度,这时控制系统中每个相机拍摄标靶图像,一共获得8组标靶不同位置的图像。对于某一节点i而言,将双目传感器的结构参数和该节点的外参作为待优化的参数向量,构造优化目标函数,如式(2)
Figure PCTCN2015076869-appb-000016
其中,下标s表示系统的第s个拍摄姿态,t表示标靶中第t个标志点,
Figure PCTCN2015076869-appb-000017
为第t个标志点在世界坐标系下坐标,
Figure PCTCN2015076869-appb-000018
为传感器节点i待优化的参数向量,
Figure PCTCN2015076869-appb-000019
分别为传感器i第一、第二相机的内参和畸变,
Figure PCTCN2015076869-appb-000020
为第一、第二相机中基准点的图像坐标,ri,ti为传感器节点i的第一、第二相机的变换关系,
Figure PCTCN2015076869-appb-000021
为在第s个姿态拍摄时的外参,
Figure PCTCN2015076869-appb-000022
为重投影图像坐标,其具体数学模型(参见《计算机视觉》(马颂德、张正友,科学出版社,1998))。通过最小化目标函数式(2),实现对系统参数的最优化估计,得到该节点的结构参数
Figure PCTCN2015076869-appb-000023
和内参
Figure PCTCN2015076869-appb-000024
用于式实现该传感器i的深度重建;ri,ti则表示为传感器i与全局世界坐标系变换关系,对于不同的传感器利用相同的方法都可以求得内参
Figure PCTCN2015076869-appb-000025
和结构参数
Figure PCTCN2015076869-appb-000026
同时也可得到匹配参数ri,ti(i=1,2,3...10)。
系统标定完成后,三维几何深度数据重建的关键在于传感器中第一、第二相机对应点查找。为了缩短图像序列采集时间,本发明采用相移与随机结构光相结合方法进行对应点的搜索,该方法能将10幅以上的图像序列缩短为5幅,投影采集的图像序列如图8所示。该对应点查找方法可以分为如下几个步骤:
Step1、根据预先标定的第一、第二相机的参数,对上半身图像和下半身图像进行极线矫正,使得上半身图像和下半身图像相同的纵坐标有极线对应关系;
Step2、根据4步相移算法,利用已极线矫正的相移图计算得到第一、第二相机的折叠相位;
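Step2中的4步相移解相可示意如下;假设四幅相移图的相移步长为π/2,条纹模型为 I_n = A + B·cos(φ + n·π/2),n = 0…3(仅为示意):

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    # 4步相移:I1 = A + B*cos(phi), I2 = A - B*sin(phi),
    #          I3 = A - B*cos(phi), I4 = A + B*sin(phi);
    # 于是 I4-I2 = 2B*sin(phi), I1-I3 = 2B*cos(phi),
    # 折叠相位 phi ∈ (-pi, pi] 由 arctan2 给出。
    return np.arctan2(I4 - I2, I1 - I3)
```

对第一、第二相机的已矫正相移图分别做此运算,即得到后续候选点搜索所用的折叠相位。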
Step3、第一相机的折叠相位中像素位置pl(i,j)的折叠相位值为w(i,j),遍历第二相机第j行,能在每个折叠周期内查找到相位差最小的像素位置pr(ir,j),这些像素位置pr都看成第一相机像素位置pl的候选对应点;
Step4、将第一、第二相机的伪随机编码图像看做目标图像和待匹配图像,考虑目标图像上以pl为中心的匹配窗口大小为(2w+1)×(2w+1),该窗口内任一像素pl的灰度值记为pl(ul,vl),而候选点pr在待匹配图像上的相应点的灰度为pr(ur,vr)。这两窗口的归一化互相关测度函数(NCC)如式(3)所示:
Figure PCTCN2015076869-appb-000027
其中
Figure PCTCN2015076869-appb-000028
分别为该窗口的图像平均灰度,u、v分别为在选定的匹配窗口的坐标,pl(i+u,j+v)为第一相机在窗口中像素(i+u,j+v)位置的灰度值,pr(ir+u,j+v)为第二相机在窗口中像素为(ir+u,j+v)位置的灰度值,经过极线矫正后每个候选点有着相同的纵坐标j,遍历第二相机所有候选点(ir,j)与第一相机(i,j)位置的相关值都能得到一个相关测度函数值,其中只有对应点有着较高的相关值,通过设定阈值可以在众多候选点中得到第一、第二相机的像素级对应点。
Step5、得到像素级对应之后,根据第一、第二相机的折叠相位值差值得到亚像素的对应关系,然后结合标定的第一、第二相机的内参和结构参数,重建三维几何深度数据。
对于传感器i来说能得到该视点的深度像,然后通过式(4)的坐标变换可将该视点的深度像匹配到全局坐标系中。
Figure PCTCN2015076869-appb-000029
其中,Xi为三维点在第i个三维传感器局部坐标系的坐标表示,Ri、Ti为第i个三维传感器局部坐标系到全局坐标系的变换矩阵,Xwi为点Xi变换为世界坐标系的坐标表示,通过标定的结果ri,ti(i=1,2,3...10)变换可得到。
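式(4)的坐标变换可示意如下(假设深度数据以N×3的NumPy数组存储,每行一个三维点,仅为示意):

```python
import numpy as np

def to_global(X_local, R_i, T_i):
    # 式(4):X_wi = R_i * X_i + T_i,对 N×3 点云逐点应用;
    # X_local @ R_i.T 即对每一行点坐标左乘旋转矩阵 R_i。
    return X_local @ R_i.T + T_i
```

各控制基站用各自标定所得的 R_i、T_i 调用该变换,即可把局部深度数据统一到全局坐标系。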
完成了不同视点的匹配之后,每个控制基站通过千兆网络将深度数据传输至控制中心。然后由控制中心检验匹配精度,如果精度欠佳,可进一步的进行精匹配,具体方法可参见相关文献(《多视场深度像造型中的若干关键技术》,刘晓利,天津大学博士论文,2008)
下文以实际人体扫描为例来具体描述数据获取过程和展示数据获取结果。按照上述的步骤,利用图1所示的现场布置,设置了五个控制基站,每个控制基站有上下两套三维传感器。利用图6所示的三维基准点对整个系统进行标定。并按照图5所示的控制流程,触发每个控制基站中上下两套三维传感器获取图像序列并重建该视点的三维几何深度数据,如图9所示,将颜色信息映射到深度数据,如图10所示。然后各控制基站根据标定参数将深度数据匹配到统一坐标系中,再传输至控制中心,由控制中心完成进一步的精匹配和数据融合,其中图11为深度像融合输出的结果。
以上所述仅为本发明的较佳实施例而已,并不用以限制本发明,凡在本发明的精神和原则之内所作的任何修改、等同替换和改进等,均应包含在本发明的保护范围之内。

Claims (13)

  1. 一种人体三维成像方法,其特征在于,包括下述步骤:
    步骤A,若干控制基站在接收到控制中心的采集指令后对人体进行编码图案投影,并同时采集从各自视角所观察到的人体图像信息,各控制基站再对各自的人体图像信息进行深度计算,得到在各自局部坐标系下的三维几何深度数据;
    其中,所述若干控制基站之间互联,且围绕人体布设形成一个完全覆盖人体的测量空间,每个控制基站均包括纵向设置的两个三维传感器,该两个三维传感器分别用于从该控制基站的视角获取人体上下两部分的三维几何深度数据和纹理数据;
    步骤B,各控制基站将三维几何深度数据从各自的局部坐标系下变换到全局坐标系;
    步骤C,各控制基站再对人体进行白光投影,然后采集从各自视角所观察到的人体颜色的纹理数据,将纹理数据连同步骤B变换后的三维几何深度数据一并发送至控制中心;
    步骤D,控制中心接收各控制基站传输的全局坐标系下的三维几何深度数据与其相对应的表面纹理数据,控制中心首先将各控制基站所采集的三维几何深度数据进行拼接得到人体三维模型,再对所述人体整体模型去除冗余,得到融合后的人体三维模型;控制中心然后对各个控制基站所采集的所有人体颜色的相重叠部分的纹理数据加权运算,得到融合后的纹理数据;控制中心最后将融合后的人体三维模型与融合后的纹理数据进行一一对应关联。
  2. 如权利要求1所述的人体三维成像方法,其特征在于,步骤A中采集的人体图像信息为由多步相移图像和1幅伪随机编码图像组成的图像序列,或者为由多步相移图像和Grey编码图像组成的图像序列,或者由多步相移图像和时间相位展开图像组成的图像序列。
  3. 如权利要求2所述的人体三维成像方法,其特征在于,每个控制基站的两个三维传感器所采集的上半身图像和下半身图像互不重叠,且可拼接成从其视角所观察到的人体的完整图像,定义采集上半身图像、下半身图像的相机分别为第一、第二相机;
    步骤A通过下述方法对人体图像信息进行深度计算得到三维几何深度数据:
    步骤A1,根据预先标定的第一、第二相机的参数,对上半身图像和下半身图像进行极线矫正,使得上半身图像和下半身图像相同的纵坐标有极线对应关系;
    步骤A2,根据4步相移算法,利用已极线矫正的相移图计算得到第一、第二相机的折叠相位;
    步骤A3,第一相机的折叠相位中像素位置pl(i,j)的折叠相位值为w(i,j),遍历第二相机第j行,能在每个折叠周期内查找到相位差最小的像素位置pr(ir,j),这些像素位置pr都作为第一相机像素位置pl的对应候选点;
步骤A4,将第一、第二相机的伪随机编码图像作为目标图像和待匹配图像,考虑目标图像上以pl为中心的匹配窗口大小为(2w+1)×(2w+1),该窗口内任一像素pl的灰度值记为pl(ul,vl),而候选点pr在待匹配图像上的相应点的灰度为pr(ur,vr),这两窗口的归一化互相关测度函数NCC如下式所示:
    Figure PCTCN2015076869-appb-100001
    其中
    Figure PCTCN2015076869-appb-100002
    分别为该窗口的图像平均灰度,u、v分别为在选定的匹配窗口的坐标,pl(i+u,j+v)为第一相机在窗口中像素(i+u,j+v)位置的灰度值,pr(ir+u,j+v)为第二相机在窗口中像素为(ir+u,j+v)位置的灰度值,经过极线矫正后每个候选点有着相同的纵坐标j,遍历第二相机所有候选点(ir,j)与第一相机(i,j)位置的相关值都能得到一个相关测度函数值,其中只有对应点有着较高的相关值,通过设定阈值可以在众多候选点中得到第一、第二相机的像素级对应点。
    步骤A5,得到像素级对应点之后,根据第一、第二相机的折叠相位值差值得到亚像素的对应关系,然后结合预先标定的第一、第二相机的内参和结构参数,重建三维几何深度数据。
  4. 如权利要求1至3任一项所述的人体三维成像方法,其特征在于,步骤A中按照下述方式对所有控制基站的三维传感器中的相机进行图像采集、深度计算的时序控制:
    控制基站内部的每个相机均先进行人体图像采集,再对其采集的人体图像进行深度计算,而在对第i相机采集的人体图像进行深度计算的同时控制第i+1相机开始采集人体图像。
  5. 如权利要求3所述的人体三维成像方法,其特征在于,步骤B根据下式将各控制基站将三维几何深度数据从各自的局部坐标系变换至全局坐标系下:
    Figure PCTCN2015076869-appb-100003
    其中,Xi为三维点在第i个三维传感器局部坐标系的坐标表示,Ri、Ti为第i个三维传感器局部坐标系到全局坐标系的变换矩阵,Xwi为点Xi变换为世界坐标系的坐标表示,通过标定的结果ri,ti(i=1,2,3...10)变换可得到;
    所述ri,ti(i=1,2,3...10)通过下述步骤予以标定:
步骤A01,用高分辨率数码相机在多个不同的视角拍摄立体标靶获得标靶图像;所述立体标靶可覆盖测量空间且表面粘贴有若干编码标志点,每个标志点都有不同的编码带作为其唯一性标识;
    步骤A02,对标靶图像中的编码标志点进行中心定位和解码,根据每个编码点不同的编码值获得其在不同视角图像之间的对应关系和图像坐标;
    步骤A03,利用捆绑调整方法,每个不同编码的世界坐标Xj在拍摄视角i下的重投影的图像坐标为
    Figure PCTCN2015076869-appb-100004
    优化该重投影误差,如下式所示:
    Figure PCTCN2015076869-appb-100005
    其中,(K,θ)为相机的内部结构参数,其中,
    Figure PCTCN2015076869-appb-100006
    {f,u0,v0,α}分别为相机的焦距、主点横坐标、主点纵坐标、倾斜因子,θ={k1,k2,k3,p1,p2},(k1,k2,k3)为镜头径向畸变系数,(p1,p2)为镜头切向畸变系数,M为标志点的个数,N为图像个数,(ri,ti)为拍摄的位姿,
    Figure PCTCN2015076869-appb-100007
    为该点圆心图像坐标,由此可得不同编码点的世界坐标Xj,完成标靶校正;
    步骤A04,将完成校正的立体标靶置于测量空间中,控制立体标靶多次转动,并在每次转动后由控制基站采集立体标靶图像,对于某一双目节点i而言,将双目传感器的结构参数和该节点的外参作为待优化的参数向量,构造优化目标函数如下式:
    Figure PCTCN2015076869-appb-100008
    其中,下标s表示系统的第s个拍摄姿态,t表示标靶中第t个标志点,
    Figure PCTCN2015076869-appb-100009
    为第t个标志点在世界坐标系下坐标,
    Figure PCTCN2015076869-appb-100010
    为传感器节点i待优化的参数向量,
    Figure PCTCN2015076869-appb-100011
    分别为传感器i第一、第二相机的内参和畸变,
    Figure PCTCN2015076869-appb-100012
    为第一、第二相机中基准点的图像坐标,ri,ti为传感器节点i的第一、第二相机的变换关系,
    Figure PCTCN2015076869-appb-100013
    为在第s个姿态拍摄时的外参,
    Figure PCTCN2015076869-appb-100014
    为重投影图像坐标。
    步骤A05,通过最小化目标函数式,实现对系统参数的最优化估计,得到该节点的结构参数
    Figure PCTCN2015076869-appb-100015
    和内参
    Figure PCTCN2015076869-appb-100016
    用于式实现该传感器i的深度重建;ri,ti则表示为传感器i与全局世界坐标系变换关系。
  6. 一种人体三维成像系统,其特征在于,包括:
若干控制基站,用于构造人体测量空间,在接收到控制中心的采集指令后对位于人体测量空间的人体进行编码图案投影,并同时采集从各自视角所观察到的人体图像信息,各控制基站再对各自的人体图像信息进行深度计算,得到在各自局部坐标系下的三维几何深度数据;各控制基站然后将三维几何深度数据从各自的局部坐标系下变换到全局坐标系;各控制基站再对人体进行白光投影,然后采集从各自视角所观察到的人体颜色的纹理数据,将纹理数据连同变换后的三维几何深度数据一并发送;其中,所述若干控制基站之间互联,且围绕人体布设形成一个人体测量空间,每个控制基站均包括纵向设置的两个三维传感器,该两个三维传感器分别用于从该控制基站的视角获取人体上下两部分的人体图像信息和纹理数据;
    控制中心,控制中心接收各控制基站传输的全局坐标系下的三维几何深度数据与其相对应的表面纹理数据,控制中心首先将各控制基站所采集的三维几何深度数据进行拼接得到人体三维模型,再对所述人体整体模型去除冗余,得到融合后的人体三维模型;控制中心然后对各个控制基站所采集的所有人体颜色的相重叠部分的纹理数据加权运算,得到融合后的纹理数据;控制中心最后将融合后的人体三维模型与融合后的纹理数据进行一一对应关联。
  7. 如权利要求6所述的三维成像系统,其特征在于,所述控制中心通过交换机与各控制基站连接;每个三维传感器包括:
    投影机,用于先后对人体进行编码图案投影和白光投影;
    第一、第二相机,用于采集人体被编码图案投影后所呈现的图像信息,该图像信息用于人体的三维重建;其中第一相机同时在人体被白光投影后采集人体颜色的纹理数据,该纹理数据用于纹理映射获得三维模型的彩色信息;
    控制板,用于将待投影图案发送至投影机,控制投影机将待投影图案投影在人体上,并控制所述相机同步采集图像信息;
    主机,与控制板连接,通过控制板实现投采的控制,并对相机采集的人体图像信息进行深度计算和坐标变换。
  8. 如权利要求7所述的三维成像系统,其特征在于,所述相机为彩色相机,采集的人体图像信息为由多步相移图像和1幅伪随机编码图像组成的图像序列,或者为由多步相移图像和Grey编码图像组成的图像序列,或者由多步相移图像和时间相位展开图像组成的图像序列。
  9. 如权利要求8所述的三维成像系统,其特征在于,每个三维传感器中包括第一、第二相机,分别用于采集人体的上半身图像和下半身图像,该上半身图像和下半身图像不重叠且可拼接成从其视角所观察到的人体的完整图像;
所述主机具体根据预先标定的第一、第二相机的参数,对上半身图像和下半身图像进行极线矫正,使得上半身图像和下半身图像相同的纵坐标有极线对应关系;再根据4步相移算法,利用已极线矫正的相移图计算得到折叠相位;第一相机像素点pl(i,j)的折叠相位值为w(i,j),遍历第二相机第j行,能在每个折叠周期内查找到相位差最小的像素点pr(ir,j),这些像素点pr都作为第一相机像素点pl的对应候选点;将第一、第二相机的伪随机编码图像作为匹配依据,分别视为目标图像和待匹配图像,考虑目标图像上以pl为中心的匹配窗口大小为(2w+1)×(2w+1),该窗口内任一像素pl的灰度值记为pl(ul,vl),而候选点pr在待匹配图像上的相应点的灰度为pr(ur,vr),这两窗口的归一化互相关测度函数NCC如下式所示:
    Figure PCTCN2015076869-appb-100017
    其中
    Figure PCTCN2015076869-appb-100018
    Figure PCTCN2015076869-appb-100019
    分别为该窗口的图像平均灰度,u、v分别为在选定的匹配窗口的坐标,
    pl(i+u,j+v)为第一相机在窗口中像素(i+u,j+v)位置的灰度值,pr(ir+u,j+v)为第二相机在窗口中像素为(ir+u,j+v)位置的灰度值,经过极线矫正后每个候选点有着相同的纵坐标j,遍历第二相机所有候选点(ir,j)与第一相机(i,j)位置的相关值都能得到一个相关测度函数值,其中只有对应点有着较高的相关值,通过设定阈值可以在众多候选点中得到第一、第二相机的像素级对应点;得到像素级对应点之后,根据第一、第二相机的折叠相位值差值得到亚像素的对应关系,然后结合预先标定的第一、第二相机的内参和结构参数,重建三维几何深度数据。
  10. 如权利要求7至9任一项所述的三维成像系统,其特征在于,每个控制基站内部三维传感器的每个相机均先进行人体图像采集,再对其采集的人体图像进行深度计算;而对于所有控制基站的三维传感器中的相机,所述主机在第i个三维传感器采集的人体图像进行深度计算的同时控制第i+1个三维传感器采集人体图像。
  11. 如权利要求6所述的三维成像系统,其特征在于,所述主机根据下式将各控制基站将三维几何深度数据从各自的局部坐标系变换至全局坐标系下:
    Figure PCTCN2015076869-appb-100020
    其中,Xi为三维点在第i个三维传感器局部坐标系的坐标表示,Ri、Ti为第i个三维传感器局部坐标系到全局坐标系的变换矩阵,Xwi为点Xi变换为世界坐标系的坐标表示,通过标定的结果ri,ti(i=1,2,3...10)变换可得到;
    所述主机中包括一标定模块,该标定模块用于对标靶图像中的编码标志点进行中心定位和解码,根据每个编码点不同的编码值获得其在不同视角图像之间的对应关系和图像坐标;然后利用捆绑调整方法,对每个不同编码的世界坐标Xj在拍摄视角i下的重投影的图像坐标
    Figure PCTCN2015076869-appb-100021
    进行优化该重投影误差,如下式所示:
    Figure PCTCN2015076869-appb-100022
    其中,(K,θ)为相机的内部结构参数,其中,
    Figure PCTCN2015076869-appb-100023
    {f,u0,v0,α}分别为相机的焦距、 主点横坐标、主点纵坐标、倾斜因子,θ={k1,k2,k3,p1,p2},(k1,k2,k3)为镜头径向畸变系数,(p1,p2)为镜头切向畸变系数,M为标志点的个数,N为图像个数,(ri,ti)为拍摄的位姿,
    Figure PCTCN2015076869-appb-100024
    为该点圆心图像坐标,由此可得不同编码点的世界坐标Xj,完成标靶校正;然后,控制置于测量空间中的、完成校正的立体标靶多次转动,并在每次转动后由控制基站采集立体标靶图像,对于某一节点i而言,将双目传感器的结构参数和该节点的外参作为待优化的参数向量,构造优化目标函数如下式:
    Figure PCTCN2015076869-appb-100025
    其中,下标s表示系统的第s个拍摄姿态,t表示标靶中第t个标志点,
    Figure PCTCN2015076869-appb-100026
    为第t个标志点在世界坐标系下坐标,
    Figure PCTCN2015076869-appb-100027
    为传感器节点i待优化的参数向量,
    Figure PCTCN2015076869-appb-100028
    分别为传感器i第一、第二相机的内参和畸变,
    Figure PCTCN2015076869-appb-100029
    为第一、第二相机中基准点的图像坐标,ri,ti为传感器节点i的第一、第二相机的变换关系,
    Figure PCTCN2015076869-appb-100030
    为在第s个姿态拍摄时的外参,
    Figure PCTCN2015076869-appb-100031
    为重投影图像坐标;最后,通过最小化目标函数式,实现对系统参数的最优化估计,得到该节点的结构参数
    Figure PCTCN2015076869-appb-100032
    和内参
    Figure PCTCN2015076869-appb-100033
    用于式实现该传感器i的深度重建;ri,ti则表示为传感器i与全局世界坐标系变换关系。
  12. 如权利要求6所述的三维成像系统,其特征在于,所述控制中心为所述若干控制基站中的一个。
  13. 一种在权利要求6所述的三维成像系统中实现多控制基站同时标定的方法,其特征在于,包括下述步骤:
步骤A01,用高分辨率数码相机在多个不同的视角拍摄立体标靶获得标靶图像;所述立体标靶可覆盖测量空间且表面粘贴有若干编码标志点,每个标志点都有不同的编码带作为其唯一性标识;
    步骤A02,对标靶图像中的编码标志点进行中心定位和解码,根据每个编码点不同的编码值获得其在不同视角图像之间的对应关系和图像坐标;
    步骤A03,利用捆绑调整方法,每个不同编码的世界坐标Xj在拍摄视角i下的重投影的图像坐标为
    Figure PCTCN2015076869-appb-100034
    优化该重投影误差,如下式所示:
    Figure PCTCN2015076869-appb-100035
    其中,(K,θ)为相机的内部结构参数,其中,
    Figure PCTCN2015076869-appb-100036
    {f,u0,v0,α}分别为相机的焦距、主点横坐标、主点纵坐标、倾斜因子,θ={k1,k2,k3,p1,p2},(k1,k2,k3)为镜头径向畸变系数,(p1,p2)为镜头切向畸变系数,M为标志点的个数,N为图像个数,(ri,ti)为拍摄的位姿,
    Figure PCTCN2015076869-appb-100037
为该点圆心图像坐标,由此可得不同编码点的世界坐标Xj,完成标靶校正;
    步骤A04,将完成校正的立体标靶置于测量空间中,控制立体标靶多次转动,并在每次转动后由控制基站采集立体标靶图像,对于某一节点i而言,将双目传感器的结构参数和该节点的外参作为待优化的参数向量,构造优化目标函数如下式:
    Figure PCTCN2015076869-appb-100038
    其中,下标s表示系统的第s个拍摄姿态,t表示标靶中第t个标志点,
    Figure PCTCN2015076869-appb-100039
    为第t个标志点在世界坐标系下坐标,
    Figure PCTCN2015076869-appb-100040
    为传感器节点i待优化的参数向量,
    Figure PCTCN2015076869-appb-100041
    分别为传感器i第一、第二相机的内参和畸变,
    Figure PCTCN2015076869-appb-100042
    为第一、第二相机中基准点的图像坐标,ri,ti为传感器节点i的第一、第二相机的变换关系,
    Figure PCTCN2015076869-appb-100043
    为在第s个姿态拍摄时的外参,
    Figure PCTCN2015076869-appb-100044
    为重投影图像坐标;
    步骤A05,通过最小化目标函数式,实现对系统参数的最优化估计,得到该节点的结构参数
    Figure PCTCN2015076869-appb-100045
    和内参
    Figure PCTCN2015076869-appb-100046
    用于式实现该传感器i的深度重建;ri,ti则表示为传感器i与全局世界坐标系变换关系。
PCT/CN2015/076869 2014-09-10 2015-04-17 人体三维成像方法及系统 WO2016037486A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/181,398 US10019838B2 (en) 2014-09-10 2016-06-13 Human body three-dimensional imaging method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410458109.X 2014-09-10
CN201410458109.XA CN104299261B (zh) 2014-09-10 2014-09-10 人体三维成像方法及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/181,398 Continuation US10019838B2 (en) 2014-09-10 2016-06-13 Human body three-dimensional imaging method and system

Publications (1)

Publication Number Publication Date
WO2016037486A1 true WO2016037486A1 (zh) 2016-03-17

Family

ID=52318983

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/076869 WO2016037486A1 (zh) 2014-09-10 2015-04-17 人体三维成像方法及系统

Country Status (3)

Country Link
US (1) US10019838B2 (zh)
CN (1) CN104299261B (zh)
WO (1) WO2016037486A1 (zh)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105711096A (zh) * 2016-03-21 2016-06-29 联想(北京)有限公司 数据处理方法及电子设备
CN109242960A (zh) * 2018-09-15 2019-01-18 武汉智觉空间信息技术有限公司 采用双Kinect和旋转平台的人体实时建模系统及其建模方法
CN109658365A (zh) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 图像处理方法、装置、系统和存储介质
CN110619601A (zh) * 2019-09-20 2019-12-27 西安知象光电科技有限公司 一种基于三维模型的图像数据集生成方法
CN110772258A (zh) * 2019-11-07 2020-02-11 中国石油大学(华东) 一种用于人体尺寸测量的多视角测距方法
CN111461029A (zh) * 2020-04-03 2020-07-28 西安交通大学 一种基于多视角Kinect的人体关节点数据优化系统及方法
CN111862139A (zh) * 2019-08-16 2020-10-30 中山大学 一种基于彩色-深度相机的动态物体参数化建模方法
CN112509055A (zh) * 2020-11-20 2021-03-16 浙江大学 基于双目视觉和编码结构光相结合的穴位定位系统及方法
CN112509129A (zh) * 2020-12-21 2021-03-16 神思电子技术股份有限公司 一种基于改进gan网络的空间视场图像生成方法
CN112991517A (zh) * 2021-03-08 2021-06-18 武汉大学 一种纹理影像编解码自动匹配的三维重建方法
CN113205592A (zh) * 2021-05-14 2021-08-03 湖北工业大学 一种基于相位相似性的光场三维重建方法及系统
CN115795633A (zh) * 2023-02-07 2023-03-14 中国建筑西南设计研究院有限公司 一种木结构连接节点的参数化设计方法、系统及存储介质
CN119402729A (zh) * 2025-01-02 2025-02-07 合肥埃科光电科技股份有限公司 一种图像传感器拼接判定方法、系统、调整方法及存储介质

Families Citing this family (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10460158B2 (en) * 2014-06-19 2019-10-29 Kabushiki Kaisha Toshiba Methods and systems for generating a three dimensional representation of a human body shape
CN104299261B (zh) * 2014-09-10 2017-01-25 深圳大学 人体三维成像方法及系统
WO2016180957A2 (en) * 2015-05-13 2016-11-17 Naked Labs Austria Gmbh 3d body scanner data processing flow
CN106264536A (zh) * 2015-05-21 2017-01-04 长沙维纳斯克信息技术有限公司 一种三维人体扫描装置和方法
CN106197320B (zh) * 2015-05-29 2019-05-10 苏州笛卡测试技术有限公司 一种分时复用快速三维扫描及其数据处理方法
EP3312803A4 (en) * 2015-06-17 2018-12-26 Toppan Printing Co., Ltd. Image processing system, method and program
CN106468542A (zh) * 2015-08-22 2017-03-01 吴翔 一种采用码分多址和多天线技术的三维扫描方法
CN105160680B (zh) * 2015-09-08 2017-11-21 北京航空航天大学 一种基于结构光的无干扰深度相机的设计方法
KR20170047780A (ko) * 2015-10-23 2017-05-08 한국전자통신연구원 적응적 윈도우 마스크를 이용하는 로우 코스트 계산장치 및 그 방법
US20170140215A1 (en) * 2015-11-18 2017-05-18 Le Holdings (Beijing) Co., Ltd. Gesture recognition method and virtual reality display output device
US20170185142A1 (en) * 2015-12-25 2017-06-29 Le Holdings (Beijing) Co., Ltd. Method, system and smart glove for obtaining immersion in virtual reality system
CN105761240A (zh) * 2016-01-18 2016-07-13 盛禾东林(厦门)文创科技有限公司 一种相机采集数据生成3d模型的系统
CN106073903A (zh) * 2016-06-03 2016-11-09 上海德稻集群文化创意产业(集团)有限公司 制备骨骼辅助支架的三维扫描相机阵列及扫描方法
US10607408B2 (en) * 2016-06-04 2020-03-31 Shape Labs Inc. Method for rendering 2D and 3D data within a 3D virtual environment
CN109478338B (zh) * 2016-07-19 2023-04-04 松下电器(美国)知识产权公司 三维数据制作方法、发送方法、制作装置、发送装置
JP6737503B2 (ja) * 2016-09-13 2020-08-12 株式会社Vrc 3dスキャナ
CN106595642B (zh) * 2016-12-29 2023-04-11 中国科学院西安光学精密机械研究所 一种位姿测算光学仪器及调试方法
CN106842219B (zh) * 2017-01-18 2019-10-29 北京商询科技有限公司 一种用于混合现实设备的空间测距方法和系统
WO2018184675A1 (en) * 2017-04-05 2018-10-11 Telefonaktiebolaget Lm Ericsson (Publ) Illuminating an environment for localisation
DE102017109854A1 (de) * 2017-05-08 2018-11-08 Wobben Properties Gmbh Verfahren zur Referenzierung mehrerer Sensoreinheiten und zugehörige Messeinrichtung
CN107133989B (zh) * 2017-06-12 2020-11-06 中国科学院长春光学精密机械与物理研究所 一种三维扫描系统参数标定方法
CN107240149A (zh) * 2017-06-14 2017-10-10 广东工业大学 基于图像处理的物体三维模型构建方法
CN107564067A (zh) * 2017-08-17 2018-01-09 上海大学 一种适用于Kinect的标定方法
CN107504919B (zh) * 2017-09-14 2019-08-16 深圳大学 基于相位映射的折叠相位三维数字成像方法及装置
CN109785225B (zh) * 2017-11-13 2023-06-16 虹软科技股份有限公司 一种用于图像矫正的方法和装置
CN109785390B (zh) * 2017-11-13 2022-04-01 虹软科技股份有限公司 一种用于图像矫正的方法和装置
CN107945269A (zh) * 2017-12-26 2018-04-20 清华大学 基于多视点视频的复杂动态人体对象三维重建方法及系统
JP6843780B2 (ja) * 2018-01-18 2021-03-17 ヤフー株式会社 情報処理装置、学習済みモデル、情報処理方法、およびプログラム
CN108433704B (zh) * 2018-04-10 2024-05-14 西安维塑智能科技有限公司 一种三维人体扫描设备
DE102018109586A1 (de) * 2018-04-20 2019-10-24 Carl Zeiss Ag 3D-Digitalisierungssystem und 3D-Digitalisierungsverfahren
CN108665535A (zh) * 2018-05-10 2018-10-16 青岛小优智能科技有限公司 一种基于编码光栅结构光的三维结构重建方法与系统
CN109241841B (zh) * 2018-08-01 2022-07-05 甘肃未来云数据科技有限公司 视频人体动作的获取方法和装置
CN109389665B (zh) * 2018-08-24 2021-10-22 先临三维科技股份有限公司 三维模型的纹理获取方法、装置、设备和存储介质
EP3633405B1 (de) * 2018-10-03 2023-01-11 Hexagon Technology Center GmbH Messgerät zur geometrischen 3d-abtastung einer umgebung mit einer vielzahl sendekanäle und semiconductor-photomultiplier sensoren
CN109945802B (zh) * 2018-10-11 2021-03-09 苏州深浅优视智能科技有限公司 一种结构光三维测量方法
CN109359609B (zh) * 2018-10-25 2022-06-14 浙江宇视科技有限公司 一种人脸识别训练样本获取方法及装置
CN111060006B (zh) * 2019-04-15 2024-12-13 深圳市易尚展示股份有限公司 一种基于三维模型的视点规划方法
CN110230979B (zh) * 2019-04-15 2024-12-06 深圳市易尚展示股份有限公司 一种立体标靶及其三维彩色数字化系统标定方法
CN110243307B (zh) * 2019-04-15 2025-01-17 深圳市易尚展示股份有限公司 一种自动化三维彩色成像与测量系统
CN110176035B (zh) * 2019-05-08 2021-09-28 深圳市易尚展示股份有限公司 标志点的定位方法、装置、计算机设备和存储介质
CN110348371B (zh) * 2019-07-08 2023-08-29 叠境数字科技(上海)有限公司 人体三维动作自动提取方法
CN110986757B (zh) * 2019-10-08 2021-05-04 新拓三维技术(深圳)有限公司 一种三维人体扫描方法、装置及系统
DE102019216231A1 (de) * 2019-10-22 2021-04-22 Carl Zeiss Industrielle Messtechnik Gmbh Vorrichtung und Verfahren zur dimensionellen Vermessung von scharfen Kanten
CN110930374A (zh) * 2019-11-13 2020-03-27 北京邮电大学 一种基于双深度相机的腧穴定位方法
CN111028280B (zh) * 2019-12-09 2022-06-03 西安交通大学 井字结构光相机系统及进行目标有尺度三维重建的方法
CN111076674B (zh) * 2019-12-12 2020-11-17 天目爱视(北京)科技有限公司 一种近距离目标物3d采集设备
CN111458693A (zh) * 2020-02-01 2020-07-28 上海鲲游光电科技有限公司 直接测距tof分区探测方法及其系统和电子设备
US11580692B2 (en) 2020-02-26 2023-02-14 Apple Inc. Single-pass object scanning
CN113327291B (zh) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 一种基于连续拍摄对远距离目标物3d建模的标定方法
CN113554582B (zh) * 2020-04-22 2022-11-08 中国科学院长春光学精密机械与物理研究所 电子设备盖板上功能孔的缺陷检测方法、装置以及系统
CN111398893B (zh) * 2020-05-14 2021-11-23 南京工程学院 一种基于无线定位的网格人体模型测量装置及方法
CN111721197B (zh) * 2020-05-14 2022-02-01 南京工程学院 一种基于双目立体的身体模型测量装置及方法
CN111612831A (zh) * 2020-05-22 2020-09-01 创新奇智(北京)科技有限公司 一种深度估计方法、装置、电子设备及存储介质
CN111855664B (zh) * 2020-06-12 2023-04-07 山西省交通科技研发有限公司 一种可调节隧道病害三维检测系统
CN111862241B (zh) * 2020-07-28 2024-04-12 杭州优链时代科技有限公司 一种人体对齐方法及装置
CN111981982B (zh) * 2020-08-21 2021-07-06 北京航空航天大学 一种基于加权sfm算法的多向合作靶标光学测量方法
CN112102477B (zh) * 2020-09-15 2024-09-27 腾讯科技(深圳)有限公司 三维模型重建方法、装置、计算机设备和存储介质
CN112365585B (zh) * 2020-11-24 2023-09-12 革点科技(深圳)有限公司 一种基于事件相机的双目结构光三维成像方法
CN114581526B (zh) * 2020-12-02 2024-08-27 中国科学院沈阳自动化研究所 一种基于球形标定块的多相机标定方法
CN113192143B (zh) * 2020-12-23 2022-09-06 合肥工业大学 一种用于摄像机快速标定的编码立体靶标及其解码方法
CN112729156A (zh) * 2020-12-24 2021-04-30 上海智能制造功能平台有限公司 一种人体数字化测量装置的数据拼接及系统标定方法
CN112884895B (zh) * 2021-02-09 2024-03-12 郭金磊 一种基于人体外貌形态的穿搭匹配系统
CN113516007B (zh) * 2021-04-02 2023-12-22 中国海洋大学 多组双目相机组网的水下标志物识别与拼接方法
CN113643436B (zh) * 2021-08-24 2024-04-05 凌云光技术股份有限公司 一种深度数据拼接融合方法及装置
CN114179082B (zh) * 2021-12-07 2024-09-13 南京工程学院 一种基于接触力信息的图像力触觉检测装置及再现方法
CN114485396B (zh) * 2022-01-10 2023-06-20 上海电气核电设备有限公司 一种核电蒸发器管板深孔几何量的测量系统及测量方法
CN114943664B (zh) * 2022-06-02 2025-07-22 北京字跳网络技术有限公司 图像处理方法、装置、电子设备及存储介质
CN116524008B (zh) * 2023-04-14 2024-02-02 公安部第一研究所 一种用于安检ct智能识别的目标物匹配与空间位置估计方法
CN116758267A (zh) * 2023-06-26 2023-09-15 广州市浩洋电子股份有限公司 一种基于多视角的人体定位方法及灯光系统
CN117726907B (zh) * 2024-02-06 2024-04-30 之江实验室 一种建模模型的训练方法、三维人体建模的方法以及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175261A (zh) * 2011-01-10 2011-09-07 深圳大学 一种基于自适应标靶的视觉测量系统及其标定方法
US20130195330A1 (en) * 2012-01-31 2013-08-01 Electronics And Telecommunications Research Institute Apparatus and method for estimating joint structure of human body
CN103267491A (zh) * 2012-07-17 2013-08-28 深圳大学 自动获取物体表面完整三维数据的方法及系统
CN104299261A (zh) * 2014-09-10 2015-01-21 深圳大学 人体三维成像方法及系统

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215898B1 (en) * 1997-04-15 2001-04-10 Interval Research Corporation Data processing system and method
US7929751B2 (en) * 2005-11-09 2011-04-19 Gi, Llc Method and apparatus for absolute-coordinate three-dimensional surface imaging
US9189886B2 (en) * 2008-08-15 2015-11-17 Brown University Method and apparatus for estimating body shape
US8823775B2 (en) * 2009-04-30 2014-09-02 Board Of Regents, The University Of Texas System Body surface imaging
US9341464B2 (en) * 2011-10-17 2016-05-17 Atlas5D, Inc. Method and apparatus for sizing and fitting an individual for apparel, accessories, or prosthetics
US11510600B2 (en) * 2012-01-04 2022-11-29 The Trustees Of Dartmouth College Method and apparatus for quantitative and depth resolved hyperspectral fluorescence and reflectance imaging for surgical guidance
US8913809B2 (en) * 2012-06-13 2014-12-16 Microsoft Corporation Monitoring physical body changes via image sensor
KR101977711B1 (ko) * 2012-10-12 2019-05-13 삼성전자주식회사 깊이 센서, 이의 이미지 캡쳐 방법, 및 상기 깊이 센서를 포함하는 이미지 처리 시스템
US9384585B2 (en) * 2012-10-23 2016-07-05 Electronics And Telecommunications Research Institute 3-dimensional shape reconstruction device using depth image and color image and the method
CN103279762B (zh) * 2013-05-21 2016-04-13 常州大学 一种自然环境下果实常见生长形态判定方法
CN103876710A (zh) * 2014-02-17 2014-06-25 钱晨 一种高解析度的人体局部三维成像系统
US9877012B2 (en) * 2015-04-01 2018-01-23 Canon Kabushiki Kaisha Image processing apparatus for estimating three-dimensional position of object and method therefor

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175261A (zh) * 2011-01-10 2011-09-07 深圳大学 一种基于自适应标靶的视觉测量系统及其标定方法
US20130195330A1 (en) * 2012-01-31 2013-08-01 Electronics And Telecommunications Research Institute Apparatus and method for estimating joint structure of human body
CN103267491A (zh) * 2012-07-17 2013-08-28 深圳大学 自动获取物体表面完整三维数据的方法及系统
CN104299261A (zh) * 2014-09-10 2015-01-21 深圳大学 人体三维成像方法及系统

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HE, DONG ET AL.: "Three-dimensional imaging based on combination fringe and pseudorandom pattern projection.", CHINESE JOURNAL OF LASERS., vol. 41, no. 2, 28 February 2014 (2014-02-28) *
LIU, MINGXING ET AL.: "Research of 3D reconstruction based on computer vision.", JOURNAL OF SHENZHEN INSTITUTE OF INFORMATION TECHNOLOGY., vol. 11, no. 3, 30 September 2013 (2013-09-30), pages 15 and 16 *
LIU, XIAOLI ET AL.: "3D Auto-inspection for large thin-wall object.", ACTA OPTICA SINICA., vol. 31, no. 3, 31 March 2011 (2011-03-31) *
LIU, XIAOLI ET AL.: "A method for correcting the 2D calibration target.", OPTO- ELECTRONIC ENGINEERING, vol. 38, no. 4, 30 April 2011 (2011-04-30) *
PENG, XIANG.: "phase-aided three-dimensional imaging and metrology.", ACTA OPTICA SINICA., vol. 31, no. 9, 30 September 2011 (2011-09-30) *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105711096A (zh) * 2016-03-21 2016-06-29 Lenovo (Beijing) Co., Ltd. Data processing method and electronic device
CN109658365A (zh) * 2017-10-11 2019-04-19 Alibaba Group Holding Ltd. Image processing method, apparatus, system and storage medium
CN109658365B (zh) * 2017-10-11 2022-12-06 Alibaba (Shenzhen) Technology Co., Ltd. Image processing method, apparatus, system and storage medium
CN109242960A (zh) * 2018-09-15 2019-01-18 Wuhan Zhijue Spatial Information Technology Co., Ltd. Real-time human body modeling system using dual Kinects and a rotating platform, and modeling method thereof
CN111862139A (zh) * 2019-08-16 2020-10-30 Sun Yat-sen University Dynamic object parametric modeling method based on a color-depth camera
CN111862139B (zh) * 2019-08-16 2023-08-18 Sun Yat-sen University Dynamic object parametric modeling method based on a color-depth camera
CN110619601A (zh) * 2019-09-20 2019-12-27 Xi'an Zhixiang Optoelectronics Technology Co., Ltd. Image dataset generation method based on three-dimensional models
CN110772258A (zh) * 2019-11-07 2020-02-11 China University of Petroleum (East China) Multi-view ranging method for human body dimension measurement
CN111461029A (zh) * 2020-04-03 2020-07-28 Xi'an Jiaotong University Human body joint point data optimization system and method based on multi-view Kinect
CN111461029B (zh) * 2020-04-03 2023-05-02 Xi'an Jiaotong University Human body joint point data optimization system and method based on multi-view Kinect
CN112509055B (zh) * 2020-11-20 2022-05-03 Zhejiang University Acupoint positioning system and method based on a combination of binocular vision and coded structured light
CN112509055A (zh) * 2020-11-20 2021-03-16 Zhejiang University Acupoint positioning system and method based on a combination of binocular vision and coded structured light
CN112509129A (zh) * 2020-12-21 2021-03-16 Shensi Electronic Technology Co., Ltd. Spatial field-of-view image generation method based on an improved GAN network
CN112991517B (zh) * 2021-03-08 2022-04-29 Wuhan University Three-dimensional reconstruction method with automatic matching of texture image encoding and decoding
CN112991517A (zh) * 2021-03-08 2021-06-18 Wuhan University Three-dimensional reconstruction method with automatic matching of texture image encoding and decoding
CN113205592A (zh) * 2021-05-14 2021-08-03 Hubei University of Technology Light-field three-dimensional reconstruction method and system based on phase similarity
CN113205592B (zh) * 2021-05-14 2022-08-05 Hubei University of Technology Light-field three-dimensional reconstruction method and system based on phase similarity
CN115795633A (zh) * 2023-02-07 2023-03-14 China Southwest Architectural Design and Research Institute Co., Ltd. Parametric design method, system and storage medium for timber structure connection joints
CN119402729A (zh) * 2025-01-02 2025-02-07 Hefei Aike Optoelectronic Technology Co., Ltd. Image sensor stitching determination method, system, adjustment method and storage medium

Also Published As

Publication number Publication date
US10019838B2 (en) 2018-07-10
CN104299261A (zh) 2015-01-21
US20160300383A1 (en) 2016-10-13
CN104299261B (zh) 2017-01-25

Similar Documents

Publication Publication Date Title
WO2016037486A1 (zh) Human body three-dimensional imaging method and system
TWI555379B (zh) Panoramic fisheye camera image correction, synthesis and depth-of-field reconstruction method and system thereof
CN105160680B (zh) Design method of an interference-free depth camera based on structured light
CN103337094B (zh) Method for realizing three-dimensional reconstruction of motion using binocular cameras
CN107154014B (zh) Real-time color and depth panoramic image stitching method
CN102072706B (zh) Multi-camera positioning and tracking method and system
WO2018076154A1 (zh) Panoramic video generation method based on spatial pose calibration of fisheye cameras
CN113129430B (zh) Underwater three-dimensional reconstruction method based on binocular structured light
CN109272570A (zh) Method for solving three-dimensional coordinates of spatial points based on a stereo vision mathematical model
CN105488775A (zh) Cylindrical panorama generation apparatus and method based on six-camera surround view
CN104835158B (zh) Three-dimensional point cloud acquisition method based on Gray-code structured light and epipolar constraints
CN107358633A (zh) Multi-camera intrinsic and extrinsic parameter calibration method based on a three-point calibration object
WO2013076605A1 (en) Method and system for alignment of a pattern on a spatial coded slide image
KR20150120066A (ko) Distortion correction and alignment system using pattern projection, and method using the same
CN113592721B (zh) Photogrammetry method, apparatus, device and storage medium
CN111009030A (zh) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN106534670B (zh) Panoramic video generation method based on a rigidly connected fisheye-lens camera group
CN109579695A (zh) Part measurement method based on heterogeneous stereo vision
CN104318604A (zh) 3D image stitching method and apparatus
CN102368137A (zh) Embedded calibration stereo vision system
CN104240233A (zh) Method for solving camera homography matrix and projector homography matrix
CN113112532B (zh) Real-time registration method for a multi-ToF-camera system
Wu et al. Binocular stereovision camera calibration
Chen et al. Multi-stereo 3D reconstruction with a single-camera multi-mirror catadioptric system
Ortiz-Coder et al. Accurate 3d reconstruction using a videogrammetric device for heritage scenarios

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15840549

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15840549

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2018)
