WO2016037486A1 - Human body three-dimensional imaging method and system - Google Patents
Human body three-dimensional imaging method and system
- Publication number
- WO2016037486A1 (application PCT/CN2015/076869, priority CN2015076869W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- human body
- dimensional
- camera
- base station
- Prior art date
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G01B11/00 — Measuring arrangements characterised by the use of optical techniques
- G01B11/2513 — Measuring contours or curvatures by projecting a pattern with several lines in more than one direction, e.g. grids, patterns
- G01B21/042 — Calibration or calibration artifacts
- G06T15/04 — Texture mapping
- G06T7/521 — Depth or shape recovery from laser ranging or from the projection of structured light
- G06T7/55 — Depth or shape recovery from multiple images
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- H04N13/243 — Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- G06T2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30196 — Human being; person
- G06T2207/30244 — Camera pose
Definitions
- The invention belongs to the technical field of three-dimensional imaging, and in particular relates to a method and system for three-dimensional imaging of the human body realized through distributed network control, together with a simultaneous calibration method for multiple control base stations.
- Phase-assisted three-dimensional imaging is non-contact, fast, and accurate, and yields high data density; it is widely used in industrial fields such as reverse engineering, quality control, and defect detection.
- When a single three-dimensional sensing system is used to image an object from all sides by moving either the object or the sensor, the process is hard to automate and time-consuming, so this approach is rarely suitable for three-dimensional imaging of the human body.
- On the one hand, the human body has a large surface area, so three-dimensional data must be acquired from multiple angles; on the other hand, a person cannot remain perfectly still for long, so the three-dimensional data acquisition must be completed quickly.
- Existing human body three-dimensional imaging systems are mainly laser-based handheld scanners and systems built on mechanical motion devices.
- Handheld systems acquire complete data by manually sweeping the scanner over many angles, which usually takes several minutes, during which it is difficult for the subject to stay still; systems based on mechanical motion devices typically require the subject to rotate or employ large motion-control hardware, making them bulky.
- The first technical problem to be solved by the present invention is to provide a three-dimensional imaging method for the human body, which aims to quickly and completely acquire high-precision, high-density three-dimensional data and highly realistic color data of the human body.
- a three-dimensional imaging method of a human body includes the following steps:
- Step A: after receiving an acquisition instruction from the control center, the control base stations project coded patterns onto the human body and simultaneously capture images of the body observed from their respective viewpoints; each control base station then performs a depth calculation on its own images to obtain three-dimensional geometric depth data in its local coordinate system;
- The plurality of control base stations are interconnected and arranged around the human body so as to form a measurement space that completely covers it.
- Each control base station comprises two vertically stacked three-dimensional sensors, which acquire the three-dimensional geometric depth data and texture data of the upper and lower halves of the human body, respectively, from that base station's viewpoint;
- Step B: each control base station transforms its three-dimensional geometric depth data from its local coordinate system into the global coordinate system;
- Step C: each control base station projects white light onto the human body, then captures the color texture data of the body observed from its viewpoint and sends the texture data, together with the three-dimensional geometric depth data transformed in Step B, to the control center;
- Step D: the control center receives the three-dimensional geometric depth data and the corresponding surface texture data in the global coordinate system from every control base station. It first stitches the depth data collected by all base stations into a three-dimensional human body model and removes redundant data to obtain a fused model; it then computes a weighted blend of the texture data in the regions where the color images from different base stations overlap, yielding fused texture data; finally, it associates the fused human body model with the fused texture data in one-to-one correspondence.
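The weighted blending of overlapping texture data described in Step D can be sketched as follows. This is a minimal illustration, not the patent's exact scheme: the per-pixel weight maps (e.g. a view-angle confidence per base station) are an assumption.

```python
import numpy as np

def blend_overlap(textures, weights):
    """Weighted average of overlapping color samples from several base
    stations. textures: list of (H, W, 3) float arrays; weights: list of
    (H, W) float arrays (hypothetical per-pixel confidence maps)."""
    num = np.zeros_like(textures[0], dtype=float)
    den = np.zeros(textures[0].shape[:2], dtype=float)
    for tex, w in zip(textures, weights):
        num += tex * w[..., None]
        den += w
    den = np.maximum(den, 1e-9)  # avoid division by zero where no view covers a pixel
    return num / den[..., None]
```

In the overlap region the result is the confidence-weighted mean; where only one station observes a pixel, its color passes through unchanged.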
- a second technical problem to be solved by the present invention is to provide a three-dimensional imaging system for a human body, including:
- a plurality of control base stations configured to form an anthropometric measurement space: after receiving an acquisition instruction from the control center, each base station projects coded patterns onto the human body standing in the measurement space while capturing images of the body from its own viewpoint; it then computes depth from those images to obtain three-dimensional geometric depth data in its local coordinate system, transforms that data into the global coordinate system, projects white light onto the body, captures color texture data from its viewpoint, and sends the texture data together with the transformed depth data. The base stations are interconnected and arranged around the body to form the anthropometric space; each comprises two vertically stacked three-dimensional sensors that acquire image and texture data of the upper and lower halves of the body, respectively, from that station's viewpoint;
- a control center that receives the three-dimensional geometric depth data and the corresponding surface texture data in the global coordinate system from every control base station. It first stitches the depth data collected by all base stations into a three-dimensional human body model and removes redundant data to obtain a fused model; it then computes a weighted blend of the texture data in the regions where the color images from different base stations overlap, yielding fused texture data; finally, it associates the fused human body model with the fused texture data in one-to-one correspondence.
- A third technical problem to be solved by the present invention is to provide a method for simultaneously calibrating the multiple control base stations in the three-dimensional imaging system described above, comprising the steps of:
- Step A01: photograph a stereoscopic target with a high-resolution digital camera from a plurality of different viewing angles to obtain target images;
- the stereoscopic target can cover the measurement space and carries a number of coded marker points on its surface, each with a distinct coding band as its unique identifier;
- Step A02: perform center localization and decoding of the coded marker points in the target images, and obtain the correspondences between images from different viewpoints, together with their image coordinates, from the distinct code value of each coded point;
- Step A03: using the bundle adjustment method, let x̂_ij denote the reprojected image coordinates of the world point X_j of each distinct code under shooting angle i, and optimize the reprojection error of the form

  min Σ_i Σ_j ‖x_ij − x̂_ij‖²

  where x_ij is the measured image coordinate of point X_j in view i;
- Step A04: place the corrected stereo target in the measurement space and rotate it several times; after each rotation the control base stations capture images of the target. For a node i, the structural parameters of its binocular sensor and the external parameters of the node together form the parameter vector to be optimized, and an optimization objective function (equation (2)) is constructed that sums the squared reprojection errors of the node's two cameras over all shooting positions and all marker points.
- The superscript s denotes the s-th shooting position of the system, and the subscript t denotes the t-th marker point on the target.
- The parameter vector to be optimized for sensor node i comprises the internal parameters and distortion coefficients of the first and second cameras of sensor i.
- Step A05: minimizing the objective function yields the optimal estimate of the system parameters and thus the structural parameters of the node; the internal parameters are used for the depth reconstruction of sensor i, while r_i, t_i express the relationship between sensor i and the global world coordinate system.
- The invention provides a large human body three-dimensional scanning system composed of a multi-control-base-station sensor network using distributed computing, with the following advantages. First, the structural parameters of all sensors and their global matching parameters are calibrated even though the effective fields of view of the distributed sensors do not overlap in space. Second, the corresponding-point search combining phase shift with random structured light reduces the image acquisition time for single-view depth data. Third, time multiplexing compresses the overall data acquisition time, and the distributed-computing design increases the computing power of the whole system and speeds up data acquisition. Fourth, the global matching parameters of the different sensors allow their depth data to be matched automatically. Fifth, the embodiment is easy to debug and extend, highly automated, and keeps the three-dimensional scanning process simple.
- FIG. 1 is a schematic diagram of a multi-control base station arrangement provided by the present invention.
- FIG. 2 is a schematic block diagram of a three-dimensional sensor provided by the present invention.
- FIG. 3 is a diagram showing the internal structure of a control base station provided by the present invention.
- FIG. 4 is a control schematic diagram of the control center provided by the present invention.
- FIG. 5 is a schematic diagram of time multiplexing of a control base station provided by the present invention.
- Figure 6 is a schematic view of a calibration reference provided by the present invention.
- Figure 7 is a calibration schematic diagram provided by the present invention.
- Figure 8 is a schematic diagram of a sequence of projection patterns provided by the present invention.
- Figure 11 is a diagram of a matched three-dimensional model provided by the present invention.
- A host controls two vertically arranged three-dimensional sensors, forming a scanning control base station that acquires data from, and three-dimensionally reconstructs, the upper and lower halves of the human body at one viewing angle.
- In order to obtain relatively complete and detailed three-dimensional data of the human body surface, five such control base stations make up the body-scanning system.
- the three-dimensional imaging process of the human body provided by the embodiments of the present invention can be divided into four phases, namely, configuration and spatial arrangement of the control base station, calibration of the multi-control base station of the scanning system, single-view depth data reconstruction, and automatic matching of all different viewpoint depth data.
- the main principles are as follows:
- The number of control base stations to arrange in the scanning system: in general, subject to the fineness and completeness requirements on the data, the number of base stations is minimized, and is usually 3 to 6. This embodiment uses five control base stations.
- In this embodiment the control base stations stand at a working distance of about 1.2 m, i.e. the five stations are uniformly spaced on a circle of radius 1.2 m whose center is the center of the system's measurement space.
- Each control base station is fitted with two sets of vertically arranged three-dimensional depth sensors. For a scanning system specifically targeting subjects no taller than 1.2 m, a single set of depth sensors per base station suffices.
- The coded marker points are pasted on the surface of an object covering the system's measurement space, and the spatial three-dimensional coordinates of the marker points are reconstructed using close-range photogrammetry to serve as the reference for calibrating each three-dimensional sensor.
- The edge of each marker point is located with a sub-pixel edge-extraction algorithm, and the image coordinates of its center are obtained by fitting.
- The marker-point correspondences between the first and second cameras, and the three-dimensional coordinates of each marker point, are obtained by uniquely identifying the marker points through their encoded information.
- The host of the control base station sends a serial-port signal to the control board; according to that signal the control board drives the projection module to project a coded pattern while triggering the first and second cameras to capture.
- The projected image sequence consists of multi-step phase-shift images plus a pseudo-random coded image, or multi-step phase-shift images plus Gray-coded images, or multi-step phase-shift images plus temporal phase-unwrapping images. More phase-shift steps give higher precision, but increasing the number of steps slows projection.
- The present invention is implemented with 4-step phase-shift images and a pseudo-random code, which is described below by way of example.
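The 4-step phase-shift part of the sequence yields the folding (wrapped) phase in closed form. A minimal sketch, assuming the standard formulation in which the four projected patterns are shifted by 90 degrees each:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Folding phase from a 4-step phase shift: with captured intensities
    I_k = A + B*cos(phi + (k-1)*pi/2), the wrapped phase is
    phi = atan2(I4 - I2, I1 - I3), independent of the ambient level A
    and the modulation B."""
    return np.arctan2(i4 - i2, i1 - i3)
```

The same expression applies per pixel to whole image arrays, which is how the folding phase of each camera would be computed in Step 2 of the corresponding-point search below.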
- The three-dimensional geometric depth data of the upper and lower body obtained in each control base station is transformed into the global coordinate system and then transmitted to the control center through the Gigabit switch.
- After receiving the depth data from all control base stations, the control center refines the matching result using the iterative closest point method to obtain complete data of the human body surface.
- A plurality of control base stations are configured to form an anthropometric measurement space. After receiving an acquisition instruction from the control center, each base station projects coded patterns onto the human body in the measurement space while capturing images of the body observed from its own viewpoint; it then performs a depth calculation on its images to obtain three-dimensional geometric depth data in its local coordinate system, and transforms that data from the local coordinate system into the global coordinate system.
- Each control base station then projects white light onto the human body, captures the color texture data observed from its viewpoint, and sends the texture data together with the transformed three-dimensional geometric depth data. The base stations are interconnected and arranged around the body to form the anthropometric space; each comprises two vertically stacked three-dimensional sensors that acquire image and texture data of the upper and lower halves of the body, respectively, from that station's viewpoint.
- After receiving the three-dimensional geometric depth data and the corresponding surface texture data in the global coordinate system from every control base station, the control center first stitches the depth data collected by all base stations into a three-dimensional human body model and removes redundant data to obtain a fused model; it then computes a weighted blend of the texture data in the regions where the color images from different base stations overlap, yielding fused texture data; finally, it associates the fused model with the fused texture data one by one according to the coordinate correspondence. To save cost, the control center can be implemented on one of the control base stations.
- FIG. 1 is a schematic diagram of a specific spatial position of five control base stations in the embodiment.
- Five control base stations are evenly placed on a circle with a radius of about 1.2 m, with the imaging direction of each station's three-dimensional sensors facing the center of the circle, i.e. the measurement space. It should be understood that, in a specific implementation, the fineness and completeness requirements on the body data vary: the number of control base stations is generally 3 to 6 sets, not necessarily 5, and the working distance of the base stations must suit the focal length of the imaging lenses, so it is not limited to about 1.2 m.
- FIG. 3 is a schematic diagram of the internal structure of each control base station.
- Two binocular-structure three-dimensional sensors are arranged vertically in a column, and one host controls the image acquisition and performs the depth-data reconstruction, so that the base station can acquire depth data of the upper and lower halves of the human body at one angle and match that depth data into global coordinates.
- 101 is a CCD camera
- 102 is a CCD camera
- 105 is a CCD camera
- 106 is a CCD camera
- 103 is a CCD projector
- 107 is a CCD projector
- 104 is a control board
- 108 is a control board
- 109 is a host
- 110 is the subject.
- Reference numerals 101, 102, and 103 constitute the upper three-dimensional sensor 1; triggered by signals from control board 104, the sensor captures an image sequence.
- Sensor 2, formed by 105, 106, and 107, works in the same way as sensor 1.
- the host 109 is connected to the control boards 104 and 108 through different serial ports, and realizes controlled projection acquisition of the sensors 1 and 2 through different COM ports.
- A three-dimensional target capable of covering the human body measurement space is prepared; its cross section is a near-regular hexagon with a side length of 0.4 m, its height is 2 m, and 750 coded marker points are pasted on its surface, each with a distinct coding band as its unique identifier, as shown in Figure 6(a).
- The three-dimensional coordinates of the center of each marker point are refined by close-range photogrammetry to obtain precise spatial coordinates, through the following steps:
- Step 1: use a high-resolution digital camera to capture the target image at 58 different viewing angles, as shown in Figure 6(b) (images from a subset of the views).
- Step 2: perform center localization and decoding of the coded marker points in the target images, and obtain the correspondences between the different images, together with their image coordinates, from the distinct code value of each coded point.
- Gaussian filtering removes image noise
- the edge detection operator (Canny operator) performs pixel-level coarse positioning on the elliptical edge
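The coarse-to-fine center localization above ends with a sub-pixel estimate. As an illustrative stand-in (the patent fits the ellipse from sub-pixel edges; here a simpler intensity-weighted centroid is used, which is a different but related technique):

```python
import numpy as np

def subpixel_center(patch):
    """Sub-pixel center of a bright marker blob via intensity-weighted
    centroid. This is an illustrative substitute for the patent's
    sub-pixel edge extraction plus ellipse fitting. Returns (x, y)."""
    p = patch.astype(float)
    p -= p.min()                 # crude background removal
    ys, xs = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    total = p.sum()
    return (xs * p).sum() / total, (ys * p).sum() / total
```

For a well-isolated circular marker on a dark background the centroid lands within a small fraction of a pixel of the true center; an ellipse fit is preferred when the marker is viewed obliquely.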
- Step 3: using the bundle adjustment method, let x̂_ij denote the reprojected image coordinates of the world point X_j of each distinct code under shooting angle i, and optimize the reprojection error as in equation (1):

  min Σ_i Σ_j ‖x_ij − x̂_ij‖²  (1)

  where x_ij is the measured image coordinate of point X_j in view i.
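A hedged sketch of the reprojection-error cost that bundle adjustment minimizes. The pinhole projection here omits the lens-distortion terms the patent also optimizes, and the function names are illustrative:

```python
import numpy as np

def reproject(K, R, t, X):
    """Pinhole projection of world points X (N, 3) into a camera with
    intrinsics K and pose (R, t); lens distortion omitted for brevity."""
    Xc = X @ R.T + t            # world -> camera coordinates
    x = Xc @ K.T                # apply intrinsics
    return x[:, :2] / x[:, 2:3] # perspective division

def reprojection_error(obs, K, Rs, ts, X):
    """Bundle-adjustment style cost: sum over views i and points j of the
    squared distance between observations x_ij and reprojections x_ij_hat."""
    return sum(np.sum((x_obs - reproject(K, R, t, X)) ** 2)
               for x_obs, R, t in zip(obs, Rs, ts))
```

In a real optimizer this scalar cost (or its per-point residuals) would be minimized jointly over the camera poses, intrinsics, and the marker-point world coordinates.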
- The present embodiment consists of five control base stations, each with two binocular three-dimensional sensors; the ten sets of three-dimensional sensors have the same function and status in the system.
- Figure 7 shows the three-dimensional sensor calibration model, where r_i, t_i are vector representations of the rotation-translation transformation between the system's first and second cameras, and the node's external parameters represent the transformation from the global coordinate system to the sensor's local coordinate system.
- The target is rotated by approximately 45 degrees each time; after each rotation every camera in the system captures a target image, giving 8 sets of images of the target at different positions in total.
- the structural parameters of the binocular sensor and the external parameters of the node are used as parameter vectors to be optimized, and an optimization objective function is constructed, such as equation (2).
- The superscript s denotes the s-th shooting position of the system, and the subscript t denotes the t-th marker point on the target.
- The parameter vector to be optimized for sensor node i comprises the internal parameters and distortion coefficients of the first and second cameras of sensor i.
- r_i, t_i describe the transformation between the first and second cameras of sensor node i.
- For the specific mathematical model, see Computer Vision (Ma Songde, Zhang Zhengyou, Science Press, 1998).
- By minimizing the objective function, the optimal estimate of the system parameters is obtained, yielding the structural parameters of the node.
- The internal parameters are used to perform the depth reconstruction of sensor i; r_i, t_i express the relationship between sensor i and the global world coordinate system, and the parameters of the other sensors are obtained with the same method.
- the key to the reconstruction of the 3D geometric depth data is the corresponding point search of the first and second cameras in the sensor.
- The present invention searches for corresponding points with a method combining phase shift and random structured light.
- This method shortens the image sequence from more than 10 frames to 5 frames; the projected pattern sequence is shown in FIG. 8.
- the corresponding point search method can be divided into the following steps:
- Step 1: perform epipolar rectification of the upper-body and lower-body images using the pre-calibrated parameters of the first and second cameras, so that rows with the same ordinate in the two images lie on corresponding epipolar lines;
- Step 2: compute the folding phase of the first and second cameras from the rectified phase-shift images using the 4-step phase-shift algorithm;
- Step 3: let w(i, j) be the folding-phase value at pixel position p_l(i, j) of the first camera; traversing row j of the second camera, a pixel position p_r(i_r, j) whose phase matches can be found within each folding period; these positions p_r are taken as candidate corresponding points of the first-camera pixel position p_l;
- Step 4: treat the pseudo-random coded images of the first and second cameras as the target image and the image to be matched. Consider a matching window of size (2w+1) × (2w+1) centered on p_l in the target image; the gray value of any pixel in the window is written p_l(u_l, v_l) for the target image, and p_r(u_r, v_r) for the corresponding pixel around the candidate point p_r in the image to be matched.
- the normalized cross-correlation (NCC) measure function of the two windows is
- NCC = sum_{u,v} [p_l(i+u, j+v) - p̄_l][p_r(i_r+u, j+v) - p̄_r] / sqrt( sum_{u,v} [p_l(i+u, j+v) - p̄_l]^2 · sum_{u,v} [p_r(i_r+u, j+v) - p̄_r]^2 )
- where p̄_l and p̄_r are the average gray levels of the two window images, u and v are the coordinates within the selected matching window, p_l(i+u, j+v) is the gray value of the first camera at pixel position (i+u, j+v) in the window, and p_r(i_r+u, j+v) is the gray value of the second camera at pixel position (i_r+u, j+v) in the window;
- after epipolar rectification every candidate point has the same ordinate j, and traversing all candidate points (i_r, j) of the second camera against position (i, j) of the first camera yields one correlation measure value per candidate; only the true corresponding point has a high correlation value, so by setting a threshold the pixel-level corresponding point of the first and second cameras can be selected from among the many candidates.
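Steps 3 and 4 reduce to window correlation over the surviving candidates. A sketch of the NCC measure and the thresholded selection follows; the window half-size w and the threshold value are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

def ncc(target, candidate):
    """Normalized cross-correlation of two equally sized gray windows."""
    t = target - target.mean()
    c = candidate - candidate.mean()
    denom = np.sqrt((t * t).sum() * (c * c).sum())
    return 0.0 if denom == 0 else float((t * c).sum() / denom)

def best_match(left_img, right_img, i, j, candidates, w=2, threshold=0.8):
    """Among candidate columns ir on row j of the right image, return the one
    whose (2w+1)x(2w+1) window correlates best with the left-image window
    centered at column i, row j; None if no score clears the threshold."""
    win_l = left_img[j - w:j + w + 1, i - w:i + w + 1].astype(float)
    best, best_score = None, threshold
    for ir in candidates:
        win_r = right_img[j - w:j + w + 1, ir - w:ir + w + 1].astype(float)
        score = ncc(win_l, win_r)
        if score > best_score:
            best, best_score = ir, score
    return best
```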
- Step 5: after the pixel-level correspondence is obtained, the sub-pixel correspondence is derived from the difference of the folding-phase values of the first and second cameras; the three-dimensional geometric depth data are then reconstructed from the pre-calibrated intrinsic and structural parameters of the first and second cameras.
- a depth image of the viewpoint is thus obtained, which is then registered into the global coordinate system through the coordinate transformation of equation (4), X_wi = R_i · X_i + T_i, where:
- X_i is the coordinate representation of the three-dimensional point in the local coordinate system of the i-th three-dimensional sensor;
- R_i and T_i are, respectively, the rotation matrix and translation vector of the transformation from the i-th three-dimensional sensor's local coordinate system to the global coordinate system;
- X_wi is the coordinate representation of the point X_i transformed into the world (global) coordinate system.
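Under the variable definitions above, equation (4) is a per-sensor rigid transform, which can be sketched as:

```python
import numpy as np

def to_global(points_local, R_i, T_i):
    """Map sensor-local points X_i (N x 3) to X_wi = R_i @ X_i + T_i.

    R_i is a 3x3 rotation matrix and T_i a translation vector for the
    i-th three-dimensional sensor; the transform is applied row-wise.
    """
    return points_local @ np.asarray(R_i).T + np.asarray(T_i)
```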
- each control base station transmits its depth data to the control center over the Gigabit network; the control center then checks the matching accuracy and, if the precision is insufficient, carries out further fine matching.
- with the field arrangement shown in Figure 1, five control base stations are provided, each with two sets of three-dimensional sensors. The entire system is calibrated using the three-dimensional target reference points shown in the accompanying figure. Following the control flow shown in FIG. 5, the two sets of three-dimensional sensors in each control base station are triggered to acquire an image sequence and reconstruct the three-dimensional geometric depth data of the viewpoint, as shown in FIG. 9; the color information is then mapped onto the depth data, as shown in FIG. 10. Each control base station then registers its depth data into the unified coordinate system according to the calibration parameters and transmits it to the control center, which performs further fine matching and data fusion; FIG. 11 shows the fused depth-image output.
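The end-to-end flow of this paragraph can be summarized as a control-center loop. Every class and method name below is a hypothetical placeholder used for illustration only, not the actual API of the system:

```python
def collect_body_scan(base_stations, control_center):
    """One capture cycle over all control base stations (hypothetical API).

    Each station reconstructs depth for its two sensors, maps color onto
    the depth data, and transforms it into the global frame before
    sending it on; the control center then refines and fuses the result.
    """
    clouds = []
    for station in base_stations:
        for sensor in station.sensors:          # two 3-D sensors per station
            depth = sensor.reconstruct_depth()  # phase shift + pseudo-random code
            depth = station.to_global(depth)    # equation (4), per sensor
            clouds.append(sensor.map_color(depth))
    refined = control_center.fine_match(clouds)  # further fine matching
    return control_center.fuse(refined)          # redundancy removal and fusion
```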
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Optics & Photonics (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (13)
- A human body three-dimensional imaging method, characterized by comprising the following steps: Step A: upon receiving an acquisition instruction from a control center, several control base stations project coded patterns onto the human body and simultaneously acquire the human body image information observed from their respective viewpoints; each control base station then performs depth calculation on its own human body image information to obtain three-dimensional geometric depth data in its own local coordinate system; wherein the several control base stations are interconnected and arranged around the human body to form a measurement space that fully covers the human body, and each control base station comprises two vertically arranged three-dimensional sensors, respectively used to acquire, from that base station's viewpoint, the three-dimensional geometric depth data and texture data of the upper and lower parts of the human body; Step B: each control base station transforms the three-dimensional geometric depth data from its own local coordinate system into the global coordinate system; Step C: each control base station then projects white light onto the human body, acquires the texture data of the human body colors observed from its viewpoint, and sends the texture data, together with the three-dimensional geometric depth data transformed in Step B, to the control center; Step D: the control center receives the three-dimensional geometric depth data in the global coordinate system and the corresponding surface texture data transmitted by each control base station; the control center first stitches the three-dimensional geometric depth data acquired by the control base stations into a three-dimensional human body model, then removes redundancy from this overall human body model to obtain a fused three-dimensional human body model; the control center then applies a weighted operation to the texture data of the overlapping portions of the human body colors acquired by the various control base stations to obtain fused texture data; the control center finally associates the fused three-dimensional human body model with the fused texture data in one-to-one correspondence.
- The human body three-dimensional imaging method according to claim 1, wherein the human body image information acquired in Step A is an image sequence consisting of multi-step phase-shift images and one pseudo-random coded image, or an image sequence consisting of multi-step phase-shift images and Gray-code images, or an image sequence consisting of multi-step phase-shift images and temporal phase-unwrapping images.
- The human body three-dimensional imaging method according to claim 2, wherein the upper-body image and the lower-body image acquired by the two three-dimensional sensors of each control base station do not overlap and can be stitched into a complete image of the human body as observed from that viewpoint; the cameras acquiring the upper-body image and the lower-body image are defined as the first and second cameras, respectively; Step A performs depth calculation on the human body image information to obtain the three-dimensional geometric depth data as follows: Step A1: perform epipolar rectification on the upper-body and lower-body images according to the pre-calibrated parameters of the first and second cameras, so that identical ordinates of the upper-body and lower-body images have an epipolar correspondence; Step A2: according to the four-step phase-shift algorithm, compute the folding phases of the first and second cameras from the epipolar-rectified phase-shift images; Step A3: let the folding-phase value at pixel position p_l(i, j) of the first camera be w(i, j); traverse row j of the second camera, and within each folding period the pixel position p_r(i_r, j) with the smallest phase difference can be found; these pixel positions p_r all serve as candidate corresponding points of the first-camera pixel position p_l; Step A4: take the pseudo-random coded images of the first and second cameras as the target image and the image to be matched; consider a matching window of size (2w+1)×(2w+1) centered on p_l in the human body image, the gray value of any pixel in the window being denoted p_l(u_l, v_l) and the gray value of the corresponding pixel of the candidate point p_r in the image to be matched being p_r(u_r, v_r); the normalized cross-correlation measure function NCC of the two windows is as shown in the following formula, where the mean terms are respectively the average gray levels of the window images, u and v are the coordinates within the selected matching window, p_l(i+u, j+v) is the gray value of the first camera at pixel position (i+u, j+v) in the window, and p_r(i_r+u, j+v) is the gray value of the second camera at pixel position (i_r+u, j+v) in the window; after epipolar rectification every candidate point has the same ordinate j, and traversing all candidate points (i_r, j) of the second camera against position (i, j) of the first camera yields one correlation measure value per candidate, among which only the corresponding point has a high correlation value, so that by setting a threshold the pixel-level corresponding points of the first and second cameras can be obtained from among the many candidates; Step A5: after the pixel-level corresponding points are obtained, the sub-pixel correspondence is obtained from the difference of the folding-phase values of the first and second cameras, and the three-dimensional geometric depth data are then reconstructed in combination with the pre-calibrated intrinsic and structural parameters of the first and second cameras.
- The human body three-dimensional imaging method according to any one of claims 1 to 3, wherein in Step A the timing of image acquisition and depth calculation of the cameras in the three-dimensional sensors of all control base stations is controlled as follows: each camera inside a control base station first acquires a human body image and then performs depth calculation on the acquired image, and while depth calculation is being performed on the image acquired by the i-th camera, the (i+1)-th camera is controlled to begin acquiring its human body image.
- The human body three-dimensional imaging method according to claim 3, wherein Step B transforms the three-dimensional geometric depth data of each control base station from its local coordinate system into the global coordinate system according to the following formula, where X_i is the coordinate representation of a three-dimensional point in the local coordinate system of the i-th three-dimensional sensor, R_i and T_i are the transformation matrix from the i-th three-dimensional sensor's local coordinate system to the global coordinate system, and X_wi is the coordinate representation of the point X_i transformed into the world coordinate system, obtained through the calibrated results r_i, t_i (i = 1, 2, 3, ..., 10); the r_i, t_i (i = 1, 2, 3, ..., 10) are calibrated through the following steps: Step A01: photograph a three-dimensional target from several different viewpoints with a high-resolution digital camera to obtain target images; the three-dimensional target can cover the measurement space and has a number of coded marker points affixed to its surface, each marker point carrying a distinct code band as its unique identifier; Step A02: locate the centers of, and decode, the coded marker points in the target images, and obtain their correspondences between images of different viewpoints and their image coordinates according to the distinct code value of each coded point; where (K, θ) are the internal structural parameters of the camera; Step A04: place the corrected three-dimensional target in the measurement space and rotate it several times, the control base stations acquiring images of the target after each rotation; for a given binocular node i, take the structural parameters of the binocular sensor and the external parameters of that node as the parameter vector to be optimized, and construct the optimization objective function as in the following formula, where the subscript s denotes the s-th shooting pose of the system, t denotes the t-th marker point on the target, and the remaining terms are, respectively: the coordinate of the t-th marker point in the world coordinate system; the parameter vector to be optimized for sensor node i; the intrinsic parameters and distortion coefficients of the first and second cameras of sensor i; the image coordinates of the reference points in the first and second cameras; r_i, t_i, the transformation relation between the first and second cameras of sensor node i; the external parameters at the s-th shooting pose; and the reprojected image coordinates.
- A human body three-dimensional imaging system, characterized by comprising: several control base stations for constructing a human body measurement space, which, upon receiving an acquisition instruction from a control center, project coded patterns onto the human body located in the human body measurement space and simultaneously acquire the human body image information observed from their respective viewpoints; each control base station then performs depth calculation on its own human body image information to obtain three-dimensional geometric depth data in its own local coordinate system; each control base station then transforms the three-dimensional geometric depth data from its local coordinate system into the global coordinate system; each control base station further projects white light onto the human body, acquires the texture data of the human body colors observed from its viewpoint, and sends the texture data together with the transformed three-dimensional geometric depth data; wherein the several control base stations are interconnected and arranged around the human body to form a human body measurement space, and each control base station comprises two vertically arranged three-dimensional sensors, respectively used to acquire, from that base station's viewpoint, the human body image information and texture data of the upper and lower parts of the human body; and a control center, which receives the three-dimensional geometric depth data in the global coordinate system and the corresponding surface texture data transmitted by each control base station; the control center first stitches the three-dimensional geometric depth data acquired by the control base stations into a three-dimensional human body model, then removes redundancy from this overall human body model to obtain a fused three-dimensional human body model; the control center then applies a weighted operation to the texture data of the overlapping portions of the human body colors acquired by the various control base stations to obtain fused texture data; and the control center finally associates the fused three-dimensional human body model with the fused texture data in one-to-one correspondence.
- The three-dimensional imaging system according to claim 6, wherein the control center is connected to the control base stations through a switch; each three-dimensional sensor comprises: a projector for successively projecting coded patterns and white light onto the human body; first and second cameras for acquiring the image information presented by the human body under coded-pattern projection, this image information being used for three-dimensional reconstruction of the human body, the first camera additionally acquiring the texture data of the human body colors under white-light projection, this texture data being used for texture mapping to obtain the color information of the three-dimensional model; a control board for sending the pattern to be projected to the projector, controlling the projector to project the pattern onto the human body, and controlling the cameras to acquire image information synchronously; and a host, connected to the control board, which controls projection and acquisition through the control board and performs depth calculation and coordinate transformation on the human body image information acquired by the cameras.
- The three-dimensional imaging system according to claim 7, wherein the cameras are color cameras, and the acquired human body image information is an image sequence consisting of multi-step phase-shift images and one pseudo-random coded image, or an image sequence consisting of multi-step phase-shift images and Gray-code images, or an image sequence consisting of multi-step phase-shift images and temporal phase-unwrapping images.
- The three-dimensional imaging system according to claim 8, wherein each three-dimensional sensor comprises first and second cameras for acquiring the upper-body image and the lower-body image of the human body, respectively, the upper-body and lower-body images not overlapping and being stitchable into a complete image of the human body as observed from that viewpoint; the host performs epipolar rectification on the upper-body and lower-body images according to the pre-calibrated parameters of the first and second cameras, so that identical ordinates of the two images have an epipolar correspondence; it then computes the folding phase from the epipolar-rectified phase-shift images according to the four-step phase-shift algorithm; the folding-phase value of the first-camera pixel p_l(i, j) is w(i, j); traversing row j of the second camera, the pixel p_r(i_r, j) with the smallest phase difference can be found within each folding period, and these pixels p_r all serve as candidate corresponding points of the first-camera pixel p_l; taking the pseudo-random coded images of the first and second cameras as the matching basis, regarded respectively as the target image and the image to be matched, consider a matching window of size (2w+1)×(2w+1) centered on p_l in the human body image, the gray value of any pixel in the window being denoted p_l(u_l, v_l) and the gray value of the corresponding pixel of the candidate point p_r in the image to be matched being p_r(u_r, v_r); the normalized cross-correlation measure function NCC of the two windows is as shown in the following formula, where p_l(i+u, j+v) is the gray value of the first camera at pixel position (i+u, j+v) in the window and p_r(i_r+u, j+v) is the gray value of the second camera at pixel position (i_r+u, j+v) in the window; after epipolar rectification every candidate point has the same ordinate j, and traversing all candidate points (i_r, j) of the second camera against position (i, j) of the first camera yields one correlation measure value per candidate, among which only the corresponding point has a high correlation value, so that by setting a threshold the pixel-level corresponding points of the first and second cameras can be obtained from among the many candidates; after the pixel-level corresponding points are obtained, the sub-pixel correspondence is obtained from the difference of the folding-phase values of the first and second cameras, and the three-dimensional geometric depth data are then reconstructed in combination with the pre-calibrated intrinsic and structural parameters of the first and second cameras.
- The three-dimensional imaging system according to any one of claims 7 to 9, wherein each camera of the three-dimensional sensors inside each control base station first acquires a human body image and then performs depth calculation on the acquired image; and, across the cameras of the three-dimensional sensors of all control base stations, the host controls the (i+1)-th three-dimensional sensor to acquire human body images while depth calculation is being performed on the images acquired by the i-th three-dimensional sensor.
- The three-dimensional imaging system according to claim 6, wherein the host transforms the three-dimensional geometric depth data of each control base station from its local coordinate system into the global coordinate system according to the following formula, where X_i is the coordinate representation of a three-dimensional point in the local coordinate system of the i-th three-dimensional sensor, R_i and T_i are the transformation matrix from the i-th three-dimensional sensor's local coordinate system to the global coordinate system, and X_wi is the coordinate representation of the point X_i transformed into the world coordinate system, obtained through the calibrated results r_i, t_i (i = 1, 2, 3, ..., 10); the host includes a calibration module for locating the centers of and decoding the coded marker points in the target images, obtaining their correspondences between images of different viewpoints and their image coordinates according to the distinct code value of each coded point, and then using a bundle-adjustment method to optimize the reprojection error of the reprojected image coordinates of each distinctly coded world point X_j under shooting viewpoint i, as shown in the following formula, where (K, θ) are the internal structural parameters of the camera.
- The three-dimensional imaging system according to claim 6, wherein the control center is one of the several control base stations.
- A method for implementing simultaneous calibration of multiple control base stations in the three-dimensional imaging system according to claim 6, characterized by comprising the following steps: Step A01: photograph a three-dimensional target from several different viewpoints with a high-resolution digital camera to obtain target images; the three-dimensional target can cover the measurement space and has a number of coded marker points affixed to its surface, each marker point carrying a distinct code band as its unique identifier; Step A02: locate the centers of, and decode, the coded marker points in the target images, and obtain their correspondences between images of different viewpoints and their image coordinates according to the distinct code value of each coded point; where (K, θ) are the internal structural parameters of the camera; Step A04: place the corrected three-dimensional target in the measurement space and rotate it several times, the control base stations acquiring images of the target after each rotation; for a given node i, take the structural parameters of the binocular sensor and the external parameters of that node as the parameter vector to be optimized, and construct the optimization objective function as in the following formula, where the subscript s denotes the s-th shooting pose of the system, t denotes the t-th marker point on the target, and the remaining terms are, respectively: the coordinate of the t-th marker point in the world coordinate system; the parameter vector to be optimized for sensor node i; the intrinsic parameters and distortion coefficients of the first and second cameras of sensor i; the image coordinates of the reference points in the first and second cameras; r_i, t_i, the transformation relation between the first and second cameras of sensor node i; the external parameters at the s-th shooting pose; and the reprojected image coordinates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/181,398 US10019838B2 (en) | 2014-09-10 | 2016-06-13 | Human body three-dimensional imaging method and system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410458109.X | 2014-09-10 | ||
CN201410458109.XA CN104299261B (zh) | 2014-09-10 | 2014-09-10 | 人体三维成像方法及系统 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/181,398 Continuation US10019838B2 (en) | 2014-09-10 | 2016-06-13 | Human body three-dimensional imaging method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016037486A1 true WO2016037486A1 (zh) | 2016-03-17 |
Family
ID=52318983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/076869 WO2016037486A1 (zh) | 2014-09-10 | 2015-04-17 | 人体三维成像方法及系统 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10019838B2 (zh) |
CN (1) | CN104299261B (zh) |
WO (1) | WO2016037486A1 (zh) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105711096A (zh) * | 2016-03-21 | 2016-06-29 | 联想(北京)有限公司 | 数据处理方法及电子设备 |
CN109242960A (zh) * | 2018-09-15 | 2019-01-18 | 武汉智觉空间信息技术有限公司 | 采用双Kinect和旋转平台的人体实时建模系统及其建模方法 |
CN109658365A (zh) * | 2017-10-11 | 2019-04-19 | 阿里巴巴集团控股有限公司 | 图像处理方法、装置、系统和存储介质 |
CN110619601A (zh) * | 2019-09-20 | 2019-12-27 | 西安知象光电科技有限公司 | 一种基于三维模型的图像数据集生成方法 |
CN110772258A (zh) * | 2019-11-07 | 2020-02-11 | 中国石油大学(华东) | 一种用于人体尺寸测量的多视角测距方法 |
CN111461029A (zh) * | 2020-04-03 | 2020-07-28 | 西安交通大学 | 一种基于多视角Kinect的人体关节点数据优化系统及方法 |
CN111862139A (zh) * | 2019-08-16 | 2020-10-30 | 中山大学 | 一种基于彩色-深度相机的动态物体参数化建模方法 |
CN112509055A (zh) * | 2020-11-20 | 2021-03-16 | 浙江大学 | 基于双目视觉和编码结构光相结合的穴位定位系统及方法 |
CN112509129A (zh) * | 2020-12-21 | 2021-03-16 | 神思电子技术股份有限公司 | 一种基于改进gan网络的空间视场图像生成方法 |
CN112991517A (zh) * | 2021-03-08 | 2021-06-18 | 武汉大学 | 一种纹理影像编解码自动匹配的三维重建方法 |
CN113205592A (zh) * | 2021-05-14 | 2021-08-03 | 湖北工业大学 | 一种基于相位相似性的光场三维重建方法及系统 |
CN115795633A (zh) * | 2023-02-07 | 2023-03-14 | 中国建筑西南设计研究院有限公司 | 一种木结构连接节点的参数化设计方法、系统及存储介质 |
CN119402729A (zh) * | 2025-01-02 | 2025-02-07 | 合肥埃科光电科技股份有限公司 | 一种图像传感器拼接判定方法、系统、调整方法及存储介质 |
Families Citing this family (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10460158B2 (en) * | 2014-06-19 | 2019-10-29 | Kabushiki Kaisha Toshiba | Methods and systems for generating a three dimensional representation of a human body shape |
CN104299261B (zh) * | 2014-09-10 | 2017-01-25 | 深圳大学 | 人体三维成像方法及系统 |
WO2016180957A2 (en) * | 2015-05-13 | 2016-11-17 | Naked Labs Austria Gmbh | 3d body scanner data processing flow |
CN106264536A (zh) * | 2015-05-21 | 2017-01-04 | 长沙维纳斯克信息技术有限公司 | 一种三维人体扫描装置和方法 |
CN106197320B (zh) * | 2015-05-29 | 2019-05-10 | 苏州笛卡测试技术有限公司 | 一种分时复用快速三维扫描及其数据处理方法 |
EP3312803A4 (en) * | 2015-06-17 | 2018-12-26 | Toppan Printing Co., Ltd. | Image processing system, method and program |
CN106468542A (zh) * | 2015-08-22 | 2017-03-01 | 吴翔 | 一种采用码分多址和多天线技术的三维扫描方法 |
CN105160680B (zh) * | 2015-09-08 | 2017-11-21 | 北京航空航天大学 | 一种基于结构光的无干扰深度相机的设计方法 |
KR20170047780A (ko) * | 2015-10-23 | 2017-05-08 | 한국전자통신연구원 | 적응적 윈도우 마스크를 이용하는 로우 코스트 계산장치 및 그 방법 |
US20170140215A1 (en) * | 2015-11-18 | 2017-05-18 | Le Holdings (Beijing) Co., Ltd. | Gesture recognition method and virtual reality display output device |
US20170185142A1 (en) * | 2015-12-25 | 2017-06-29 | Le Holdings (Beijing) Co., Ltd. | Method, system and smart glove for obtaining immersion in virtual reality system |
CN105761240A (zh) * | 2016-01-18 | 2016-07-13 | 盛禾东林(厦门)文创科技有限公司 | 一种相机采集数据生成3d模型的系统 |
CN106073903A (zh) * | 2016-06-03 | 2016-11-09 | 上海德稻集群文化创意产业(集团)有限公司 | 制备骨骼辅助支架的三维扫描相机阵列及扫描方法 |
US10607408B2 (en) * | 2016-06-04 | 2020-03-31 | Shape Labs Inc. | Method for rendering 2D and 3D data within a 3D virtual environment |
CN109478338B (zh) * | 2016-07-19 | 2023-04-04 | 松下电器(美国)知识产权公司 | 三维数据制作方法、发送方法、制作装置、发送装置 |
JP6737503B2 (ja) * | 2016-09-13 | 2020-08-12 | 株式会社Vrc | 3dスキャナ |
CN106595642B (zh) * | 2016-12-29 | 2023-04-11 | 中国科学院西安光学精密机械研究所 | 一种位姿测算光学仪器及调试方法 |
CN106842219B (zh) * | 2017-01-18 | 2019-10-29 | 北京商询科技有限公司 | 一种用于混合现实设备的空间测距方法和系统 |
WO2018184675A1 (en) * | 2017-04-05 | 2018-10-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Illuminating an environment for localisation |
DE102017109854A1 (de) * | 2017-05-08 | 2018-11-08 | Wobben Properties Gmbh | Verfahren zur Referenzierung mehrerer Sensoreinheiten und zugehörige Messeinrichtung |
CN107133989B (zh) * | 2017-06-12 | 2020-11-06 | 中国科学院长春光学精密机械与物理研究所 | 一种三维扫描系统参数标定方法 |
CN107240149A (zh) * | 2017-06-14 | 2017-10-10 | 广东工业大学 | 基于图像处理的物体三维模型构建方法 |
CN107564067A (zh) * | 2017-08-17 | 2018-01-09 | 上海大学 | 一种适用于Kinect的标定方法 |
CN107504919B (zh) * | 2017-09-14 | 2019-08-16 | 深圳大学 | 基于相位映射的折叠相位三维数字成像方法及装置 |
CN109785225B (zh) * | 2017-11-13 | 2023-06-16 | 虹软科技股份有限公司 | 一种用于图像矫正的方法和装置 |
CN109785390B (zh) * | 2017-11-13 | 2022-04-01 | 虹软科技股份有限公司 | 一种用于图像矫正的方法和装置 |
CN107945269A (zh) * | 2017-12-26 | 2018-04-20 | 清华大学 | 基于多视点视频的复杂动态人体对象三维重建方法及系统 |
JP6843780B2 (ja) * | 2018-01-18 | 2021-03-17 | ヤフー株式会社 | 情報処理装置、学習済みモデル、情報処理方法、およびプログラム |
CN108433704B (zh) * | 2018-04-10 | 2024-05-14 | 西安维塑智能科技有限公司 | 一种三维人体扫描设备 |
DE102018109586A1 (de) * | 2018-04-20 | 2019-10-24 | Carl Zeiss Ag | 3D-Digitalisierungssystem und 3D-Digitalisierungsverfahren |
CN108665535A (zh) * | 2018-05-10 | 2018-10-16 | 青岛小优智能科技有限公司 | 一种基于编码光栅结构光的三维结构重建方法与系统 |
CN109241841B (zh) * | 2018-08-01 | 2022-07-05 | 甘肃未来云数据科技有限公司 | 视频人体动作的获取方法和装置 |
CN109389665B (zh) * | 2018-08-24 | 2021-10-22 | 先临三维科技股份有限公司 | 三维模型的纹理获取方法、装置、设备和存储介质 |
EP3633405B1 (de) * | 2018-10-03 | 2023-01-11 | Hexagon Technology Center GmbH | Messgerät zur geometrischen 3d-abtastung einer umgebung mit einer vielzahl sendekanäle und semiconductor-photomultiplier sensoren |
CN109945802B (zh) * | 2018-10-11 | 2021-03-09 | 苏州深浅优视智能科技有限公司 | 一种结构光三维测量方法 |
CN109359609B (zh) * | 2018-10-25 | 2022-06-14 | 浙江宇视科技有限公司 | 一种人脸识别训练样本获取方法及装置 |
CN111060006B (zh) * | 2019-04-15 | 2024-12-13 | 深圳市易尚展示股份有限公司 | 一种基于三维模型的视点规划方法 |
CN110230979B (zh) * | 2019-04-15 | 2024-12-06 | 深圳市易尚展示股份有限公司 | 一种立体标靶及其三维彩色数字化系统标定方法 |
CN110243307B (zh) * | 2019-04-15 | 2025-01-17 | 深圳市易尚展示股份有限公司 | 一种自动化三维彩色成像与测量系统 |
CN110176035B (zh) * | 2019-05-08 | 2021-09-28 | 深圳市易尚展示股份有限公司 | 标志点的定位方法、装置、计算机设备和存储介质 |
CN110348371B (zh) * | 2019-07-08 | 2023-08-29 | 叠境数字科技(上海)有限公司 | 人体三维动作自动提取方法 |
CN110986757B (zh) * | 2019-10-08 | 2021-05-04 | 新拓三维技术(深圳)有限公司 | 一种三维人体扫描方法、装置及系统 |
DE102019216231A1 (de) * | 2019-10-22 | 2021-04-22 | Carl Zeiss Industrielle Messtechnik Gmbh | Vorrichtung und Verfahren zur dimensionellen Vermessung von scharfen Kanten |
CN110930374A (zh) * | 2019-11-13 | 2020-03-27 | 北京邮电大学 | 一种基于双深度相机的腧穴定位方法 |
CN111028280B (zh) * | 2019-12-09 | 2022-06-03 | 西安交通大学 | 井字结构光相机系统及进行目标有尺度三维重建的方法 |
CN111076674B (zh) * | 2019-12-12 | 2020-11-17 | 天目爱视(北京)科技有限公司 | 一种近距离目标物3d采集设备 |
CN111458693A (zh) * | 2020-02-01 | 2020-07-28 | 上海鲲游光电科技有限公司 | 直接测距tof分区探测方法及其系统和电子设备 |
US11580692B2 (en) | 2020-02-26 | 2023-02-14 | Apple Inc. | Single-pass object scanning |
CN113327291B (zh) * | 2020-03-16 | 2024-03-22 | 天目爱视(北京)科技有限公司 | 一种基于连续拍摄对远距离目标物3d建模的标定方法 |
CN113554582B (zh) * | 2020-04-22 | 2022-11-08 | 中国科学院长春光学精密机械与物理研究所 | 电子设备盖板上功能孔的缺陷检测方法、装置以及系统 |
CN111398893B (zh) * | 2020-05-14 | 2021-11-23 | 南京工程学院 | 一种基于无线定位的网格人体模型测量装置及方法 |
CN111721197B (zh) * | 2020-05-14 | 2022-02-01 | 南京工程学院 | 一种基于双目立体的身体模型测量装置及方法 |
CN111612831A (zh) * | 2020-05-22 | 2020-09-01 | 创新奇智(北京)科技有限公司 | 一种深度估计方法、装置、电子设备及存储介质 |
CN111855664B (zh) * | 2020-06-12 | 2023-04-07 | 山西省交通科技研发有限公司 | 一种可调节隧道病害三维检测系统 |
CN111862241B (zh) * | 2020-07-28 | 2024-04-12 | 杭州优链时代科技有限公司 | 一种人体对齐方法及装置 |
CN111981982B (zh) * | 2020-08-21 | 2021-07-06 | 北京航空航天大学 | 一种基于加权sfm算法的多向合作靶标光学测量方法 |
CN112102477B (zh) * | 2020-09-15 | 2024-09-27 | 腾讯科技(深圳)有限公司 | 三维模型重建方法、装置、计算机设备和存储介质 |
CN112365585B (zh) * | 2020-11-24 | 2023-09-12 | 革点科技(深圳)有限公司 | 一种基于事件相机的双目结构光三维成像方法 |
CN114581526B (zh) * | 2020-12-02 | 2024-08-27 | 中国科学院沈阳自动化研究所 | 一种基于球形标定块的多相机标定方法 |
CN113192143B (zh) * | 2020-12-23 | 2022-09-06 | 合肥工业大学 | 一种用于摄像机快速标定的编码立体靶标及其解码方法 |
CN112729156A (zh) * | 2020-12-24 | 2021-04-30 | 上海智能制造功能平台有限公司 | 一种人体数字化测量装置的数据拼接及系统标定方法 |
CN112884895B (zh) * | 2021-02-09 | 2024-03-12 | 郭金磊 | 一种基于人体外貌形态的穿搭匹配系统 |
CN113516007B (zh) * | 2021-04-02 | 2023-12-22 | 中国海洋大学 | 多组双目相机组网的水下标志物识别与拼接方法 |
CN113643436B (zh) * | 2021-08-24 | 2024-04-05 | 凌云光技术股份有限公司 | 一种深度数据拼接融合方法及装置 |
CN114179082B (zh) * | 2021-12-07 | 2024-09-13 | 南京工程学院 | 一种基于接触力信息的图像力触觉检测装置及再现方法 |
CN114485396B (zh) * | 2022-01-10 | 2023-06-20 | 上海电气核电设备有限公司 | 一种核电蒸发器管板深孔几何量的测量系统及测量方法 |
CN114943664B (zh) * | 2022-06-02 | 2025-07-22 | 北京字跳网络技术有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN116524008B (zh) * | 2023-04-14 | 2024-02-02 | 公安部第一研究所 | 一种用于安检ct智能识别的目标物匹配与空间位置估计方法 |
CN116758267A (zh) * | 2023-06-26 | 2023-09-15 | 广州市浩洋电子股份有限公司 | 一种基于多视角的人体定位方法及灯光系统 |
CN117726907B (zh) * | 2024-02-06 | 2024-04-30 | 之江实验室 | 一种建模模型的训练方法、三维人体建模的方法以及装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102175261A (zh) * | 2011-01-10 | 2011-09-07 | 深圳大学 | 一种基于自适应标靶的视觉测量系统及其标定方法 |
US20130195330A1 (en) * | 2012-01-31 | 2013-08-01 | Electronics And Telecommunications Research Institute | Apparatus and method for estimating joint structure of human body |
CN103267491A (zh) * | 2012-07-17 | 2013-08-28 | 深圳大学 | 自动获取物体表面完整三维数据的方法及系统 |
CN104299261A (zh) * | 2014-09-10 | 2015-01-21 | 深圳大学 | 人体三维成像方法及系统 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6215898B1 (en) * | 1997-04-15 | 2001-04-10 | Interval Research Corporation | Data processing system and method |
US7929751B2 (en) * | 2005-11-09 | 2011-04-19 | Gi, Llc | Method and apparatus for absolute-coordinate three-dimensional surface imaging |
US9189886B2 (en) * | 2008-08-15 | 2015-11-17 | Brown University | Method and apparatus for estimating body shape |
US8823775B2 (en) * | 2009-04-30 | 2014-09-02 | Board Of Regents, The University Of Texas System | Body surface imaging |
US9341464B2 (en) * | 2011-10-17 | 2016-05-17 | Atlas5D, Inc. | Method and apparatus for sizing and fitting an individual for apparel, accessories, or prosthetics |
US11510600B2 (en) * | 2012-01-04 | 2022-11-29 | The Trustees Of Dartmouth College | Method and apparatus for quantitative and depth resolved hyperspectral fluorescence and reflectance imaging for surgical guidance |
US8913809B2 (en) * | 2012-06-13 | 2014-12-16 | Microsoft Corporation | Monitoring physical body changes via image sensor |
KR101977711B1 (ko) * | 2012-10-12 | 2019-05-13 | 삼성전자주식회사 | 깊이 센서, 이의 이미지 캡쳐 방법, 및 상기 깊이 센서를 포함하는 이미지 처리 시스템 |
US9384585B2 (en) * | 2012-10-23 | 2016-07-05 | Electronics And Telecommunications Research Institute | 3-dimensional shape reconstruction device using depth image and color image and the method |
CN103279762B (zh) * | 2013-05-21 | 2016-04-13 | 常州大学 | 一种自然环境下果实常见生长形态判定方法 |
CN103876710A (zh) * | 2014-02-17 | 2014-06-25 | 钱晨 | 一种高解析度的人体局部三维成像系统 |
US9877012B2 (en) * | 2015-04-01 | 2018-01-23 | Canon Kabushiki Kaisha | Image processing apparatus for estimating three-dimensional position of object and method therefor |
- 2014-09-10 CN CN201410458109.XA patent/CN104299261B/zh active Active
- 2015-04-17 WO PCT/CN2015/076869 patent/WO2016037486A1/zh active Application Filing
- 2016-06-13 US US15/181,398 patent/US10019838B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102175261A (zh) * | 2011-01-10 | 2011-09-07 | 深圳大学 | 一种基于自适应标靶的视觉测量系统及其标定方法 |
US20130195330A1 (en) * | 2012-01-31 | 2013-08-01 | Electronics And Telecommunications Research Institute | Apparatus and method for estimating joint structure of human body |
CN103267491A (zh) * | 2012-07-17 | 2013-08-28 | 深圳大学 | 自动获取物体表面完整三维数据的方法及系统 |
CN104299261A (zh) * | 2014-09-10 | 2015-01-21 | 深圳大学 | 人体三维成像方法及系统 |
Non-Patent Citations (5)
Title |
---|
HE, DONG ET AL.: "Three-dimensional imaging based on combination fringe and pseudorandom pattern projection.", CHINESE JOURNAL OF LASERS., vol. 41, no. 2, 28 February 2014 (2014-02-28) * |
LIU, MINGXING ET AL.: "Research of 3D reconstruction based on computer vision.", JOURNAL OF SHENZHEN INSTITUTE OF INFORMATION TECHNOLOGY., vol. 11, no. 3, 30 September 2013 (2013-09-30), pages 15 and 16 * |
LIU, XIAOLI ET AL.: "3D Auto-inspection for large thin-wall object.", ACTA OPTICA SINICA., vol. 31, no. 3, 31 March 2011 (2011-03-31) * |
LIU, XIAOLI ET AL.: "A method for correcting the 2D calibration target.", OPTO- ELECTRONIC ENGINEERING, vol. 38, no. 4, 30 April 2011 (2011-04-30) * |
PENG, XIANG.: "phase-aided three-dimensional imaging and metrology.", ACTA OPTICA SINICA., vol. 31, no. 9, 30 September 2011 (2011-09-30) * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105711096A (zh) * | 2016-03-21 | 2016-06-29 | 联想(北京)有限公司 | 数据处理方法及电子设备 |
CN109658365A (zh) * | 2017-10-11 | 2019-04-19 | 阿里巴巴集团控股有限公司 | 图像处理方法、装置、系统和存储介质 |
CN109658365B (zh) * | 2017-10-11 | 2022-12-06 | 阿里巴巴(深圳)技术有限公司 | 图像处理方法、装置、系统和存储介质 |
CN109242960A (zh) * | 2018-09-15 | 2019-01-18 | 武汉智觉空间信息技术有限公司 | 采用双Kinect和旋转平台的人体实时建模系统及其建模方法 |
CN111862139A (zh) * | 2019-08-16 | 2020-10-30 | 中山大学 | 一种基于彩色-深度相机的动态物体参数化建模方法 |
CN111862139B (zh) * | 2019-08-16 | 2023-08-18 | 中山大学 | 一种基于彩色-深度相机的动态物体参数化建模方法 |
CN110619601A (zh) * | 2019-09-20 | 2019-12-27 | 西安知象光电科技有限公司 | 一种基于三维模型的图像数据集生成方法 |
CN110772258A (zh) * | 2019-11-07 | 2020-02-11 | 中国石油大学(华东) | 一种用于人体尺寸测量的多视角测距方法 |
CN111461029A (zh) * | 2020-04-03 | 2020-07-28 | 西安交通大学 | 一种基于多视角Kinect的人体关节点数据优化系统及方法 |
CN111461029B (zh) * | 2020-04-03 | 2023-05-02 | 西安交通大学 | 一种基于多视角Kinect的人体关节点数据优化系统及方法 |
CN112509055B (zh) * | 2020-11-20 | 2022-05-03 | 浙江大学 | 基于双目视觉和编码结构光相结合的穴位定位系统及方法 |
CN112509055A (zh) * | 2020-11-20 | 2021-03-16 | 浙江大学 | 基于双目视觉和编码结构光相结合的穴位定位系统及方法 |
CN112509129A (zh) * | 2020-12-21 | 2021-03-16 | 神思电子技术股份有限公司 | 一种基于改进gan网络的空间视场图像生成方法 |
CN112991517B (zh) * | 2021-03-08 | 2022-04-29 | 武汉大学 | 一种纹理影像编解码自动匹配的三维重建方法 |
CN112991517A (zh) * | 2021-03-08 | 2021-06-18 | 武汉大学 | 一种纹理影像编解码自动匹配的三维重建方法 |
CN113205592A (zh) * | 2021-05-14 | 2021-08-03 | 湖北工业大学 | 一种基于相位相似性的光场三维重建方法及系统 |
CN113205592B (zh) * | 2021-05-14 | 2022-08-05 | 湖北工业大学 | 一种基于相位相似性的光场三维重建方法及系统 |
CN115795633A (zh) * | 2023-02-07 | 2023-03-14 | 中国建筑西南设计研究院有限公司 | 一种木结构连接节点的参数化设计方法、系统及存储介质 |
CN119402729A (zh) * | 2025-01-02 | 2025-02-07 | 合肥埃科光电科技股份有限公司 | 一种图像传感器拼接判定方法、系统、调整方法及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US10019838B2 (en) | 2018-07-10 |
CN104299261A (zh) | 2015-01-21 |
US20160300383A1 (en) | 2016-10-13 |
CN104299261B (zh) | 2017-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016037486A1 (zh) | 人体三维成像方法及系统 | |
TWI555379B (zh) | 一種全景魚眼相機影像校正、合成與景深重建方法與其系統 | |
CN105160680B (zh) | 一种基于结构光的无干扰深度相机的设计方法 | |
CN103337094B (zh) | 一种应用双目摄像机实现运动三维重建的方法 | |
CN107154014B (zh) | 一种实时彩色及深度全景图像拼接方法 | |
CN102072706B (zh) | 一种多相机定位与跟踪方法及系统 | |
WO2018076154A1 (zh) | 一种基于鱼眼摄像机空间位姿标定的全景视频生成方法 | |
CN113129430B (zh) | 基于双目结构光的水下三维重建方法 | |
CN109272570A (zh) | 一种基于立体视觉数学模型的空间点三维坐标求解方法 | |
CN105488775A (zh) | 一种基于六摄像机环视的柱面全景生成装置及方法 | |
CN104835158B (zh) | 基于格雷码结构光与极线约束的三维点云获取方法 | |
CN107358633A (zh) | 一种基于三点标定物的多相机内外参标定方法 | |
WO2013076605A1 (en) | Method and system for alignment of a pattern on a spatial coded slide image | |
KR20150120066A (ko) | 패턴 프로젝션을 이용한 왜곡 보정 및 정렬 시스템, 이를 이용한 방법 | |
CN113592721B (zh) | 摄影测量方法、装置、设备及存储介质 | |
CN111009030A (zh) | 一种多视高分辨率纹理图像与双目三维点云映射方法 | |
CN106534670B (zh) | 一种基于固联鱼眼镜头摄像机组的全景视频生成方法 | |
CN109579695A (zh) | 一种基于异构立体视觉的零件测量方法 | |
CN104318604A (zh) | 一种3d图像拼接方法及装置 | |
CN102368137A (zh) | 嵌入式标定立体视觉系统 | |
CN104240233A (zh) | 一种摄像机单应性矩阵和投影机单应性矩阵的求解方法 | |
CN113112532B (zh) | 一种多ToF相机系统实时配准方法 | |
Wu et al. | Binocular stereovision camera calibration | |
Chen et al. | Multi-stereo 3D reconstruction with a single-camera multi-mirror catadioptric system | |
Ortiz-Coder et al. | Accurate 3d reconstruction using a videogrammetric device for heritage scenarios |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15840549 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15840549 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2018) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15840549 Country of ref document: EP Kind code of ref document: A1 |