CN105513128A - Kinect-based three-dimensional data fusion processing method - Google Patents
Kinect-based three-dimensional data fusion processing method
- Publication number
- CN105513128A CN105513128A CN201610022247.2A CN201610022247A CN105513128A CN 105513128 A CN105513128 A CN 105513128A CN 201610022247 A CN201610022247 A CN 201610022247A CN 105513128 A CN105513128 A CN 105513128A
- Authority
- CN
- China
- Prior art keywords
- kinect
- dimensional
- point cloud
- data fusion
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a Kinect-based three-dimensional data fusion processing method comprising the following steps: a. use two Kinects to acquire point-cloud data A and B of the same chessboard target respectively, and obtain the spatial measurement coordinates of 40 identical chessboard lattice points in the two point clouds, with the two Kinects placed symmetrically about the chessboard target; b. obtain the conversion matrix M_wc that transforms point cloud A into the spatial coordinate system of point cloud B; c. fuse the three-dimensional point clouds acquired by the two Kinects. According to the Kinect-based three-dimensional data fusion processing method, two Kinect devices are used to acquire a group of three-dimensional point-cloud data, the spatial positions of the two Kinects are calibrated to obtain the conversion matrix, and the two groups of acquired three-dimensional point clouds are thereby fused.
Description
Technical field
The present invention relates to a Kinect-based three-dimensional data fusion processing method.
Background technology
In optical measurement of wind-tunnel model attitude and deformation, pasting or spraying pattern markers, or embedding luminescent markers, damages the surface characteristics of the model, and such markers are difficult to retain under violent temperature and pressure changes. In order to measure model attitude and displacement in wind-tunnel tests without special treatment of the model, such as spraying or attaching marker points, and to improve the measurement capability under harsh test environments such as high temperature and high pressure, a new three-dimensional non-contact measurement method needs to be studied.
Microsoft's Kinect device is a depth camera that integrates functions such as real-time motion capture, image recognition, microphone input, speech recognition, and community interaction. Without any controller, it obtains the three-dimensional attitude and deformation information of the measured model by capturing the model's motion in three-dimensional space with its cameras.
Although currently popular depth-information acquisition devices offer high precision, the conditions they require are often quite demanding, and their price and operational complexity keep them from civilian use. Because the Kinect's infrared camera and VGA camera are located at different positions, and the parameters of their lenses are not entirely the same, the images acquired by the two cameras differ slightly, so the three-dimensional coordinates (X, Y, Z) and the color information cannot be made to correspond to the same point of the model.
Summary of the invention
The object of the present invention is to provide a Kinect-based three-dimensional data fusion processing method that, in a complex environment, uses two Kinect devices to acquire a group of three-dimensional point-cloud data, calibrates the spatial positions of the two Kinects to obtain a transition matrix, and thereby fuses the two groups of acquired three-dimensional point clouds.
To achieve the above object, the technical scheme of the present invention is a Kinect-based three-dimensional data fusion processing method comprising the following steps:
a. Acquire point-cloud data A and B of the same chessboard target with two Kinects respectively, and obtain the spatial measurement coordinates of 40 identical chessboard lattice points in the two point clouds; the two Kinects are placed symmetrically about the chessboard target;
b. Obtain the transition matrix M_wc that transforms point cloud A into the spatial coordinate system of point cloud B; using the least-squares method, the corresponding conversion matrix is obtained from the formula M_wc = (AᵀA)⁻¹AᵀB;
c. Fuse the three-dimensional point clouds acquired by the two Kinects: transform the point-cloud data acquired by the Kinect that obtained point cloud A into the world coordinate system of the other Kinect, expressed by the formula A·M_wc = B (a brief sketch of steps b and c is given below).
Preferably, the two Kinects are placed symmetrically about the chessboard target.
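As an illustration of steps b and c, the following is a minimal sketch in Python with NumPy (an assumption of this description; the patent itself only specifies software based on OpenNI and PrimeSense). It stacks the 40 corresponding chessboard points from each Kinect as rows of homogeneous matrices A and B and estimates M_wc by least squares; the homogeneous 4x4 form and the variable names are illustrative, not taken from the patent.

```python
import numpy as np

def to_homogeneous(points):
    """Stack N x 3 measurement coordinates as N x 4 homogeneous rows [x, y, z, 1]."""
    points = np.asarray(points, dtype=float)
    return np.hstack([points, np.ones((points.shape[0], 1))])

def estimate_transition_matrix(points_a, points_b):
    """Least-squares estimate of M_wc such that A @ M_wc ~= B (M_wc = (A^T A)^-1 A^T B)."""
    A = to_homogeneous(points_a)
    B = to_homogeneous(points_b)
    # Normal-equation form as in the patent; np.linalg.lstsq(A, B) is the numerically
    # safer equivalent when the calibration points are nearly coplanar.
    return np.linalg.inv(A.T @ A) @ A.T @ B

# Placeholder data standing in for the 40 chessboard lattice points measured by each Kinect.
points_a = np.random.rand(40, 3)                 # coordinates in the first Kinect's frame
points_b = points_a + np.array([0.5, 0.0, 0.0])  # same points seen from the second Kinect (toy offset)
M_wc = estimate_transition_matrix(points_a, points_b)
```

With homogeneous rows, M_wc comes out as a 4x4 matrix that can carry both rotation and translation between the two Kinect coordinate systems; the patent does not spell out the layout of M_wc, so this form is an assumption.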
The advantages and beneficial effects of the present invention are: a Kinect-based three-dimensional data fusion processing method is provided that, in a complex environment, uses two Kinect devices to acquire a group of three-dimensional point-cloud data, calibrates the spatial positions of the two Kinects to obtain a transition matrix, and thereby fuses the two groups of acquired three-dimensional point clouds.
The present invention uses software written on the basis of OpenNI and PrimeSense, together with the Kinect, to acquire the three-dimensional point-cloud data, so that the depth data (Z) and the image data (X, Y) obtained by the Kinect coincide well.
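The patent does not give the registration code. As one common way to make the depth data (Z) and image data (X, Y) coincide in a single point cloud, the hedged sketch below back-projects each registered depth pixel through a pinhole camera model; the intrinsic parameters fx, fy, cx, cy are placeholder values, not Kinect calibration data from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a registered depth map (H x W, in meters) into an N x 3 point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    The default intrinsics are illustrative, not values taken from the patent.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack([x, y, z]).reshape(-1, 3)
    return points[depth_m.reshape(-1) > 0]       # keep only pixels with valid depth
```

Because the depth map is assumed here to be registered to the RGB image, the color at pixel (u, v) can be attached directly to the corresponding 3D point.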
The present invention obtains the maximal model deformation value, and the three-dimensional attitude and deformation data over a large interpretation region, without damaging the surface characteristics of the model; the three-dimensional coordinate results of the constructed measuring platform show good consistency with a portable coordinate-measuring system. The method not only makes the depth data and image data coincide well, but also fuses the point-cloud data obtained by the two Kinects without error.
Accompanying drawing explanation
Fig. 1 is schematic diagram of the present invention;
Fig. 2 is target schematic diagram;
Fig. 3 is target calibration maps.
Embodiment
The specific embodiments of the present invention are further described below with reference to the drawings and examples. The following examples are only intended to clearly illustrate the technical scheme of the present invention and cannot be used to limit the scope of the invention.
As shown in Figures 1 to 3, the technical scheme specifically implemented by the present invention is as follows:
(1) A computer (4 in Fig. 1) running processing software written on the basis of OpenNI and PrimeSense is used; the two Kinects (1 and 2 in Fig. 1) are positioned as shown in Fig. 1 and respectively acquire the three-dimensional spatial coordinate points of the checkerboard on the front of target 3 (as shown in Fig. 2).
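The patent only states that the checkerboard's three-dimensional coordinate points are acquired. One way to locate the 40 lattice points, sketched below under the assumptions that OpenCV is available, that the 40 points are the inner corners of an 8 x 5 pattern, and that a 3D point map registered to the RGB image is provided by the acquisition software, is to detect the corners in the RGB image and read off the corresponding 3D points; none of these specifics are stated in the patent.

```python
import cv2
import numpy as np

def chessboard_points_3d(rgb_image, point_map, pattern_size=(8, 5)):
    """Return the 3D measurement coordinates of the chessboard lattice points.

    pattern_size=(8, 5) yields the 40 inner corners assumed here; point_map is an
    H x W x 3 array of 3D coordinates registered to the RGB image (assumption).
    """
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        raise RuntimeError("chessboard not found in the image")
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    uv = corners.reshape(-1, 2).round().astype(int)
    return np.array([point_map[v, u] for u, v in uv])
```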
(2) The spatial relationship of the two Kinects (1 and 2 in Fig. 1), placed front and back, is calibrated and the transition matrix is obtained; the two Kinects are placed symmetrically about the chessboard target. The key steps comprise:
a. The two Kinects (1 and 2 in Fig. 1) are used to acquire point-cloud data A and B of the same chessboard target (3 in Fig. 1) respectively, and the spatial measurement coordinates of the same 40 chessboard lattice points (as shown in Fig. 3, where 5 denotes a calibration point) are obtained in the two point clouds;
b. The transformation relationship that converts point cloud A into the spatial coordinate system of point cloud B is obtained as follows: the transition matrix is M_wc; using the least-squares method, the corresponding conversion matrix is obtained from the formula M_wc = (AᵀA)⁻¹AᵀB. M_wc is the matrix expressing the transformation between the two measurement coordinate systems (a simple check of the calibration fit is sketched below).
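As a simple verification step (not part of the patent, just an assumed sanity check), the residuals of the least-squares fit on the 40 calibration points can be inspected before M_wc is used for fusion:

```python
import numpy as np

def fit_residuals(A_h, B_h, M_wc):
    """Per-point residual of the calibration fit.

    A_h and B_h are the 40 x 4 homogeneous point matrices used to estimate M_wc
    (illustrative check; units are those of the point-cloud coordinates).
    """
    return np.linalg.norm(A_h @ M_wc - B_h, axis=1)
```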
(3) The three-dimensional point clouds acquired by the two Kinects are fused. The key step is to transform the point-cloud data acquired by the Kinect that obtained point cloud A into the world coordinate system of the other Kinect, expressed by the formula A·M_wc = B (a sketch of this fusion step is given below).
In this way, comprehensive three-dimensional non-contact measurement of the model is realized.
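As a sketch of step (3), assuming the full point clouds are stored as N x 3 arrays and that M_wc was estimated in homogeneous form as above (assumptions of this description, not statements of the patent), the fusion transforms cloud A into the other Kinect's world coordinate system and merges it with cloud B:

```python
import numpy as np

def fuse_point_clouds(cloud_a, cloud_b, M_wc):
    """Transform cloud A with A_h @ M_wc and merge it with cloud B (both N x 3 arrays)."""
    a_h = np.hstack([cloud_a, np.ones((cloud_a.shape[0], 1))])
    a_t = a_h @ M_wc
    a_in_b = a_t[:, :3] / a_t[:, 3:4]     # normalize the homogeneous coordinate
    return np.vstack([a_in_b, cloud_b])   # fused cloud in the second Kinect's frame
```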
The above is only a preferred embodiment of the present invention. It should be pointed out that, for those skilled in the art, several improvements and modifications can be made without departing from the technical principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (2)
1. A Kinect-based three-dimensional data fusion processing method, characterized by comprising the following steps:
a. Acquiring point-cloud data A and B of the same chessboard target with two Kinects respectively, and obtaining the spatial measurement coordinates of 40 identical chessboard lattice points in the two point clouds;
b. Obtaining the transition matrix M_wc that transforms point cloud A into the spatial coordinate system of point cloud B; using the least-squares method, the corresponding conversion matrix is obtained from the formula M_wc = (AᵀA)⁻¹AᵀB;
c. Fusing the three-dimensional point clouds acquired by the two Kinects: transforming the point-cloud data acquired by the Kinect that obtained point cloud A into the world coordinate system of the other Kinect, expressed by the formula A·M_wc = B.
2. The Kinect-based three-dimensional data fusion processing method according to claim 1, characterized in that the two Kinects are placed symmetrically about the chessboard target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610022247.2A CN105513128A (en) | 2016-01-13 | 2016-01-13 | Kinect-based three-dimensional data fusion processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105513128A true CN105513128A (en) | 2016-04-20 |
Family
ID=55721080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610022247.2A Pending CN105513128A (en) | 2016-01-13 | 2016-01-13 | Kinect-based three-dimensional data fusion processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105513128A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103106688A (en) * | 2013-02-20 | 2013-05-15 | 北京工业大学 | Indoor three-dimensional scene rebuilding method based on double-layer rectification method |
CN103279987A (en) * | 2013-06-18 | 2013-09-04 | 厦门理工学院 | Object fast three-dimensional modeling method based on Kinect |
CN103413352A (en) * | 2013-07-29 | 2013-11-27 | 西北工业大学 | 3D scene reconstruction method based on RGBD multi-sensor fusion |
CN104952107A (en) * | 2015-05-18 | 2015-09-30 | 湖南桥康智能科技有限公司 | Three-dimensional bridge reconstruction method based on vehicle-mounted LiDAR point cloud data |
Non-Patent Citations (1)
Title |
---|
吴禄慎 et al.: "Improved ICP three-dimensional point cloud registration technology based on feature points", Journal of Nanchang University (Engineering & Technology Edition) *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106384380A (en) * | 2016-08-31 | 2017-02-08 | 重庆七腾软件有限公司 | 3D human body scanning, modeling and measuring method and system |
CN107578019A (en) * | 2017-09-13 | 2018-01-12 | 河北工业大学 | Gait recognition system and recognition method based on visual-tactile fusion |
CN107578019B (en) * | 2017-09-13 | 2020-05-12 | 河北工业大学 | A gait recognition system and recognition method based on visual and tactile fusion |
CN108230379A (en) * | 2017-12-29 | 2018-06-29 | 百度在线网络技术(北京)有限公司 | For merging the method and apparatus of point cloud data |
CN108230379B (en) * | 2017-12-29 | 2020-12-04 | 百度在线网络技术(北京)有限公司 | Method and device for fusing point cloud data |
CN109272572A (en) * | 2018-08-30 | 2019-01-25 | 中国农业大学 | Modeling method and device based on double Kinect camera |
CN109875562A (en) * | 2018-12-21 | 2019-06-14 | 鲁浩成 | A kind of human somatotype monitoring system based on the more visual analysis of somatosensory device |
CN112361989A (en) * | 2020-09-30 | 2021-02-12 | 北京印刷学院 | Method for calibrating parameters of measurement system through point cloud uniformity consideration |
CN112361989B (en) * | 2020-09-30 | 2022-09-30 | 北京印刷学院 | Method for calibrating parameters of measurement system through point cloud uniformity consideration |
CN113198692A (en) * | 2021-05-19 | 2021-08-03 | 飓蜂科技(苏州)有限公司 | High-precision dispensing method and device suitable for batch products |
CN113237628A (en) * | 2021-07-08 | 2021-08-10 | 中国空气动力研究与发展中心低速空气动力研究所 | Method for measuring horizontal free flight model attitude of low-speed wind tunnel |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105513128A (en) | Kinect-based three-dimensional data fusion processing method | |
US12014468B2 (en) | Capturing and aligning three-dimensional scenes | |
CN100432897C (en) | System and method of contactless position input by hand and eye relation guiding | |
CN103049912B (en) | Random trihedron-based radar-camera system external parameter calibration method | |
CN110555889A (en) | CALTag and point cloud information-based depth camera hand-eye calibration method | |
CN109018591A (en) | A kind of automatic labeling localization method based on computer vision | |
CN103175485A (en) | Method for visually calibrating aircraft turbine engine blade repair robot | |
CN102506711B (en) | Line laser vision three-dimensional rotate scanning method | |
CN104050859A (en) | Interactive digital stereoscopic sand table system | |
WO2008099915A1 (en) | Road/feature measuring device, feature identifying device, road/feature measuring method, road/feature measuring program, measuring device, measuring method, measuring program, measured position data, measuring terminal, measuring server device, drawing device, drawing method, drawing program, and drawing data | |
CN110879080A (en) | High-precision intelligent measuring instrument and measuring method for high-temperature forge piece | |
CN104700385B (en) | Binocular vision positioning device based on FPGA | |
CN102508575B (en) | Screen writing device, screen writing system and realization method thereof | |
CN104034269A (en) | Monocular vision measuring method and monocular vision measuring device | |
CN105957090A (en) | Monocular vision pose measurement method and system based on Davinci technology | |
CN108007345A (en) | Measuring method of excavator working device based on monocular camera | |
CN103747196B (en) | A Projection Method Based on Kinect Sensor | |
CN106097433A (en) | Object industry and the stacking method of Image model and system | |
CN103606170A (en) | Streetscape image feature detecting and matching method based on same color scale | |
CN111307046A (en) | Tree Height Measurement Method Based on Hemisphere Image | |
CN104952105A (en) | Method and apparatus for estimating three-dimensional human body posture | |
KR101496441B1 (en) | Apparatus and Method for registration of flat panel display device and imaging sensor, and Electronic device having flat panel display device and imaging sensor which are registered using the method | |
CN105844700A (en) | System for acquiring three-dimensional point clouds in outdoor scene | |
CN103644894A (en) | Method for object identification and three-dimensional pose measurement of complex surface | |
CN116266402A (en) | Automatic object labeling method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160420 |