CN105678765A - Texture-based depth boundary correction method
- Publication number: CN105678765A (application CN201610007975.6A; granted as CN105678765B)
- Authority: CN (China)
- Prior art keywords: depth, point, bad, boundary, texture
- Prior art date: 2016-01-07
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The present invention relates to a texture-based depth boundary correction method. The method comprises the steps of: A1, inputting a texture image and a corresponding depth image acquired by a depth sensor (such as a Kinect); A2, extracting the boundary of the texture image and the boundary of the depth image, and obtaining a depth boundary misalignment map with the texture boundary as the reference; A3, computing the pixel value difference between accurate depth points (good points) and erroneous depth points (bad points) and determining the boundary misalignment region; A4, according to the distribution characteristics of the misalignment region, adaptively determining the side length of a square window for depth enhancement, correcting the depth of the bad points in the window, and eliminating the misalignment region. The invention significantly improves the accuracy and temporal stability of the boundaries of depth images acquired by the Kinect and other low-end depth sensors. Applied to fields such as three-dimensional reconstruction and free-viewpoint video coding, the method can effectively improve scene reconstruction quality and coding efficiency.
Description
Technical field
The present invention relates to the fields of computer vision and digital image processing, and in particular to a texture-based depth boundary correction method.
Background art
Depth images are widely used in fields such as three-dimensional reconstruction and free-viewpoint coding. Existing depth acquisition often relies on complex and expensive sensors, such as structured-light cameras or laser rangefinders. Depth images captured by such equipment not only suffer from various kinds of noise but also have low resolution, which greatly limits research and applications built on depth imagery. Low-end depth sensors, typified by the Kinect, are cheap and, as active sensors, can quickly acquire both the depth and the texture of a scene, so they are widely used in research. However, because a low-end depth sensor obtains the scene depth mainly by emitting structured light and receiving its reflection, it is strongly affected by ambient illumination and by object shape and material. The captured depth images are therefore noisy and, in particular, exhibit severe jitter and low accuracy in object boundary areas. Since boundary regions carry important information such as the shape and position of objects, and are visually sensitive, depth boundary errors seriously degrade applications based on depth images, for instance three-dimensional scene reconstruction.
To improve the boundary quality of depth images captured by low-end depth sensors, a series of improved methods have been proposed for their boundary defects. Two are representative. The first preprocesses the depth image with a bilateral filter, then spatially partitions the depth map into boundary and non-boundary regions and enhances the depth of the boundary region with a dedicated weighting scheme; this reduces boundary noise, but because it does not constrain temporal stability, the boundary depth values of adjacent frames still fluctuate considerably. The second improves depth stability using a texture-weighted average of the depth of corresponding regions over multiple frames, jointly bilaterally filtering the current depth frame in both the temporal and the spatial domain; this greatly improves temporal consistency and the continuity of depth values on smooth surfaces, but because it ignores noisy object boundaries, its filtering effect on boundary regions is not ideal, and boundary misalignment still needs to be suppressed.
Summary of the invention
The object of the present invention is to provide a texture-based depth boundary correction method that handles boundary regions better.
To this end, the present invention proposes a method characterised by the following steps: A1, inputting a texture image and a corresponding depth image acquired by a depth sensor; A2, extracting the texture image boundary and the depth image boundary respectively, and obtaining a depth boundary misalignment map with the texture boundary as the reference; A3, computing the pixel value difference between good points and bad points and determining the boundary misalignment region, wherein a good point is an accurate depth point and a bad point is an erroneous depth point; A4, according to the distribution characteristics of the misalignment region, adaptively determining the side length of a square window W, performing depth enhancement, correcting the depth of the bad points in the window, and eliminating the misalignment region. Step A4 guarantees that good points dominate in window W, i.e. that the number of good points N_good exceeds the number of bad points N_bad.
The advantages of the invention are:
1) the boundary misalignment region E_r is guaranteed to lie entirely inside the processing window, so the algorithm can completely eliminate E_r;
2) before the depth enhancement, i.e. before the bad-point depths D_bad are revised, good points are always guaranteed to dominate in W. This improves the reliability of the weighted average of the good-point depths in W: the misalignment region is fully covered by the window and the good points in the window outnumber the bad points, which ensures the robustness of the depth enhancement.
Brief description of the drawings
Fig. 1 is a flow diagram of an embodiment of the present invention.
Detailed description of the invention
To obtain a high-quality depth image with clear, smooth and complete object boundaries at accurate positions, the boundary region of the depth image must be targeted specifically, combining the strengths of the various depth-image improvement methods. The embodiment of the present invention starts from the two main problems of depth boundary regions, jitter and misalignment, and enhances the depth of the boundary region using the stable and accurate corresponding texture boundary. The aim is to raise spatial accuracy and temporal stability, comprehensively suppress the boundary errors of depth images acquired by low-end depth sensors, and make it easier to integrate the depth information into other applications.
The present embodiment extracts the texture boundary of each frame and compares it with the corresponding depth boundary, yielding an intuitive depth boundary misalignment map. Along the depth boundary, the gradient in the normal direction is computed point by point to obtain representative erroneous and accurate depth values near each point, which gives a criterion for delimiting the misalignment region. Using this criterion, the pixels on the ray that extends from each texture boundary point toward its corresponding depth boundary point are divided into accurate depth points (good points) and erroneous depth points (bad points); the bad points constitute the depth boundary misalignment to be eliminated, and the position and size of the misalignment between each pair of corresponding texture and depth boundary points are thereby determined. According to the distribution of the misalignment region, the side length of the square processing window is determined adaptively, ensuring that the misalignment region is fully covered by the window and that the good points in the window outnumber the bad points, which improves the reliability of the depth enhancement that eliminates the misalignment region. This texture-boundary-based correction of the depth boundary misalignment region guarantees accurate, clear boundaries in the depth image while eliminating boundary jitter; it effectively improves the quality of depth images captured by low-end sensors typified by the Kinect, benefiting all research fields that build on depth information.
The texture-boundary-based depth boundary misalignment correction method proposed by this embodiment is as follows:
A1: input the texture image and the corresponding depth image acquired by the depth sensor (such as a Kinect);
A2: extract the texture image boundary and the depth image boundary respectively, and obtain the depth boundary misalignment map with the texture boundary as the reference;
A3: compute the pixel value difference between accurate depth points (good points) and erroneous depth points (bad points), and determine the boundary misalignment region;
A4: according to the distribution characteristics of the misalignment region, adaptively determine the side length of the square window, perform depth enhancement, correct the depth of the bad points in the window, and eliminate the boundary misalignment region.
In particular embodiments, the method can operate as follows. Note that the concrete techniques named below (such as the Sobel operator or the weighted-mean approach) are only examples; the scope of the invention is not limited to them.
A1: Each input frame consists of a texture image and a corresponding depth image, captured respectively by the colour camera and the depth camera of a low-end depth sensor. Taking the Kinect as an example, the texture image and depth image of the same frame describe the same scene at the same instant and share the same resolution. However, because there is a fixed offset between the depth camera lens and the colour camera lens, the two images correspond only after camera calibration; afterwards, pixels at the same position describe the texture and the depth of the same scene point.
A2: Extract the texture image boundary T_e and the depth image boundary D_e of the current frame input in A1, and map the extracted texture boundary T_e onto the depth image. Because the texture and depth images of the same frame correspond pixel-to-pixel after calibration, each pixel on the texture boundary can quickly be matched to its counterpart in the depth image, yielding the depth boundary misalignment map with the texture boundary as the reference.
Boundary extraction may adopt (but is not limited to) the Sobel, Roberts or Prewitt operator; these methods differ only in their extraction accuracy on different types of boundary.
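As a concrete illustration, the Sobel variant of this extraction step can be sketched as follows. This is a minimal pure-Python sketch, not the patent's implementation; the image representation (a 2-D list of intensities) and the gradient threshold are assumptions made for the example.

```python
def sobel_edges(image, threshold):
    """Return a binary edge map: 1 where the Sobel gradient magnitude
    exceeds `threshold`, 0 elsewhere (image borders stay 0)."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses over the 3x3 patch.
            gx = (image[y-1][x+1] + 2*image[y][x+1] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y][x-1] - image[y+1][x-1])
            gy = (image[y+1][x-1] + 2*image[y+1][x] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y-1][x] - image[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A 5x5 image with a vertical intensity step between columns 1 and 2.
img = [[0, 0, 10, 10, 10]] * 5
edge_map = sobel_edges(img, threshold=20)
```

Run on the step-edge image above, the interior pixels adjacent to the intensity jump are marked as boundary; the same loop applied to the depth image yields D_e, and the same kernels can be swapped for Roberts or Prewitt masks.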
A3: Determine, point by point along the texture boundary, the misalignment with the corresponding depth boundary point. For the ray that starts at a texture boundary point P_Te and extends toward its nearest depth boundary point P_De, each pixel P on the ray is assigned to the good-point set P_good or the bad-point set P_bad according to its depth value D_P, and the boundary misalignment region E_r is determined from this. To obtain the threshold TH_2, the difference between the good depth value D_good and the bad depth value D_bad, the depth gradient value Grad is computed point by point along the depth boundary in the normal direction P_De⊥:

TH_2 = Grad(P_De⊥)

Each pixel on the ray is then classified by comparing the difference between its depth value D_P and the end-point depth value D_PTe against TH_2; the pixels whose deviation reaches TH_2 are bad points, and all bad points constitute the misalignment region:

P ∈ P_bad if |D_P − D_PTe| ≥ TH_2, with P_bad ⊆ E_r
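The good/bad split of step A3 can be sketched as below. Reading the criterion as "a pixel is bad when its depth deviates from the ray's end-point depth by at least TH_2" follows the comparison described above, but the exact threshold form is an assumption; the function name and data layout are illustrative only.

```python
def classify_ray(depths, th2):
    """Split the pixels of one ray into good and bad points.
    `depths` lists the depth values along the ray, depths[0] being the
    texture-boundary end point P_Te; `th2` is the depth step across the
    boundary (TH_2). Returns (good_indices, bad_indices)."""
    d_ref = depths[0]          # D_PTe, the reference depth at the end point
    good, bad = [], []
    for i, d in enumerate(depths):
        if abs(d - d_ref) >= th2:
            bad.append(i)      # carries the wrong-side depth: part of E_r
        else:
            good.append(i)
    return good, bad

# A depth step of ~50 across the boundary; pixels 2 and 3 are misplaced.
ray = [100, 101, 150, 152, 103]
good, bad = classify_ray(ray, th2=50)
```

The bad indices returned for each ray, accumulated over all texture boundary points, delimit the misalignment region E_r that step A4 then eliminates.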
A4: From the size and location distribution of the misalignment region E_r determined in A3, adaptively determine the side length TH_1 of the window W, perform depth enhancement on W, revise the bad-point depths D_bad in the window, and eliminate the boundary misalignment region E_r. As in A3, obtain the threshold TH_2 and, on the ray that starts at texture boundary point P_Te^0 and extends toward its nearest depth boundary point P_De^0, split the pixels P into the good-point set P_good and the bad-point set P_bad; denote the number of bad points on this ray by N_bad^0. To guarantee that good points dominate in window W, i.e. that the good-point count N_good exceeds the bad-point count N_bad, N_good must be at least N_bad^0 + 1, so the initial side length of the square window W is set to TH_1 = N_good + N_bad = 2·N_bad^0 + 1. With the method of A3, for the TH_1 − 1 texture boundary points P_Te^k (k = 1, ..., TH_1 − 1) following P_Te^0, count the good and bad points among the first TH_1 − 1 pixels of each corresponding ray, and accumulate the total numbers N_good and N_bad of good and bad points in the final window W. If good points still dominate, every bad-point depth D_bad in W is replaced by the weighted average of the good-point depths in W, eliminating the misalignment region E_r in the window:

D_bad ← Σ_{i=1..N_good} w_i · D_good,i, with Σ_i w_i = 1

The weights w_i (i = 1, ..., N_good) may take into account (but are not limited to) the texture correlation and the spatial distance between the target bad point and the other good points in W. Otherwise, if the good points do not outnumber the bad points, the side length TH_1 of the square window W is increased by 1 and the totals N_good and N_bad are recounted in the new window, until good points dominate in W. The algorithm proceeds as follows:
S1: initialise TH_1 = 2·N_bad^0 + 1;
S2: for the TH_1 − 1 points P_Te^k (k = 1, ..., TH_1 − 1) on T_e following P_Te^0, count N_good^k and N_bad^k among the first TH_1 − 1 pixels of each corresponding ray, and accumulate the window totals N_good = Σ_k N_good^k and N_bad = Σ_k N_bad^k;
S3: if N_good − N_bad > 0, go to S4; otherwise go to S5;
S4: replace every bad-point depth in W by the weighted average of the good-point depths, then stop;
S5: TH_1 = TH_1 + 1, go to S2.
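Steps S1-S5, together with the weighted-mean correction of S4, can be sketched as follows. The per-ray count representation, the uniform weights w_i, and the function name are assumptions made for the sketch; the patent allows texture- and position-dependent weights, and `ray_counts` is assumed long enough for the loop to terminate.

```python
def correct_window(nbad0, ray_counts, good_depths):
    """Adaptive window sizing (S1-S5) plus bad-point correction (S4).
    `nbad0`: bad-point count N_bad^0 on the first ray;
    `ray_counts`: list of (n_good_k, n_bad_k) per texture boundary point,
    in ray order; `good_depths`: depths of the good points in the window.
    Returns (th1, corrected_depth)."""
    th1 = 2 * nbad0 + 1                       # S1: initial side length TH_1
    while True:
        # S2: accumulate good/bad totals over the first th1 rays.
        n_good = sum(g for g, _ in ray_counts[:th1])
        n_bad = sum(b for _, b in ray_counts[:th1])
        if n_good - n_bad > 0:                # S3: do good points dominate?
            break                             # -> S4
        th1 += 1                              # S5: grow the window, retry
    # S4: replace bad depths by the weighted mean of the good depths
    # (uniform weights w_i = 1/N_good in this sketch).
    corrected = sum(good_depths) / len(good_depths)
    return th1, corrected

th1, corrected = correct_window(
    nbad0=2,
    ray_counts=[(1, 2), (2, 1), (1, 2), (2, 1), (1, 2), (3, 0)],
    good_depths=[100, 102, 98, 100],
)
```

In this example the initial window (TH_1 = 5) contains 7 good and 8 bad points, so the window grows once before good points dominate and the correction is applied.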
Determining the side length TH_1 of the square window W by the above algorithm has two advantages:
1) it guarantees that E_r lies entirely inside W, so step S4 can completely eliminate E_r;
2) before the depth enhancement, i.e. before revising D_bad, good points are always guaranteed to dominate in W, which improves the reliability of the weighted average of the good-point depths in W and hence ensures the robustness of the depth enhancement.
The foregoing are only embodiments of the invention and do not limit the scope of its claims. Any equivalent device or equivalent method derived from the description and drawings of the present invention, whether used directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.
Claims (9)
1. A texture-based depth boundary correction method, characterised by comprising the steps of:
A1, inputting a texture image and a corresponding depth image acquired by a depth sensor;
A2, extracting the texture image boundary and the depth image boundary respectively, and obtaining a depth boundary misalignment map with the texture boundary as the reference;
A3, computing the pixel value difference between good points P_good and bad points P_bad and determining the boundary misalignment region, wherein a good point is an accurate depth point and a bad point is an erroneous depth point;
A4, according to the distribution characteristics of the misalignment region, adaptively determining the side length of a square window W, performing depth enhancement, correcting the depth of the bad points in the window, and eliminating the boundary misalignment region E_r;
wherein step A4 guarantees that good points dominate in window W, i.e. that the number of good points N_good exceeds the number of bad points N_bad.
2. The texture-based depth boundary correction method of claim 1, characterised in that in step A2 one of the following operators is adopted to extract the texture image boundary T_e and the depth image boundary D_e: the Sobel operator, the Roberts operator or the Prewitt operator; and the extracted texture boundary T_e is mapped onto the depth image to obtain the depth boundary misalignment map with the texture boundary as the reference.
3. The texture-based depth boundary correction method of claim 1, characterised in that in step A3 the distribution of good and bad points on the ray is analysed point by point along the texture boundary, i.e. each pixel P on the ray is classified as a good point P_good or a bad point P_bad according to its depth value D_P, thereby determining the boundary misalignment region E_r; the ray is the line that extends from a texture boundary point, as end point, toward its nearest depth boundary point.
4. The texture-based depth boundary correction method of claim 1, characterised in that in step A4 the side length TH_1 of the depth-enhancement square window W is determined adaptively from the distribution characteristics of the good and bad points, the bad-point depths D_bad in the window are revised, and the boundary misalignment region E_r is eliminated.
5. The texture-based depth boundary correction method of claim 4, characterised in that step A4 further comprises the steps of: obtaining the difference TH_2 between the good depth value D_good and the bad depth value D_bad; splitting the pixels P on the ray that starts at texture boundary point P_Te^0 and extends toward the nearest depth boundary point P_De^0 into the good-point set P_good and the bad-point set P_bad, the number of bad points on this ray being denoted N_bad^0; setting the initial side length of the square window W to TH_1 = 2·N_bad^0 + 1; and, for the TH_1 − 1 texture boundary points P_Te^k (k = 1, ..., TH_1 − 1) following P_Te^0, counting the good and bad points among the first TH_1 − 1 pixels of each corresponding ray and accumulating the total good-point count N_good and bad-point count N_bad in the final window W.
6. The texture-based depth boundary correction method of claim 5, characterised in that step A4 further comprises the steps of: if the good points still dominate, replacing the bad-point depths D_bad by the weighted average of all good-point depths in window W, thereby eliminating the misalignment region E_r in the window; otherwise increasing the side length TH_1 of the square window W by 1 and recounting the total good-point count N_good and bad-point count N_bad in the new window, until good points dominate in window W.
7. The texture-based depth boundary correction method of claim 6, characterised in that step A4 comprises the following algorithm steps:
S1: initialise TH_1 = 2·N_bad^0 + 1;
S2: for the TH_1 − 1 points P_Te^k (k = 1, ..., TH_1 − 1) on T_e following P_Te^0, count N_good^k and N_bad^k among the first TH_1 − 1 pixels of each corresponding ray, and accumulate the window totals N_good and N_bad;
S3: if N_good − N_bad > 0, go to S4; otherwise go to S5;
S4: replace the bad-point depths by the weighted average of the good-point depths, then stop;
S5: TH_1 = TH_1 + 1, go to S2.
8. The texture-based depth boundary correction method of claim 1, characterised in that in step A3 the distribution of good and bad points on the ray is analysed point by point along the texture boundary, the ray being the line that extends from a texture boundary point, as end point, toward its nearest depth boundary point; i.e. each pixel P on the ray is classified as a good point P_good or a bad point P_bad according to the absolute value of the difference between its depth value D_P and the end-point depth value, thereby determining the boundary misalignment region E_r.
9. The texture-based depth boundary correction method of claim 8, characterised in that the difference TH_2 between the good depth value D_good and the bad depth value D_bad is determined as follows: the depth gradient value Grad in the normal direction P_De⊥ is obtained point by point along the depth boundary as the basis for delimiting the misalignment region E_r, i.e. as the criterion for splitting the pixels P on the ray, which starts at the texture boundary point P_Te nearest to the depth boundary point P_De and extends toward P_De, into the good-point set P_good and the bad-point set P_bad; the criterion compares |D_P − D_PTe| with TH_2, and P_bad ⊆ E_r.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610007975.6A CN105678765B (en) | 2016-01-07 | 2016-01-07 | A kind of depth image boundary modification method based on texture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105678765A true CN105678765A (en) | 2016-06-15 |
CN105678765B CN105678765B (en) | 2019-06-28 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116843689A (en) * | 2023-09-01 | 2023-10-03 | 山东众成菌业股份有限公司 | Method for detecting surface damage of fungus cover |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8059911B2 (en) * | 2008-09-30 | 2011-11-15 | Himax Technologies Limited | Depth-based image enhancement |
CN102831582A (en) * | 2012-07-27 | 2012-12-19 | 湖南大学 | Method for enhancing depth image of Microsoft somatosensory device |
CN103413276A (en) * | 2013-08-07 | 2013-11-27 | 清华大学深圳研究生院 | Depth enhancing method based on texture distribution characteristics |
CN103455984A (en) * | 2013-09-02 | 2013-12-18 | 清华大学深圳研究生院 | Method and device for acquiring Kinect depth image |
CN103996174A (en) * | 2014-05-12 | 2014-08-20 | 上海大学 | Method for performing hole repair on Kinect depth images |
Non-Patent Citations (1)
| Title |
|---|
| Li Yingbin et al., "Research on a hole-repair algorithm for Kinect depth images based on improved bilateral filtering" (基于改进双边滤波的Kinect深度图像空洞修复算法研究), Industrial Control Computer (《工业控制计算机》) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 / PB01 | Publication | |
| | C10 / SE01 | Entry into substantive examination | Entry into force of request for substantive examination |
| | GR01 | Patent grant | |