
CN107016657B - Method for repairing a face picture covered by a reticulate pattern - Google Patents

Method for repairing a face picture covered by a reticulate pattern

Info

Publication number
CN107016657B
CN107016657B CN201710226996.1A CN201710226996A
Authority
CN
China
Prior art keywords
picture
iris
channel
pixel
reticulate pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710226996.1A
Other languages
Chinese (zh)
Other versions
CN107016657A (en)
Inventor
张宁
伍萍辉
赵亚东
石学超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201710226996.1A priority Critical patent/CN107016657B/en
Publication of CN107016657A publication Critical patent/CN107016657A/en
Application granted granted Critical
Publication of CN107016657B publication Critical patent/CN107016657B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract


The invention relates to a method for repairing a face picture covered by a reticulate pattern. The method restores the face by first extracting the reticulate-pattern edges, then removing the reticulate pattern, and finally filling in the removed regions and smoothing the whole image. The specific steps are as follows. Step S1, picture preprocessing: read the picture to be processed and obtain its height rol and width row, so that its size in pixels is rol × row; convert it to double format; then resize the converted picture so that its height is 220, 88 or 118, with corresponding width 178, 72 or 96. Step S2, classify the picture, establish a coordinate system, and locate the initial region. Step S3, extract the reticulate-pattern edges using edge detection and remove the reticulate pattern. Step S4, extract the masks, fill in the reticulate pattern and smooth the image.

Description

Method for repairing a face picture covered by a reticulate pattern
Technical field
The present invention relates to the field of image data processing, and in particular to a method for repairing a face picture covered by a reticulate pattern.
Background technique
Face recognition technology has matured, and its applications are increasingly widespread. Current applications are concentrated in face attendance machines, face-based access systems, and video-based face recognition and surveillance, all of which detect and identify moving targets; recognizing a face in a photograph, by contrast, detects and identifies a static target.
When an identification photo is converted to digital storage, noise is inevitably introduced, such as a fine reticulate pattern overlaid on the face, which severely hinders its use. In the past, removing the reticulate pattern from an identification photo generally relied on PhotoShop, with the reticulate-pattern regions erased manually; such manual work is very inefficient and labor costs are high.
Current image restoration techniques fall into two broad classes: methods based on diffusion equations, and methods based on sample blocks.
Diffusion-equation methods are based on parametric models or partial differential equations (Xu Liming, Wu Yajuan, Liu Hangjiang. Research on image restoration techniques based on variational PDEs [J]. Journal of China West Normal University (Natural Science Edition). 2016.37(3): 343-348). They transition smoothly inward from the edge of the damaged region, propagating or distributing information preferentially into the local structure, and are mainly used to repair small damaged regions. Such methods include partial differential equation algorithms, total variation models, and curvature-driven diffusion equation models.
Sample-block methods (Chang Chen, He Jian. An improved Criminisi image inpainting method [J]. Journal of Fuzhou University (Natural Science Edition). 2017.45(01): 74-79) search the source region for the block that best matches the target block and copy it directly into the damaged region; because they preserve the consistency of texture features, they are suited to repairing images with large damaged regions.
However, when repairing a face image covered by a reticulate pattern, both kinds of method require the reticulate-pattern regions to be found in advance and erased manually before repair; moreover they cannot fully restore the erased regions, and sensitive facial regions such as the eyes and their surroundings in particular cannot be repaired to an effect close to the original image.
Summary of the invention
In view of the above deficiencies of the prior art in processing face pictures covered by a reticulate pattern, the technical problem to be solved by the present invention is to propose a method for repairing a face picture covered by a reticulate pattern. Starting from static targets, the method improves and integrates the diffusion-equation-based restoration method, the sample-block-based restoration method and an edge detection algorithm: it first performs edge detection and locates the character contour to complete the removal of the reticulate pattern; it then makes masks to prevent later image processing from causing distortion; it then fills in the removed reticulate-pattern regions using an "X-type" structure and applies smoothing, so as to achieve the optimal output image. The method automatically removes the reticulate-pattern noise that appears on the face of an identification photo and quickly repairs the image; the removal and repair effect is very good, greatly saving the time of manually removing and repairing the reticulate-pattern noise and improving work efficiency.
The technical solution adopted by the present invention to solve the technical problem is:
A method for repairing a face picture covered by a reticulate pattern, characterized in that the method first extracts the reticulate-pattern edges, then removes the reticulate pattern, and finally fills in the removed regions and smooths the whole image, so as to restore the face; the specific steps are as follows:
Step S1, picture preprocessing:
Read the picture to be processed row by row and obtain its height rol and width row; in units of pixels, the size of the picture to be processed is rol × row. Convert the picture to double format. Then resize the converted picture so that its height is 220, 88 or 118, with corresponding width 178, 72 or 96;
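The preprocessing above can be sketched as follows. The three target sizes come from the text; the nearest-size selection rule, the helper names `to_double` and `nearest_target_size`, and the 1/255 "double" scaling (MATLAB's im2double convention) are our assumptions.

```python
# Sketch of step S1 under stated assumptions; only the three target sizes
# are taken from the text.
TARGET_SIZES = [(220, 178), (88, 72), (118, 96)]  # (height rol, width row)

def to_double(pixel):
    """Convert an 8-bit channel value to double format in [0, 1] (assumed scaling)."""
    return pixel / 255.0

def nearest_target_size(rol, row):
    """Pick the class (height, width) closest to the input size (assumed rule)."""
    return min(TARGET_SIZES, key=lambda hw: abs(hw[0] - rol) + abs(hw[1] - row))
```

An actual implementation would then resize the double-format image to the chosen (rol, row).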
Step S2, classify the picture, establish the coordinate system, and locate the initial region:
The pictures preprocessed in step S1 fall into three classes by size: 220 × 178, 88 × 72 or 118 × 96. Determine which of the three sizes the preprocessed picture belongs to. Then locate the eye range: take the top-left vertex of the processed picture as the origin, the horizontal direction as the x-axis (values increasing from left to right) and the vertical direction as the y-axis (values increasing from top to bottom), establishing an xy coordinate system. Set the following parameters: left-eye x-coordinate proportionality coefficient a, right-eye x-coordinate proportionality coefficient c, y-coordinate proportionality coefficient d for both eyes, and location radius r. The coefficients depend on the size of the picture preprocessed in step S1: for 220 × 178, a=0.387, c=0.645, d=0.41, r=14; for 88 × 72, a=0.38, c=0.65, d=0.38, r=4; for 118 × 96, a=0.365, c=0.645, d=0.375, r=8.5. With these parameters the eyes and the surrounding region of radius r are located, i.e. the initial eye region is obtained;
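The localization of step S2 might look like the following sketch. The coefficients and radii are taken from the text; the mapping of a and c to the picture width and d to the picture height, and all function names, are our assumptions.

```python
# Sketch of step S2; coefficient values are from the text, the geometry
# (x ratios scale the width, y ratio scales the height) is assumed.
PARAMS = {
    (220, 178): dict(a=0.387, c=0.645, d=0.41,  r=14),
    (88, 72):   dict(a=0.38,  c=0.65,  d=0.38,  r=4),
    (118, 96):  dict(a=0.365, c=0.645, d=0.375, r=8.5),
}

def eye_regions(rol, row):
    """Return ((left_x, y), (right_x, y), radius) in the xy coordinate
    system with origin at the top-left corner of the picture."""
    p = PARAMS[(rol, row)]
    y = p["d"] * rol                      # both eyes share the y ratio d
    return (p["a"] * row, y), (p["c"] * row, y), p["r"]
```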
Step S3, extract the reticulate-pattern edges using edge detection and remove the reticulate pattern:
Take the picture preprocessed in step S1 and traverse it, obtaining the R, G and B channel pixel values of all pixels. Using these values, perform edge detection to compute the gradient difference of each pixel and obtain the character contour region. At the same time, extract the reticulate-pattern edges from the gradient differences of the pixels, obtaining the reticulate-pattern edge region. After the reticulate-pattern edges are obtained, assign the white pixel value (255, 255, 255) to the detected reticulate-pattern edge region, completing the removal of the reticulate pattern and yielding the blank region left after removal;
Step S4, extract the masks, fill in the reticulate pattern and smooth the image:
Generate the iris mask and the character-contour edge mask, perform mask fabrication, smooth the image, and output the result;
The concrete steps are:
S41: choose a blank pixel (x0, y0) in the blank region obtained in step S3, then choose 20 neighboring pixels around (x0, y0) according to the "X-type" structure. Compare and sort these 20 neighbors by R-channel pixel value, and choose the four neighbors with the largest R values; their R-channel values, in descending order, are denoted Rout1, Rout2, Rout3 and Rout4, the corresponding G-channel values Gout1, Gout2, Gout3 and Gout4, and the B-channel values Bout1, Bout2, Bout3 and Bout4;
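A possible reading of the "X-type" neighborhood selection of step S41 is sketched below. The exact offsets (four pixels along each diagonal, plus one pixel two steps away on each axis) are our interpretation of the textual description of Fig. 1, not a confirmed layout.

```python
# Sketch of the "X-type" neighbourhood of step S41 (assumed geometry):
# 16 diagonal neighbours plus 4 axis neighbours, 20 in total.
def x_type_offsets():
    offsets = [(dx * k, dy * k) for dx in (-1, 1) for dy in (-1, 1)
               for k in range(1, 5)]                 # 4 pixels per diagonal
    offsets += [(0, -2), (0, 2), (-2, 0), (2, 0)]    # skip one pixel on each axis
    return offsets

def top4_by_r(img, x0, y0):
    """Return the four neighbours with the largest R-channel value,
    as (R, G, B) tuples sorted descending by R."""
    neigh = [img[y0 + dy][x0 + dx] for dx, dy in x_type_offsets()]
    return sorted(neigh, key=lambda p: p[0], reverse=True)[:4]
```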
S42: for each of the four neighbors obtained in step S41, compute the gradient difference Ti between its R, G and B channel values and the channel values (R0, G0, B0) of the blank pixel (x0, y0) according to formula (1), where i = 1, 2, 3, 4;
S43: set the character-contour threshold to 155. Compare and sort the gradient differences Ti (T1, T2, T3, T4) obtained in step S42, and compare the largest with the set threshold. If the largest Ti exceeds 155, the character-contour edge mask is obtained, and mask fabrication is applied to the character-contour edge through it; otherwise do nothing and proceed to step S44;
S44: locate the initial eye region using the parameters set in step S2, and choose a 3 × 3 sub-region Ir of that region; the R-channel pixel values of the sub-region are denoted Ra01, Ra02, ..., Ra09, laid out as in the following table,
Ra01 Ra02 Ra03
Ra04 Ra05 Ra06
Ra07 Ra08 Ra09
Perform a convolution over this initial region; the convolution factor α (fully determined by the relation Gp = Ra02 + Ra04 + Ra06 + Ra08) is:
0 1 0
1 0 1
0 1 0
Using Gp = α * Ir, compute Gp; that is, Gp is the sum of the R-channel pixel values at the four positions Ra02, Ra04, Ra06 and Ra08;
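Since Gp = Ra02 + Ra04 + Ra06 + Ra08 fully determines the convolution factor, step S44 can be sketched as an element-wise product with a 3 × 3 cross kernel; the function names are ours.

```python
# Sketch of the step S44 convolution: a cross-shaped kernel with zero
# centre and corners, derived from Gp = Ra02 + Ra04 + Ra06 + Ra08.
ALPHA = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]

def gp(ir):
    """Element-wise product of the kernel with the 3x3 sub-region Ir, summed."""
    return sum(ALPHA[i][j] * ir[i][j] for i in range(3) for j in range(3))

def in_iris_mask_region(ir, iris_threshold=530):
    """Gp below the iris threshold locates the iris mask region."""
    return gp(ir) < iris_threshold
```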
Set the iris threshold to 530 and compare Gp with it. If Gp < 530, the iris mask region is located; then perform iris-periphery mask fabrication on the iris mask region. If Gp > 530, do nothing and proceed to step S45;
S45: perform the iris-periphery mask fabrication on the iris mask region located in step S44. Set the upper iris threshold to 1.3 and the lower iris threshold to 0.75. Choose an iris pixel (x, y) in the iris mask region, then choose ten reference pixels above and below it; the R-channel values of these ten reference pixels are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112, where the positions of R101-R105 and of R108-R112 are vertically symmetric about the iris pixel (x, y). Using the ratio of formula (6), compute Gpr,
Gpr=(R101+R102+R103+R104+R105)/(R108+R109+R110+R111+R112) (6)
Compare Gpr with the upper and lower iris thresholds: if Gpr < 0.75 or Gpr > 1.3, perform iris-periphery mask fabrication on the iris pixel (x, y); if 0.75 ≤ Gpr ≤ 1.3, do nothing and proceed to step S46;
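A minimal sketch of the step S45 symmetry test; only the ratio of formula (6) and the thresholds 0.75 and 1.3 come from the text, and the helper names are ours.

```python
# Sketch of step S45: compare the R-channel sums of the five upper and
# five lower reference pixels (formula (6)).
def gpr(upper, lower):
    """Ratio of the upper to the lower R-channel sums."""
    return sum(upper) / sum(lower)

def needs_iris_periphery_mask(upper, lower, lo=0.75, hi=1.3):
    """An asymmetric neighbourhood (ratio outside [lo, hi]) gets masked."""
    g = gpr(upper, lower)
    return g < lo or g > hi
```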
S46: choose a 3 × 3 sub-region Ir0 in the whole picture, structured like the sub-region Ir of step S44. Perform edge detection on Ir0 by convolution with the lateral convolution factor Gx and the longitudinal convolution factor Gy. Compute Grx and Gry using formulas (7) and (8); Grx and Gry are the lateral and longitudinal edge-detection R-channel values, respectively. Then compute the edge fill gradient difference Gr according to formula (9),
Grx=Gx*Ir0 (7)
Gry=Gy*Ir0 (8)
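The matrices Gx and Gy are not reproduced above. Because the cited edge detector is Sobel-based, the sketch below assumes the standard Sobel kernels and assumes formula (9) is the usual gradient magnitude; both may differ from the patent's actual definitions.

```python
import math

# Assumed standard Sobel kernels for the lateral (Gx) and longitudinal (Gy)
# convolution factors of step S46.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3(kernel, ir0):
    """Element-wise product of a 3x3 kernel with sub-region Ir0, summed."""
    return sum(kernel[i][j] * ir0[i][j] for i in range(3) for j in range(3))

def edge_gradient(ir0):
    """Assumed formula (9): Gr = sqrt(Grx^2 + Gry^2)."""
    grx, gry = conv3(GX, ir0), conv3(GY, ir0)
    return math.hypot(grx, gry)
```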
S47: set the edge fill threshold to 60 and compare the edge fill gradient difference Gr with it. If Gr < 60, take the two largest R-channel values Rout1 and Rout2 obtained in step S41, together with the corresponding G-channel values Gout1, Gout2 and B-channel values Bout1, Bout2, and average each channel according to formulas (10)-(12) to obtain the R-channel average Rm, the G-channel average Gm and the B-channel average Bm,
Rm=(Rout1+Rout2)/2 (10)
Gm=(Gout1+Gout2)/2 (11)
Bm=(Bout1+Bout2)/2 (12)
Fill the R, G and B channel values of the detected reticulate-pattern edge with Rm, Gm and Bm respectively; if Gr ≥ 60, proceed to step S48;
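The fill rule of step S47, formulas (10)-(12), can be sketched as follows, under the assumption that the two selected neighbours are passed as (R, G, B) tuples; the names are ours.

```python
# Sketch of step S47: average the two largest-R neighbours per channel.
def fill_value(top2):
    """top2: the two (R, G, B) neighbours with the largest R values."""
    (r1, g1, b1), (r2, g2, b2) = top2
    return ((r1 + r2) / 2, (g1 + g2) / 2, (b1 + b2) / 2)  # (Rm, Gm, Bm)

def fill_if_edge(gr, top2, threshold=60):
    """Fill only where the edge fill gradient difference is below threshold."""
    return fill_value(top2) if gr < threshold else None
```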
S48: choose a smoothing pixel (x1, y1) on the non-reticulate-pattern edge of the whole picture, then choose 20 neighbors around (x1, y1) according to the "X-type" structure as in step S41, and take the four neighbors with the largest R-channel values. Compute the per-channel means of these four neighbors with formulas (13)-(15), denoted R1, G1 and B1. Except in the previously extracted iris mask and character-contour edge mask regions, replace the R, G and B channel values of the smoothing pixel (x1, y1) with R1, G1 and B1, smoothing the whole picture, and output the result;
R1=(Rout1+Rout2+Rout3+Rout4)/4 (13)
G1=(Gout1+Gout2+Gout3+Gout4)/4 (14)
B1=(Bout1+Bout2+Bout3+Bout4)/4 (15).
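The smoothing of step S48, formulas (13)-(15), sketched under the assumption that the four selected neighbours are passed as (R, G, B) tuples:

```python
# Sketch of step S48: outside the masked regions, replace each pixel with
# the per-channel mean of its four largest-R "X-type" neighbours.
def smooth_value(top4):
    """top4: four (R, G, B) neighbours; returns (R1, G1, B1)."""
    return tuple(sum(p[i] for p in top4) / 4 for i in range(3))
```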
In the above method, the "X-type" structure in step S41 is formed as follows: centered on the blank pixel, four neighboring pixels are chosen along each of the four diagonal directions from its corners, and one neighboring pixel is chosen in each of the up, down, left and right directions after skipping one pixel; the 20 neighboring pixels together constitute the "X-type" structure.
In the above method, the reference pixels of the iris pixel (x, y) in step S45 are chosen as follows: centered on the iris pixel (x, y), two reference pixels are chosen along each of the four diagonal directions from its corners, and one reference pixel is chosen above and one below after skipping one pixel, for a total of 10 reference pixels.
Compared with the prior art, the beneficial effects of the present invention are as follows:
(1) For a face picture covered by a reticulate pattern, the present invention addresses the problem of performing descreening directly on an input face picture with a reticulate pattern, effectively removing the reticulate pattern and restoring the original face image. It improves and integrates the diffusion-equation-based restoration method, the sample-block-based restoration method and an edge detection algorithm, extracting the reticulate pattern first and repairing afterwards: edge detection is performed first and the character contour is located to complete the removal of the reticulate pattern; masks are then made to prevent later image processing from causing distortion; the removed reticulate-pattern regions are then filled in using the "X-type" structure and smoothed, achieving the optimal output of the face image.
(2) The method first descreens and then repairs a picture with a reticulate pattern on the face. Edge detection accurately detects the reticulate-pattern edges, so the reticulate-pattern regions can be removed effectively. To keep edge detection from mis-operating on sensitive facial regions, including the pupils, irises, sclerae, nose contour and upper and lower lip contours, the method protects these key regions by making an iris mask and a character edge-contour mask, so the reticulate pattern can be removed simply and safely. The filling uses the "X-type" fill strategy, which fills faster than the two restoration methods of the background art and is better suited to removing and repairing a facial reticulate pattern. The experimental results show that the "X-type" fill achieves a good repair effect (see the description of Fig. 3(a), Fig. 3(b), Fig. 4(a) and Fig. 4(b) in the embodiments below).
Detailed description of the invention
Fig. 1 is a schematic diagram of the "X-type" structure in step S41 of the present invention;
Fig. 2 is a schematic diagram of the method of choosing the reference pixels of the iris pixel (x, y) in step S45 of the present invention;
Fig. 3(a) and Fig. 3(b) are examples of two input pictures;
Fig. 4(a) and Fig. 4(b) are the repaired results corresponding to Fig. 3(a) and Fig. 3(b), respectively;
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is further described below in conjunction with embodiments and the accompanying drawings. The described embodiments are intended only to facilitate understanding of the present invention and do not limit the protection scope of the claims.
The method of the present invention for repairing a face picture covered by a reticulate pattern first extracts the reticulate-pattern edges, then removes the reticulate pattern, and finally fills in the removed regions and smooths the whole image, so as to restore the face. The specific steps are as follows:
Step S1, picture preprocessing:
Read the picture to be processed row by row with MATLAB, obtaining its height rol and width row; in units of pixels, the size of the picture to be processed is rol × row. Convert the picture to double format. Then resize the converted picture so that its height is 220, 88 or 118, with corresponding width 178, 72 or 96;
Step S2, classify the picture, establish the coordinate system, and locate the initial region:
The pictures preprocessed in step S1 fall into three classes by size: 220 × 178, 88 × 72 or 118 × 96. Determine which of the three sizes the preprocessed picture belongs to. Then locate the eye range: take the top-left vertex of the processed picture as the origin, the horizontal direction as the x-axis (values increasing from left to right) and the vertical direction as the y-axis (values increasing from top to bottom), establishing an xy coordinate system. Set the following parameters: left-eye x-coordinate proportionality coefficient a, right-eye x-coordinate proportionality coefficient c, y-coordinate proportionality coefficient d for both eyes, and location radius r. The coefficients depend on the picture size after step S1: for 220 × 178, a=0.387, c=0.645, d=0.41, r=14; for 88 × 72, a=0.38, c=0.65, d=0.38, r=4; for 118 × 96, a=0.365, c=0.645, d=0.375, r=8.5. With these parameters the eyes and the surrounding region of radius r are located, i.e. the initial eye region is obtained. Although input picture sizes differ, the region occupied by the eyes is a similar proportion of the whole picture; setting specific proportionality relations, found by repeated experiments on different pictures, makes the later mask extraction more accurate and achieves accurate removal of the facial reticulate pattern;
Step S3, extract the reticulate-pattern edges using edge detection and remove the reticulate pattern:
Take the picture preprocessed in step S1 and traverse it, obtaining the R, G and B channel pixel values of all pixels. Using these values, perform edge detection (Shen Dehai, Hou Jian, E Xu. An edge detection algorithm based on an improved Sobel operator [J]. Computer Technology and Development. 2013.23(11): 22-25) to compute the gradient difference of each pixel, and obtain the character contour region. At the same time, extract the reticulate-pattern edges from the gradient differences of the pixels, obtaining the reticulate-pattern edge region. After the reticulate-pattern edges are obtained, assign the white pixel value (255, 255, 255) to the detected reticulate-pattern edge region, completing the removal of the reticulate pattern and yielding the blank region left after removal;
Step S4, extract the masks, fill in the reticulate pattern and smooth the image:
Generate the iris mask and the character-contour edge mask, perform mask fabrication, then fill in the reticulate pattern, smooth the image, and output the result;
The concrete steps are:
S41: choose a blank pixel (x0, y0) in the blank region obtained in step S3, then choose 20 neighboring pixels around (x0, y0) according to the "X-type" structure. Compare and sort these 20 neighbors by R-channel pixel value, and choose the four neighbors with the largest R values; their R-channel values, in descending order, are denoted Rout1, Rout2, Rout3 and Rout4, the corresponding G-channel values Gout1, Gout2, Gout3 and Gout4, and the B-channel values Bout1, Bout2, Bout3 and Bout4;
S42: for each of the four neighbors obtained in step S41, compute the gradient difference Ti between its R, G and B channel values and the channel values (R0, G0, B0) of the blank pixel (x0, y0) according to formula (1), where i = 1, 2, 3, 4;
S43: set the character-contour threshold to 155. Compare and sort the gradient differences Ti (T1, T2, T3, T4) obtained in step S42, and compare the largest with the set threshold. If the largest Ti exceeds 155, the character-contour edge mask is obtained, and mask fabrication is applied to the character-contour edge through it; otherwise do nothing and proceed to step S44;
S44: locate the initial eye region using the parameters set in step S2; to extract the iris mask region more accurately in this step, choose a 3 × 3 sub-region Ir of the initial region, whose R-channel pixel values are denoted Ra01, Ra02, ..., Ra09, laid out as in the following table,
Ra01 Ra02 Ra03
Ra04 Ra05 Ra06
Ra07 Ra08 Ra09
Perform a convolution over this initial region; the convolution factor α (fully determined by the relation Gp = Ra02 + Ra04 + Ra06 + Ra08) is:
0 1 0
1 0 1
0 1 0
Using Gp = α * Ir, compute Gp; that is, Gp is the sum of the R-channel pixel values at the four positions Ra02, Ra04, Ra06 and Ra08;
Set the iris threshold to 530 and compare Gp with it. If Gp < 530, the iris mask region is located; then perform iris-periphery mask fabrication, that is, a masking operation, on the iris mask region, so that no subsequent fill operation of any kind affects the masked iris region until just before the final smoothing of the whole picture. If Gp > 530, do nothing and proceed to step S45;
S45: perform the iris-periphery mask fabrication on the iris mask region located in step S44. Set the upper iris threshold to 1.3 and the lower iris threshold to 0.75; these thresholds are used to obtain the mask of the iris periphery. Choose an iris pixel (x, y) in the iris mask region, then choose ten reference pixels above and below it; the R-channel values of these ten reference pixels are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112, where the positions of R101-R105 and of R108-R112 are vertically symmetric about the iris pixel (x, y). Using the ratio of formula (6), compute Gpr,
Gpr=(R101+R102+R103+R104+R105)/(R108+R109+R110+R111+R112) (6)
Compare Gpr with the upper and lower iris thresholds: if Gpr < 0.75 or Gpr > 1.3, perform iris-periphery mask fabrication on the iris pixel (x, y), avoiding later distortion of iris-periphery pixel values caused by filling; if 0.75 ≤ Gpr ≤ 1.3, do nothing and proceed to step S46;
S46: choose a 3 × 3 sub-region Ir0 in the whole picture, structured like the sub-region Ir of step S44. Perform edge detection on Ir0 by convolution with the lateral convolution factor Gx and the longitudinal convolution factor Gy. Compute Grx and Gry using formulas (7) and (8); Grx and Gry are the lateral and longitudinal edge-detection R-channel values, respectively. Then compute the edge fill gradient difference Gr according to formula (9),
Grx=Gx*Ir0 (7)
Gry=Gy*Ir0 (8)
S47: set the edge fill threshold to 60 and compare the edge fill gradient difference Gr with it. If Gr < 60, take the two largest R-channel values Rout1 and Rout2 obtained in step S41, together with the corresponding G-channel values Gout1, Gout2 and B-channel values Bout1, Bout2, and average each channel according to formulas (10)-(12) to obtain the R-channel average Rm, the G-channel average Gm and the B-channel average Bm,
Rm=(Rout1+Rout2)/2 (10)
Gm=(Gout1+Gout2)/2 (11)
Bm=(Bout1+Bout2)/2 (12)
Fill the R, G and B channel values of the detected reticulate-pattern edge with Rm, Gm and Bm respectively; if Gr ≥ 60, proceed to step S48;
S48: choose a smoothing pixel (x1, y1) on the non-reticulate-pattern edge of the whole picture, then choose 20 neighbors around (x1, y1) according to the "X-type" structure as in step S41, and take the four neighbors with the largest R-channel values. Compute the per-channel means of these four neighbors with formulas (13)-(15), denoted R1, G1 and B1. Except in the previously extracted iris mask and character-contour edge mask regions, replace the R, G and B channel values of the smoothing pixel (x1, y1) with R1, G1 and B1, smoothing the whole picture, and output the result;
R1=(Rout1+Rout2+Rout3+Rout4)/4 (13)
G1=(Gout1+Gout2+Gout3+Gout4)/4 (14)
B1=(Bout1+Bout2+Bout3+Bout4)/4 (15).
The method of the present invention first inputs the face picture to be processed that is covered with a reticulate pattern and judges the picture size from its height. It then extracts, i.e. removes, the reticulate pattern by edge detection, and performs mask fabrication so that the sensitive facial regions (pupils, irises, sclerae, nose contour and upper and lower lip contour regions) are unaffected no matter how the face is processed afterwards. Finally the removed reticulate-pattern regions are filled in with the "X-type" structure, and the filled image is smoothed to obtain the final output photo.
The masks described in the present invention work by changing the extracted pixel values so that they fall outside the set threshold range, guaranteeing that the corresponding processing algorithms leave them untouched, i.e. they act as shields. The picture to be processed is a face certificate photo with reticulate pattern; the method works only for the three aspect ratios 220 × 178, 88 × 72 and 118 × 96, and certificate photos of other sizes are first converted to one of these three sizes before processing. The picture to be processed in this application is a colour image.
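Since only the three sizes above are handled, any other certificate photo must first be mapped to one of them. A sketch of the size selection, under the assumption (not stated in the patent) that the supported size with the closest height-to-width ratio is chosen before resizing:

```python
# The three supported certificate-photo sizes as (height, width).
SUPPORTED_SIZES = [(220, 178), (88, 72), (118, 96)]

def target_size(height, width):
    """Pick the supported (height, width) whose aspect ratio is closest
    to the input photo's; the photo is then resized to that size."""
    ratio = height / width
    return min(SUPPORTED_SIZES, key=lambda s: abs(s[0] / s[1] - ratio))
```

The closest-ratio rule is one plausible reading; the patent only says other sizes "will be converted into these three sizes".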
The embodiment shown in Fig. 1 illustrates the "X-type" structure described in step S41: centred on the blank pixel (x0, y0), four neighbourhood points are chosen along each of the four diagonal directions from the corners of the blank pixel, and one further neighbourhood point is chosen above, below and on either side of the blank pixel, one pixel apart, giving 20 neighbourhood points in total that constitute the "X-type" structure; the black squares in Fig. 1 are the 20 chosen neighbourhood points. The neighbourhood points of the smoothing pixel (x1, y1) in step S48 are selected by the same rule as those of the blank pixel.
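The "X-type" neighbourhood can be generated as a fixed set of coordinate offsets. A sketch assuming the points sit every other pixel from the centre, as Fig. 1 suggests (the exact spacing is an assumption; the patent only states that the points are non-contiguous, one taken every other pixel):

```python
def x_type_offsets(step=2, per_diagonal=4):
    """Offsets of the 20 "X-type" neighbourhood points around a centre
    pixel: four points along each of the four diagonals plus one point
    above, below, left and right, all spaced every other pixel."""
    offsets = [(dx * k * step, dy * k * step)
               for dx in (-1, 1) for dy in (-1, 1)          # four diagonals
               for k in range(1, per_diagonal + 1)]
    offsets += [(0, -step), (0, step), (-step, 0), (step, 0)]  # axis points
    return offsets
```

Adding these offsets to (x0, y0) (or to (x1, y1) in step S48) yields the 20 neighbourhood points whose R, G and B values are then sorted.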
The embodiment shown in Fig. 2 illustrates the selection of the reference pixels of the iris pixel (x, y) in step S45: in Fig. 2, each white square is one pixel; centred on the iris pixel (x, y), two reference pixels are chosen along each of the four diagonal directions from its corners (R101 and R104, R103 and R105, R110 and R108, R112 and R109), and one further reference pixel is chosen above and below the iris pixel, one pixel apart (R102 and R111), for 10 reference pixels in total; the 10 reference pixels form a structure that is vertically symmetric about the iris pixel (x, y).
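The vertically symmetric reference set and the Gpr ratio of formula (6) can be sketched as follows, assuming every-other-pixel spacing as in Fig. 2 (the spacing and the helper names are illustrative, not from the patent):

```python
def iris_reference_offsets(step=2):
    """Offsets of the 10 reference pixels around an iris pixel (x, y):
    two per diagonal direction plus one directly above and one directly
    below, split into the upper set (R101-R105) and lower set (R108-R112)."""
    upper = [(dx * k * step, -k * step) for dx in (-1, 1) for k in (1, 2)]
    lower = [(dx * k * step, k * step) for dx in (-1, 1) for k in (1, 2)]
    upper.append((0, -step))   # R102, directly above
    lower.append((0, step))    # R111, directly below
    return upper, lower

def gpr(r_channel, x, y, step=2):
    """Formula (6): ratio of the summed R-channel values of the upper
    reference set to those of the lower set; r_channel[y][x] holds the
    R value at (x, y)."""
    upper, lower = iris_reference_offsets(step)
    top = sum(r_channel[y + dy][x + dx] for dx, dy in upper)
    bottom = sum(r_channel[y + dy][x + dx] for dx, dy in lower)
    return top / bottom
```

An iris-periphery mask is then made at (x, y) whenever the ratio falls below the lower iris threshold 0.75 or rises above the upper iris threshold 1.3.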
Experimental results show that the character-contour edge mask of the present invention benefits from the "X-type" structure with 20 peripheral neighbourhood points. The structure is symmetric both vertically and horizontally, and its advantage is that the 20 neighbourhood points are non-contiguous: one is taken every other pixel, extending along the diagonals. Sorting these 20 points, choosing the four with the largest R-channel pixel values and taking their corresponding G and B channel values yields, via formula (1), the gradient differences Ti, i = 1, 2, 3, 4; gradient differences obtained from neighbourhood points evenly distributed around (x0, y0) extract the character contour more reasonably, filter out non-contour pixels better, and play a vital role in making the character-contour mask. The iris mask making uses the vertically symmetric structure of formula (6) because the pixel values of the iris region are generally lower than those of the surrounding region: Gpr is the ratio of the sum of the R-channel values of the 5 reference pixels dispersed above to the sum of the R-channel values of the 5 dispersed below, and requiring the upper and lower structures to be symmetric locates the iris region more accurately before the iris mask is made. Filling the reticulate-pattern edge also uses the "X-type" structure, which quickly selects from the 20 neighbourhood points the two largest R-channel pixel values Rout1 and Rout2 together with the corresponding G-channel values Gout1, Gout2 and B-channel values Bout1, Bout2; filling with formulas (10)-(12) gives the best result. Finally, a smoothing pixel (x1, y1) is chosen on the non-reticulate-edge part of the whole picture, 20 neighbourhood points are chosen around it according to the "X-type" structure, the per-channel means of the top four points are computed with formulas (13)-(15) and denoted R1, G1 and B1 respectively, and, except for the regions previously extracted by the iris mask and the character-contour edge mask, R1, G1 and B1 replace the smoothing pixel (x1, y1), smoothing the whole picture. This smoothing causes little pixel distortion on the face, since the "X-type" structure selects the top four of the 20 neighbourhood points around (x1, y1) for smoothing, achieving the best repair effect.
In image repair, the hardest part is finding the optimal matching block for the region to be repaired. The present invention chooses 20 neighbourhood points according to the "X-type" structure and computes the mean of the two largest R-channel pixel values, quickly obtaining the optimal matching pixels for filling the reticulate-pattern edge; computing the mean of the four largest R-channel pixel values gives the optimal smoothing pixel, covering the whole picture except the regions previously extracted by the iris mask and the character-contour edge mask. Compared with the relatively simple diffusion-equation-based repair methods, the exemplar-based repair methods and edge-detection algorithms, the method of the present invention creatively combines the three to obtain the optimal matching block: it can process edge information while preserving the image's edges, achieves a smooth repair effect, and repairs quickly.
Embodiment
To describe the specific embodiments of the present invention in detail and verify its effectiveness, the proposed method was applied to multiple face pictures covered by reticulate pattern. The face reticulate pattern of such pictures is relatively sparse and light in colour.
In the present embodiment, the picture path is entered directly as input, and then inpainting is entered at the command window; the original image and the effect picture after final descreening can then be seen.
The pictures to be processed in the present embodiment are Fig. 3(a) and Fig. 3(b), of size 220 × 178; set a = 0.387, c = 0.645, d = 0.41, r = 14. Repairing according to the method of the present invention gives the repair effects shown in Fig. 4(a) and Fig. 4(b).
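The proportionality coefficients set here can be turned into eye-circle centres directly. A sketch under the assumption (the patent names the coefficients but not the exact products) that a and c scale the picture width and d scales the picture height:

```python
# Coefficients (a, c, d, r) per (height, width) picture size, from step S2.
COEFFS = {
    (220, 178): (0.387, 0.645, 0.41, 14),
    (88, 72): (0.38, 0.65, 0.38, 4),
    (118, 96): (0.365, 0.645, 0.375, 8.5),
}

def eye_circles(height, width):
    """Return ((left_x, y), (right_x, y), r): centres of the two eye
    circles of radius r that bound the prime areas of the eyes."""
    a, c, d, r = COEFFS[(height, width)]
    y = d * height
    return (a * width, y), (c * width, y), r
```

The returned circles are the "prime areas" within which step S44 later places the 3 × 3 sub-region Ir.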
Fig. 4(a) and Fig. 4(b) show the effect pictures after the method of the present invention removes the face reticulate pattern and repairs the image. To fill the reticulate pattern, points are taken around the point to be filled with the "X-type" structure and then screened: the point with the largest R-channel value is found and its G and B channel values are also taken; likewise the point with the second-largest R-channel value; the mean of the R-channel values of these two points is computed, the corresponding G and B channel values are averaged in the same way, and the resulting three-channel averages are filled into the pixel to be filled. This is faster than the exemplar-based algorithm, which traverses the full image searching for a similar texture structure, and fills better; it is also more accurate than diffusion-based repair, because points are taken directly around the point to be filled and averaged before filling, so the filled trace is essentially invisible. Compared with both of the above algorithms, the method of the present invention has a better repair effect on face pictures covered by reticulate pattern: the repaired picture has very little distortion compared with the original, the trace of the repair is all but invisible, and the accuracy of face recognition can be effectively improved.
The particular embodiments described above further detail the purpose, technical scheme and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to restrict it; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
The mask-making, edge-detection and similar methods described in the present invention are prior art.
Anything not addressed by the present invention is applicable to the prior art.

Claims (3)

1. The restorative procedure of the face picture covered by reticulate pattern, characterised in that the method first extracts the reticulate-pattern edge, then removes the reticulate pattern, and finally fills the reticulate pattern and smooths the entire image, achieving the purpose of restoring the face; the specific steps are as follows:
Step S1, picture pretreatment:
Read the picture to be processed row by row to obtain its height rol and width row; in pixel units, the size of the picture to be processed is rol × row. Convert the picture to double format, then process the converted picture so that the picture height is 220 with corresponding width 178, the height is 88 with corresponding width 72, or the height is 118 with corresponding width 96;
Step S2, sorts out picture, establishes coordinate system, positions prime area:
Classify the picture preprocessed by step S1 into one of three classes by size, namely 220 × 178, 88 × 72 or 118 × 96; judge to which of the three classes it belongs. Then locate the human-eye range: taking the top-left vertex of the processed picture as origin, the transverse direction as the x-axis and the longitudinal direction as the y-axis, with x increasing from left to right and y increasing from top to bottom, establish an xy coordinate system. Set the following parameters: the left-eye x-coordinate proportionality coefficient a, the right-eye x-coordinate proportionality coefficient c, the left/right-eye y-coordinate proportionality coefficient d, and the location radius r. Different coefficients are set according to the size of the picture preprocessed by step S1: for 220 × 178, a = 0.387, c = 0.645, d = 0.41, r = 14; for 88 × 72, a = 0.38, c = 0.65, d = 0.38, r = 4; for 118 × 96, a = 0.365, c = 0.645, d = 0.375, r = 8.5. The eyes and the surrounding region of radius r are located through the parameters set above, i.e. locating obtains the prime areas of the human eyes;
Step S3 extracts reticulate pattern edge using edge detection, is removed reticulate pattern operation:
Take the picture preprocessed by step S1 and traverse it to obtain the R, G and B channel pixel values of all its pixels; use these values in edge detection to solve the gradient difference of each pixel, and obtain the character-contour region from the edge detection. At the same time, extract the reticulate-pattern edge using the gradient difference of each pixel to obtain the reticulate-pattern edge region. After obtaining the reticulate-pattern edge, assign white pixels (255, 255, 255) to the detected reticulate-pattern edge region, completing the removal of the reticulate pattern and obtaining the blank region left after removal;
Step S4 extracts exposure mask, fills up reticulate pattern and smoothed image:
Generate the iris mask and the character-contour edge mask and perform mask making, then smooth the image and output the image result;
It comprises the concrete steps that:
S41: choose a blank pixel (x0, y0) in the blank region obtained by step S3, then choose 20 neighbourhood points around (x0, y0) according to the "X-type" structure. Compare and sort these 20 neighbourhood points by R-channel pixel value, then choose the four neighbourhood points whose R-channel values rank first; their R-channel values, from largest to smallest, are denoted Rout1, Rout2, Rout3 and Rout4 respectively, the corresponding G-channel values Gout1, Gout2, Gout3 and Gout4, and the B-channel values Bout1, Bout2, Bout3 and Bout4;
S42: compute the gradient difference between the R, G and B channel pixel values of each of the four neighbourhood points obtained in step S41 and those of the blank pixel (x0, y0), (R0, G0, B0), according to formula (1),
where i = 1, 2, 3, 4 and Ti is the gradient difference;
S43: set the character-contour threshold to 155. Compare and sort the gradient differences Ti (T1, T2, T3, T4) obtained by step S42, select the largest value and compare it with the set character-contour threshold: if the largest value among the Ti is greater than the character-contour threshold 155, the character-contour edge mask is obtained, and mask making is performed on the character-contour edge through it; otherwise do nothing and enter step S44;
S44: locate the prime-area range of the human eyes using the parameters set in step S2, and choose within the prime area a sub-region Ir of size 3 × 3 whose R-channel pixel values are denoted Ra01, Ra02, ..., Ra09 respectively, with the structure shown in the following table,
Ra01 Ra02 Ra03
Ra04 Ra05 Ra06
Ra07 Ra08 Ra09
Perform a convolution operation on this prime area with convolution kernel α; from the relation Gp = Ra02 + Ra04 + Ra06 + Ra08 given next, α is the cross-shaped kernel

0 1 0
1 0 1
0 1 0
Using Gp=α * Ir, Gp, Gp=Ra02+Ra04+Ra06+Ra08 are calculated, i.e. Gp indicates Ra02, Ra04, Ra06, Ra08 The sum of four positions channel R pixel value;
Set the iris threshold to 530 and compare Gp with it: if Gp < 530, locating obtains the iris mask region, and iris-periphery mask making is then performed on it; if Gp ≥ 530, do nothing and enter step S45;
S45: perform the iris-periphery mask-making operation on the iris mask region located through step S44. Set the upper iris threshold to 1.3 and the lower iris threshold to 0.75. Choose an iris pixel (x, y) in the iris mask region, then choose ten nearby reference pixels above and below it, whose R-channel pixel values are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112 respectively; the positions of R101, R102, R103, R104 and R105 and those of R108, R109, R110, R111 and R112 are vertically symmetric about the iris pixel (x, y). Using the proportional relationship of formula (6), solve for Gpr,
Gpr=(R101+R102+R103+R104+R105)/(R108+R109+R110+R111+R112) (6)
Compare Gpr with the upper and lower iris thresholds: if Gpr is less than the lower iris threshold 0.75 or greater than the upper iris threshold 1.3, perform iris-periphery mask making on the iris pixel (x, y); if 0.75 ≤ Gpr ≤ 1.3, do nothing and enter step S46;
S46: choose a sub-region Ir0 of size 3 × 3 in the whole picture, with the same structure as the sub-region Ir of step S44; perform edge detection on the sub-region Ir0 by convolution, with lateral convolution kernel Gx:
and longitudinal convolution kernel Gy:
Compute Grx and Gry with formulas (7) and (8) respectively; Grx and Gry correspond to the lateral and longitudinal edge-detection R-channel values. Then compute the edge-fill gradient difference Gr according to formula (9),
Grx=Gx*Ir0 (7)
Gry=Gy*Ir0 (8)
S47: set the edge-fill threshold to 60 and compare the edge-fill gradient difference Gr with it. If Gr < 60, take the two R-channel pixel values ranked first, Rout1 and Rout2, obtained according to step S41, together with the corresponding G-channel pixel values Gout1, Gout2 and B-channel pixel values Bout1, Bout2, and average the R, G and B channels according to formulas (10)-(12), obtaining the R-channel average Rm, the G-channel average Gm and the B-channel average Bm,
Rm=(Rout1+Rout2)/2 (10)
Gm=(Gout1+Gout2)/2 (11)
Bm=(Bout1+Bout2)/2 (12)
Fill the R, G and B channel pixel values of the detected reticulate-pattern edge with Rm, Gm and Bm respectively; if Gr ≥ 60, proceed to step S48;
S48: choose a smoothing pixel (x1, y1) on the non-reticulate-edge part of the whole picture, then choose 20 neighbourhood points around (x1, y1) according to the "X-type" structure as in step S41, and choose the four neighbourhood points whose R-channel pixel values rank first. Compute the mean of each channel over these four points with formulas (13)-(15), denoted R1, G1 and B1 respectively. Except for the regions previously extracted by the iris mask and the character-contour edge mask, replace the R, G and B channel pixel values of the smoothing pixel (x1, y1) with R1, G1 and B1, realising the smoothing of the whole picture, and output the image result;
R1=(Rout1+Rout2+Rout3+Rout4)/4 (13)
G1=(Gout1+Gout2+Gout3+Gout4)/4 (14)
B1=(Bout1+Bout2+Bout3+Bout4)/4 (15).
2. The restorative procedure of the face picture covered by reticulate pattern according to claim 1, characterised in that the "X-type" structure in step S41 means: centred on the blank pixel, four neighbourhood points are chosen along each of the four diagonal directions from the corners of the blank pixel, and one further neighbourhood point is chosen above, below and on either side of the blank pixel, one pixel apart; the 20 neighbourhood points in total constitute the "X-type" structure.
3. The restorative procedure of the face picture covered by reticulate pattern according to claim 1, characterised in that the reference pixels of the iris pixel (x, y) in step S45 are chosen as follows: centred on the iris pixel (x, y), two reference pixels are chosen along each of the four diagonal directions from its corners, and one further reference pixel is chosen above and below the iris pixel, one pixel apart, choosing 10 reference pixels in total.
CN201710226996.1A 2017-04-07 2017-04-07 The restorative procedure of the face picture covered by reticulate pattern Expired - Fee Related CN107016657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710226996.1A CN107016657B (en) 2017-04-07 2017-04-07 The restorative procedure of the face picture covered by reticulate pattern

Publications (2)

Publication Number Publication Date
CN107016657A CN107016657A (en) 2017-08-04
CN107016657B (en) 2019-05-28

Family

ID=59446227

Country Status (1)

Country Link
CN (1) CN107016657B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993190B (en) * 2017-11-14 2020-05-19 中国科学院自动化研究所 Image watermark removal device
CN108010009B (en) * 2017-12-15 2021-12-21 北京小米移动软件有限公司 Method and device for removing interference image
CN108121978A (en) * 2018-01-10 2018-06-05 马上消费金融股份有限公司 Face image processing method, system and equipment and storage medium
CN108447030A (en) * 2018-02-28 2018-08-24 广州布伦南信息科技有限公司 A kind of image processing method of descreening
CN108428218A (en) * 2018-02-28 2018-08-21 广州布伦南信息科技有限公司 A kind of image processing method of removal newton halation
CN109035171B (en) * 2018-08-01 2021-06-15 中国计量大学 A kind of reticulated face image restoration method
CN112418054B (en) * 2020-11-18 2024-07-19 北京字跳网络技术有限公司 Image processing method, apparatus, electronic device, and computer readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567957A (en) * 2010-12-30 2012-07-11 北京大学 Method and system for removing reticulate pattern from image
CN103442159A (en) * 2013-09-02 2013-12-11 安徽理工大学 Edge self-adapting demosaicing method based on RS-SVM integration
CN105930797A (en) * 2016-04-21 2016-09-07 腾讯科技(深圳)有限公司 Face verification method and device
CN106530227A (en) * 2016-10-27 2017-03-22 北京小米移动软件有限公司 Image restoration method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on Large-Scale Image Inpainting Algorithms Based on Sample and Structure Information" (基于样本和结构信息的大范围图像修复算法研究); Li Gongqing; Wanfang Enterprise Knowledge Service Platform; 20130320; chapters 2-4


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190528