Restoration method for a face picture covered by a reticulate pattern
Technical field
The present invention relates to the technical field of image data processing, and in particular to a restoration method for a face picture covered by a reticulate pattern.
Background art
Face recognition technology has matured, and its applications are increasingly widespread. Current face recognition applications are concentrated mainly on face attendance machines, face-based access-control systems, and face recognition in video surveillance, all of which detect and identify dynamic targets; recognizing a face in a photograph, by contrast, detects and identifies a static target.
When a certificate photo is converted into digital form, noise is inevitably introduced; in particular, a fine reticulate pattern is superimposed on the face, which seriously affects its use. In the past, removing the reticulate pattern from a certificate photo was generally done in PhotoShop by manually erasing the reticulate regions. Such manual operation is very inefficient and labor costs are high.
Current image restoration techniques fall mainly into two classes: restoration methods based on diffusion equations and restoration methods based on sample blocks.
Restoration methods based on diffusion equations rely on parametric models or partial differential equations (Xu Liming, Wu Yajuan, Liu Hangjiang. Research on image inpainting based on variational PDEs [J]. Journal of China West Normal University (Natural Science Edition), 2016, 37(3): 343-348). They diffuse information smoothly and gradually from the edge of the damaged region inward, propagating preferentially along local structures, and are mainly used to repair small damaged regions. This class mainly includes partial differential equation algorithms, total variation models and curvature-driven diffusion models.
Restoration methods based on sample blocks (Chang Chen, He Jian. An improved Criminisi image inpainting method [J]. Journal of Fuzhou University (Natural Science Edition), 2017, 45(01): 74-79) search the source region for the block that best matches the target block and copy it directly into the damaged region. Because this approach preserves the consistency of texture features, it is suitable for repairing images with large damaged regions.
However, when either of the above restoration methods is used to repair an image whose face is covered by a reticulate pattern, the reticulate regions must be located in advance and erased manually before the repair can be carried out; moreover, the erased reticulate regions cannot be fully repaired, and sensitive facial regions such as the eyes and their surroundings in particular cannot be restored to an effect close to the original image.
Summary of the invention
In view of the shortcomings of the prior art in processing face pictures covered by a reticulate pattern, the technical problem to be solved by the present invention is to propose a restoration method for a face picture covered by a reticulate pattern. Starting from the static target, the method improves and integrates the diffusion-equation-based restoration method, the sample-block-based restoration method and an edge detection algorithm: edge detection is performed first and the character contour is located to remove the reticulate pattern; masks are then made to prevent later image processing from distorting the protected regions; the removed reticulate regions are then filled in according to an "X-type" structure and smoothed, so that an optimal output image is obtained. The method automatically removes the reticulate noise appearing on the face of a certificate photo and repairs the image quickly, with very good removal and restoration quality; it greatly reduces the time spent manually removing and repairing the reticulate noise covering certificate photos and improves work efficiency.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A restoration method for a face picture covered by a reticulate pattern, characterized in that the method first extracts the reticulate edges, then removes the reticulate pattern, and finally fills in the reticulate regions and smooths the whole image, thereby restoring the face. The specific steps are as follows:
Step S1, picture pre-processing:
The picture to be processed is read in, and its height rol and width row are obtained in pixels, so that the size of the picture to be processed is rol × row. The picture is then converted to double format, and the converted picture is scaled so that its height becomes 220, 88 or 118, with the corresponding width being 178, 72 or 96;
Step S2, classify the picture, establish a coordinate system and locate the initial region:
The pictures pre-processed in step S1 are divided into three classes according to size, namely 220 × 178, 88 × 72 and 118 × 96, and the class to which the pre-processed picture belongs is determined. The eye range is then located: taking the top-left vertex of the processed picture as the origin, the horizontal direction as the x-axis (values increasing from left to right) and the vertical direction as the y-axis (values increasing from top to bottom), an xy coordinate system is established. The following parameters are set: the left-eye x-coordinate proportionality coefficient a, the right-eye x-coordinate proportionality coefficient c, the y-coordinate proportionality coefficient d for both eyes, and the positioning radius r. Different coefficients are set according to the size of the pre-processed picture: for 220 × 178, a=0.387, c=0.645, d=0.41, r=14; for 88 × 72, a=0.38, c=0.65, d=0.38, r=4; for 118 × 96, a=0.365, c=0.645, d=0.375, r=8.5. With these parameters, the regions of radius r around the eyes are located, i.e. the initial eye regions are obtained;
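A minimal sketch of how the proportionality coefficients above could be turned into eye-centre coordinates. The text does not state which dimension each coefficient scales; the sketch assumes x-coordinates scale with the picture width and y-coordinates with the picture height, and the helper name eye_regions is ours.

```python
import numpy as np

# Proportionality coefficients from step S2, keyed by working picture size (rol, row).
PARAMS = {(220, 178): dict(a=0.387, c=0.645, d=0.41,  r=14),
          (88, 72):   dict(a=0.38,  c=0.65,  d=0.38,  r=4),
          (118, 96):  dict(a=0.365, c=0.645, d=0.375, r=8.5)}

def eye_regions(rol, row):
    """Return left-eye centre, right-eye centre and positioning radius in the xy
    frame whose origin is the top-left corner (x rightward, y downward)."""
    p = PARAMS[(rol, row)]
    left_eye  = (p['a'] * row, p['d'] * rol)   # assumed: x from width, y from height
    right_eye = (p['c'] * row, p['d'] * rol)
    return left_eye, right_eye, p['r']

print(eye_regions(220, 178))
```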
Step S3, extract the reticulate edges by edge detection and remove the reticulate pattern:
The picture pre-processed in step S1 is traversed to obtain the R, G and B channel values of every pixel. Edge detection is applied to these values to solve the gradient difference of each pixel, from which the character contour region is obtained; the same gradient differences are used to extract the reticulate edges, giving the reticulate edge region. Once the reticulate edges are obtained, the detected reticulate edge region is assigned the white pixel value (255, 255, 255), which completes the removal of the reticulate pattern and leaves a blank region;
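A rough sketch of this step under stated assumptions: the gradient difference is computed with a Sobel-style operator (the embodiment cites an improved Sobel edge detector, but the exact operator and the threshold separating reticulate edges from other pixels are not given here, so the value 80 and the whole mesh-detection criterion below are placeholders).

```python
import numpy as np

def sobel_gradient(channel):
    """Gradient magnitude of one channel with an assumed Sobel-style operator."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_k = gx_k.T
    h, w = channel.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = channel[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot((patch * gx_k).sum(), (patch * gy_k).sum())
    return out

def remove_reticulate(img, edge_thresh=80.0):
    """Whiten the pixels whose gradient difference marks them as reticulate edge.
    edge_thresh is an assumed value; the patent text does not give this number."""
    grad = sum(sobel_gradient(img[:, :, c].astype(float)) for c in range(3))
    mesh = grad > edge_thresh            # assumed criterion for the reticulate edge region
    out = img.copy()
    out[mesh] = (255, 255, 255)          # assignment with white pixel values
    return out, mesh

demo = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
cleaned, mesh_mask = remove_reticulate(demo)
```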
Step S4, extract the masks, fill in the reticulate regions and smooth the image:
An iris mask and a character-contour edge mask are generated and applied, the image is smoothed, and the resulting image is output. The specific steps are as follows:
S41: a blank pixel (x0, y0) is chosen in the blank region obtained in step S3, and 20 vicinity points around (x0, y0) are chosen according to the "X-type" structure. The R channel values of these 20 vicinity points are compared and sorted, and the four vicinity points with the largest R channel values are selected; their R channel values, in descending order, are denoted Rout1, Rout2, Rout3 and Rout4, the corresponding G channel values are denoted Gout1, Gout2, Gout3 and Gout4, and the B channel values are denoted Bout1, Bout2, Bout3 and Bout4;
S42: the gradient differences between the four vicinity points obtained in step S41 and the R, G, B channel values (R0, G0, B0) of the blank pixel (x0, y0) are calculated according to formula (1),
where i = 1, 2, 3, 4 and Ti is the gradient difference;
S43: the character-contour threshold is set to 155. The gradient differences Ti (T1, T2, T3, T4) obtained in step S42 are compared and sorted, and the largest of them is compared with the character-contour threshold. If the largest Ti exceeds 155, the character-contour edge mask is obtained and the character-contour edge is masked with it; otherwise nothing is done and the method proceeds to step S44;
S44: the initial eye region is located using the parameters set in step S2, and a 3 × 3 subregion Ir of this initial region is chosen; the R channel values of the subregion are denoted Ra01, Ra02, ..., Ra09, arranged as shown in the following table,
Ra01 | Ra02 | Ra03
Ra04 | Ra05 | Ra06
Ra07 | Ra08 | Ra09
A convolution with the convolution factor α is carried out on this initial region:
Gp = α * Ir is computed, giving Gp = Ra02 + Ra04 + Ra06 + Ra08, i.e. Gp is the sum of the R channel values at the four positions Ra02, Ra04, Ra06 and Ra08.
The iris threshold is set to 530 and Gp is compared with it. If Gp < 530, the iris mask region is located and the iris periphery mask is made for it; if Gp > 530, nothing is done and the method proceeds to step S45;
S45: the iris periphery mask is made for the iris mask region located in step S44. The upper iris threshold is set to 1.3 and the lower iris threshold to 0.75. An iris pixel (x, y) is chosen in the iris mask region, and ten reference pixels are chosen above and below it; the R channel values of these ten reference pixels are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112, where the positions of R101 to R105 and the positions of R108 to R112 are symmetric above and below the iris pixel (x, y). Gpr is obtained from the ratio in formula (6),
Gpr = (R101 + R102 + R103 + R104 + R105) / (R108 + R109 + R110 + R111 + R112)    (6)
Gpr is compared with the upper and lower iris thresholds. If Gpr is smaller than the lower iris threshold 0.75 or larger than the upper iris threshold 1.3, the iris periphery mask is made for the iris pixel (x, y); if 0.75 ≤ Gpr ≤ 1.3, nothing is done and the method proceeds to step S46;
S46: a 3 × 3 subregion Ir0 of the whole picture is chosen, with the same structure as the subregion Ir in step S44. Edge detection is performed on Ir0 by convolution with the lateral convolution factor Gx and the longitudinal convolution factor Gy. Grx and Gry are calculated from formulas (7) and (8) and correspond respectively to the lateral and longitudinal edge-detection R channel values, and the edge-fill gradient difference Gr is calculated according to formula (9),
Grx = Gx * Ir0    (7)
Gry = Gy * Ir0    (8)
S47: the edge-fill threshold is set to 60, and the edge-fill gradient difference Gr is compared with it. If Gr < 60, the two largest R channel values Rout1 and Rout2 obtained in step S41, together with the corresponding G channel values Gout1, Gout2 and B channel values Bout1, Bout2, are averaged per channel according to formulas (10) to (12), giving the R channel average Rm, the G channel average Gm and the B channel average Bm,
Rm = (Rout1 + Rout2)/2    (10)
Gm = (Gout1 + Gout2)/2    (11)
Bm = (Bout1 + Bout2)/2    (12)
and Rm, Gm and Bm are used to fill in the R, G and B channel values of the detected reticulate edge. If Gr ≥ 60, the method proceeds to step S48;
S48: a smooth pixel (x1, y1) is chosen on the non-reticulate-edge part of the whole picture, and 20 vicinity points around (x1, y1) are chosen according to the "X-type" structure as in step S41; the four vicinity points with the largest R channel values are selected, and the per-channel means of these four vicinity points are calculated with formulas (13) to (15) and denoted R1, G1 and B1. Except for the regions previously extracted by the iris mask and the character-contour edge mask, the R, G and B channel values of the smooth pixel (x1, y1) are replaced by R1, G1 and B1, which smooths the whole picture, and the resulting image is output;
R1 = (Rout1 + Rout2 + Rout3 + Rout4)/4    (13)
G1 = (Gout1 + Gout2 + Gout3 + Gout4)/4    (14)
B1 = (Bout1 + Bout2 + Bout3 + Bout4)/4    (15).
In the above restoration method for a face picture covered by a reticulate pattern, the "X-type" structure in step S41 means the following: with the blank pixel as the centre, four vicinity points are chosen along each of the four diagonal directions from its corner positions, and one vicinity point is chosen in each of the up, down, left and right directions after skipping one pixel, so that 20 vicinity points in total form the "X-type" structure.
In the above restoration method for a face picture covered by a reticulate pattern, the reference pixels of the iris pixel (x, y) in step S45 are chosen as follows: with the iris pixel (x, y) as the centre, two reference pixels are chosen along each of the four diagonal directions from its corner positions, and one reference pixel is chosen above and one below the iris pixel after skipping one pixel, giving 10 reference pixels in total.
Compared with the prior art, the beneficial effects of the present invention are as follows:
(1) For a face picture covered by a reticulate pattern, the present invention solves the problem of directly taking the picture containing the reticulate pattern as input, removing the pattern and effectively restoring the original face image. The diffusion-equation-based restoration method, the sample-block-based restoration method and an edge detection algorithm are improved and integrated so that the reticulate pattern is first extracted and then repaired: edge detection is performed first and the character contour is located to remove the reticulate pattern; masks are then made to prevent later image processing from distorting the protected regions; finally the removed reticulate regions are filled in according to the "X-type" structure and smoothed, so that an optimal face image is output.
(2) The method of the present invention first removes the reticulate pattern from the face picture and then repairs it. Edge detection accurately detects the reticulate edges, so the reticulate regions can be removed effectively. At the same time, to prevent edge detection from operating wrongly on sensitive regions of the face, including the pupils, irises, sclerae, the nose contour and the upper and lower lip contours, the present invention protects these key facial regions by making an iris mask and a character-contour edge mask, so that the reticulate pattern can be removed simply and safely. The filling step uses the "X-type" filling strategy, which fills faster than the two restoration methods described in the background art and is better suited to removing and repairing the reticulate pattern on faces. The experimental results show that the "X-type" filling achieves a good restoration effect (see the description of Fig. 3(a), Fig. 3(b), Fig. 4(a) and Fig. 4(b) in the embodiment below).
Description of the drawings
Fig. 1 is a schematic diagram of the "X-type" structure in step S41 of the present invention;
Fig. 2 is a schematic diagram of the method for choosing the reference pixels of the iris pixel (x, y) in step S45 of the present invention;
Fig. 3(a) and Fig. 3(b) are two examples of input pictures;
Fig. 4(a) and Fig. 4(b) are the restored pictures corresponding to Fig. 3(a) and Fig. 3(b) respectively.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to the embodiments and the accompanying drawings. The described embodiments are only intended to facilitate the understanding of the present invention and are not a limitation of the protection scope of the claims of the present invention.
The restoration method of the present invention for a face picture covered by a reticulate pattern first extracts the reticulate edges, then removes the reticulate pattern, and finally fills in the reticulate regions and smooths the whole image, thereby restoring the face. The specific steps are as follows:
Step S1, picture pre-processing:
The picture to be processed is read in with MATLAB, and its height rol and width row are obtained in pixels, so that the size of the picture to be processed is rol × row. The picture is then converted to double format, and the converted picture is scaled so that its height becomes 220, 88 or 118, with the corresponding width being 178, 72 or 96;
Step S2, classify the picture, establish a coordinate system and locate the initial region:
The pictures pre-processed in step S1 are divided into three classes according to size, namely 220 × 178, 88 × 72 and 118 × 96, and the class to which the pre-processed picture belongs is determined. The eye range is then located: taking the top-left vertex of the processed picture as the origin, the horizontal direction as the x-axis (values increasing from left to right) and the vertical direction as the y-axis (values increasing from top to bottom), an xy coordinate system is established. The following parameters are set: the left-eye x-coordinate proportionality coefficient a, the right-eye x-coordinate proportionality coefficient c, the y-coordinate proportionality coefficient d for both eyes, and the positioning radius r. Different coefficients are set according to the picture size after step S1: for 220 × 178, a=0.387, c=0.645, d=0.41, r=14; for 88 × 72, a=0.38, c=0.65, d=0.38, r=4; for 118 × 96, a=0.365, c=0.645, d=0.375, r=8.5. With these parameters, the regions of radius r around the eyes are located, i.e. the initial eye regions are obtained. Although the input pictures differ in size, the region where the eyes lie occupies a similar proportion of the whole picture, so the proportionality relations, determined by repeated experiments on pictures of different sizes, make the later mask extraction more accurate and allow the face reticulate pattern to be removed precisely;
Step S3, extract the reticulate edges by edge detection and remove the reticulate pattern:
The picture pre-processed in step S1 is traversed to obtain the R, G and B channel values of every pixel. Edge detection (Shen Dehai, Hou Jian, E Xu. An edge detection algorithm based on an improved Sobel operator [J]. Computer Technology and Development, 2013, 23(11): 22-25) is applied to these values to solve the gradient difference of each pixel, from which the character contour region is obtained; the same gradient differences are used to extract the reticulate edges, giving the reticulate edge region. Once the reticulate edges are obtained, the detected reticulate edge region is assigned the white pixel value (255, 255, 255), which completes the removal of the reticulate pattern and leaves a blank region;
Step S4, extract the masks, fill in the reticulate regions and smooth the image:
An iris mask and a character-contour edge mask are generated and applied, the reticulate regions are then filled in, the image is smoothed, and the resulting image is output. The specific steps are as follows:
S41: a blank pixel (x0, y0) is chosen in the blank region obtained in step S3, and 20 vicinity points around (x0, y0) are chosen according to the "X-type" structure. The R channel values of these 20 vicinity points are compared and sorted, and the four vicinity points with the largest R channel values are selected; their R channel values, in descending order, are denoted Rout1, Rout2, Rout3 and Rout4, the corresponding G channel values are denoted Gout1, Gout2, Gout3 and Gout4, and the B channel values are denoted Bout1, Bout2, Bout3 and Bout4;
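A minimal sketch of step S41, ranking the 20 vicinity points by R channel value; the offset spacing reuses the assumption from the earlier "X-type" sketch, and the function name top4_x_neighbours and the out-of-bounds filtering are ours.

```python
import numpy as np

OFFSETS = ([(sy * 2 * k, sx * 2 * k) for sy in (-1, 1) for sx in (-1, 1)
            for k in range(1, 5)] + [(-2, 0), (2, 0), (0, -2), (0, 2)])

def top4_x_neighbours(img, y0, x0):
    """Sort the 20 X-type vicinity points of blank pixel (x0, y0) by R channel
    value and keep the four largest, returning their R, G and B values."""
    h, w = img.shape[:2]
    pts = [(y0 + dy, x0 + dx) for dy, dx in OFFSETS
           if 0 <= y0 + dy < h and 0 <= x0 + dx < w]
    pts.sort(key=lambda p: img[p[0], p[1], 0], reverse=True)   # R channel, descending
    top = pts[:4]
    Rout = [float(img[y, x, 0]) for y, x in top]   # Rout1..Rout4
    Gout = [float(img[y, x, 1]) for y, x in top]   # Gout1..Gout4
    Bout = [float(img[y, x, 2]) for y, x in top]   # Bout1..Bout4
    return Rout, Gout, Bout

demo = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(top4_x_neighbours(demo, 30, 30)[0])
```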
S42: the gradient differences between the four vicinity points obtained in step S41 and the R, G, B channel values (R0, G0, B0) of the blank pixel (x0, y0) are calculated according to formula (1),
where i = 1, 2, 3, 4 and Ti is the gradient difference;
S43: the character-contour threshold is set to 155. The gradient differences Ti (T1, T2, T3, T4) obtained in step S42 are compared and sorted, and the largest of them is compared with the character-contour threshold. If the largest Ti exceeds 155, the character-contour edge mask is obtained and the character-contour edge is masked with it; otherwise nothing is done and the method proceeds to step S44;
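Formula (1) is not reproduced in this text, so the sketch below has to assume a form for the gradient difference Ti; the summed absolute R, G, B difference between vicinity point i and the blank pixel is used purely as a placeholder, while the threshold of 155 comes from step S43.

```python
def contour_mask_decision(Rout, Gout, Bout, R0, G0, B0, thresh=155):
    """Steps S42-S43 sketch. The exact form of formula (1) is not given here;
    Ti is assumed to be the summed absolute RGB difference between vicinity
    point i and the blank pixel (x0, y0)."""
    Ti = [abs(Rout[i] - R0) + abs(Gout[i] - G0) + abs(Bout[i] - B0)
          for i in range(4)]
    return max(Ti) > thresh   # True: mark this pixel for the character-contour edge mask

print(contour_mask_decision([200, 180, 90, 60], [120, 110, 80, 70],
                            [90, 85, 60, 50], 10, 12, 8))
```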
S44: the initial eye region is located using the parameters set in step S2. In order to extract the iris mask region more accurately in this step, a 3 × 3 subregion Ir of the initial region is chosen; the R channel values of the subregion are denoted Ra01, Ra02, ..., Ra09, arranged as shown in the following table,
Ra01 | Ra02 | Ra03
Ra04 | Ra05 | Ra06
Ra07 | Ra08 | Ra09
A convolution with the convolution factor α is carried out on this initial region:
Gp = α * Ir is computed, giving Gp = Ra02 + Ra04 + Ra06 + Ra08, i.e. Gp is the sum of the R channel values at the four positions Ra02, Ra04, Ra06 and Ra08.
The iris threshold is set to 530 and Gp is compared with it. If Gp < 530, the iris mask region is located and the iris periphery mask is made for it, i.e. the region is shielded: no subsequent filling operation will affect the shielded iris mask region until the whole picture is finally smoothed. If Gp > 530, nothing is done and the method proceeds to step S45;
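The convolution factor α itself is not reproduced in this text; since Gp = Ra02 + Ra04 + Ra06 + Ra08, the sketch assumes the 3 × 3 subregion is laid out row by row as in the table above, so that α has ones at the four edge-centre positions and zeros elsewhere. This is an inference, not the patent's stated kernel.

```python
import numpy as np

# Assumed convolution factor alpha implied by Gp = Ra02 + Ra04 + Ra06 + Ra08
# with the row-wise 3x3 layout Ra01..Ra09 shown in the table above.
ALPHA = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)

def iris_mask_hit(Ir_r, iris_thresh=530):
    """Step S44 sketch: Ir_r is the R channel of a 3x3 subregion inside the
    located eye region; True means the subregion joins the iris mask region."""
    Gp = float((ALPHA * Ir_r).sum())
    return Gp < iris_thresh

eye_patch = np.array([[40, 35, 42],
                      [38, 30, 36],
                      [41, 37, 39]], dtype=float)
print(iris_mask_hit(eye_patch))   # dark iris patch: Gp = 35+38+36+37 = 146 < 530
```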
S45: the iris periphery mask is made for the iris mask region located in step S44. The upper iris threshold is set to 1.3 and the lower iris threshold to 0.75; these two thresholds are used to obtain the mask around the iris. An iris pixel (x, y) is chosen in the iris mask region, and ten reference pixels are chosen above and below it; the R channel values of these ten reference pixels are R101, R102, R103, R104, R105, R108, R109, R110, R111 and R112, where the positions of R101 to R105 and the positions of R108 to R112 are symmetric above and below the iris pixel (x, y). Gpr is obtained from the ratio in formula (6),
Gpr = (R101 + R102 + R103 + R104 + R105) / (R108 + R109 + R110 + R111 + R112)    (6)
Gpr is compared with the upper and lower iris thresholds. If Gpr is smaller than the lower iris threshold 0.75 or larger than the upper iris threshold 1.3, the iris periphery mask is made for the iris pixel (x, y), which prevents later changes of the pixel values around the iris, caused by the filling, from distorting the image; if 0.75 ≤ Gpr ≤ 1.3, nothing is done and the method proceeds to step S46;
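A sketch of the Gpr test, with the ten reference-pixel offsets chosen to match Fig. 2 as read above (two points along each diagonal at every second pixel, plus one point two pixels straight up and straight down); the exact offsets and the helper name are assumptions, while the thresholds 0.75 and 1.3 come from the text.

```python
import numpy as np

UPPER = [(-2, -2), (-4, -4), (-2, 0), (-2, 2), (-4, 4)]   # R101..R105 (assumed positions)
LOWER = [( 2, -2), ( 4, -4), ( 2, 0), ( 2, 2), ( 4, 4)]   # R108..R112 (assumed positions)

def iris_periphery_mask_hit(img_r, y, x, lo=0.75, hi=1.3):
    """Step S45 sketch: Gpr is the ratio of the summed R values of the five upper
    reference pixels to the five lower ones; True means iris pixel (x, y) joins
    the iris periphery mask."""
    up = sum(float(img_r[y + dy, x + dx]) for dy, dx in UPPER)
    down = sum(float(img_r[y + dy, x + dx]) for dy, dx in LOWER)
    Gpr = up / down
    return Gpr < lo or Gpr > hi

demo_r = np.random.randint(1, 256, (64, 64)).astype(float)
print(iris_periphery_mask_hit(demo_r, 30, 30))
```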
S46: a 3 × 3 subregion Ir0 of the whole picture is chosen, with the same structure as the subregion Ir in step S44. Edge detection is performed on Ir0 by convolution with the lateral convolution factor Gx and the longitudinal convolution factor Gy. Grx and Gry are calculated from formulas (7) and (8) and correspond respectively to the lateral and longitudinal edge-detection R channel values, and the edge-fill gradient difference Gr is calculated according to formula (9),
Grx = Gx * Ir0    (7)
Gry = Gy * Ir0    (8)
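The lateral and longitudinal convolution factors Gx and Gy, and the way formula (9) combines Grx and Gry, are not reproduced in this text; the sketch below assumes standard Sobel kernels and a gradient-magnitude combination purely for illustration.

```python
import numpy as np

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # lateral factor (assumed)
GY = GX.T                                                          # longitudinal factor (assumed)

def edge_fill_gradient(Ir0):
    """Step S46 sketch: Ir0 is a 3x3 R-channel subregion of the whole picture."""
    Grx = float((GX * Ir0).sum())   # formula (7): Grx = Gx * Ir0
    Gry = float((GY * Ir0).sum())   # formula (8): Gry = Gy * Ir0
    return np.hypot(Grx, Gry)       # formula (9): assumed magnitude combination

patch = np.array([[10, 10, 200],
                  [12, 11, 210],
                  [ 9, 13, 205]], dtype=float)
print(edge_fill_gradient(patch))
```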
S47: the edge-fill threshold is set to 60, and the edge-fill gradient difference Gr is compared with it. If Gr < 60, the two largest R channel values Rout1 and Rout2 obtained in step S41, together with the corresponding G channel values Gout1, Gout2 and B channel values Bout1, Bout2, are averaged per channel according to formulas (10) to (12), giving the R channel average Rm, the G channel average Gm and the B channel average Bm,
Rm = (Rout1 + Rout2)/2    (10)
Gm = (Gout1 + Gout2)/2    (11)
Bm = (Bout1 + Bout2)/2    (12)
and Rm, Gm and Bm are used to fill in the R, G and B channel values of the detected reticulate edge. If Gr ≥ 60, the method proceeds to step S48;
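A direct sketch of formulas (10) to (12) as used in step S47; only the function name and the tiny demo image are ours.

```python
import numpy as np

def fill_reticulate_pixel(img, y0, x0, Rout, Gout, Bout, Gr, fill_thresh=60):
    """Step S47: when Gr < 60, fill blank pixel (x0, y0) with the per-channel
    means of the two vicinity points of largest R value; otherwise the pixel
    is left to the smoothing of step S48."""
    if Gr < fill_thresh:
        img[y0, x0] = ((Rout[0] + Rout[1]) / 2,   # Rm, formula (10)
                       (Gout[0] + Gout[1]) / 2,   # Gm, formula (11)
                       (Bout[0] + Bout[1]) / 2)   # Bm, formula (12)
        return True
    return False

demo = np.zeros((8, 8, 3))
print(fill_reticulate_pixel(demo, 4, 4, [200, 190, 50, 40],
                            [120, 118, 30, 20], [90, 88, 10, 5], Gr=25))
print(demo[4, 4])   # [195. 119.  89.]
```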
S48: a smooth pixel (x1, y1) is chosen on the non-reticulate-edge part of the whole picture, and 20 vicinity points around (x1, y1) are chosen according to the "X-type" structure as in step S41; the four vicinity points with the largest R channel values are selected, and the per-channel means of these four vicinity points are calculated with formulas (13) to (15) and denoted R1, G1 and B1. Except for the regions previously extracted by the iris mask and the character-contour edge mask, the R, G and B channel values of the smooth pixel (x1, y1) are replaced by R1, G1 and B1, which smooths the whole picture, and the resulting image is output;
R1 = (Rout1 + Rout2 + Rout3 + Rout4)/4    (13)
G1 = (Gout1 + Gout2 + Gout3 + Gout4)/4    (14)
B1 = (Bout1 + Bout2 + Bout3 + Bout4)/4    (15).
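A sketch of the step S48 smoothing using formulas (13) to (15); the X-type offset spacing is the same assumption as before, and the boolean protected array standing for the combined iris and character-contour masks is an illustrative device, not the patent's representation of the mask.

```python
import numpy as np

OFFSETS = ([(sy * 2 * k, sx * 2 * k) for sy in (-1, 1) for sx in (-1, 1)
            for k in range(1, 5)] + [(-2, 0), (2, 0), (0, -2), (0, 2)])

def smooth_pixel(img, protected, y1, x1):
    """Step S48: replace smooth pixel (x1, y1) with the channel means R1, G1, B1
    of its four X-type vicinity points of largest R value, skipping pixels that
    lie inside the iris or character-contour masks."""
    if protected[y1, x1]:
        return                        # masked region: leave untouched
    h, w = img.shape[:2]
    pts = [(y1 + dy, x1 + dx) for dy, dx in OFFSETS
           if 0 <= y1 + dy < h and 0 <= x1 + dx < w]
    pts.sort(key=lambda p: img[p[0], p[1], 0], reverse=True)
    top = np.array([img[y, x] for y, x in pts[:4]], dtype=float)
    img[y1, x1] = top.mean(axis=0)    # (R1, G1, B1), formulas (13)-(15)

demo = np.random.randint(0, 256, (64, 64, 3)).astype(float)
masks = np.zeros((64, 64), dtype=bool)
smooth_pixel(demo, masks, 30, 30)
```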
The method of the present invention first takes as input the face picture to be processed, which is covered with a reticulate pattern, and determines the picture size from its height. The reticulate pattern is then extracted by edge detection, i.e. removed, after which the masks are made; the purpose is to mask the sensitive facial regions (the pupils, irises, sclerae, the nose contour and the upper and lower lip contours), so that these parts remain unaffected however the face is processed afterwards. Finally the removed reticulate regions are filled in according to the "X-type" structure, and the filled image is smoothed to obtain the final output picture.
The mask described in the present invention works by changing the extracted pixel values so that they no longer fall within the set threshold ranges, which guarantees that the corresponding algorithms leave them untouched and thereby shields them. The picture to be processed refers to a face certificate photo carrying a reticulate pattern; the method works only for the three certificate-photo aspect ratios 220 × 178, 88 × 72 and 118 × 96, and certificate photos of other sizes are first converted to one of these three sizes before processing. The pictures to be processed in this application are color images.
The embodiment shown in Fig. 1 illustrates the "X-type" structure described in step S41 of the present invention: with the blank pixel (x0, y0) as the centre, four vicinity points are chosen along each of the four diagonal directions from its corner positions, and one vicinity point is chosen in each of the up, down, left and right directions after skipping one pixel; the 20 vicinity points together form the "X-type" structure, and the black squares in Fig. 1 are the 20 chosen vicinity points. The vicinity points of the smooth pixel (x1, y1) in step S48 are chosen by the same rule as those of the blank pixel.
The embodiment shown in Fig. 2 illustrates the method for choosing the reference pixels of the iris pixel (x, y) in step S45 of the present invention: in Fig. 2 each white square is one pixel; with the iris pixel (x, y) as the centre, two reference pixels are chosen along each of the four diagonal directions from its corner positions (R101 and R104, R103 and R105, R110 and R108, R112 and R109), and one reference pixel is chosen above and one below the iris pixel after skipping one pixel (R102 and R111), giving 10 reference pixels in total that are arranged symmetrically above and below the iris pixel (x, y).
Experiments show that the character-contour edge mask of the present invention benefits from taking the 20 vicinity points with the "X-type" structure. The structure is symmetric both vertically and horizontally, and its advantage is that the 20 vicinity points are not contiguous, i.e. one pixel is taken and one is skipped along the diagonals. Sorting these 20 vicinity points, choosing the four with the largest R channel values together with their corresponding G and B channel values, and obtaining the gradient differences Ti (i = 1, 2, 3, 4) from formula (1), means that the gradient differences come from vicinity points distributed evenly around (x0, y0); this extracts the character contour more reliably and filters out non-contour pixels, which is vital for making the character-contour mask. The vertically symmetric structure and formula (6) are used in the iris mask because the pixel values in the iris region are generally lower than those around it: Gpr is the ratio of the sum of the R channel values of the five dispersed upper reference pixels to the sum of those of the five dispersed lower reference pixels, and requiring the upper and lower structures to be symmetric locates the iris region more accurately, after which the iris mask is made. Filling the reticulate edges with the "X-type" structure quickly selects, from the 20 vicinity points, the two largest R channel values Rout1 and Rout2 together with the corresponding G channel values Gout1, Gout2 and B channel values Bout1, Bout2, and filling with formulas (10) to (12) gives the best result. Finally, a smooth pixel (x1, y1) is chosen on the non-reticulate-edge part of the whole picture, 20 vicinity points around it are chosen with the "X-type" structure, the per-channel means of the four selected vicinity points are computed with formulas (13) to (15) and denoted R1, G1 and B1, and, except for the regions previously extracted by the iris mask and the character-contour edge mask, the smooth pixel (x1, y1) is replaced by these means, smoothing the whole picture. This smoothing introduces very little distortion of the face pixels, because the "X-type" structure selects the four leading pixels from the 20 vicinity points around (x1, y1), giving an optimal restoration effect.
In image restoration, the hardest task is finding the best matching block for the region to be repaired. The present invention chooses 20 vicinity points according to the "X-type" structure and computes the mean of the two largest R channel values, which quickly yields the best matching pixels for filling the reticulate edges, and computes the mean of the four largest R channel values for optimal smoothing of the whole picture outside the regions extracted by the iris mask and the character-contour edge mask. Compared with using only the diffusion-equation-based restoration method, the sample-block-based restoration method or an edge detection algorithm, the inventive combination of the three obtains the best matching blocks, handles edge information while protecting the edge information of the image, achieves a smooth restoration effect, and repairs quickly.
Embodiment
To describe a specific embodiment of the present invention in detail and to verify its effectiveness, the proposed method was applied to a number of face pictures covered by a reticulate pattern. The face reticulate pattern in such pictures is relatively sparse and light in colour.
In this embodiment, the picture path is entered directly, and the command inpainting is then entered in the command window; the original picture and the picture after descreening are then displayed.
The pictures to be processed in this embodiment are Fig. 3(a) and Fig. 3(b), both of size 220 × 178, so a=0.387, c=0.645, d=0.41 and r=14 are set. Restoration according to the method of the present invention gives the results shown in Fig. 4(a) and Fig. 4(b).
Fig. 4(a) and Fig. 4(b) show the results after the face reticulate pattern has been removed and repaired by the method of the present invention. When filling the reticulate pattern, points are taken around the pixel to be filled according to the "X-type" structure and then screened: the point with the largest R channel value is found together with its G and B channel values, the point with the second-largest R channel value is obtained in the same way, the R channel values of these two points are averaged, the corresponding G and B channel values are averaged likewise, and the three channel averages are written into the pixel to be filled. This is faster than the exemplar-based algorithm, which traverses the whole picture in search of a similar texture block, and the filling result is better; it is also more accurate than diffusion-based restoration, because the points are taken directly around the pixel to be filled and averaged before filling, so the traces of the repair are essentially invisible. Compared with both of the above algorithms, the method of the present invention achieves a better restoration of face pictures covered by a reticulate pattern: the restored picture shows very little distortion relative to the original, the traces of repair are hardly visible, and the accuracy of face recognition can be effectively improved.
The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above is only a specific embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its protection scope.
The mask fabrication, edge detection and other methods described in the present invention are prior art.
Anything not described in detail in the present invention is applicable to the prior art.