CN107256555A - Image processing method, device, and storage medium - Google Patents
Image processing method, device, and storage medium
- Publication number
- CN107256555A (application number CN201710387281.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- color
- hair
- pixel
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the invention disclose an image processing method, device, and storage medium. An image to be processed and a target hair color selected by the user are obtained; the hair region of the image to be processed is determined and a target color image is generated from the target hair color; trajectory data of the hairs in the hair region is obtained, and the target color image is deformed based on that trajectory data; the pixels of the image portion of the hair region and the pixels at corresponding positions of the deformed target color image are then combined by color-filter (screen) blending to obtain a first target image with the target hair color. In this way, the hair color in an image can be changed to a dark color, a light color, or a gradient color, so that target hair colors covering dark, light, and gradient shades can be offered for the user to choose from, meeting users' diverse needs.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to an image processing method, device, and storage medium.
Background technology
With continuing to develop for electronic technology, the function of mobile terminal (such as smart mobile phone or tablet personal computer) is increasingly
Powerful, in addition to basic call and SMS, also various abundant amusement functions are for example taken pictures, and can also be right
The photo of shooting carries out some picture processings, such as changes effect of shadow, skin-whitening, the change of hair style color development, etc., so that full
The demand of sufficient user.
At present, there are many image processing applications in the mobile application field. After an image processing application is installed on a mobile terminal, the user can take or choose a photo and use the application to change the hairstyle, hair color, and so on, achieving a beautifying effect. In the hair color changing technique of conventional image processing applications, a color layer is usually generated from the target hair color selected by the user, and the color layer is then simply superimposed on the user's hair color in the photo to change the hair color.
In the course of research on and practice with the prior art, the inventors of the present invention found that, because the prior art changes the hair color by simple color superposition, it can usually only achieve dyeing effects based on dark colors and has difficulty achieving light-color dyeing effects; the selectable target hair colors are therefore limited and cannot meet users' diverse needs.
Summary of the invention
Embodiments of the present invention provide an image processing method, device, and storage medium that can change the hair color in an image to a dark color, a light color, or a gradient color, so that target hair colors covering dark, light, and gradient shades can be offered for the user to choose from, meeting users' diverse needs.
An embodiment of the present invention provides an image processing method, including:
obtaining an image to be processed and a target hair color selected by a user;
determining the hair region of the image to be processed, and generating a target color image according to the target hair color;
obtaining trajectory data of the hairs in the hair region;
deforming the target color image based on the trajectory data of the hairs; and
performing color-filter (screen) blending on the pixels of the image portion of the hair region and the pixels at corresponding positions of the deformed target color image, to obtain a first target image with the target hair color.
Correspondingly, an embodiment of the present invention also provides an image processing device, including:
a first acquisition module, configured to obtain an image to be processed and a target hair color selected by a user;
a determining module, configured to determine the hair region of the image to be processed;
a generation module, configured to generate a target color image according to the target hair color;
a second acquisition module, configured to obtain trajectory data of the hairs in the hair region;
a deformation processing module, configured to deform the target color image based on the trajectory data of the hairs; and
a first processing module, configured to perform color-filter blending on the pixels of the image portion of the hair region and the pixels at corresponding positions of the deformed target color image, to obtain a first target image with the target hair color.
Optionally, the deformation processing module is specifically configured to:
determine the center line of the target color image; and
obtain the trajectory data of the center line as the trajectory data of the line at the predetermined position.
Optionally, the image processing device further includes a scaling processing module;
the scaling processing module is configured to determine, according to the hair region, the bounding rectangle of the shape of the hair region, and to scale the target color image based on the bounding rectangle so that the size of the target color image matches the size of the bounding rectangle;
the deformation processing module is configured to deform the scaled target color image based on the trajectory data of the hairs.
Optionally, the first processing module is specifically configured to:
obtain the position of a first pixel to be processed in the image portion of the hair region and the pixel values of its red, green, and blue primary color components;
obtain the pixel values of the red, green, and blue primary color components of a second pixel to be processed in the deformed target color image at the position corresponding to the first pixel to be processed; and
taking the pixel value of each primary color component of the first pixel to be processed as the base color and the pixel value of each primary color component of the second pixel to be processed as the blend color, perform color-filter blending on the pixel values of the same primary color component of the first and second pixels to be processed.
Optionally, the third processing module is specifically configured to:
obtain the luminance information of the image portion of the hair region, to generate a luminance channel image of the image portion of the hair region;
perform histogram equalization on the luminance channel image to obtain a second weight image;
determine a second weight coefficient according to the second weight image; and
perform alpha blending on the image portion of the hair region and the second target image based on the second weight coefficient, to obtain a third target image with the target hair color.
Correspondingly, an embodiment of the present invention also provides a storage medium storing a plurality of instructions suitable for being loaded by a processor to perform the steps of the above image processing method.
In the image processing method of the embodiment of the present invention, the image to be processed and the target hair color are determined according to an image processing request; the hair region of the image to be processed is determined and a target color image is generated according to the target hair color; trajectory data of the hairs in the image to be processed is then obtained, and the target color image is deformed based on that trajectory data; afterwards, color-filter blending is performed on the pixels of the image portion of the hair region and the pixels at corresponding positions of the deformed target color image, to obtain a first target image with the target hair color. In this scheme, the target color image is deformed based on the trajectory data of the hairs, so that textures corresponding to the hairs are obtained on the target color image and the color to which each individual hair should be changed is determined from the target color image. The hair color change is therefore not constrained by the original hair color in the image to be processed, and the hair color in the image can be changed to a dark color, a light color, or a gradient color, so that target hair colors covering dark, light, and gradient shades can be offered for the user to choose from, meeting users' diverse needs.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1a is a scene diagram of an image processing method provided by an embodiment of the present invention;
Fig. 1b is a flow chart of an image processing method provided by an embodiment of the present invention;
Fig. 1c is a flow chart of deforming the target color image in an image processing method provided by an embodiment of the present invention;
Fig. 2a is a flow chart of the processing after the first target image is obtained in an image processing method provided by an embodiment of the present invention;
Fig. 2b is a flow chart of alpha blending the pixels at corresponding positions of the image portion of the hair region and the first target image in an image processing method provided by an embodiment of the present invention;
Fig. 2c is a flow chart of alpha blending the pixels at corresponding positions of the image portion of the hair region and the second target image in an image processing method provided by an embodiment of the present invention;
Fig. 3a is a structural diagram of an image processing device provided by an embodiment of the present invention;
Fig. 3b is another structural diagram of an image processing device provided by an embodiment of the present invention;
Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present invention.
Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
An embodiment of the present invention provides an image processing method, device, and storage medium.
The image processing device may specifically be integrated in a terminal such as a smartphone, tablet computer, notebook computer, or personal computer.
For example, referring to Fig. 1a, the image processing device may provide an image processing interface in which the user can open a picture and edit it with the editing functions the interface provides, for example adding text, applying facial make-up, or changing the hair color. In the hair color changing interface, a variety of target hair colors, such as dark colors, light colors, and gradient colors, are provided for the user to choose from; the user can select one of them as needed to change the hair color in the picture to that target hair color.
Specifically, the image processing device can obtain the image to be processed and the target hair color selected by the user, the target hair color being, for example, yellow, brown, or red; then determine the hair region of the image to be processed and generate a target color image according to the target hair color; obtain the trajectory data of the hairs in the image to be processed and deform the target color image based on that trajectory data; and afterwards perform color-filter blending on the pixels of the image portion of the hair region and the pixels at corresponding positions of the deformed target color image, to obtain a first target image with the target hair color.
The deformation of the target color image can be performed with a Moving Least Squares image deformation algorithm. For example, the trajectory data of each hair is obtained in turn, along with the trajectory data of the line at the predetermined position in the target color image; then, for each hair, the positions of the pixels of the target color image are adjusted according to the trajectory data of that hair and the trajectory data of the line at the predetermined position, yielding multiple hair color images; the pixels at corresponding positions of these hair color images are then combined by color-filter blending to obtain the deformed target color image.
By deforming the target color image based on the hair trajectories, textures corresponding to the hairs are obtained on the target color image, and the target color image with the hair textures is then blended with the hair region of the image to be processed by color-filter blending. In this way, the hair color in the image can be changed to a dark color, a light color, or a gradient color, so that target hair colors covering dark, light, and gradient shades can be offered for the user to choose from, meeting users' diverse needs.
Each of these is described in detail below.
Referring to Fig. 1b, this embodiment is described from the perspective of the image processing device, which may specifically be integrated in a device such as a terminal; the terminal may be, for example, a smartphone, a tablet computer, or a personal computer.
As shown in Fig. 1b, the specific flow of the image processing method may be as follows:
101. Obtain the image to be processed and the target hair color selected by the user.
For example, in the image processing interface the user can edit a picture by adding text, applying facial make-up, changing the hair color, and so on. In the hair color editing interface, a variety of target hair colors, such as dark colors, light colors, and gradient colors, can be provided for the user to choose from, and the user can select one of them as needed to change the hair color in the picture to that target hair color. The image to be processed and the target hair color selected by the user are obtained from the picture the user opened in the image processing interface and the target hair color the user selected.
102. Determine the hair region of the image to be processed, and generate a target color image according to the target hair color.
It will be understood that the hair region of the image to be processed is the region where the hair pixels of the image are located; determining the hair region mainly means determining the hair pixels of the image to be processed. Hair detection can be performed on the image to be processed according to the characteristic information of hair pixels, and the hair region is then determined from the detected hair pixels. Hair detection can be done in a variety of ways, for example hair color detection based on color-space statistics, on a probabilistic model or a Gaussian model, or on neural-network machine learning.
The target hair color is the color selected by the user, such as red, brown, or light yellow. The target color image is an image that displays the target hair color selected by the user and is determined by that selection; for example, if the target hair color selected by the user is red, the target color image is an image that displays red.
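As an illustration only (this code is not part of the patent text), a plain single-color target color image could be produced roughly as follows; the helper name and the example RGB value are assumptions:

```python
import numpy as np

def make_target_color_image(height, width, rgb=(200, 30, 30)):
    """Fill an image with the user-selected target hair color.
    A gradient target color could instead interpolate between two RGB
    values along the image height."""
    return np.full((height, width, 3), rgb, dtype=np.uint8)
```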
103. Obtain the trajectory data of the hairs in the hair region.
For example, the trajectories of the hairs in the hair region can first be determined and the trajectory data then obtained from them. Hair trajectories can be obtained in a variety of ways, for example by detecting the direction of each hair, which can in turn be detected with a Gabor filter. As those skilled in the art will appreciate, a Gabor filter is a linear filter used for edge detection; its frequency and orientation representations are close to those of the human visual system, and it is often used for texture representation and description, extracting relevant features at different scales and orientations in the frequency domain. A Gabor filter can therefore be used to extract features along a whole hair, determine the direction of the whole hair, and thereby determine its trajectory. The trajectory data of a hair may be, for example, the position information of multiple pixels on the hair trajectory; after the trajectory of a hair is determined, the position information of multiple pixels on it can be sampled along the trajectory to obtain the trajectory data of the hair.
104. Deform the target color image based on the trajectory data of the hairs.
Deforming the target color image means adjusting the positions of its pixels to achieve an image deformation. In this embodiment the target color image is deformed based on the trajectory data of the hairs, so that it acquires lines corresponding to the hair trajectories in the image to be processed.
The target color image can be deformed in a variety of ways, for example with a Moving Least Squares image deformation algorithm, an image deformation algorithm based on two-dimensional interpolation, or an image deformation algorithm based on contour polygons. The deformation of the target color image in this embodiment is described in detail below with the Moving Least Squares image deformation algorithm.
Before the target color image is deformed based on the trajectory data of the hairs, the following steps may also be performed: determine the bounding rectangle of the shape of the hair region according to the hair region of the image to be processed, and scale the target color image based on the bounding rectangle so that the size of the target color image matches the size of the bounding rectangle.
Scaling the target color image makes its size roughly the same as that of the hair region. The shape of the hair region is usually irregular; to keep the sizes of the hair region and the target color image roughly the same, in this embodiment the bounding rectangle of the shape of the hair region is computed and its size is taken as the size of the hair region, and the target color image is then scaled so that its length and width equal the length and width of the bounding rectangle. In this way the size of the target color image is roughly the same as that of the hair region, and the pixels of the target color image correspond one-to-one with the pixel positions in the hair region. A sketch of this step is given below.
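A minimal sketch of this scaling step with OpenCV, assuming the hair region is available as a binary mask (the helper name is hypothetical):

```python
import cv2

def scale_to_hair_region(color_img, hair_mask):
    """Resize the target color image to the bounding rectangle of the hair
    region so its pixels line up one-to-one with hair-region pixels.
    hair_mask is a uint8 image that is non-zero on hair pixels."""
    x, y, w, h = cv2.boundingRect(hair_mask)
    scaled = cv2.resize(color_img, (w, h))   # dsize is (width, height)
    return scaled, (x, y, w, h)
```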
Deforming the target color image based on the trajectory data of the hairs specifically means deforming the scaled target color image based on the trajectory data of the hairs; for example, as shown in Fig. 1c, it may include the following sub-steps:
1041. Obtain the trajectory data of each hair.
The trajectory of a hair can be understood as a line segment made up of multiple pixels, and the trajectory data of a hair is the position information of multiple pixels on the hair trajectory. The position information here refers to the position of a pixel within the image range corresponding to the bounding rectangle of the hair region: a coordinate system set up in that image range serves as the reference coordinate system, and the trajectory data of a hair is the coordinate information, in that reference coordinate system, of the pixels on the hair trajectory. The trajectory of each hair can be represented by a point array in which each element is the position information of one pixel on the hair trajectory.
For example, if the number of hair trajectories is m, i.e. the trajectories of m hairs are detected in the image to be processed, then there are correspondingly m sets of trajectory data, and the trajectory data of each hair is a point array dstlist-n (p1, p2, ..., pn) of length n (n >= 1), where each element pn of the point array is the coordinate information of one pixel on the hair trajectory, i.e. pn = di(xi, yi), with di denoting the i-th pixel on the hair trajectory, xi its x-axis coordinate in the reference coordinate system, and yi its y-axis coordinate in the reference coordinate system.
The elements of the point array dstlist-n (p1, p2, ..., pn) may be the coordinate information of all pixels on the hair trajectory, or of several pixels chosen from them, for example one pixel chosen every predetermined number of pixels as one element of the array.
1042. Obtain the trajectory data of the line at the predetermined position in the scaled target color image.
The line at the predetermined position can be any line along the y-axis direction in the scaled target color image, for example its center line, a line at a position to the left of the center line, or a line at a position to the right of the center line. Taking the center line as an example, step 1042 specifically includes: determining the center line of the scaled target color image, whose length direction is the y-axis direction, and then obtaining the trajectory data of the center line as the trajectory data of the line at the predetermined position.
The trajectory data of the center line refers to the position information, in the target color image, of the pixels on the center line trajectory; it can be represented by a point array srclist (s1, s2, ..., sn), whose length is the same as that of the point array dstlist-n (p1, p2, ..., pn), namely n.
1043. For each hair in turn, adjust the positions of the pixels of the scaled target color image according to the trajectory data of that hair and the trajectory data of the line at the predetermined position, to obtain multiple hair color images.
According to the trajectory data of a single hair and the trajectory data of the line at the predetermined position, the positions of the pixels of the scaled target color image are adjusted to obtain the hair color image corresponding to that single hair. The positions of the pixels of the scaled target color image are adjusted with the point-based Moving Least Squares image deformation algorithm.
In the point-based Moving Least Squares image deformation algorithm, an image is deformed mainly by changing the positions of its pixels according to a deformation function. In this embodiment, the trajectory data of the center line of the target color image is set as the set of original control point positions, and the trajectory data of a single hair is set as the set of control point positions after deformation. According to the principle of the point-based Moving Least Squares image deformation algorithm, a deformation function can be determined from the set of original control points and the set of control point positions after deformation, and the scaled target color image can then be deformed according to this function, i.e. the positions of the pixels of the target color image are adjusted. The variable of the deformation function is the coordinate of any pixel of the target color image, and the value of the deformation function is the pixel coordinate after deformation, so the target color image with adjusted pixel positions, i.e. the hair color image corresponding to the single hair, is obtained. Proceeding in the same way for every hair yields the hair color image corresponding to each hair, and thus multiple hair color images. A simplified sketch of such a deformation is given below.
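The patent gives no code for the deformation, but a simplified, unoptimized sketch of an affine Moving Least Squares warp under the above reading (center-line samples as original control points, one hair's trajectory samples as deformed control points) might look as follows; the function name and the backward-mapping choice are assumptions:

```python
import numpy as np

def mls_affine_warp(image, src_pts, dst_pts, alpha=1.0, eps=1e-8):
    """Affine Moving Least Squares warp (after Schaefer et al. 2006).
    src_pts: samples on the center line of the scaled target color image.
    dst_pts: samples on one detected hair trajectory.
    The output is built by backward mapping: for every output pixel the MLS
    problem is solved with the point sets swapped, and the source image is
    sampled at the resulting location (nearest neighbour for brevity)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    p = np.asarray(dst_pts, dtype=float)   # control points in the output image
    q = np.asarray(src_pts, dtype=float)   # where they come from in the source
    for y in range(h):
        for x in range(w):
            v = np.array([x, y], dtype=float)
            wgt = 1.0 / (np.sum((p - v) ** 2, axis=1) + eps) ** alpha
            p_star = (wgt[:, None] * p).sum(0) / wgt.sum()
            q_star = (wgt[:, None] * q).sum(0) / wgt.sum()
            ph, qh = p - p_star, q - q_star
            # weighted moment matrices of the centred control points
            A = (wgt[:, None, None] * ph[:, :, None] * ph[:, None, :]).sum(0)
            B = (wgt[:, None, None] * ph[:, :, None] * qh[:, None, :]).sum(0)
            # regularised to tolerate nearly collinear control points
            M = np.linalg.solve(A + eps * np.eye(2), B)
            sx, sy = (v - p_star) @ M + q_star
            xi, yi = int(round(sx)), int(round(sy))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = image[yi, xi]
    return out
```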
1044. Combine the pixels at corresponding positions of the multiple hair color images by color-filter blending, to obtain the deformed target color image.
Color-filter (screen) blending of the pixels of two images is the process of performing a screen calculation on the same primary color component of the pixels of the two images. Each pixel consists of red (R), green (G), and blue (B) primary color components, whose pixel values range from 0 to 255. One of the two images is set as the upper-layer image and the other as the lower-layer image (i.e. the background image), and the color-filter blending formula is:
P0 = 1 - (1 - P1) * (1 - P2)    (1)
where P1 is the base color of the color-filter blending, its value being the pixel value of any primary color component of a pixel at any position in the upper-layer image; P2 is the blend color of the color-filter blending, its value being the pixel value of the corresponding primary color component of the pixel at the corresponding position in the lower-layer image (i.e. the background image); and P0 is the pixel value of the corresponding primary color component of the pixel at the corresponding position in the new image obtained after the blending. For example, if P1 is the pixel value of the red component of the pixel at point Q (x1, y1) of the upper-layer image, then P2 is the pixel value of the red component of the pixel at the position corresponding to Q in the lower-layer image, and P0 is the pixel value of the red component of the pixel at position Q in the new image obtained by color-filter blending the pixels of the upper-layer and lower-layer images.
In this embodiment, when the trajectories of m hairs are detected in the image to be processed, the number of hair color images is also m. Let the m hair color images be imgH1, imgH2, imgH3, ..., imgHm. They can be color-filter blended in a variety of ways, for example as follows:
First the pixels at corresponding positions of two hair color images are color-filter blended using formula (1); the resulting new image is then color-filter blended with the pixels at corresponding positions of the next hair color image, and so on until the last hair color image has been blended.
Specifically, imgH1 and imgH2 are color-filter blended with formula (1), imgH1 being the upper-layer image and imgH2 the lower-layer image. For each primary color component of the pixels of imgH1 and imgH2 at the same position, the color-filter blending formulas are:
P0r = 1 - (1 - P1r) * (1 - P2r)
P0g = 1 - (1 - P1g) * (1 - P2g)
P0b = 1 - (1 - P1b) * (1 - P2b)
where P1r, P1g, P1b are the pixel values of the red, green, and blue primary color components of a pixel at any position of imgH1; P2r, P2g, P2b are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of imgH2; and P0r, P0g, P0b are the pixel values of the red, green, and blue primary color components at the corresponding position of the new hair image after the blending.
The above formulas are applied in turn to the pixel values of the three primary color components of the pixels at each pair of corresponding positions of imgH1 and imgH2, giving a new hair color image after one round of color-filter blending. Formula (1) is then used to blend the pixel values of the three primary color components of the pixels at corresponding positions of this new hair color image (as the upper-layer image) and the next hair color image imgH3 (as the lower-layer image), and so on until the last hair color image imgHm has been processed. The final result is the deformed target color image; that is, the deformed target color image is the image obtained by color-filter blending the multiple hair color images. A sketch of this folding process is given below.
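A compact sketch of formula (1) and of folding the m hair color images together; the pixel values are normalised to [0, 1] before applying the formula (an assumption made for numerical consistency), and the helper names are not from the patent:

```python
import numpy as np

def color_filter_blend(upper, lower):
    """Screen / color-filter blend: P0 = 1 - (1 - P1) * (1 - P2), applied per
    primary color component on values normalised to [0, 1]."""
    p1 = upper.astype(np.float32) / 255.0
    p2 = lower.astype(np.float32) / 255.0
    p0 = 1.0 - (1.0 - p1) * (1.0 - p2)
    return (p0 * 255.0 + 0.5).astype(np.uint8)

def fold_hair_color_images(hair_images):
    """Sub-step 1044: blend imgH1..imgHm pairwise until one image is left."""
    result = hair_images[0]
    for nxt in hair_images[1:]:
        result = color_filter_blend(result, nxt)
    return result
```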
105. Perform color-filter blending on the pixels of the image portion of the hair region and the pixels at corresponding positions of the deformed target color image, to obtain a first target image with the target hair color.
For example, this may specifically include the following steps: obtain the position of a first pixel to be processed in the image portion of the hair region and the pixel values of its red, green, and blue primary color components; then obtain the pixel values of the red, green, and blue primary color components of the second pixel to be processed in the deformed target color image at the position corresponding to the first pixel to be processed; and, taking the pixel value of each primary color component of the first pixel to be processed as the base color and the pixel value of each primary color component of the second pixel to be processed as the blend color, color-filter blend the pixel values of the same primary color component of the first and second pixels to be processed. The first pixel to be processed is any pixel of the image portion of the hair region, and the second pixel to be processed is the pixel of the deformed target color image at the position corresponding to the first pixel to be processed.
For ease of description, the deformed target color image is denoted imgL below, and the image portion of the hair region in the image to be processed is denoted imgM. Formula (1) above is again used to color-filter blend the pixel values of each primary color component of the pixels of imgM and imgL at corresponding positions, imgM being the upper-layer image and imgL the lower-layer image; the specific calculation is the same as the color-filter blending of imgH1 and imgH2 described above and is not repeated here. By applying formula (1) in turn to the pixel values of the same primary color component of the pixels at each pair of corresponding positions of imgM and imgL, and taking each result as the new pixel value of that primary color component at that position, the first target image with the target hair color is obtained. The hair color in the image to be processed is thus changed to the target hair color selected by the user, achieving the hair dyeing effect.
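Using the same helper as above, step 105 reduces to one more blend of imgM (upper layer) with imgL (lower layer); this is a sketch only and assumes the two arrays are aligned as described:

```python
# imgM: image portion of the hair region; imgL: deformed target color image.
first_target_region = color_filter_blend(imgM, imgL)
```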
In this embodiment, the target color image is generated according to the target hair color selected by the user and is deformed using the trajectory data of the hairs in the image to be processed, so that textures corresponding to the hairs are obtained on the target color image and the color to which each individual hair should be changed is determined from the target color image. The hair color change is therefore not constrained by the original hair color in the image to be processed, and the hair color in the image can be changed to a dark color, a light color, or a gradient color, so that target hair colors covering dark, light, and gradient shades can be offered for the user to choose from, meeting users' diverse needs.
Referring to Fig. 2a, in another embodiment of the image processing method of the present invention, after the first target image with the target hair color is obtained, the following flow is for example also included:
201. Perform alpha blending on the pixels of the image portion of the hair region and the pixels at corresponding positions of the first target image, to obtain a second target image with the target hair color.
Specifically, as shown in Fig. 2b, this includes the following sub-steps:
2011. Generate a hair-region image according to the hair region of the image to be processed.
Specifically, the hair-region image is generated from the pixels in the hair region; it is an image that shows only the image portion of the hair region, the background outside the hair region being black. In this embodiment the hair-region image is a single-channel grayscale image, and its generation may for example proceed as follows: extract the pixel information of the hair region of the image to be processed, e.g. the pixel values of the red, green, and blue primary color components of each pixel; generate a three-channel image from the extracted pixel information; and then convert the three-channel image into a single-channel grayscale image, obtaining the single-channel grayscale hair-region image. A sketch is given below.
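A sketch of sub-step 2011 with OpenCV (the BGR channel order and the helper name are assumptions):

```python
import cv2
import numpy as np

def hair_region_gray_image(image_bgr, hair_mask):
    """Copy only hair pixels onto a black background, then convert the
    three-channel result to the single-channel grayscale image imgF."""
    three_channel = np.zeros_like(image_bgr)
    three_channel[hair_mask > 0] = image_bgr[hair_mask > 0]
    return cv2.cvtColor(three_channel, cv2.COLOR_BGR2GRAY)
```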
2012. Perform soft-light blending on the pixels of the hair-region image and the pixels at corresponding positions of the deformed target color image, to obtain a first weight image.
Soft-light blending of the pixels of two images is the process of performing a soft-light calculation on the same primary color component of the pixels of the two images; one of the two images is the upper-layer image and the other the lower-layer image. The soft-light blending formula is:
U0 = 2*U1*U2 + U2*U2*(1 - 2*U1), when U1 <= 0.5
U0 = 2*U2*(1 - U1) + sqrt(U2)*(2*U1 - 1), when U1 > 0.5    (2)
where sqrt(U2) is the square root of U2, U1 is the pixel value of any primary color component of a pixel at any position of the upper-layer image, U2 is the pixel value of the corresponding primary color component of the pixel at the corresponding position of the lower-layer image, and U0 is the pixel value of the corresponding primary color component of the pixel at the corresponding position of the new image obtained after soft-light blending.
In this embodiment, formula (2) is used to soft-light blend the pixels at corresponding positions of the hair-region image and the deformed target color image imgL. The hair-region image is denoted imgF; imgF is the upper-layer image and imgL the lower-layer image. Since imgF is a single-channel grayscale image, it has only one primary color component, whose pixel value is also called the gray value. For each primary color component of the pixels of imgF and imgL at the same position, the result values U0r, U0g, U0b are therefore obtained by substituting U1 = U1x and U2 = U2r, U2g, U2b respectively into formula (2), where U1x is the pixel value of the primary color component of a pixel at any position of imgF; U2r, U2g, U2b are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of imgL; and U0r, U0g, U0b are the pixel values of the red, green, and blue primary color components at the corresponding position of the resulting first weight image.
Formula (2) is applied in turn to the pixel values of the three primary color components of the pixels at each pair of corresponding positions of imgF and imgL, giving the first weight image after one round of soft-light blending.
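A sketch of the soft-light blend of the single-channel imgF over the RGB imgL, using the standard soft-light formula assumed above for equation (2); pixel values are normalised to [0, 1] first:

```python
import numpy as np

def soft_light_blend(img_f, img_l):
    """img_f: single-channel upper layer (hair-region grayscale image).
    img_l: three-channel lower layer (deformed target color image).
    Returns the first weight image."""
    u1 = (img_f.astype(np.float32) / 255.0)[..., None]   # broadcast over RGB
    u2 = img_l.astype(np.float32) / 255.0
    low = 2.0 * u1 * u2 + u2 * u2 * (1.0 - 2.0 * u1)
    high = 2.0 * u2 * (1.0 - u1) + np.sqrt(u2) * (2.0 * u1 - 1.0)
    u0 = np.where(u1 <= 0.5, low, high)
    return (u0 * 255.0 + 0.5).astype(np.uint8)
```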
2013. Determine a first weight coefficient according to the first weight image.
2014. Perform alpha blending on the pixels of the image portion of the hair region and the pixels at corresponding positions of the first target image based on the first weight coefficient, to obtain a second target image with the target hair color.
Alpha blending of the pixels of two images is the process of performing an alpha-blend calculation on the same primary color component of the pixels of the two images; one of the two images is the upper-layer image and the other the lower-layer image. The alpha blending formula is:
D0 = A*D1 + (1 - A)*D2    (3)
where D1 is the pixel value of any primary color component of a pixel at any position of the upper-layer image, D2 is the pixel value of the corresponding primary color component of the pixel at the corresponding position of the lower-layer image, and D0 is the pixel value of the corresponding primary color component of the pixel at the corresponding position of the second target image obtained after alpha blending the two images. A is the first weight coefficient, whose value is the pixel value of the corresponding primary color component of the pixel at the corresponding position of the first weight image.
In this embodiment, formula (3) is used to alpha blend the pixels at corresponding positions of the image portion imgM of the hair region and the first target image; imgM is the lower-layer image and the first target image is the upper-layer image. For each primary color component of the pixels of imgM and the first target image at the same position, the alpha blending formulas are:
D0r = Ar*D1r + (1 - Ar)*D2r
D0g = Ag*D1g + (1 - Ag)*D2g
D0b = Ab*D1b + (1 - Ab)*D2b
where D1r, D1g, D1b are the pixel values of the red, green, and blue primary color components of a pixel at any position of the first target image (the upper layer); D2r, D2g, D2b are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of imgM (the lower layer); Ar, Ag, Ab are the first weight coefficients, whose values are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of the first weight image; and D0r, D0g, D0b are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of the second target image obtained after alpha blending imgM and the first target image.
The above formulas are applied in turn to the pixel values of the three primary color components of the pixels at each pair of corresponding positions of imgM and the first target image, giving the second target image after alpha blending.
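A sketch of formula (3) with a per-pixel weight image; which operand is the upper layer follows the reading chosen above, and the helper name is an assumption:

```python
import numpy as np

def alpha_blend(upper, lower, weight_img):
    """D0 = A*D1 + (1-A)*D2 per primary color component, where A comes from
    the weight image (first or second weight image), scaled to [0, 1]."""
    a = weight_img.astype(np.float32) / 255.0
    if a.ndim == 2:                    # single-channel weight: broadcast to RGB
        a = a[..., None]
    d1 = upper.astype(np.float32)
    d2 = lower.astype(np.float32)
    return (a * d1 + (1.0 - a) * d2 + 0.5).astype(np.uint8)
```

Under the same reading, step 2014 would then be, roughly, second_target = alpha_blend(first_target, imgM, first_weight_image).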
By further applying soft-light blending and alpha blending to the image portion of the hair region, the hair edge transition can be optimized, making the image transition more natural.
202. Perform alpha blending on the pixels of the image portion of the hair region and the pixels at corresponding positions of the second target image, to obtain a third target image with the target hair color.
Specifically, as shown in Fig. 2c, this may include the following sub-steps:
2021. Obtain the luminance information of the image portion of the hair region in the image to be processed, to generate a luminance channel image of the image portion of the hair region.
For example, the pixel values of the red, green, and blue primary color components of each pixel of the image portion of the hair region can be obtained, the hue H, saturation S, and brightness V of each pixel in the HSV color space can be computed from those primary color component values, and the brightness value V of each pixel can then be extracted to generate the luminance channel image of the image portion of the hair region.
2022. Perform histogram equalization on the luminance channel image to obtain a second weight image.
Histogram equalization mainly means adjusting the contrast of an image according to its histogram information: for example, the number of pixels at each gray level of the luminance channel image is counted, the gray levels with more pixels are stretched, and the gray levels with fewer pixels are compressed, which improves contrast and gray-tone variation. The degree of stretching and compression of each gray level can be set as needed; in this embodiment it can be determined from the pixel distribution of the image portion of the hair region. Performing histogram equalization on the luminance channel image of the image portion of the hair region improves the contrast of the luminance channel image. A sketch of sub-steps 2021 and 2022 is given below.
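Sub-steps 2021 and 2022 could be sketched as follows with OpenCV (the BGR input order and the helper name are assumptions):

```python
import cv2

def second_weight_image(hair_region_bgr):
    """Extract the V (brightness) channel in HSV space and histogram-equalise
    it; the result serves as the second weight image."""
    hsv = cv2.cvtColor(hair_region_bgr, cv2.COLOR_BGR2HSV)
    luminance = hsv[:, :, 2]
    return cv2.equalizeHist(luminance)
```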
2023. Determine a second weight coefficient according to the second weight image.
2024. Perform alpha blending on the image portion of the hair region and the second target image based on the second weight coefficient, to obtain a third target image with the target hair color.
Formula (3) above can be used to alpha blend the pixels at corresponding positions of the image portion imgM of the hair region and the second target image. In this step, when formula (3) is used, A is the second weight coefficient, whose value is the pixel value of the primary color component of the pixel of the second weight image. imgM is the lower-layer image and the second target image is the upper-layer image, so for each primary color component of the pixels of imgM and the second target image at the same position, the alpha blending formulas are:
D0r = Ar*D1r + (1 - Ar)*D2r
D0g = Ag*D1g + (1 - Ag)*D2g
D0b = Ab*D1b + (1 - Ab)*D2b
where D1r, D1g, D1b are the pixel values of the red, green, and blue primary color components of a pixel at any position of the second target image (the upper layer); D2r, D2g, D2b are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of imgM (the lower layer); Ar, Ag, Ab are the second weight coefficients, whose values are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of the second weight image; and D0r, D0g, D0b are the pixel values of the red, green, and blue primary color components of the pixel at the corresponding position of the third target image obtained after alpha blending imgM and the second target image.
203. Output the third target image with the target hair color.
The third target image with the target hair color is shown to the user as the final target image.
In this embodiment, the second weight image obtained after histogram equalization is used to determine the weight coefficients of the alpha blending, and the image portion of the hair region in the image to be processed and the second target image are alpha blended based on those weight coefficients; this improves the contrast of the third target image, so that the target image after the hair color change is displayed better.
Of course, in other embodiments, the second weight coefficient determined from the second weight image may also be used to alpha blend the image portion of the hair region and the first target image, and the image obtained after that blending may be output to the user as the final target image.
In order to better implement the above method, an embodiment of the present invention also provides an image processing apparatus, referred to as an image processing device for short, which can for example be integrated in a terminal device such as a mobile phone. As shown in Fig. 3a, the image processing device includes: a first acquisition module 301, a determining module 302, a generation module 303, a second acquisition module 304, a deformation processing module 305, and a first processing module 306.
The first acquisition module 301 is configured to obtain the image to be processed and the target hair color selected by the user. For example, the image to be processed and the target hair color selected by the user are obtained from the picture the user opened in the image processing interface and the target hair color the user selected.
The determining module 302 is configured to determine the hair region of the image to be processed. Hair detection can be performed on the image to be processed according to the characteristic information of hair pixels, and the hair region of the image to be processed is determined from the detected hair pixels. Hair detection can be done in a variety of ways, for example hair color detection based on color-space statistics, on a probabilistic model or a Gaussian model, or on neural-network machine learning.
The generation module 303 is configured to generate the target color image according to the target hair color. The target color image is an image that displays the target hair color selected by the user and is determined by that selection; for example, if the target hair color selected by the user is red, the target color image is an image that displays red.
The second acquisition module 304 is configured to obtain the trajectory data of the hairs in the hair region. Hair trajectories can be obtained in a variety of ways, for example by detecting the direction of each hair, which can in turn be detected with a Gabor filter. After the trajectory of a hair is determined, the position information of multiple pixels on the hair trajectory can be sampled along the trajectory to obtain the trajectory data of the hair.
The deformation processing module 305 is configured to deform the target color image based on the trajectory data of the hairs. Deforming the target color image means adjusting the positions of its pixels to achieve an image deformation. The target color image can be deformed in a variety of ways, for example with a Moving Least Squares image deformation algorithm, an image deformation algorithm based on two-dimensional interpolation, or an image deformation algorithm based on contour polygons.
Optionally, the image processing device may further include a scaling processing module 307.
The scaling processing module 307 is configured to scale the target color image before the deformation processing module 305 deforms it. For example, the scaling processing module 307 is configured to determine the bounding rectangle of the shape of the hair region according to the hair region of the image to be processed, and to scale the target color image based on the bounding rectangle so that the size of the target color image matches the size of the bounding rectangle. Scaling the target color image based on the bounding rectangle specifically means adjusting the length and width of the target color image so that they equal the length and width of the bounding rectangle, which makes the size of the target color image roughly the same as that of the hair region.
Taking the Moving Least Squares image deformation algorithm as an example, the deformation processing module 305 is specifically configured to deform the scaled target color image based on the hair trajectory data. For example, the deformation processing module 305 is configured to obtain the trajectory data of each hair, the trajectory data of each hair being a point array dstlist-n (p1, p2, ..., pn) in which each element pn is the coordinate information of one pixel on the hair trajectory, and to obtain the trajectory data of the line at the predetermined position in the scaled target color image. The line at the predetermined position can for example be the center line of the scaled target color image, and the trajectory data of the line at the predetermined position is obtained by obtaining the trajectory data of the center line of the target color image. The trajectory data of the center line refers to the position information, in the target color image, of the pixels on the center line trajectory; it can for example be represented by a point array srclist (s1, s2, ..., sn), whose length is the same as that of the point array dstlist-n (p1, p2, ..., pn), namely n.
The deformation processing module 305 is then configured to adjust, for each hair in turn, the positions of the pixels of the scaled target color image according to the trajectory data of that hair and the trajectory data of the line at the predetermined position, to obtain multiple hair color images. The trajectory data of the center line of the target color image is set as the set of original control point positions, and the trajectory data of a single hair is set as the set of control point positions after deformation. According to the principle of the point-based Moving Least Squares image deformation algorithm, a deformation function can be determined from the set of original control points and the set of control point positions after deformation, and the scaled target color image can then be deformed according to this function, i.e. the positions of the pixels of the target color image are adjusted. The variable of the deformation function is the coordinate of any pixel of the target color image, and the value of the deformation function is the pixel coordinate after deformation, so the target color image with adjusted pixel positions, i.e. the hair color image corresponding to the single hair, is obtained. Proceeding in the same way for every hair yields the hair color image corresponding to each hair, and thus multiple hair color images.
Afterwards, the deformation processing module 305 is configured to combine the pixels at corresponding positions of the multiple hair color images by color-filter blending, to obtain the deformed target color image.
For example, obtaining two images for colour filter mixed processing, one of image is set to upper layer images, and another image is made
Lower image (namely background image) is set to, then the calculation formula of colour filter mixing is as follows:
P0=1- (1-P1) * (1-P2)
Wherein, P1 represents the primary color values of colour filter hybrid algorithm, and its value is appointing for the pixel of any position in upper layer images
The pixel value of one primary color component, P2 represents the mixing colour in colour filter hybrid algorithm, and its value is lower image (namely Background
Picture) in correspondence position pixel corresponding primary color component pixel value, P0 represents that the new images obtained after colour filter mixed processing exist
The pixel value of the corresponding primary color component of the pixel of correspondence position.
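A direct NumPy rendering of this formula (assuming float images normalized to [0, 1]; the function name is illustrative) might look like:

```python
import numpy as np

def screen_blend(top, bottom):
    """'Color filter' (screen) blend: P0 = 1 - (1 - P1) * (1 - P2).

    top, bottom: float arrays in [0, 1] with the same shape, e.g. (H, W, 3).
    """
    return 1.0 - (1.0 - np.asarray(top)) * (1.0 - np.asarray(bottom))
```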
In this embodiment, when the tracks of m hair strands are detected in the image to be processed, the number of hair color images is also m; denote them imgH1, imgH2, imgH3, ..., imgHm. The m hair color images can be screen-blended in various ways, for example as follows:
First, using the screen blending formula above, imgH1 and imgH2 are blended, with imgH1 as the upper layer and imgH2 as the lower layer; the screen blending calculation is performed on the pixel values of the three primary color components of the pixels at each corresponding position of imgH1 and imgH2, producing a new hair color image. Next, the same formula is applied to the pixel values of the three primary color components of the pixels at corresponding positions of this new hair color image and the next hair color image imgH3, with the new image as the upper layer and imgH3 as the lower layer. This continues in the same way up to the last hair color image imgHm. The final result of the calculation is the deformed target color image; that is, the deformed target color image is the image obtained after screen-blending all the hair color images.
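Because the screen blend is applied pairwise and accumulated, the m hair color images can be folded with a simple reduction. The sketch below assumes the images are float arrays in [0, 1] held in a list; the names are illustrative.

```python
from functools import reduce

def fold_hair_layers(hair_layers):
    """Screen-blend imgH1 ... imgHm in sequence; the running result plays the
    role of the upper layer and the next hair color image the lower layer."""
    screen = lambda top, bottom: 1.0 - (1.0 - top) * (1.0 - bottom)
    return reduce(screen, hair_layers)
```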
The first processing module 306 is configured to screen-blend the image part of the hair region with the pixels at the corresponding positions of the deformed target color image, obtaining a first target image with the target hair color.
For example, the first processing module 306 may obtain the position and the red, green and blue (RGB) component pixel values of a first pixel to be processed in the image part of the hair region, and then obtain the RGB component pixel values of the second pixel to be processed at the corresponding position in the deformed target color image. Taking the pixel value of each primary color component of the first pixel as the base color value and the pixel value of the corresponding component of the second pixel as the blend color value, it screen-blends the matching primary color components of the first and second pixels to be processed.
Denoting the deformed target color image by imgL and the image part of the hair region in the image to be processed by imgM, the screen blending formula above is applied to the pixel values of each primary color component of the pixels at corresponding positions of imgM and imgL, with imgM as the upper layer and imgL as the lower layer; the calculation proceeds exactly as described above for imgH1 and imgH2 and is not repeated here. The screen blending calculation is performed in turn on the matching primary color components of the pixels at every corresponding position of imgM and imgL, and each result becomes the new pixel value of that component at that position. This yields the first target image with the target hair color, so that the hair color in the image to be processed is changed to the target hair color selected by the user, realizing an image hair-dyeing effect.
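As a sketch of this step (assuming a boolean hair mask and float RGB images in [0, 1]; names are illustrative), restricting the screen blend to the hair pixels could look like:

```python
import numpy as np

def apply_hair_color(img, deformed_target, hair_mask):
    """Screen-blend the deformed target color image (imgL) over the hair
    region of the image to be processed (imgM), leaving other pixels unchanged."""
    blended = 1.0 - (1.0 - img) * (1.0 - deformed_target)
    out = img.copy()
    out[hair_mask] = blended[hair_mask]
    return out
```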
In this embodiment, the target color image is generated according to the target hair color selected by the user, and deformation processing is applied to the target color image using the track data of the hair in the image to be processed, so that a hair-like texture is obtained on the target color image. The color to which each individual strand should be changed is thus determined from the target color image, so the hair color change is not constrained by the original hair color in the image to be processed: the hair color in the image can be changed to a dark color, a light color or a gradient color. Target hair colors including dark, light and gradient colors can therefore be offered for the user to choose from, meeting diverse user needs.
Optionally, as shown in Figure 3b, the image processing apparatus provided in an embodiment of the present invention may further include a second processing module 308, a third processing module 309 and an output module 310.
The second processing module 308 is configured to, after the first processing module 306 obtains the first target image with the target hair color, alpha-blend the pixels at corresponding positions of the image part of the hair region and the first target image, obtaining a second target image with the target hair color.
For example, the second processing module 308 is specifically configured to generate a hair region image according to the hair region of the image to be processed, apply soft-light blending to the pixels at corresponding positions of the hair region image and the deformed target color image to obtain a first weight image, determine first weight coefficients from the first weight image, and then alpha-blend, based on the first weight coefficients, the pixels at corresponding positions of the image part of the hair region and the first target image, obtaining the second target image with the target hair color.
Here, the hair region image is an image that shows only the image part of the hair region; the background outside the hair region is black. In this embodiment, the hair region image is a single-channel image.
In this embodiment, when the pixels at corresponding positions of the hair region image and the deformed target color image imgL are soft-light blended, the hair region image is denoted imgF; imgF is the upper layer and imgL the lower layer. Since imgF is a single-channel grayscale image with only one color component (its pixel value is also called the gray value), the soft-light blending calculation is applied to each primary color component of the pixels at the same position of imgF and imgL, where U1x is the pixel value of the component of the pixel at a given position of imgF, U2r, U2g and U2b are the pixel values of the red, green and blue components of the pixel at the corresponding position of imgL, and U0r, U0g and U0b are the resulting pixel values of the red, green and blue components at the corresponding position of the first weight image.
Applying this soft-light blending calculation in turn to the three primary color component pixel values of the pixels at every corresponding position of imgF and imgL yields the first weight image.
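The exact soft-light formula is not reproduced in the text above, so the sketch below uses one common Photoshop-style soft-light definition as an assumption; imgF is the single-channel hair region image and imgL the deformed target color image, both with values in [0, 1], and all names are illustrative.

```python
import numpy as np

def soft_light(base, blend):
    """A common soft-light variant; base is the lower-layer channel (from imgL),
    blend is the upper-layer value (from imgF), all values in [0, 1]."""
    low  = 2.0 * base * blend + base * base * (1.0 - 2.0 * blend)
    high = 2.0 * base * (1.0 - blend) + np.sqrt(base) * (2.0 * blend - 1.0)
    return np.where(blend < 0.5, low, high)

def first_weight_image(imgF, imgL):
    # Blend the single mask channel with each RGB channel of imgL independently.
    return np.stack([soft_light(imgL[..., c], imgF) for c in range(3)], axis=-1)
```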
When the image part imgM of the hair region and the first target image are alpha-blended at corresponding pixel positions, imgM is the lower layer and the first target image is the upper layer. For each primary color component of the pixels at the same position of imgM and the first target image, the alpha blending formulas are:
D0r = Ar*D1r + (1-Ar)*D2r
D0g = Ag*D1g + (1-Ag)*D2g
D0b = Ab*D1b + (1-Ab)*D2b
where D1r, D1g and D1b are the red, green and blue pixel values of the pixel at a given position of imgM; D2r, D2g and D2b are the red, green and blue pixel values of the pixel at the corresponding position of the first target image; Ar, Ag and Ab are the first weight coefficients, whose values are the red, green and blue pixel values of the pixel at the corresponding position of the first weight image; and D0r, D0g and D0b are the red, green and blue pixel values, at the corresponding position, of the second target image obtained by alpha-blending imgM and the first target image.
Applying these formulas in turn to the three primary color component pixel values of the pixels at every corresponding position of imgM and the first target image yields the second target image.
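Since the weight coefficients are simply the per-channel pixel values of the first weight image, the whole blend can be written in a vectorized form. This sketch assumes float RGB arrays in [0, 1] and illustrative names.

```python
def alpha_blend(imgM, first_target, weight_img):
    """Per-channel alpha blend D0 = A*D1 + (1-A)*D2, with D1 taken from imgM,
    D2 from the first target image and A from the first weight image."""
    return weight_img * imgM + (1.0 - weight_img) * first_target
```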
By further applying soft-light blending and alpha blending to the image part of the hair region, the transition at the hair edges can be optimized, making the image transition more natural.
The third processing module 309 is configured to alpha-blend the pixels at corresponding positions of the image part of the hair region and the second target image, obtaining a third target image with the target hair color.
For example, the third processing module 309 may obtain the luminance information of the image part of the hair region in the image to be processed to generate a luminance channel image of that part, apply histogram equalization to the luminance channel image to obtain a second weight image, determine second weight coefficients from the second weight image, and then, based on the second weight coefficients, alpha-blend the image part of the hair region with the second target image, obtaining the third target image with the target hair color.
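A minimal NumPy sketch of the histogram-equalization step (assuming the luminance channel of the hair region is a uint8 array; the function name is illustrative):

```python
import numpy as np

def second_weight_image(hair_luma):
    """Histogram-equalize the hair region's luminance channel (uint8, HxW)
    and return it as float weights in [0, 1]."""
    hist, _ = np.histogram(hair_luma.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)  # normalized CDF
    return cdf[hair_luma]  # map each gray level through the CDF
```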
When the image part imgM of the hair region and the second target image are alpha-blended at corresponding pixel positions, imgM is the lower layer and the second target image is the upper layer. For each primary color component of the pixels at the same position of imgM and the second target image, the alpha blending formulas are:
D0r = Ar*D1r + (1-Ar)*D2r
D0g = Ag*D1g + (1-Ag)*D2g
D0b = Ab*D1b + (1-Ab)*D2b
where D1r, D1g and D1b are the red, green and blue pixel values of the pixel at a given position of imgM; D2r, D2g and D2b are the red, green and blue pixel values of the pixel at the corresponding position of the second target image; Ar, Ag and Ab are the second weight coefficients, whose values are the red, green and blue pixel values of the pixel at the corresponding position of the second weight image; and D0r, D0g and D0b are the red, green and blue pixel values, at the corresponding position, of the third target image obtained by alpha-blending imgM and the second target image.
The output module 310 is configured to output the third target image with the target hair color, for example by displaying it.
By using the histogram-equalized second weight image to determine the weight coefficients for alpha blending, and alpha-blending the image part of the hair region in the image to be processed with the second target image based on these coefficients, the contrast of the third target image can be improved, so that the target image displayed after the hair color change looks better.
Accordingly, an embodiment of the present invention also provides a terminal. As shown in Fig. 4, the terminal may include a radio frequency (RF) circuit 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi) module 407, a processor 408 including one or more processing cores, a power supply 409, and other components. Those skilled in the art will appreciate that the terminal structure shown in Fig. 4 does not constitute a limitation of the terminal, which may include more or fewer components than shown, combine certain components, or arrange the components differently. In detail:
The RF circuit 401 may be used to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it hands the information to the one or more processors 408 for processing, and it also sends uplink data to the base station. The RF circuit 401 generally includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 401 may communicate with networks and other devices through wireless communication, which may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The memory 402 may be used to store software programs and modules; the processor 408 performs various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the terminal (such as audio data and a phone book). In addition, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Correspondingly, the memory 402 may also include a memory controller to provide the processor 408 and the input unit 403 with access to the memory 402.
The input unit 403 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control. Specifically, in one embodiment, the input unit 403 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or a touch pad, collects touch operations by the user on or near it (such as operations performed on or near the touch-sensitive surface with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface may include a touch detection apparatus and a touch controller: the touch detection apparatus detects the touch position of the user, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 408, and can receive and execute commands sent by the processor 408. The touch-sensitive surface may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface, the input unit 403 may also include other input devices, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 404 may be used to display information input by the user or provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 404 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel; when the touch-sensitive surface detects a touch operation on or near it, it passes the operation to the processor 408 to determine the type of touch event, after which the processor 408 provides a corresponding visual output on the display panel according to the type of touch event. Although in Fig. 4 the touch-sensitive surface and the display panel are shown as two separate components realizing input and output functions, in some embodiments the touch-sensitive surface and the display panel may be integrated to realize the input and output functions.
The terminal may also include at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel according to the ambient light, and the proximity sensor can turn off the display panel and/or the backlight when the terminal is moved close to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes) and the magnitude and direction of gravity when at rest, and can be used in applications that recognize the phone's posture (such as portrait/landscape switching, related games, and magnetometer posture calibration) and in vibration-related functions (such as a pedometer or tap detection). Other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, may also be configured in the terminal and are not described here.
The audio circuit 406, a loudspeaker and a microphone can provide an audio interface between the user and the terminal. The audio circuit 406 can convert received audio data into an electrical signal and transmit it to the loudspeaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 406 receives and converts into audio data. The audio data is then output to the processor 408 for processing and sent, for example via the RF circuit 401, to another terminal, or output to the memory 402 for further processing. The audio circuit 406 may also include an earphone jack to provide communication between an external earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 407, the terminal can help the user send and receive e-mail, browse web pages and access streaming media, providing the user with wireless broadband internet access. Although Fig. 4 shows the WiFi module 407, it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 408 is the control center of the terminal; it connects the various parts of the whole device using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the device as a whole. Optionally, the processor 408 may include one or more processing cores; preferably, the processor 408 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface and application programs, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 408.
The terminal also includes the power supply 409 (such as a battery) that powers the various components. Preferably, the power supply is logically connected to the processor 408 through a power management system, so that charging, discharging and power consumption management are handled through the power management system. The power supply 409 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal may also include a camera, a Bluetooth module and so on, which are not described here. Specifically, in this embodiment, the processor 408 in the terminal loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 408 runs the application programs stored in the memory 402, thereby realizing various functions:
obtaining the image to be processed and the target hair color selected by the user; determining the hair region of the image to be processed and generating a target color image according to the target hair color; then obtaining the track data of the hair in the hair region, so as to apply deformation processing to the target color image based on the hair track data; and afterwards screen-blending the pixels at corresponding positions of the image part of the hair region and the deformed target color image, obtaining a first target image with the target hair color.
For example, a moving least squares image deformation algorithm may be used to apply deformation processing to the target color image.
After the first target image is obtained, the pixels at corresponding positions of the image part of the hair region and the first target image may also be alpha-blended to obtain a second target image with the target hair color, so as to optimize the hairline edge transition of the second target image. For example, the pixels at corresponding positions of the hair region image and the deformed target color image may be soft-light blended to obtain a first weight image, and alpha blending is then performed based on the first weight coefficients determined from the first weight image.
Further, after the second target image is obtained, the pixels at corresponding positions of the image part of the hair region and the second target image may also be alpha-blended to obtain a third target image with the target hair color. For example, histogram equalization may be applied to the luminance channel image of the image part of the hair region to obtain a second weight image, and alpha blending is then performed based on the second weight coefficients determined from the second weight image.
For the specific implementation of each of the above operations, refer to the foregoing embodiments; they are not repeated here.
As described above, in the embodiment of the present invention a target color image is generated according to the target hair color selected by the user, and deformation processing is applied to the target color image using the track data of the hair in the image to be processed, so that a hair-like texture is obtained on the target color image. The color to which each individual strand should be changed is thus determined from the target color image, so the hair color change is not constrained by the original hair color in the image to be processed: the hair color in the image can be changed to a dark color, a light color or a gradient color, and target hair colors including dark, light and gradient colors can be provided for the user to choose from, meeting diverse user needs.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention also provides a storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps of any image processing method provided by the embodiments of the present invention. For example, the instructions may perform the following steps:
obtaining the image to be processed and the target hair color selected by the user; determining the hair region of the image to be processed and generating a target color image according to the target hair color; then obtaining the track data of the hair in the hair region, so as to apply deformation processing to the target color image based on the hair track data; and afterwards screen-blending the pixels at corresponding positions of the image part of the hair region and the deformed target color image, obtaining a first target image with the target hair color.
For example, a moving least squares image deformation algorithm may be used to apply deformation processing to the target color image.
After the first target image is obtained, the pixels at corresponding positions of the image part of the hair region and the first target image may also be alpha-blended to obtain a second target image with the target hair color, so as to optimize the hairline edge transition of the second target image. For example, the pixels at corresponding positions of the hair region image and the deformed target color image may be soft-light blended to obtain a first weight image, and alpha blending is then performed based on the first weight coefficients determined from the first weight image.
Further, after the second target image is obtained, the pixels at corresponding positions of the image part of the hair region and the second target image may also be alpha-blended to obtain a third target image with the target hair color. For example, histogram equalization may be applied to the luminance channel image of the image part of the hair region to obtain a second weight image, and alpha blending is then performed based on the second weight coefficients determined from the second weight image.
For the specific implementation of each of the above operations, refer to the foregoing embodiments; they are not repeated here.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the storage medium can perform the steps of any image processing method provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any such method; refer to the foregoing embodiments, which are not repeated here.
The image processing method, apparatus and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the above description of the embodiments is intended only to help in understanding the method of the present invention and its core idea. At the same time, those skilled in the art may make changes to the specific implementations and the scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (15)
1. An image processing method, characterized by comprising:
obtaining an image to be processed and a target hair color selected by a user;
determining a hair region of the image to be processed, and generating a target color image according to the target hair color;
obtaining track data of hair in the hair region;
applying deformation processing to the target color image based on the track data of the hair; and
screen-blending pixels at corresponding positions of an image part of the hair region and the deformed target color image, to obtain a first target image with the target hair color.
2. The image processing method according to claim 1, characterized in that applying deformation processing to the target color image based on the track data of the hair comprises:
obtaining track data of each hair strand;
obtaining track data of a line at a predetermined position in the target color image;
adjusting, for each strand in turn, positions of pixels in the target color image according to the track data of that strand and the track data of the line at the predetermined position, to obtain multiple hair color images; and
screen-blending pixels at corresponding positions of the multiple hair color images, to obtain the deformed target color image.
3. The image processing method according to claim 2, characterized in that obtaining the track data of the line at the predetermined position in the target color image comprises:
determining a center line of the target color image; and
obtaining track data of the center line as the track data of the line at the predetermined position.
4. The image processing method according to claim 1, characterized in that, after screen-blending the pixels at the corresponding positions of the image part of the hair region and the deformed target color image, the method further comprises:
alpha-blending pixels at corresponding positions of the image part of the hair region and the first target image, to obtain a second target image with the target hair color.
5. The image processing method according to claim 4, characterized in that alpha-blending the pixels at the corresponding positions of the image part of the hair region and the first target image comprises:
generating a hair region image according to the hair region of the image to be processed;
soft-light blending pixels at corresponding positions of the hair region image and the deformed target color image, to obtain a first weight image;
determining first weight coefficients according to the first weight image; and
alpha-blending, based on the first weight coefficients, the pixels at the corresponding positions of the image part of the hair region and the first target image, to obtain the second target image with the target hair color.
6. The image processing method according to claim 4, characterized in that, after alpha-blending the pixels at the corresponding positions of the image part of the hair region and the first target image, the method further comprises:
alpha-blending pixels at corresponding positions of the image part of the hair region and the second target image, to obtain a third target image with the target hair color.
7. The image processing method according to claim 6, characterized in that alpha-blending the pixels at the corresponding positions of the image part of the hair region and the second target image comprises:
obtaining luminance information of the image part of the hair region, to generate a luminance channel image of the image part of the hair region;
applying histogram equalization to the luminance channel image, to obtain a second weight image;
determining second weight coefficients according to the second weight image; and
alpha-blending, based on the second weight coefficients, the image part of the hair region and the second target image, to obtain the third target image with the target hair color.
8. The image processing method according to claim 1, characterized in that, before applying deformation processing to the target color image based on the track data of the hair, the method further comprises:
determining a bounding rectangle of the shape of the hair region according to the hair region; and
scaling the target color image based on the bounding rectangle, so that the size of the target color image is consistent with the size of the bounding rectangle;
wherein applying deformation processing to the target color image based on the track data of the hair comprises: applying deformation processing to the scaled target color image based on the track data of the hair.
9. The image processing method according to any one of claims 1 to 8, characterized in that screen-blending the pixels at the corresponding positions of the image part of the hair region and the deformed target color image comprises:
obtaining a position and red, green and blue primary color component pixel values of a first pixel to be processed in the image part of the hair region;
obtaining red, green and blue primary color component pixel values of a second pixel to be processed in the deformed target color image at a position corresponding to the first pixel to be processed; and
taking the pixel value of each primary color component of the first pixel to be processed as a base color value and the pixel value of each primary color component of the second pixel to be processed as a blend color value, screen-blending the pixel values of matching primary color components of the first pixel to be processed and the second pixel to be processed.
10. An image processing apparatus, characterized by comprising:
a first acquisition module, configured to obtain an image to be processed and a target hair color selected by a user;
a determining module, configured to determine a hair region of the image to be processed;
a generation module, configured to generate a target color image according to the target hair color;
a second acquisition module, configured to obtain track data of hair in the hair region;
a warp processing module, configured to apply deformation processing to the target color image based on the track data of the hair; and
a first processing module, configured to screen-blend pixels at corresponding positions of an image part of the hair region and the deformed target color image, to obtain a first target image with the target hair color.
11. The image processing apparatus according to claim 10, characterized in that the warp processing module is specifically configured to:
obtain track data of each hair strand;
obtain track data of a line at a predetermined position in the target color image;
adjust, for each strand in turn, positions of pixels in the target color image according to the track data of that strand and the track data of the line at the predetermined position, to obtain multiple hair color images; and
screen-blend pixels at corresponding positions of the multiple hair color images, to obtain the deformed target color image.
12. The image processing apparatus according to claim 10, characterized by further comprising:
a second processing module, configured to alpha-blend pixels at corresponding positions of the image part of the hair region and the first target image, to obtain a second target image with the target hair color.
13. The image processing apparatus according to claim 12, characterized in that the second processing module is specifically configured to:
generate a hair region image according to the hair region of the image to be processed;
soft-light blend pixels at corresponding positions of the hair region image and the deformed target color image, to obtain a first weight image;
determine first weight coefficients according to the first weight image; and
alpha-blend, based on the first weight coefficients, the pixels at the corresponding positions of the image part of the hair region and the first target image, to obtain the second target image with the target hair color.
14. The image processing apparatus according to claim 12, characterized by further comprising:
a third processing module, configured to alpha-blend pixels at corresponding positions of the image part of the hair region and the second target image, to obtain a third target image with the target hair color.
15. A storage medium, characterized in that the storage medium is configured to store a plurality of instructions adapted to be loaded by a processor to perform the steps of the image processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710387281.4A CN107256555B (en) | 2017-05-25 | 2017-05-25 | Image processing method, device and storage medium |
Publications (2)
Publication Number | Publication Date
---|---
CN107256555A (en) | 2017-10-17
CN107256555B (en) | 2021-11-02
Family
ID=60027745

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201710387281.4A (CN107256555B, active) | Image processing method, device and storage medium | 2017-05-25 | 2017-05-25

Country Status (1)

Country | Link
---|---
CN (1) | CN107256555B (en)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4872056A (en) * | 1987-02-05 | 1989-10-03 | Video Graphic Styling, Inc. | Method for displaying selected hairstyles in video form |
US20060282288A1 (en) * | 2003-12-31 | 2006-12-14 | Klinger Advanced Aesthetics, Inc. | Methods of providing a patient with aesthetic improvement procedures |
CN103955962A (en) * | 2014-04-21 | 2014-07-30 | 华为软件技术有限公司 | Device and method for virtualizing human hair growth |
CN104484664A (en) * | 2014-12-31 | 2015-04-01 | 小米科技有限责任公司 | Human face image processing method and device |
CN105654420A (en) * | 2015-12-21 | 2016-06-08 | 小米科技有限责任公司 | Face image processing method and device |
CN105405163A (en) * | 2015-12-28 | 2016-03-16 | 北京航空航天大学 | Vivid static-state hair modeling method based on multiple direction fields |
CN106203399A (en) * | 2016-07-27 | 2016-12-07 | 厦门美图之家科技有限公司 | A kind of image processing method, device and calculating equipment |
Non-Patent Citations (2)
Title |
---|
Marcelino Anguiano-Morales et al., "Image fusion by color texture extraction", Optik
梁凌宇, "Research on Adaptive Beautification and Rendering of Face Images", China Excellent Doctoral and Master's Dissertations Full-text Database (Doctoral), Information Science and Technology Series
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108198127A (en) * | 2017-11-27 | 2018-06-22 | 维沃移动通信有限公司 | A kind of image processing method, device and mobile terminal |
CN108036746A (en) * | 2017-12-26 | 2018-05-15 | 太原理工大学 | A kind of Gabor transformation based on Spectrum Method realizes carbon fibre composite surface texture analysis method |
CN109994037A (en) * | 2017-12-29 | 2019-07-09 | 杭州海康威视系统技术有限公司 | Generate method, apparatus, terminal and the storage medium of electronic map |
CN108492247A (en) * | 2018-03-23 | 2018-09-04 | 成都品果科技有限公司 | A kind of eye make-up chart pasting method based on distortion of the mesh |
CN108629819A (en) * | 2018-05-15 | 2018-10-09 | 北京字节跳动网络技术有限公司 | Image hair dyeing treating method and apparatus |
CN108629819B (en) * | 2018-05-15 | 2019-09-13 | 北京字节跳动网络技术有限公司 | Image hair dyeing treating method and apparatus |
CN111340818A (en) * | 2018-12-19 | 2020-06-26 | 北京京东尚科信息技术有限公司 | Hair dyeing effect picture generation method and device and computer readable storage medium |
CN111340818B (en) * | 2018-12-19 | 2024-05-17 | 北京京东尚科信息技术有限公司 | Hair dyeing effect graph generation method, device and computer readable storage medium |
CN111724449B (en) * | 2019-03-19 | 2024-07-19 | 北京京东尚科信息技术有限公司 | Image processing method, device and equipment |
CN111724449A (en) * | 2019-03-19 | 2020-09-29 | 北京京东尚科信息技术有限公司 | Image processing method, device and equipment |
CN110009708A (en) * | 2019-04-10 | 2019-07-12 | 上海大学 | Method, system and terminal for hair color transformation based on image color segmentation |
CN110930296A (en) * | 2019-11-20 | 2020-03-27 | Oppo广东移动通信有限公司 | Image processing method, device, equipment and storage medium |
CN110930296B (en) * | 2019-11-20 | 2023-08-08 | Oppo广东移动通信有限公司 | Image processing method, device, equipment and storage medium |
CN110969631B (en) * | 2019-11-25 | 2023-04-11 | 杭州小影创新科技股份有限公司 | Method and system for dyeing hair by refined photos |
CN110969631A (en) * | 2019-11-25 | 2020-04-07 | 杭州趣维科技有限公司 | Fine photo hair dyeing method and system |
CN111047669B (en) * | 2019-12-11 | 2023-08-22 | 广州品唯软件有限公司 | A color picking method and device |
CN111047669A (en) * | 2019-12-11 | 2020-04-21 | 广州品唯软件有限公司 | A kind of color picking method and device |
CN111127591A (en) * | 2019-12-24 | 2020-05-08 | 腾讯科技(深圳)有限公司 | Image hair dyeing processing method, device, terminal and storage medium |
CN111127591B (en) * | 2019-12-24 | 2023-08-08 | 腾讯科技(深圳)有限公司 | Image hair dyeing processing method, device, terminal and storage medium |
CN111260581B (en) * | 2020-01-17 | 2023-09-26 | 北京达佳互联信息技术有限公司 | Image processing method, device and storage medium |
CN111260581A (en) * | 2020-01-17 | 2020-06-09 | 北京达佳互联信息技术有限公司 | Image processing method, device and storage medium |
CN111586444B (en) * | 2020-06-05 | 2022-03-15 | 广州繁星互娱信息科技有限公司 | Video processing method and device, electronic equipment and storage medium |
CN111586444A (en) * | 2020-06-05 | 2020-08-25 | 广州繁星互娱信息科技有限公司 | Video processing method and device, electronic equipment and storage medium |
WO2022105347A1 (en) * | 2020-11-17 | 2022-05-27 | 北京达佳互联信息技术有限公司 | Image processing method and device |
CN112634155A (en) * | 2020-12-22 | 2021-04-09 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN112634155B (en) * | 2020-12-22 | 2024-03-22 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN112862807A (en) * | 2021-03-08 | 2021-05-28 | 网易(杭州)网络有限公司 | Data processing method and device based on hair image |
CN113421278B (en) * | 2021-06-22 | 2023-08-15 | 咪咕互动娱乐有限公司 | Range detection method, device, equipment and storage medium based on edge detection |
CN113421278A (en) * | 2021-06-22 | 2021-09-21 | 咪咕互动娱乐有限公司 | Range detection method, device and equipment based on edge detection and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107256555B (en) | 2021-11-02 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |