Summary of the Invention
Embodiments of the present invention provide an image processing method, an apparatus, an electronic device, and a storage medium, so as to solve the problem that a user can determine an application point, a sphere of action, and an action intensity only through cumbersome manual operations. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides an image processing method, the method including:
detecting whether a human face region exists in a target image;
if the human face region exists, obtaining coordinate parameters of the human face region according to coordinates, in a preset coordinate system, of pixels in the human face region;
calculating, according to the coordinate parameters, an application point and a sphere of action of a to-be-processed image region in the target image;
determining the to-be-processed image region according to the application point and the sphere of action; and
performing image processing on the to-be-processed image region in a preset image processing manner with a preset action intensity, to obtain a processed target image.
Optionally, the step of calculating, according to the coordinate parameters, the application point and the sphere of action of the to-be-processed image region in the target image includes:
determining the application point of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, where the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters, the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters, the third coordinate is the coordinate with the largest ordinate value among the coordinates in the coordinate parameters that identify the eyebrows, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates in the coordinate parameters that identify the chin; and
determining the sphere of action of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
Optionally, the to-be-processed image region includes a first subregion and a second subregion;
the step of determining the application point of the to-be-processed image region of the target image according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate includes:
determining the abscissa of the first coordinate as the abscissa of a first application point of the first subregion;
determining the abscissa of the second coordinate as the abscissa of a second application point of the second subregion; and
obtaining the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion using the following formula:

yZ1 = yZ2 = y4 - (y3 - y4)

where yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, the step of determining the sphere of action of the to-be-processed image region of the target image according to the first coordinate and the second coordinate includes:
obtaining the distance along the x-axis between the first coordinate and the second coordinate using the following formula:

L = |x1 - x2|

where x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance along the x-axis between the first coordinate and the second coordinate; and
determining that both the sphere of action of the first subregion and the sphere of action of the second subregion are circular regions with L as the diameter.
Optionally, the step of determining the to-be-processed image region according to the application point and the sphere of action includes:
taking the circular region with L as the diameter and the first application point as the center as the first subregion; and
taking the circular region with L as the diameter and the second application point as the center as the second subregion.
Optionally, after the step of determining the to-be-processed image region according to the application point and the sphere of action, the method further includes:
adjusting the to-be-processed image region;
and the step of performing image processing on the to-be-processed image region in the preset image processing manner with the preset action intensity, to obtain the processed target image, includes:
performing image processing on the adjusted to-be-processed image region in the preset image processing manner, to obtain the processed target image.
Optionally, the step of adjusting the to-be-processed image region includes at least one of the following adjustment manners:
moving the to-be-processed image region to a target position;
adjusting the sphere of action of the to-be-processed image region to a target sphere of action; and
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
Optionally, before the determining of the to-be-processed image region according to the application point and the sphere of action, the method further includes:
if no human face region exists in the target image, obtaining the width and the height of the target image; and
calculating the application point and the sphere of action of the to-be-processed image region in the target image according to the width and the height of the target image.
Optionally, the to-be-processed image region includes a first subregion and a second subregion;
the step of determining the application point and the sphere of action of the to-be-processed image region in the target image according to the width and the height of the target image includes:
obtaining the abscissa of a first application point of the first subregion using the following formula:

xZ1 = W × Q1

where xZ1 is the abscissa of the first application point, W is the width of the target image, and Q1 is a preset first ratio value;
obtaining the abscissa of a second application point of the second subregion using the following formula:

xZ2 = W × Q2

where xZ2 is the abscissa of the second application point and Q2 is a preset second ratio value;
obtaining the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion using the following formula:

yZ1 = yZ2 = H × Q3

where yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, H is the height of the target image, and Q3 is a preset third ratio value;
obtaining a calculated length D using the following formula:

D = W × Q4

where W is the width of the target image and Q4 is a preset fourth ratio value; and
determining that both the sphere of action of the first subregion and the sphere of action of the second subregion are circular regions with D as the diameter.
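The no-face fallback above can be sketched as follows. The ratio-scaling formulas match the reconstruction used in this text, but the concrete values chosen for Q1 through Q4 are purely illustrative assumptions; the embodiments leave them as preset values.

```python
def fallback_layout(W, H, Q1=0.25, Q2=0.75, Q3=0.5, Q4=0.25):
    """When no human face region is found, place the two application
    points and the common sphere-of-action diameter from the image
    size alone. Ratio values are illustrative, not prescribed."""
    p1 = (W * Q1, H * Q3)   # first application point (xZ1, yZ1)
    p2 = (W * Q2, H * Q3)   # second application point (xZ2, yZ2)
    D = W * Q4              # diameter of both circular spheres of action
    return p1, p2, D

p1, p2, D = fallback_layout(1000, 800)
print(p1, p2, D)  # (250.0, 400.0) (750.0, 400.0) 250.0
```

Since both ordinates come from the single ratio Q3, the two subregions stay horizontally aligned, mirroring the face-based formula in which yZ1 = yZ2.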
Optionally, the step of performing image processing on the to-be-processed image region in the preset image processing manner with the preset action intensity, to obtain the processed target image, includes:
performing image processing on the to-be-processed image region in the preset image processing manner with a preset minimum action intensity, to obtain the processed target image.
Optionally, the method further includes:
receiving an instruction to perform shadow processing on the target image;
selecting a target shadow image from a preset shadow image set;
determining a placement position of the target shadow image in the target image and a transparency of the target shadow image; and
superimposing the target shadow image onto the determined placement position with the determined transparency.
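The superimposition step above can be illustrated with standard alpha compositing. The embodiments do not specify a blending formula, so this sketch assumes it, and represents images as plain 2-D lists of grayscale pixel values to stay self-contained:

```python
def overlay_with_transparency(base, shadow, pos, alpha):
    """Superimpose `shadow` onto `base` at placement position `pos`
    (x0, y0) with the determined transparency alpha in [0, 1].
    Standard alpha compositing is an assumption of this sketch."""
    x0, y0 = pos
    out = [row[:] for row in base]          # leave the input untouched
    for dy, shadow_row in enumerate(shadow):
        for dx, s in enumerate(shadow_row):
            y, x = y0 + dy, x0 + dx
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = round(alpha * s + (1 - alpha) * out[y][x])
    return out

base = [[100] * 4 for _ in range(4)]
shadow = [[0, 0], [0, 0]]                   # a dark 2x2 shadow patch
result = overlay_with_transparency(base, shadow, (1, 1), 0.5)
print(result[1][1], result[0][0])           # prints: 50 100
```

At alpha = 0.5 a shadow pixel of 0 darkens the underlying value 100 to 50, while pixels outside the placement region are unchanged.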
In a second aspect, an embodiment of the present invention provides an image processing apparatus, the apparatus including:
a detection module, configured to detect whether a human face region exists in a target image;
a first acquisition module, configured to, when the detection module detects that a human face region exists in the target image, obtain coordinate parameters of the human face region according to coordinates, in a preset coordinate system, of pixels in the human face region;
a first calculation module, configured to calculate, according to the coordinate parameters, an application point and a sphere of action of a to-be-processed image region in the target image;
a first determination module, configured to determine the to-be-processed image region according to the application point and the sphere of action; and
a processing module, configured to perform image processing on the to-be-processed image region in a preset image processing manner with a preset action intensity, to obtain a processed target image.
Optionally, the first calculation module includes:
a first determination submodule, configured to determine the application point of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, where the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters, the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters, the third coordinate is the coordinate with the largest ordinate value among the coordinates in the coordinate parameters that identify the eyebrows, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates in the coordinate parameters that identify the chin; and
a second determination submodule, configured to determine the sphere of action of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
Optionally, the to-be-processed image region includes a first subregion and a second subregion;
the first determination submodule includes:
a first determination unit, configured to determine the abscissa of the first coordinate as the abscissa of a first application point of the first subregion;
a second determination unit, configured to determine the abscissa of the second coordinate as the abscissa of a second application point of the second subregion; and
a first calculation unit, configured to obtain the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion using the following formula:

yZ1 = yZ2 = y4 - (y3 - y4)

where yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, the second determination submodule includes:
a second calculation unit, configured to obtain the distance along the x-axis between the first coordinate and the second coordinate using the following formula:

L = |x1 - x2|

where x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance along the x-axis between the first coordinate and the second coordinate; and
a third determination unit, configured to determine that both the sphere of action of the first subregion and the sphere of action of the second subregion are circular regions with L as the diameter.
Optionally, the first determination module includes:
a third determination submodule, configured to take the circular region with L as the diameter and the first application point as the center as the first subregion; and
a fourth determination submodule, configured to take the circular region with L as the diameter and the second application point as the center as the second subregion.
Optionally, the apparatus further includes:
an adjustment module, configured to adjust the to-be-processed image region;
and the processing module includes:
a first processing submodule, configured to perform image processing on the adjusted to-be-processed image region in the preset image processing manner, to obtain the processed target image.
Optionally, the adjustment module is specifically configured to perform at least one of the following adjustment manners:
moving the to-be-processed image region to a target position;
adjusting the sphere of action of the to-be-processed image region to a target sphere of action; and
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
Optionally, the apparatus further includes:
a second acquisition module, configured to obtain the width and the height of the target image when the detection module detects that no human face region exists in the target image; and
a second calculation module, configured to calculate, according to the width and the height of the target image, the application point and the sphere of action of the to-be-processed image region in the target image.
Optionally, the to-be-processed image region includes a first subregion and a second subregion;
the second calculation module includes:
a first calculation submodule, configured to obtain the abscissa of a first application point of the first subregion using the following formula:

xZ1 = W × Q1

where xZ1 is the abscissa of the first application point, W is the width of the target image, and Q1 is a preset first ratio value;
a second calculation submodule, configured to obtain the abscissa of a second application point of the second subregion using the following formula:

xZ2 = W × Q2

where xZ2 is the abscissa of the second application point and Q2 is a preset second ratio value;
a third calculation submodule, configured to obtain the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion using the following formula:

yZ1 = yZ2 = H × Q3

where yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, H is the height of the target image, and Q3 is a preset third ratio value;
a fourth calculation submodule, configured to obtain a calculated length D using the following formula:

D = W × Q4

where W is the width of the target image and Q4 is a preset fourth ratio value; and
a fifth determination submodule, configured to determine that both the sphere of action of the first subregion and the sphere of action of the second subregion are circular regions with D as the diameter.
Optionally, the processing module includes:
a processing submodule, configured to perform image processing on the to-be-processed image region in the preset image processing manner with a preset minimum action intensity, to obtain the processed target image.
Optionally, the apparatus further includes:
a receiving module, configured to receive an instruction to perform shadow processing on the target image;
a selection module, configured to select a target shadow image from a preset shadow image set;
a second determination module, configured to determine a placement position of the target shadow image in the target image and a transparency of the target shadow image; and
a superimposition module, configured to superimpose the target shadow image onto the determined placement position with the determined transparency.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program; and
the processor is configured to perform any of the image processing methods described above when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, performs any of the image processing methods described above.
In a fifth aspect, an embodiment of the present invention provides a computer application program that, when run on a computer, causes the computer to perform any of the image processing methods described in the above embodiments.
In the technical solutions provided by the embodiments of the present invention, when a human face region is detected in a target image, coordinate parameters of the human face region are obtained according to coordinates, in a preset coordinate system, of pixels in the human face region; an application point and a sphere of action of a to-be-processed image region in the target image are calculated according to the coordinate parameters; the to-be-processed image region is determined according to the application point and the sphere of action; and image processing is performed on the to-be-processed image region in a preset image processing manner with a preset action intensity, to obtain a processed target image. In the solutions provided by the embodiments of the present invention, the human face region in the image serves as a reference for determining the application point and the sphere of action of the to-be-processed image region, and image processing is then performed on the to-be-processed image region with the preset action intensity. This spares the user the cumbersome manual operations otherwise needed to determine the application point, the sphere of action, and the action intensity, thereby simplifying the image processing operations and improving the user experience.
Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
To solve the problem that a user can determine an application point, a sphere of action, and an action intensity only through cumbersome manual operations, thereby simplifying image processing operations and improving the user experience, embodiments of the present invention provide an image processing method, an apparatus, an electronic device, and a storage medium.
The image processing method provided by the embodiments of the present invention can be used in application software on electronic devices, such as application software on mobile phones, tablets, and smart televisions, where the application software may be photo retouching software of various kinds, such as PhotoGrid and Meitu Xiuxiu.
The image processing in the embodiments of the present invention may be image processing of types such as breast enhancement and buttock enhancement. Breast-enhancement image processing is taken as an example herein to describe the image processing method provided by the embodiments of the present invention.
The image processing method provided by an embodiment of the present invention is introduced first below.
As shown in Figure 1, an image processing method provided by an embodiment of the present invention includes the following steps:
S101: detecting whether a human face region exists in a target image, and if so, performing S102.
The target image may be a photo taken by an electronic device, a picture downloaded from the Internet, or the like, where the electronic device includes a mobile phone, a tablet, a camera, and so on. The format of the target image includes, but is not limited to, the following: JPEG (Joint Photographic Experts Group), BMP (Bitmap), PNG (Portable Network Graphics), GIF (Graphics Interchange Format), TIFF (Tag Image File Format), and the like.
In general, images can be divided into landscape images and character images, and, in most cases, in order to make the person in an image look better, the user performs corresponding image processing on a character image. When image processing is performed on a character image, that is, when the target image is a character image, the target image includes at least one portrait. When the target image includes only one portrait, image processing can be performed on that portrait. When the target image includes multiple portraits, image processing can be performed on each portrait in turn according to a preset rule, where the preset rule may be: processing in order from the left side of the target image to the right; or processing in order from the right side of the target image to the left. It can, of course, be understood that the preset rule is not limited to these two.
The human face region is the region where a person's face is located; the extent of this region in the target image can be identified and extracted through face recognition technology.
S102: obtaining coordinate parameters of the human face region according to coordinates, in a preset coordinate system, of pixels in the human face region.
The preset coordinate system may be a coordinate system that takes the target image as its reference; for example, the preset coordinate system may take the lower edge of the target image as the X-axis and the left edge of the target image as the Y-axis.
In the preset coordinate system, the pixels on the target image correspond one-to-one with coordinates; each pixel corresponds to one coordinate point. For example, the pixel at the lower-left corner of the target image lies at the coordinate origin, with coordinate (0, 0).
Since the pixels of the human face region correspond one-to-one with coordinates in the preset coordinate system, the facial contour, the contours of the facial features, and the parts contained in each contour of the human face region can each be represented in the preset coordinate system by corresponding groups of coordinates.
In one implementation, the coordinates corresponding to the pixels of the entire human face region can be obtained, including the coordinates corresponding to the pixels of the facial contour and the coordinates corresponding to all the pixels within the facial contour. Through this implementation, a more complete set of coordinates of the human face region can be obtained, so that the application point and the sphere of action can be determined more accurately in subsequent steps.
Further, it is possible to obtain only the coordinates corresponding to the pixels of the facial contour and of the facial-feature contours in the human face region, where the facial-feature contours include the eyebrow contours, the eye contours, the nose contour, the mouth contour, and the ear contours. For a human face region, the facial features are its representative characteristic parts, so the coordinates corresponding to the pixels of the facial contour and the facial-feature contours can also represent the human face region accurately.
Further still, because the extent of the human face region can be determined by the eyebrow contours and the facial contour, it is possible to obtain only the coordinates corresponding to the pixels of the facial contour and the eyebrow contours in the human face region; moreover, for the eyebrow contours, the coordinates corresponding to the pixels of the contour of either one of the two eyebrows can be obtained.
It should be noted that the facial contour includes at least the left and right contours of the face and the chin contour.
S103: calculating, according to the coordinate parameters, an application point and a sphere of action of a to-be-processed image region in the target image.
The application point is the center point of the region on which the user expects to perform image processing, and the sphere of action is the extent of that region; the application point and the sphere of action together determine the to-be-processed image region. For example, with the application point as the center and the sphere of action as the diameter, the to-be-processed image region determined by the application point and the sphere of action is a circular region; with the application point as the intersection of the two diagonals of a square region and the sphere of action as the side length, the to-be-processed image region determined by the application point and the sphere of action is a square region.
The to-be-processed image region is the region on the target image selected for image processing. Moreover, for different types of image processing, the number of separate, independent regions that the to-be-processed image region contains can differ; for example, for breast-enhancement image processing, the to-be-processed image region can be two separate, independent regions.
When the to-be-processed image region consists of two separate, independent regions, the application point of the to-be-processed image region is two application points, corresponding respectively to the two separate, independent regions, and the sphere of action of the to-be-processed image region is likewise two spheres of action, corresponding respectively to the two separate, independent regions, where the two spheres of action can be set to be the same or different.
In one implementation, from the obtained coordinate parameters of the human face region, the coordinate with the smallest abscissa value is determined as the first coordinate, the coordinate with the largest abscissa value is determined as the second coordinate, the coordinate with the largest ordinate value is determined as the third coordinate, and the coordinate with the smallest ordinate value is determined as the fourth coordinate.
The first coordinate and the second coordinate can be determined from the coordinates identifying the facial contour, the third coordinate can be determined from the coordinates identifying the eyebrow contours, and the fourth coordinate can be determined from the coordinates of the chin contour within the facial contour.
Specifically, the application point of the to-be-processed image region of the target image can be determined according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate, and the sphere of action of the to-be-processed image region of the target image can be determined according to the first coordinate and the second coordinate.
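The selection of the four reference coordinates can be sketched as follows, assuming the contour coordinates obtained in S102 are available as lists of (x, y) pairs; the landmark-extraction step itself is outside this sketch:

```python
def pick_reference_coordinates(face_contour, eyebrow_contour, chin_contour):
    """Pick the four reference coordinates described above:
    first/second = coordinates with the smallest/largest abscissa on
    the facial contour, third = the eyebrow coordinate with the
    largest ordinate, fourth = the chin coordinate with the smallest
    ordinate."""
    first = min(face_contour, key=lambda p: p[0])     # smallest abscissa
    second = max(face_contour, key=lambda p: p[0])    # largest abscissa
    third = max(eyebrow_contour, key=lambda p: p[1])  # largest ordinate
    fourth = min(chin_contour, key=lambda p: p[1])    # smallest ordinate
    return first, second, third, fourth

# Toy contours (hypothetical values chosen to reproduce the worked
# example used later in this description):
face = [(1, 40), (21, 40), (10, 20)]
eyebrow = [(14, 44), (15, 45)]
chin = [(11, 30), (12, 31)]
print(pick_reference_coordinates(face, eyebrow, chin))
```

With these toy contours the result is ((1, 40), (21, 40), (15, 45), (11, 30)), the same four coordinates used in the illustrative example below.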
In one implementation, the to-be-processed image region is two separate, independent regions: a first subregion and a second subregion, where the application point of the first subregion is a first application point and the application point of the second subregion is a second application point.
Let the first coordinate be (x1, y1), the second coordinate (x2, y2), the third coordinate (x3, y3), the fourth coordinate (x4, y4), the first application point (xZ1, yZ1), and the second application point (xZ2, yZ2).
The abscissa of the first coordinate is determined as the abscissa of the first application point, that is, xZ1 = x1; the abscissa of the second coordinate is determined as the abscissa of the second application point, that is, xZ2 = x2.
The ordinate of the first application point and the ordinate of the second application point can be the same and can be obtained according to the following formula:

yZ1 = yZ2 = y4 - (y3 - y4)

Illustratively, suppose the first coordinate is (1, 40), the second coordinate is (21, 40), the third coordinate is (15, 45), and the fourth coordinate is (11, 30).
Then, according to the above implementation, the abscissa of the first application point is xZ1 = 1 and the abscissa of the second application point is xZ2 = 21; the ordinate of the first application point is yZ1 = y4 - (y3 - y4) = 30 - (45 - 30) = 15, and the ordinate of the second application point is yZ2 = yZ1 = 15. In sum, the coordinate of the first application point is (1, 15) and the coordinate of the second application point is (21, 15).
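The formulas above reduce to a few lines of arithmetic; this sketch reproduces the worked example directly:

```python
def application_points(first, second, third, fourth):
    """Derive the two application points from the four reference
    coordinates: the abscissas come from the first and second
    coordinates, and both ordinates are y4 - (y3 - y4)."""
    y = fourth[1] - (third[1] - fourth[1])
    return (first[0], y), (second[0], y)

p1, p2 = application_points((1, 40), (21, 40), (15, 45), (11, 30))
print(p1, p2)  # (1, 15) (21, 15)
```

Note that y4 - (y3 - y4) mirrors the eyebrow height below the chin, so the application points land below the face by the eyebrow-to-chin distance.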
Further, in an implementation of determining the sphere of action, in the case where the first subregion and the second subregion are circular regions, the distance along the x-axis between the first coordinate and the second coordinate is obtained using the following formula:

L = |x1 - x2|

and both the sphere of action of the first subregion and the sphere of action of the second subregion are determined to be circular regions with L as the diameter.
Illustratively, if the first coordinate is (1, 40) and the second coordinate is (21, 40), the distance L along the x-axis between the first coordinate and the second coordinate can be determined to be 20, and the sphere of action of the first subregion and the sphere of action of the second subregion can then be determined to be circular regions with 20 as the diameter.
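Likewise for the sphere of action, the diameter is simply the horizontal distance between the first and second coordinates:

```python
def sphere_of_action_diameter(first, second):
    """Distance along the x-axis between the first and second
    coordinates; both circular subregions use it as their diameter."""
    return abs(first[0] - second[0])

L = sphere_of_action_diameter((1, 40), (21, 40))
print(L)  # 20
```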
S104: determining the to-be-processed image region according to the application point and the sphere of action.
The to-be-processed image region includes two separate, independent regions, the first subregion and the second subregion, and both the first subregion and the second subregion take the circular region with L as the diameter as their sphere of action. Accordingly, the circular region with L as the diameter centered on the first application point can be determined as the first subregion, and the circular region with L as the diameter centered on the second application point can be determined as the second subregion.
In one implementation, after the to-be-processed image region is determined, the determined to-be-processed image region can be displayed, specifically on the screen of the corresponding electronic device, such as a mobile phone screen, a tablet screen, or a television screen.
In one implementation, after the to-be-processed image region is determined, the to-be-processed image region can also be adjusted, so that if the to-be-processed image region determined from the application point and the sphere of action is inaccurate, the user can still adjust it as needed.
One manner of adjusting the to-be-processed image region is moving the to-be-processed image region to a target position. Specifically, the user can long-press the to-be-processed image region with a finger on the electronic device screen; once the long press reaches a preset fixed duration, the user can drag the to-be-processed image region and move it to the target position.
Another manner of adjustment is adjusting the sphere of action of the to-be-processed image region to a target sphere of action. Specifically, the user can long-press the edge of the to-be-processed image region with a finger on the electronic device screen; once the long press reaches a preset fixed duration, the user can drag the edge to zoom the to-be-processed image region, thereby adjusting the sphere of action of the to-be-processed image region.
Illustratively, when the to-be-processed image region is a circular region, the user long-presses the circle's edge; dragging toward the center shrinks the sphere of action of the to-be-processed image region, while dragging away from the center enlarges it.
Yet another manner of adjustment is adjusting the preset action intensity of the to-be-processed image region to a target action intensity. Moreover, in one implementation, after the adjustment to the target action intensity, image processing is performed on the to-be-processed image region with the target action intensity, and the processed target image is displayed.
Specifically, when the action intensity is adjusted, a function area appears on the screen, and the function area contains a progress bar for adjusting the action intensity, so that the action intensity is adjusted by moving the progress bar. Optionally, when the action intensity is at its minimum, no processing is performed on the image.
It can be understood that the above three adjustment manners can each be applied individually, any two of them can be combined and applied to the to-be-processed image region at the same time, and, of course, all three adjustment manners can also be applied to the to-be-processed image region at the same time.
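Because the three adjustment manners act on independent attributes of a to-be-processed region, they compose freely. A minimal sketch, with the region representation (a dict of center, diameter, intensity) being an assumption of this illustration:

```python
def adjust_region(region, move_to=None, scale_to=None, intensity=None):
    """Apply any combination of the three adjustment manners to a
    region described by its application point ('center'),
    sphere-of-action diameter ('diameter'), and action intensity."""
    adjusted = dict(region)           # do not mutate the original
    if move_to is not None:           # move to a target position
        adjusted["center"] = move_to
    if scale_to is not None:          # adjust to a target sphere of action
        adjusted["diameter"] = scale_to
    if intensity is not None:         # adjust to a target action intensity
        adjusted["intensity"] = intensity
    return adjusted

region = {"center": (1, 15), "diameter": 20, "intensity": 0.5}
print(adjust_region(region, move_to=(3, 15), intensity=0.8))
```

Here two of the three manners are combined in one call; passing only one keyword applies a single adjustment, and passing all three applies them simultaneously.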
After the to-be-processed image region is adjusted, image processing is performed on the adjusted to-be-processed image region in the preset image processing manner, to obtain the processed target image. For example, if the action intensity of the to-be-processed image region is adjusted to a target action intensity, image processing is performed on the to-be-processed image region with the target action intensity, to obtain the processed target image.
S105: according to the default image processing mode, performing image processing on the pending image region with the default action intensity, obtaining the processed target image.
The default image processing mode may be chest-enlargement processing, and the default action intensity may be custom-set; specifically, the default action intensity may be set to the action intensity most frequently used by the user, as determined by statistics.
After the pending image region is determined, image processing is performed on it with the default action intensity; specifically, chest-enlargement processing is performed on the pending image region with the default action intensity, achieving the enlargement effect corresponding to that intensity.
After the image processing is completed, the processed target image is obtained and displayed.
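As a hedged sketch of how a default action intensity might be applied within a circular pending image region (the linear falloff from the application point and all names are assumptions; the embodiment does not prescribe a particular weighting):

```python
import math

def process_region(pixels, application_point, diameter, intensity, transform):
    """Apply `transform(value, weight)` to pixels inside the sphere of action.

    `pixels` maps (x, y) to a scalar value; the effect is strongest at the
    application point and fades to zero at the edge of the circular region.
    """
    radius = diameter / 2.0
    out = dict(pixels)
    for (x, y), value in pixels.items():
        d = math.dist(application_point, (x, y))
        if d <= radius:
            weight = intensity * (1.0 - d / radius)  # full intensity at the center
            out[(x, y)] = transform(value, weight)
    return out
```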
In the technical solution provided by this embodiment of the present invention, whether a human face region exists in a target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in the preset coordinate system; the application point and sphere of action of the pending image region in the target image are calculated according to the coordinate parameters; the pending image region is determined according to the application point and sphere of action; and, according to the default image processing mode, image processing is performed on the pending image region with the default action intensity, obtaining the processed target image. In the solution provided by this embodiment of the present invention, the human face region in the image serves as the reference for determining the application point and sphere of action of the pending image region, after which image processing is performed on the pending image region with the default action intensity. This spares the user the cumbersome manual operations otherwise needed to determine the application point, sphere of action, and action intensity, thereby simplifying the operation of image processing and improving the user experience.
With reference to another specific embodiment, an image processing method provided by the present invention is introduced below.
As shown in Fig. 2, an image processing method provided by an embodiment of the present invention comprises the following steps:
S201: detecting whether a human face region exists in the target image; if so, performing S202; if not, performing S204.
S202: obtaining the coordinate parameters of the human face region according to the coordinates of the pixels in the human face region in the preset coordinate system.
S203: calculating the application point and sphere of action of the pending image region in the target image according to the coordinate parameters.
In this embodiment, S201-S203 are identical to S101-S103 of the above embodiment and are not repeated here.
S204: obtaining the width and height of the target image.
When the target image is square, its width and height are both the side length of the square; in that case, what is obtained is the side length of the target image.
When no human face region exists in the target image, the width and height of the target image are obtained, the acquired width and height being expressed in the preset coordinate system.
S205: calculating the application point and sphere of action of the pending image region in the target image according to the width and height of the target image.
The pending image region may include two mutually independent regions: a first subregion and a second subregion. The application point of the first subregion is the first application point, and the application point of the second subregion is the second application point. The spheres of action corresponding to the first subregion and the second subregion may be the same, or may be set to be different.
In one embodiment, the application points of the pending image region in the target image may be determined as follows: let the first application point be (xZ1, yZ1) and the second application point be (xZ2, yZ2); the abscissa of the first application point is then obtained using the following formula:
xZ1 = W / Q1
where W is the width of the target image and Q1 is the default first ratio value;
the abscissa of the second application point of the second subregion is obtained using the following formula:
xZ2 = W / Q2
where Q2 is the default second ratio value;
and the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion are obtained using the following formula:
yZ1 = yZ2 = H / Q3
where H is the height of the target image and Q3 is the default third ratio value.
The default first, second, and third ratio values are all custom-set, and the first, second, and third ratio values may be set to different values.
Illustratively, suppose the acquired width of the target image is 12 and its height is 15, with a default first ratio value of 4, a second ratio value of 1, and a third ratio value of 3. Then the abscissa of the first application point is xZ1 = 12 / 4 = 3; the abscissa of the second application point is xZ2 = 12 / 1 = 12; and the ordinate of both the first and the second application point is yZ1 = yZ2 = 15 / 3 = 5.
In sum, the coordinate of the first application point is (3, 5) and the coordinate of the second application point is (12, 5).
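The worked example above can be checked with a short sketch (the function name is illustrative; the ratios follow the embodiment's description of the first, second, and third ratio values):

```python
def action_points(width, height, q1, q2, q3):
    """Compute the first and second application points from the image size."""
    x_z1 = width / q1   # abscissa of the first application point
    x_z2 = width / q2   # abscissa of the second application point
    y = height / q3     # shared ordinate of both application points
    return (x_z1, y), (x_z2, y)
```

With a width of 12, a height of 15, and ratio values 4, 1, and 3, this yields (3, 5) and (12, 5), matching the example.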
In one embodiment, the sphere of action of the pending image region in the target image may be determined as follows: when the first subregion and the second subregion are circular regions, a computed length is obtained using the following formula:
D = W / Q4
where Q4 is the default fourth ratio value, which may be custom-set.
The sphere of action of the first subregion and the sphere of action of the second subregion are then determined as circular regions with D as diameter.
Illustratively, with an acquired target-image width of 12 and a default fourth ratio value of 3, the computed length is D = 12 / 3 = 4, so the sphere of action of both the first subregion and the second subregion is determined as a circular region with a diameter of 4.
After the first application point, the second application point, and the sphere of action are determined, the circular region with D as diameter centered on the first application point is taken as the first subregion, and the circular region with D as diameter centered on the second application point is taken as the second subregion; the determined first subregion and second subregion are then displayed.
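The determination of the two circular subregions could be sketched as follows (a hedged illustration; the dictionary representation of a region is an assumption):

```python
def circular_subregion(application_point, width, q4):
    """Return the circle of diameter D = W / Q4 centered on an application point."""
    d = width / q4  # computed length D, used as the diameter
    return {"center": application_point, "diameter": d}
```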
In one embodiment, when no human face region exists in the target image, the application point and sphere of action of the pending image region are determined from the width and height of the target image. Compared with the embodiment that determines them from the human face region, determining the application point and sphere of action from the width and height of the target image may be slightly less accurate; therefore, to reduce the number of user operations, the default action intensity may be adjusted to the minimum action intensity, i.e., no action-intensity processing is applied to the pending image region.
Image processing is then performed on the pending image region with the minimum action intensity according to the default image processing mode, obtaining the processed target image.
S206: determining the pending image region according to the application point and sphere of action.
S207: according to the default image processing mode, performing image processing on the pending image region with the default action intensity, obtaining the processed target image.
In this embodiment, S206-S207 are identical to S104-S105 of the above embodiment and are not repeated here.
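The branch between the face-based path (S202-S203) and the size-based path (S204-S205) can be outlined as below; the helper callables are assumptions standing in for the embodiment's sub-steps:

```python
def locate_pending_region(image, detect_face, from_face, from_size):
    """Return the application point(s) and sphere of action for an image.

    Uses the face region's coordinate parameters when a face is detected
    (S202-S203); otherwise falls back to the image's width and height
    (S204-S205).
    """
    face = detect_face(image)
    if face is not None:
        return from_face(face)
    return from_size(image["width"], image["height"])
```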
In the technical solution provided by this embodiment of the present invention, whether a human face region exists in a target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in the preset coordinate system; the application point and sphere of action of the pending image region in the target image are calculated according to the coordinate parameters; the pending image region is determined according to the application point and sphere of action; and, according to the default image processing mode, image processing is performed on the pending image region with the default action intensity, obtaining the processed target image. In the solution provided by this embodiment of the present invention, the human face region in the image serves as the reference for determining the application point and sphere of action of the pending image region, after which image processing is performed on the pending image region with the default action intensity. This spares the user the cumbersome manual operations otherwise needed to determine the application point, sphere of action, and action intensity, thereby simplifying the operation of image processing and improving the user experience.
With reference to another specific embodiment, an image processing method provided by the present invention is introduced below.
As shown in Fig. 3, an image processing method provided by an embodiment of the present invention may further comprise the following steps:
S301: receiving an instruction to perform shadow processing on the target image.
Shadow processing may act on the chest: by setting the transparency of the shadow, the chest appears visually fuller.
It should be noted that shadow processing and image processing such as the chest-enlargement and hip-enhancement processing of the above embodiments may be performed independently; that is, when performing image processing on the target image, only shadow processing may be done, or only image processing such as chest enlargement or hip enhancement may be done. Of course, both kinds of processing may also be applied to the target image, for example performing shadow processing after the chest-enlargement processing.
S302: choosing a target shadow image from a default shadow image set.
The shadow image set is preset and stores multiple shadow images of different types; the user can choose a shadow image from the set as needed to serve as the target shadow image. For example, if the shadow image set contains shadow images of six types, the user may choose the No. 1 shadow image as the target shadow image for shadow processing and may, of course, also switch to another shadow image.
S303: determining the placement location of the target shadow image in the target image and the transparency of the target shadow image.
When a human face region exists in the target image, the placement location may be determined according to the coordinate parameters of the human face region; this is similar to the above embodiment of calculating the application point and sphere of action of the pending image region in the target image according to the coordinate parameters, and is not repeated here.
When no human face region exists in the target image, the placement location may be determined according to the width and height of the target image; this is similar to the above embodiment of calculating the application point and sphere of action of the pending image region in the target image according to the width and height of the target image, and is not repeated here.
In addition, the transparency of the target shadow image may be custom-set in advance, and the user may readjust it as needed.
S304: superimposing the target shadow image onto the determined placement location with the determined transparency.
The target shadow image is displayed with the determined transparency and superimposed onto the determined placement location, after which the target image with the superimposed target shadow image is displayed.
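A hedged sketch of the superimposition in S304, using simple alpha blending on scalar pixel values (a real implementation would blend per color channel; all names are illustrative):

```python
def overlay_shadow(pixels, shadow, placement, transparency):
    """Blend a shadow image onto the target image at the placement location.

    A transparency of 1.0 leaves the target image unchanged; 0.0 replaces
    the covered pixels entirely with the shadow values.
    """
    out = dict(pixels)
    px, py = placement
    for (sx, sy), shadow_value in shadow.items():
        key = (px + sx, py + sy)
        if key in out:
            out[key] = transparency * out[key] + (1.0 - transparency) * shadow_value
    return out
```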
In the technical solution provided by this embodiment of the present invention, whether a human face region exists in a target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in the preset coordinate system; the application point and sphere of action of the pending image region in the target image are calculated according to the coordinate parameters; the pending image region is determined according to the application point and sphere of action; and, according to the default image processing mode, image processing is performed on the pending image region with the default action intensity, obtaining the processed target image. In the solution provided by this embodiment of the present invention, the human face region in the image serves as the reference for determining the application point and sphere of action of the pending image region, after which image processing is performed on the pending image region with the default action intensity. This spares the user the cumbersome manual operations otherwise needed to determine the application point, sphere of action, and action intensity, thereby simplifying the operation of image processing and improving the user experience.
Corresponding to the above method embodiments, an embodiment of the present invention further provides an image processing apparatus; as shown in Fig. 4, the apparatus includes:
a detection module 410, configured to detect whether a human face region exists in the target image;
a first acquisition module 420, configured to, when the detection module detects that a human face region exists in the target image, obtain the coordinate parameters of the human face region according to the coordinates of the pixels in the human face region in the preset coordinate system;
a first computing module 430, configured to calculate the application point and sphere of action of the pending image region in the target image according to the coordinate parameters;
a first determining module 440, configured to determine the pending image region according to the application point and sphere of action;
a processing module 450, configured to, according to the default image processing mode, perform image processing on the pending image region with the default action intensity, obtaining the processed target image.
Optionally, in one embodiment, the first computing module 430 includes:
a first determination submodule, configured to determine the application point of the pending image region of the target image according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, wherein the first coordinate is the coordinate in the coordinate parameters with the smallest abscissa value; the second coordinate is the coordinate in the coordinate parameters with the largest abscissa value; the third coordinate is the coordinate, among those in the coordinate parameters identifying the eyebrow, with the largest ordinate value; and the fourth coordinate is the coordinate, among those in the coordinate parameters identifying the chin, with the smallest ordinate value;
a second determination submodule, configured to determine the sphere of action of the pending image region of the target image according to the first coordinate and the second coordinate.
Optionally, in one embodiment, the pending image region includes a first subregion and a second subregion, and the first determination submodule includes:
a first determining unit, configured to determine the abscissa of the first coordinate as the abscissa of the first application point of the first subregion;
a second determining unit, configured to determine the abscissa of the second coordinate as the abscissa of the second application point of the second subregion;
a first computing unit, configured to obtain the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion using the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
where yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, in one embodiment, the second determination submodule includes:
a second computing unit, configured to obtain the distance between the first coordinate and the second coordinate on the x-axis using the following formula:
L = |x1 - x2|
where x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance between the first coordinate and the second coordinate on the x-axis;
a third determining unit, configured to determine the sphere of action of the first subregion and the sphere of action of the second subregion as circular regions with L as diameter.
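The face-based computation of the two submodules above can be summarized in one sketch (the function name is illustrative; the formulas are those given in the text):

```python
def face_based_points(x1, x2, y3, y4):
    """Application points and diameter from the face's coordinate parameters.

    x1, x2: smallest and largest abscissas in the coordinate parameters;
    y3: eyebrow ordinate (largest); y4: chin ordinate (smallest).
    """
    y = y4 - (y3 - y4)  # shared ordinate of both application points
    l = abs(x1 - x2)    # diameter L of each circular sphere of action
    return (x1, y), (x2, y), l
```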
Optionally, in one embodiment, the first determining module 440 includes:
a third determination submodule, configured to take the circular region with L as diameter centered on the first application point as the first subregion;
a fourth determination submodule, configured to take the circular region with L as diameter centered on the second application point as the second subregion.
Optionally, in one embodiment, the apparatus further includes:
an adjusting module, configured to adjust the pending image region;
and the processing module 450 includes:
a first processing submodule, configured to perform image processing on the adjusted pending image region according to the default image processing mode, obtaining the processed target image.
Optionally, in one embodiment, the adjusting module is specifically configured to perform at least one of the following adjustment modes:
moving the pending image region to a target location;
adjusting the sphere of action of the pending image region to a target sphere of action;
adjusting the default action intensity of the pending image region to a target action intensity.
On the basis of the above Fig. 4, an embodiment of the present invention further provides another embodiment; as shown in Fig. 5, the apparatus further includes:
a second acquisition module 510, configured to obtain the width and height of the target image when the detection module detects that no human face region exists in the target image;
a second computing module 520, configured to calculate the application point and sphere of action of the pending image region in the target image according to the width and height of the target image.
Optionally, in one embodiment, the pending image region includes a first subregion and a second subregion, and the second computing module 520 includes:
a first calculating submodule, configured to obtain the abscissa of the first application point of the first subregion using the following formula:
xZ1 = W / Q1
where xZ1 is the abscissa of the first application point, W is the width of the target image, and Q1 is the default first ratio value;
a second calculating submodule, configured to obtain the abscissa of the second application point of the second subregion using the following formula:
xZ2 = W / Q2
where xZ2 is the abscissa of the second application point and Q2 is the default second ratio value;
a third calculating submodule, configured to obtain the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion using the following formula:
yZ1 = yZ2 = H / Q3
where yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, H is the height of the target image, and Q3 is the default third ratio value;
a fourth calculating submodule, configured to obtain a computed length using the following formula:
D = W / Q4
where W is the width of the target image and Q4 is the default fourth ratio value;
a fifth determination submodule, configured to determine the sphere of action of the first subregion and the sphere of action of the second subregion as circular regions with D as diameter.
Optionally, in one embodiment, the processing module 450 includes:
a processing submodule, configured to perform image processing on the pending image region with the default minimum action intensity according to the default image processing mode, obtaining the processed target image.
An embodiment of the present invention further provides another embodiment; as shown in Fig. 6, the apparatus further includes:
a receiving module 610, configured to receive an instruction to perform shadow processing on the target image;
a choosing module 620, configured to choose a target shadow image from a default shadow image set;
a second determining module 630, configured to determine the placement location of the target shadow image in the target image and the transparency of the target shadow image;
a superimposing module 640, configured to superimpose the target shadow image onto the determined placement location with the determined transparency.
In the technical solution provided by this embodiment of the present invention, whether a human face region exists in a target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in the preset coordinate system; the application point and sphere of action of the pending image region in the target image are calculated according to the coordinate parameters; the pending image region is determined according to the application point and sphere of action; and, according to the default image processing mode, image processing is performed on the pending image region with the default action intensity, obtaining the processed target image. In the solution provided by this embodiment of the present invention, the human face region in the image serves as the reference for determining the application point and sphere of action of the pending image region, after which image processing is performed on the pending image region with the default action intensity. This spares the user the cumbersome manual operations otherwise needed to determine the application point, sphere of action, and action intensity, thereby simplifying the operation of image processing and improving the user experience.
Since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
An embodiment of the present invention further provides an electronic device, as shown in Fig. 7, including a processor 710, a communication interface 720, a memory 730, and a communication bus 740, wherein the processor 710, the communication interface 720, and the memory 730 communicate with each other via the communication bus 740.
The memory 730 is configured to store a computer program.
The processor 710 is configured to, when executing the program stored on the memory 730, implement the following steps:
detecting whether a human face region exists in the target image;
if so, obtaining the coordinate parameters of the human face region according to the coordinates of the pixels in the human face region in the preset coordinate system;
calculating the application point and sphere of action of the pending image region in the target image according to the coordinate parameters;
determining the pending image region according to the application point and sphere of action;
according to the default image processing mode, performing image processing on the pending image region with the default action intensity, obtaining the processed target image.
It can be understood that the electronic device can also perform the image processing method of any of the above embodiments, which is not repeated here.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, which does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include random access memory (RAM) and may also include non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located away from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present invention further provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program performs any of the image processing methods described above.
An embodiment of the present invention further provides a computer application program which, when run on a computer, causes the computer to perform any of the image processing methods of the above embodiments.
In the technical solution provided by this embodiment of the present invention, whether a human face region exists in a target image is detected; if so, the coordinate parameters of the human face region are obtained according to the coordinates of the pixels in the human face region in the preset coordinate system; the application point and sphere of action of the pending image region in the target image are calculated according to the coordinate parameters; the pending image region is determined according to the application point and sphere of action; and, according to the default image processing mode, image processing is performed on the pending image region with the default action intensity, obtaining the processed target image. In the solution provided by this embodiment of the present invention, the human face region in the image serves as the reference for determining the application point and sphere of action of the pending image region, after which image processing is performed on the pending image region with the default action intensity. This spares the user the cumbersome manual operations otherwise needed to determine the application point, sphere of action, and action intensity, thereby simplifying the operation of image processing and improving the user experience.
The terms used in the embodiments of the present application are merely for the purpose of describing specific embodiments and are not intended to limit the present application. The singular forms "a", "said", and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used in the embodiments of the present application to describe various connection ports, identification information, and the like, these connection ports and identification information should not be limited by these terms, which are only used to distinguish them from each other. For example, without departing from the scope of the embodiments of the present application, a first connection port may also be called a second connection port and, similarly, a second connection port may also be called a first connection port.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (the stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
Through the above description of the embodiments, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is taken as an example; in practical applications, the above functions may be assigned to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in each embodiment of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or replacement readily conceivable by those familiar with the technical field within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.