
CN107395958A - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN107395958A
Authority
CN
China
Prior art keywords
coordinate
image
region
action
application point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710527387.XA
Other languages
Chinese (zh)
Other versions
CN107395958B (en)
Inventor
张启峰
曹莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jupiter Technology Co ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201710527387.XA priority Critical patent/CN107395958B/en
Publication of CN107395958A publication Critical patent/CN107395958A/en
Application granted granted Critical
Publication of CN107395958B publication Critical patent/CN107395958B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides an image processing method and apparatus, an electronic device, and a storage medium. The image processing method comprises: when a face region is detected in a target image, obtaining coordinate parameters of the face region according to the coordinates, in a preset coordinate system, of the pixel points in the face region; calculating, according to the coordinate parameters, an action point and an action range of an image region to be processed in the target image; determining the image region to be processed according to the action point and the action range; and performing, according to a preset image processing mode and with a preset action intensity, image processing on the image region to be processed. In the scheme provided by the embodiment, the action point and the action range of the image region to be processed are determined with the face region in the image as a reference, and image processing is then performed on that region with a preset action intensity. This avoids requiring the user to determine the action point, action range, and action intensity through cumbersome manual operations, thereby simplifying image processing and improving the user experience.

Description

Image processing method and apparatus, electronic device, and storage medium
Technical field
The present invention relates to the field of electronic technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Photo retouching has become more and more popular in daily life, especially among young women. In the past, only professional retouchers performed the relatively complex retouching operations offered by tools such as Photoshop; today, retouching tends increasingly toward simplicity, and with retouching software an image can be beautified through only a few simple operations. This easy mode of retouching is increasingly welcomed, and accordingly, a variety of applications with retouching functions have appeared.
There are many specific retouching operations, for example: adding filter effects, removing red-eye, removing noise, and local enlargement or deformation. Local enlargement and deformation are mainly used to beautify portrait images, such as fine-tuning the eyes or adjusting the chest. At present, when performing a chest adjustment on a portrait image, the user decides the position of the adjustment region in the image, the desired extent of the adjustment region, and the intensity of the effect that the adjustment should achieve.
However, when performing such a chest adjustment, the user must first manually determine the position of the adjustment region in the image, the desired extent of the adjustment, and the achievable intensity of the effect; otherwise the chest adjustment cannot be carried out. Performing a chest adjustment on a portrait image in the prior art is therefore cumbersome, which degrades the user experience.
Summary of the invention
An object of the embodiments of the present invention is to provide an image processing method and apparatus, an electronic device, and a storage medium, so as to solve the problem that the action point, action range, and action intensity can only be determined by the user through cumbersome manual operations. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides an image processing method, the method comprising:
detecting whether a face region exists in a target image;
if so, obtaining coordinate parameters of the face region according to the coordinates, in a preset coordinate system, of the pixel points in the face region;
calculating, according to the coordinate parameters, an action point and an action range of an image region to be processed in the target image;
determining the image region to be processed according to the action point and the action range;
performing, according to a preset image processing mode and with a preset action intensity, image processing on the image region to be processed to obtain a processed target image.
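The flow of these steps can be sketched in a few lines. This is a minimal outline under stated assumptions: the three helper callables (`detect_face`, `compute_regions`, `apply_effect`) are hypothetical placeholders standing in for a face detector, the action-point/range computation, and the preset processing mode; none of these names come from the patent.

```python
def process_image(image, detect_face, compute_regions, apply_effect, intensity=0.5):
    """Sketch of the claimed flow: detect a face region, derive the action
    points and action ranges of the to-be-processed regions from it, then
    apply the preset processing mode at the preset action intensity."""
    face = detect_face(image)             # coordinate parameters, or None
    if face is None:
        return image                      # (a size-based fallback is claimed separately)
    for region in compute_regions(face):  # one (action point, action range) per subregion
        image = apply_effect(image, region, intensity)
    return image
```

The point of the sketch is only the control flow: the action point, range, and intensity are all supplied by the method itself, so the user never has to pick them by hand.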
Optionally, the step of calculating, according to the coordinate parameters, the action point and action range of the image region to be processed in the target image includes:
determining the action point of the image region to be processed according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, where the first coordinate is the coordinate in the coordinate parameters with the smallest abscissa value, the second coordinate is the coordinate with the largest abscissa value, the third coordinate is the coordinate with the largest ordinate value among the coordinates identifying an eyebrow, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates identifying the chin;
determining the action range of the image region to be processed according to the first coordinate and the second coordinate.
Optionally, the image region to be processed includes a first subregion and a second subregion;
the step of determining the action point of the image region to be processed according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate includes:
taking the abscissa of the first coordinate as the abscissa of a first action point of the first subregion;
taking the abscissa of the second coordinate as the abscissa of a second action point of the second subregion;
obtaining the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:
yZ1=yZ2=y4-(y3-y4)
where yZ1 is the ordinate of the first action point, yZ2 is the ordinate of the second action point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, the step of determining the action range of the image region to be processed according to the first coordinate and the second coordinate includes:
obtaining the distance along the x-axis between the first coordinate and the second coordinate using the following formula:
L=|x1-x2|
where x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance along the x-axis between the first coordinate and the second coordinate;
determining that the action range of the first subregion and the action range of the second subregion are each a circular region with diameter L.
Optionally, the step of determining the image region to be processed according to the action point and the action range includes:
taking the circular region with diameter L centered on the first action point as the first subregion;
taking the circular region with diameter L centered on the second action point as the second subregion.
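Taken together, the face-based rules reduce to a short computation. The sketch below is my own minimal rendering of those rules (variable names are mine, not the patent's), returning each subregion as a (center, diameter) pair:

```python
def subregions_from_face(c1, c2, c3, c4):
    """c1/c2: face-region coordinates with the smallest/largest abscissa;
    c3: largest-ordinate eyebrow coordinate; c4: smallest-ordinate chin
    coordinate. The preset coordinate system has y growing upward."""
    (x1, _), (x2, _), (_, y3), (_, y4) = c1, c2, c3, c4
    y = y4 - (y3 - y4)  # yZ1 = yZ2: mirror the eyebrow-to-chin span below the chin
    L = abs(x1 - x2)    # action range: diameter of each circular subregion
    return ((x1, y), L), ((x2, y), L)
```

Note the geometric reading of the ordinate formula: the vertical span from chin to eyebrow is reflected the same distance below the chin, which places both action points at chest height relative to the detected face.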
Optionally, after the step of determining the image region to be processed according to the action point and the action range, the method further includes:
adjusting the image region to be processed;
and the step of performing image processing on the image region to be processed according to the preset image processing mode and with the preset action intensity to obtain the processed target image includes:
performing image processing on the adjusted image region according to the preset image processing mode to obtain the processed target image.
Optionally, the step of adjusting the image region to be processed includes at least one of the following adjustment modes:
moving the image region to be processed to a target position;
adjusting the action range of the image region to be processed to a target action range;
adjusting the preset action intensity of the image region to be processed to a target action intensity.
Optionally, before determining the image region to be processed according to the action point and the action range, the method further includes:
if no face region exists in the target image, obtaining the width and height of the target image;
calculating the action point and action range of the image region to be processed in the target image according to the width and height of the target image.
Optionally, the image region to be processed includes a first subregion and a second subregion;
the step of determining the action point and action range of the image region to be processed in the target image according to the width and height of the target image includes:
obtaining the abscissa of the first action point of the first subregion using the following formula:
xZ1=W·Q1
where xZ1 is the abscissa of the first action point, W is the width of the target image, and Q1 is a preset first ratio value;
obtaining the abscissa of the second action point of the second subregion using the following formula:
xZ2=W·Q2
where xZ2 is the abscissa of the second action point and Q2 is a preset second ratio value;
obtaining the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:
yZ1=yZ2=H·Q3
where yZ1 is the ordinate of the first action point, yZ2 is the ordinate of the second action point, H is the height of the target image, and Q3 is a preset third ratio value;
obtaining a calculated length using the following formula:
D=W·Q4
where D is the calculated length, W is the width of the target image, and Q4 is a preset fourth ratio value;
determining that the action range of the first subregion and the action range of the second subregion are each a circular region with diameter D.
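The ratio formulas appear as images in the original patent publication, so their exact form cannot be verified from this text. The sketch below assumes the natural reading implied by the variable definitions (each quantity is a preset ratio of the image width or height); both that reading and the default Q values are assumptions, not the patent's figures:

```python
def subregions_from_size(W, H, Q1=0.25, Q2=0.75, Q3=0.5, Q4=0.25):
    """Fallback when no face region is detected: place the action points
    and action range by preset ratios of the image size. Assumed formulas:
    xZ1 = W*Q1, xZ2 = W*Q2, yZ1 = yZ2 = H*Q3, D = W*Q4.
    The Q defaults here are illustrative only."""
    y = H * Q3
    D = W * Q4  # shared action range (circle diameter)
    return ((W * Q1, y), D), ((W * Q2, y), D)
```

Because everything is expressed as a ratio, the same Q values scale consistently across target images of any resolution.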
Optionally, the step of performing image processing on the image region to be processed according to the preset image processing mode and with the preset action intensity to obtain the processed target image includes:
performing image processing on the image region to be processed according to the preset image processing mode and with a preset minimum action intensity to obtain the processed target image.
Optionally, the method further includes:
receiving an instruction to perform shadow processing on the target image;
selecting a target shadow image from a preset shadow image set;
determining a placement position of the target shadow image in the target image and a transparency of the target shadow image;
superimposing the target shadow image at the determined placement position with the determined transparency.
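The shadow step amounts to compositing the chosen shadow image onto the target at the chosen position. The patent does not specify the blending math, so the per-pixel sketch below assumes standard linear alpha blending on grayscale values, with an assumed convention for the transparency parameter:

```python
def overlay_shadow(target, shadow, x0, y0, transparency):
    """Superimpose `shadow` (a 2-D list of gray values) onto `target` at
    row/column offset (y0, x0), clipping at the target's borders.
    Convention assumed here (not stated in the patent): transparency 1.0
    leaves the target unchanged; 0.0 fully replaces it with the shadow."""
    alpha = 1.0 - transparency
    for dy, row in enumerate(shadow):
        for dx, s in enumerate(row):
            y, x = y0 + dy, x0 + dx
            if 0 <= y < len(target) and 0 <= x < len(target[0]):
                target[y][x] = (1.0 - alpha) * target[y][x] + alpha * s
    return target
```

In practice a library compositor (e.g. Pillow's paste-with-mask) would replace this loop; the sketch only makes the transparency semantics concrete.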
In a second aspect, an embodiment of the present invention provides an image processing apparatus, the apparatus including:
a detection module, configured to detect whether a face region exists in a target image;
a first obtaining module, configured to, when the detection module detects that a face region exists in the target image, obtain coordinate parameters of the face region according to the coordinates, in a preset coordinate system, of the pixel points in the face region;
a first calculation module, configured to calculate, according to the coordinate parameters, an action point and an action range of an image region to be processed in the target image;
a first determination module, configured to determine the image region to be processed according to the action point and the action range;
a processing module, configured to perform, according to a preset image processing mode and with a preset action intensity, image processing on the image region to be processed to obtain a processed target image.
Optionally, the first calculation module includes:
a first determination submodule, configured to determine the action point of the image region to be processed according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, where the first coordinate is the coordinate in the coordinate parameters with the smallest abscissa value, the second coordinate is the coordinate with the largest abscissa value, the third coordinate is the coordinate with the largest ordinate value among the coordinates identifying an eyebrow, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates identifying the chin;
a second determination submodule, configured to determine the action range of the image region to be processed according to the first coordinate and the second coordinate.
Optionally, the image region to be processed includes a first subregion and a second subregion;
the first determination submodule includes:
a first determination unit, configured to take the abscissa of the first coordinate as the abscissa of a first action point of the first subregion;
a second determination unit, configured to take the abscissa of the second coordinate as the abscissa of a second action point of the second subregion;
a first calculation unit, configured to obtain the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:
yZ1=yZ2=y4-(y3-y4)
where yZ1 is the ordinate of the first action point, yZ2 is the ordinate of the second action point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, the second determination submodule includes:
a second calculation unit, configured to obtain the distance along the x-axis between the first coordinate and the second coordinate using the following formula:
L=|x1-x2|
where x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance along the x-axis between the first coordinate and the second coordinate;
a third determination unit, configured to determine that the action range of the first subregion and the action range of the second subregion are each a circular region with diameter L.
Optionally, the first determination module includes:
a third determination submodule, configured to take the circular region with diameter L centered on the first action point as the first subregion;
a fourth determination submodule, configured to take the circular region with diameter L centered on the second action point as the second subregion.
Optionally, the apparatus further includes:
an adjustment module, configured to adjust the image region to be processed;
and the processing module includes:
a first processing submodule, configured to perform image processing on the adjusted image region according to the preset image processing mode to obtain the processed target image.
Optionally, the adjustment module is specifically configured to apply at least one of the following adjustment modes:
moving the image region to be processed to a target position;
adjusting the action range of the image region to be processed to a target action range;
adjusting the preset action intensity of the image region to be processed to a target action intensity.
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain the width and height of the target image when the detection module detects that no face region exists in the target image;
a second calculation module, configured to calculate the action point and action range of the image region to be processed in the target image according to the width and height of the target image.
Optionally, the image region to be processed includes a first subregion and a second subregion;
the second calculation module includes:
a first calculation submodule, configured to obtain the abscissa of the first action point of the first subregion using the following formula:
xZ1=W·Q1
where xZ1 is the abscissa of the first action point, W is the width of the target image, and Q1 is a preset first ratio value;
a second calculation submodule, configured to obtain the abscissa of the second action point of the second subregion using the following formula:
xZ2=W·Q2
where xZ2 is the abscissa of the second action point and Q2 is a preset second ratio value;
a third calculation submodule, configured to obtain the ordinate of the first action point of the first subregion and the ordinate of the second action point of the second subregion using the following formula:
yZ1=yZ2=H·Q3
where yZ1 is the ordinate of the first action point, yZ2 is the ordinate of the second action point, H is the height of the target image, and Q3 is a preset third ratio value;
a fourth calculation submodule, configured to obtain a calculated length using the following formula:
D=W·Q4
where D is the calculated length, W is the width of the target image, and Q4 is a preset fourth ratio value;
a fifth determination submodule, configured to determine that the action range of the first subregion and the action range of the second subregion are each a circular region with diameter D.
Optionally, the processing module includes:
a processing submodule, configured to perform image processing on the image region to be processed according to the preset image processing mode and with a preset minimum action intensity to obtain the processed target image.
Optionally, the apparatus further includes:
a receiving module, configured to receive an instruction to perform shadow processing on the target image;
a selection module, configured to select a target shadow image from a preset shadow image set;
a second determination module, configured to determine a placement position of the target shadow image in the target image and a transparency of the target shadow image;
a superimposing module, configured to superimpose the target shadow image at the determined placement position with the determined transparency.
In a third aspect, an embodiment of the present invention provides an electronic device including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program;
the processor is configured to execute the program stored in the memory so as to perform any of the image processing methods described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs any of the image processing methods described above.
In a fifth aspect, an embodiment of the present invention provides a computer application program which, when run on a computer, causes the computer to perform any of the image processing methods in the above embodiments.
In the technical solutions provided by the embodiments of the present invention, when a face region is detected in a target image, coordinate parameters of the face region are obtained according to the coordinates, in a preset coordinate system, of the pixel points in the face region; an action point and an action range of an image region to be processed in the target image are calculated according to the coordinate parameters; the image region to be processed is determined according to the action point and the action range; and image processing is performed on that region according to a preset image processing mode and with a preset action intensity, yielding the processed target image. In the provided scheme, the face region in the image serves as the reference for determining the action point and action range of the image region to be processed, and image processing is then performed on that region with a preset action intensity. This avoids requiring the user to determine the action point, action range, and action intensity through cumbersome manual operations, thereby simplifying image processing and improving the user experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is another flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is yet another flowchart of an image processing method according to an embodiment of the present invention;
Fig. 4 is a structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 5 is another structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 6 is yet another structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 7 is a structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
To solve the problem that the user can determine the action point, action range, and action intensity only through cumbersome manual operations, and thereby to simplify image processing and improve the user experience, the embodiments of the present invention provide an image processing method and apparatus, an electronic device, and a storage medium.
The image processing method provided by the embodiments of the present invention can be used in application software on an electronic device, such as application software on a mobile phone, a tablet, or a smart television, where the application software can be any kind of retouching software, for example PhotoGrid or Meitu XiuXiu.
The image processing in the embodiments of the present invention can be of types such as breast enhancement or hip enhancement; breast-enhancement image processing is taken here as an example to describe the image processing method provided by the embodiments of the present invention.
The image processing method provided by the embodiments of the present invention is introduced first.
As shown in Fig. 1, the image processing method provided by an embodiment of the present invention includes the following steps:
S101: detect whether a face region exists in a target image; if so, execute S102.
The target image can be a photo taken by an electronic device, a picture downloaded from the network, and so on, where the electronic device includes a mobile phone, a tablet, a camera, etc. The format of the target image includes but is not limited to: JPEG (Joint Photographic Experts Group), BMP (Bitmap), PNG (Portable Network Graphics), GIF (Graphics Interchange Format), TIFF (Tag Image File Format), etc.
In general, images can be divided into landscape images and portrait images, and in most cases, to make the person in an image look better, the user applies image processing to portrait images. When image processing is applied to a portrait image, i.e., when the target image is a portrait image, the target image includes at least one portrait. When the target image includes only one portrait, image processing can be performed on that portrait; when the target image includes multiple portraits, image processing can be performed on each portrait in turn according to a preset rule, where the preset rule can be: process in order from the left of the target image to the right; or alternatively: process in order from the right of the target image to the left. It can be understood, of course, that the preset rule is not limited to these two.
The face region is the region where a person's face is located; its extent in the target image can be identified and extracted by face recognition technology.
S102: obtain coordinate parameters of the face region according to the coordinates, in a preset coordinate system, of the pixel points in the face region.
The preset coordinate system can be a coordinate system that takes the target image as its reference datum; for example, the preset coordinate system can take the lower edge of the target image as the X-axis and the left edge of the target image as the Y-axis.
In the preset coordinate system, each pixel point of the target image corresponds one-to-one with a coordinate, one coordinate point per pixel point. For example, the pixel point at the lower-left corner of the target image lies at the coordinate origin, with coordinate (0, 0).
Since the pixel points of the face region correspond one-to-one with coordinates in the preset coordinate system, the face outline, the outlines of the facial features, and the parts they contain can each be represented by a corresponding group of coordinates in the preset coordinate system.
In one implementation, the coordinates corresponding to the pixel points of the entire face region can be obtained, including the coordinates of the face-outline pixel points and the coordinates of all pixel points inside the face outline. In this implementation, more complete face-region coordinates are obtained, so the action point and action range can be determined more accurately in subsequent steps.
Further, only the coordinates corresponding to the pixel points of the face outline and of the facial-feature outlines in the face region may be obtained, where the facial-feature outlines include the eyebrow outline, eye outline, nose outline, mouth outline, and ear outline. The facial features are the representative characteristic parts of a face region, so the coordinates of the face outline and facial-feature outlines can also represent the face region accurately.
Further still, since the extent of the face region can be determined from the eyebrow outline and the face outline, only the coordinates corresponding to the pixel points of the face outline and the eyebrow outline may be obtained; for the eyebrow outline, the coordinates of the outline of either one of the two eyebrows can be used.
It should be noted that the face outline includes at least the left and right outlines of the face and the chin outline.
S103: calculate, according to the coordinate parameters, the action point and action range of the image region to be processed in the target image.
The action point is the center point of the region in which the user expects image processing to be performed, and the action range is the extent of that region; together, the action point and the action range determine the image region to be processed. For example, with the action point as the center and the action range as the diameter, the image region determined by the action point and action range is a circular region; with the action point as the intersection of the two diagonals of a square and the action range as the side length, the determined image region is a square region.
The image region to be processed is the region selected on the target image for image processing, and for different types of image processing, the number of separate, independent regions it contains can differ; for example, for breast-enhancement processing, the image region to be processed can be two separate, independent regions.
When the image region to be processed consists of two separate, independent regions, there are two action points, one corresponding to each region, and likewise two action ranges, one per region, where the two action ranges can be set to be the same or different.
In one embodiment, from the obtained coordinate parameters of the face region, the coordinate with the smallest abscissa value is determined as the first coordinate, the coordinate with the largest abscissa value as the second coordinate, the coordinate with the largest ordinate value as the third coordinate, and the coordinate with the smallest ordinate value as the fourth coordinate.
The first and second coordinates can be determined from the coordinates identifying the face contour, the third coordinate from the coordinates identifying the eyebrow contours, and the fourth coordinate from the coordinates of the chin contour within the face contour.
Specifically, the application points of the to-be-processed image region of the target image can be determined from the first, second, third and fourth coordinates, and the action range from the first and second coordinates.
In one embodiment, the to-be-processed image region consists of two separate, independent regions: a first sub-region and a second sub-region, whose application points are the first application point and the second application point respectively.
Let the first coordinate be (x1, y1), the second coordinate (x2, y2), the third coordinate (x3, y3), the fourth coordinate (x4, y4), the first application point (xZ1, yZ1) and the second application point (xZ2, yZ2).
The abscissa of the first coordinate is taken as the abscissa of the first application point, i.e. xZ1 = x1, and the abscissa of the second coordinate as the abscissa of the second application point, i.e. xZ2 = x2.
The ordinates of the first and second application points are equal and are obtained by the following formula:
yZ1=yZ2=y4-(y3-y4)
Illustratively, let the first coordinate be (1, 40), the second coordinate (21, 40), the third coordinate (15, 45) and the fourth coordinate (11, 30).
Then, according to the above embodiment, the abscissa of the first application point is xZ1 = 1 and the abscissa of the second is xZ2 = 21; the ordinate of the first application point is yZ1 = y4 - (y3 - y4) = 30 - (45 - 30) = 15, and yZ2 = yZ1 = 15. In sum, the first application point is (1, 15) and the second application point is (21, 15).
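Assuming the four extreme coordinates have already been extracted from the face landmarks, the computation in S103 can be sketched as follows (the function and parameter names are illustrative, not taken from the patent):

```python
def action_points(x1, y1, x2, y2, x3, y3, x4, y4):
    """Compute the two application points from the face-region extremes.

    (x1, y1): leftmost contour point, (x2, y2): rightmost contour point,
    (x3, y3): topmost point (eyebrow contour), (x4, y4): bottommost point (chin).
    """
    # Both points share one ordinate: they lie below the chin by the
    # eyebrow-to-chin distance, y = y4 - (y3 - y4).
    y = y4 - (y3 - y4)
    # The abscissas are those of the leftmost and rightmost contour points.
    return (x1, y), (x2, y)


# Worked example from the text: points come out as (1, 15) and (21, 15).
p1, p2 = action_points(1, 40, 21, 40, 15, 45, 11, 30)
```

This matches the illustrative numbers above; with a y-axis growing downward the sign of the offset would flip.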
Further, in the embodiment in which the action range is determined for the case where the first and second sub-regions are circular regions, the distance between the first and second coordinates along the x-axis is obtained by the following formula:
L=| x1-x2|
and the action range of each of the first and second sub-regions is determined to be a circular region with L as its diameter.
Illustratively, with the first coordinate (1, 40) and the second coordinate (21, 40), the distance L between them along the x-axis is 20, so the action range of each of the first and second sub-regions is a circular region with 20 as its diameter.
S104: determine the to-be-processed image region according to the application points and the action range.
The to-be-processed image region consists of two separate, independent regions, the first and second sub-regions, each with a circular action range of diameter L; accordingly, the circular region of diameter L centered on the first application point is taken as the first sub-region, and the circular region of diameter L centered on the second application point as the second sub-region.
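The construction of the two circular sub-regions in S104 can be sketched as follows (a minimal illustration; the `(center, radius)` representation and the names are assumptions, not from the patent):

```python
def sub_regions(x1, y1, x2, y2, x3, y3, x4, y4):
    """Build the two circular sub-regions as (center, radius) pairs."""
    L = abs(x1 - x2)       # action range: the circle diameter
    y = y4 - (y3 - y4)     # shared ordinate of both application points
    r = L / 2.0
    return ((x1, y), r), ((x2, y), r)


def contains(region, px, py):
    """True if pixel (px, py) lies inside a (center, radius) region."""
    (cx, cy), r = region
    return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2
```

A per-pixel membership test such as `contains` is one simple way for the later processing step to restrict its effect to the determined region.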
In one embodiment, after the to-be-processed image region is determined, it may be displayed; specifically, it may be displayed on the screen of the corresponding electronic device, such as a phone screen, tablet screen or television screen.
In one embodiment, after the to-be-processed image region is determined, it may also be adjusted, so that when the region determined from the application points and action range is inaccurate, the user can still correct it as needed.
One way to adjust the to-be-processed image region is to move it to a target location. Specifically, the user long-presses the region on the screen of the electronic device; once the press has lasted a preset duration, the user can drag the region and move it to the target location.
Another way to adjust is to change the action range of the to-be-processed image region to a target action range. Specifically, the user long-presses the edge of the region on the screen; once the press has lasted a preset duration, the user can drag the edge to scale the region, thereby adjusting its action range.
Illustratively, when the to-be-processed image region is circular, the user long-presses the circle's edge; dragging toward the center shrinks the action range, while dragging away from the center enlarges it.
A further way to adjust is to change the preset action intensity of the to-be-processed image region to a target action intensity. In one embodiment, once the target action intensity is set, the region is processed at that intensity and the processed target image is displayed.
Specifically, when the action intensity is adjusted, a control area appears on the screen containing a progress bar for setting the intensity, and dragging the progress bar adjusts the action intensity. Optionally, when the action intensity is at its minimum, no processing is applied to the image.
It will be appreciated that the three adjustment modes above may be applied individually, in any pairwise combination, or all three together.
After the to-be-processed image region has been adjusted, image processing is performed on the adjusted region according to the preset image processing mode to obtain the processed target image. For example, after the action intensity of the region has been adjusted to a target action intensity, the region is processed at that target intensity to obtain the processed target image.
S105: perform image processing on the to-be-processed image region according to a preset image processing mode and with a preset action intensity, to obtain the processed target image.
The preset image processing mode may be breast-enhancement processing, and the preset action intensity may be user-defined; specifically, it may be set to the action intensity the user applies most frequently, as determined by statistics.
After the to-be-processed image region is determined, it is processed at the preset action intensity; specifically, breast-enhancement processing is applied to the region so as to achieve the enhancement effect corresponding to that intensity.
Once the image processing is complete, the processed target image is obtained and displayed.
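The patent names breast enhancement as the preset processing mode but does not specify the warp itself. A common choice for this kind of localized processing is a radial "bulge" warp confined to the circular action range, with the action intensity as the warp strength. The sketch below is purely illustrative: the function name, the quadratic falloff and the nearest-neighbor sampling are assumptions, not from the patent.

```python
import math


def bulge(image, cx, cy, radius, strength):
    """Radial bulge inside a circle, by inverse mapping with
    nearest-neighbor sampling.

    image: 2D list of pixel values; (cx, cy): application point;
    radius: half the action range; strength in [0, 1] is the action
    intensity, where 0 leaves the image unchanged.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            d = math.hypot(dx, dy)
            if 0 < d < radius:
                # Pull the source sample toward the center so the center
                # region appears magnified; strength 0 gives k == 1.
                k = 1.0 - strength * (1.0 - d / radius) ** 2
                sx = int(round(cx + dx * k))
                sy = int(round(cy + dy * k))
                if 0 <= sx < w and 0 <= sy < h:
                    out[y][x] = image[sy][sx]
    return out
```

Applying `bulge` once per sub-region, centered on its application point with the action range as diameter, would realize S105 at a given intensity under these assumptions.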
In the technical solution provided by this embodiment of the present invention, whether a face region exists in the target image is detected; if so, the coordinate parameters of the face region are obtained from the coordinates of its pixels in a preset coordinate system; the application points and action range of the to-be-processed image region in the target image are calculated from those coordinate parameters; the to-be-processed image region is determined from the application points and action range; and image processing is performed on it according to a preset image processing mode and with a preset action intensity, yielding the processed target image. Because the face region in the image serves as the reference for determining the application points and action range, and the region is then processed at a preset action intensity, the user no longer has to determine the application points, action range and action intensity through cumbersome manual operations; this simplifies the image processing operation and improves the user experience.
An image processing method provided by the present invention is described below with reference to another specific embodiment.
As shown in Fig. 2, an image processing method provided by an embodiment of the present invention comprises the following steps:
S201: detect whether a face region exists in the target image; if so, execute S202; if not, execute S204.
S202: obtain the coordinate parameters of the face region according to the coordinates of its pixels in a preset coordinate system.
S203: calculate the application points and action range of the to-be-processed image region in the target image according to the coordinate parameters.
In this embodiment, S201-S203 are identical to S101-S103 of the preceding embodiment and are not repeated here.
S204: obtain the width and height of the target image.
When the target image is square, its width and height are both equal to the side length of the square, so what is obtained is the side length of the target image.
When no face region exists in the target image, the width and height of the target image are obtained, both measured in the preset coordinate system.
S205: calculate the application points and action range of the to-be-processed image region in the target image according to the width and height of the target image.
The to-be-processed image region may consist of two separate, independent regions: a first sub-region and a second sub-region, whose application points are the first application point and the second application point respectively; the action ranges of the two sub-regions may be set equal or may be set different.
In one embodiment, the application points of the to-be-processed image region in the target image may be determined as follows. With the first application point (xZ1, yZ1) and the second application point (xZ2, yZ2), the abscissa of the first application point is obtained by the following formula:
xZ1 = W / Q1
where W is the width of the target image and Q1 is a preset first ratio value;
the abscissa of the second application point of the second sub-region is obtained by:
xZ2 = W / Q2
where Q2 is a preset second ratio value;
and the ordinates of the first application point of the first sub-region and the second application point of the second sub-region are obtained by:
yZ1 = yZ2 = H / Q3
where H is the height of the target image and Q3 is a preset third ratio value.
The preset first, second and third ratio values are user-defined and may be set to different values.
Illustratively, with a target image of width 12 and height 15, and preset ratio values Q1 = 4, Q2 = 1 and Q3 = 3, the abscissa of the first application point is xZ1 = 12 / 4 = 3, the abscissa of the second application point is xZ2 = 12 / 1 = 12, and the common ordinate of the two application points is yZ1 = yZ2 = 15 / 3 = 5. In sum, the first application point is (3, 5) and the second application point is (12, 5).
In one embodiment, the action range of the to-be-processed image region in the target image may be determined as follows: when the first and second sub-regions are circular regions, a calculated length is obtained by the formula
D = W / Q4
where Q4 is a preset fourth ratio value, which may be user-defined.
The action range of each of the first and second sub-regions is then determined to be a circular region with D as its diameter.
Illustratively, with a target image of width 12 and a preset fourth ratio value Q4 = 3, the calculated length is D = 12 / 3 = 4, so the action range of each of the first and second sub-regions is a circular region with 4 as its diameter.
After the first application point, the second application point and the action range are determined, the circular region of diameter D centered on the first application point is taken as the first sub-region, the circular region of diameter D centered on the second application point as the second sub-region, and the determined first and second sub-regions are displayed.
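Under the assumption that the missing formulas are the ratio-based ones implied by the worked example (xZ1 = W/Q1, xZ2 = W/Q2, yZ1 = yZ2 = H/Q3, D = W/Q4), the no-face fallback of S204-S205 can be sketched as follows (names are illustrative):

```python
def fallback_geometry(W, H, Q1=4, Q2=1, Q3=3, Q4=3):
    """Default application points and action range when no face region
    is detected, from the image width W and height H.

    The ratio values Q1..Q4 are configurable; the defaults here match the
    worked example in the text.
    """
    p1 = (W / Q1, H / Q3)   # first application point
    p2 = (W / Q2, H / Q3)   # second application point
    D = W / Q4              # circle diameter (action range)
    return p1, p2, D


# Worked example from the text: (3, 5), (12, 5), diameter 4.
p1, p2, D = fallback_geometry(12, 15)
```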
In one embodiment, when no face region exists in the target image, the application points and action range of the to-be-processed image region are determined from the width and height of the target image. Compared with the embodiment that determines them from a face region, this can be somewhat less accurate; therefore, to reduce the number of user operations, the preset action intensity may be set to the minimum action intensity, i.e. no intensity-dependent processing is applied to the to-be-processed image region.
Image processing is then performed on the to-be-processed image region according to the preset image processing mode with the minimum action intensity, to obtain the processed target image.
S206: determine the to-be-processed image region according to the application points and action range.
S207: perform image processing on the to-be-processed image region according to a preset image processing mode and with a preset action intensity, to obtain the processed target image.
In this embodiment, S206-S207 are identical to S104-S105 of the preceding embodiment and are not repeated here.
An image processing method provided by the present invention is described below with reference to another specific embodiment.
As shown in Fig. 3, an image processing method provided by an embodiment of the present invention may further comprise the following steps:
S301: receive an instruction to perform shadow processing on the target image.
Shadow processing may act on the chest: by setting the transparency of the shadow, the chest appears visually fuller.
It should be noted that shadow processing may be performed independently of image processing such as the breast or hip enhancement of the preceding embodiments; that is, when processing the target image, only shadow processing may be done, or only enhancement processing such as breast or hip enhancement. Of course, both kinds of processing may also be applied to the same target image, for example shadow processing after breast-enhancement processing.
S302: select a target shadow image from a preset shadow image set.
The shadow image set is configured in advance and stores a plurality of shadow images of different types; the user can select a shadow image from the set as needed to serve as the target shadow image. For example, if the set contains six types of shadow image, the user may select shadow image No. 1 as the target shadow image for shadow processing and may, of course, switch to another shadow image later.
S303: determine the placement location of the target shadow image in the target image and the transparency of the target shadow image.
When a face region exists in the target image, the placement location may be determined from the coordinate parameters of the face region, in a manner similar to the above-described calculation of the application points and action range of the to-be-processed image region from the coordinate parameters, which is not repeated here.
When no face region exists in the target image, the placement location may be determined from the width and height of the target image, in a manner similar to the above-described calculation of the application points and action range from the width and height, which is not repeated here.
In addition, the transparency of the target shadow image may be user-defined in advance and may be readjusted by the user as needed.
S304: superimpose the target shadow image onto the determined placement location with the determined transparency.
The target shadow image is rendered with the determined transparency and superimposed at the determined placement location, after which the target image with the superimposed shadow is displayed.
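The superimposition in S304 is, in essence, alpha compositing at the placement location. A minimal sketch for grayscale images follows (the function signature and the alpha convention — 0 fully transparent, 1 fully opaque — are assumptions, not from the patent):

```python
def overlay_shadow(target, shadow, top, left, alpha):
    """Superimpose a grayscale shadow image onto the target image at
    (top, left) with the given opacity alpha in [0, 1]."""
    out = [row[:] for row in target]
    for i, srow in enumerate(shadow):
        for j, s in enumerate(srow):
            y, x = top + i, left + j
            # Skip shadow pixels that fall outside the target image.
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = round((1 - alpha) * out[y][x] + alpha * s)
    return out
```

Adjusting the user-set transparency therefore only changes `alpha`; the composite is recomputed from the unmodified target, which is why the sketch copies rather than overwrites it.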
Corresponding to the above method embodiments, an embodiment of the present invention further provides an image processing apparatus. As shown in Fig. 4, the apparatus comprises:
a detection module 410, configured to detect whether a face region exists in a target image;
a first acquisition module 420, configured to, when the detection module detects that a face region exists in the target image, obtain the coordinate parameters of the face region according to the coordinates of its pixels in a preset coordinate system;
a first computing module 430, configured to calculate the application points and action range of the to-be-processed image region in the target image according to the coordinate parameters;
a first determining module 440, configured to determine the to-be-processed image region according to the application points and action range;
a processing module 450, configured to perform image processing on the to-be-processed image region according to a preset image processing mode and with a preset action intensity, to obtain the processed target image.
Optionally, in one embodiment, the first computing module 430 comprises:
a first determination submodule, configured to determine the application points of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate and a fourth coordinate, wherein the first coordinate is the coordinate in the coordinate parameters with the smallest abscissa value, the second coordinate is the coordinate in the coordinate parameters with the largest abscissa value, the third coordinate is the coordinate with the largest ordinate value among the coordinates in the coordinate parameters identifying the eyebrows, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates in the coordinate parameters identifying the chin;
a second determination submodule, configured to determine the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
Optionally, in one embodiment, the to-be-processed image region comprises: a first sub-region and a second sub-region;
the first determination submodule comprises:
a first determining unit, configured to take the abscissa of the first coordinate as the abscissa of the first application point of the first sub-region;
a second determining unit, configured to take the abscissa of the second coordinate as the abscissa of the second application point of the second sub-region;
a first computing unit, configured to obtain the ordinate of the first application point of the first sub-region and the ordinate of the second application point of the second sub-region by the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
wherein yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
Optionally, in one embodiment, the second determination submodule comprises:
a second computing unit, configured to obtain the distance between the first and second coordinates along the x-axis by the following formula:
L = |x1 - x2|
wherein x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance between the first and second coordinates along the x-axis;
a third determining unit, configured to determine the action range of each of the first and second sub-regions to be a circular region with L as its diameter.
Optionally, in one embodiment, the first determining module 440 comprises:
a third determination submodule, configured to take the circular region of diameter L centered on the first application point as the first sub-region;
a fourth determination submodule, configured to take the circular region of diameter L centered on the second application point as the second sub-region.
Optionally, in one embodiment, the apparatus further comprises:
an adjusting module, configured to adjust the to-be-processed image region;
and the processing module 450 comprises:
a first processing submodule, configured to perform image processing on the adjusted to-be-processed image region according to the preset image processing mode, to obtain the processed target image.
Optionally, in one embodiment, the adjusting module is specifically configured to apply at least one of the following adjustment modes:
moving the to-be-processed image region to a target location;
adjusting the action range of the to-be-processed image region to a target action range;
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
On the basis of Fig. 4 above, an embodiment of the present invention further provides another embodiment. As shown in Fig. 5, the apparatus further comprises:
a second acquisition module 510, configured to obtain the width and height of the target image when the detection module detects that no face region exists in the target image;
a second computing module 520, configured to calculate the application points and action range of the to-be-processed image region in the target image according to the width and height of the target image.
Optionally, in one embodiment, the to-be-processed image region comprises: a first sub-region and a second sub-region;
the second computing module 520 comprises:
a first calculating submodule, configured to obtain the abscissa of the first application point of the first sub-region by the formula xZ1 = W / Q1, wherein xZ1 is the abscissa of the first application point, W is the width of the target image, and Q1 is a preset first ratio value;
a second calculating submodule, configured to obtain the abscissa of the second application point of the second sub-region by the formula xZ2 = W / Q2, wherein xZ2 is the abscissa of the second application point and Q2 is a preset second ratio value;
a third calculating submodule, configured to obtain the ordinate of the first application point of the first sub-region and the ordinate of the second application point of the second sub-region by the formula yZ1 = yZ2 = H / Q3, wherein yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, H is the height of the target image, and Q3 is a preset third ratio value;
a fourth calculating submodule, configured to obtain a calculated length by the formula D = W / Q4, wherein W is the width of the target image and Q4 is a preset fourth ratio value;
a fifth determination submodule, configured to determine the action range of each of the first and second sub-regions to be a circular region with D as its diameter.
Optionally, in one embodiment, the processing module 450 comprises:
a processing submodule, configured to perform image processing on the to-be-processed image region according to the preset image processing mode with a preset minimum action intensity, to obtain the processed target image.
An embodiment of the present invention further provides another embodiment. As shown in Fig. 6, the apparatus further comprises:
a receiving module 610, configured to receive an instruction to perform shadow processing on the target image;
a selection module 620, configured to select a target shadow image from a preset shadow image set;
a second determining module 630, configured to determine the placement location of the target shadow image in the target image and the transparency of the target shadow image;
a superimposing module 640, configured to superimpose the target shadow image onto the determined placement location with the determined transparency.
As the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 7, it comprises a processor 710, a communication interface 720, a memory 730 and a communication bus 740, wherein the processor 710, the communication interface 720 and the memory 730 communicate with one another via the communication bus 740.
The memory 730 is configured to store a computer program;
the processor 710 is configured to implement the following steps when executing the program stored in the memory 730:
detecting whether a face region exists in a target image;
if so, obtaining the coordinate parameters of the face region according to the coordinates of its pixels in a preset coordinate system;
calculating the application points and action range of the to-be-processed image region in the target image according to the coordinate parameters;
determining the to-be-processed image region according to the application points and action range;
performing image processing on the to-be-processed image region according to a preset image processing mode and with a preset action intensity, to obtain the processed target image.
It is understood that electronic equipment can also carry out any of above-described embodiment image processing method, herein Do not repeat.
The communication bus that above-mentioned electronic equipment is mentioned can be Peripheral Component Interconnect standard (Peripheral Component Interconnect, PCI) bus or EISA (Extended Industry Standard Architecture, EISA) bus etc..The communication bus can be divided into address bus, data/address bus, controlling bus etc..For just Only represented in expression, figure with a thick line, it is not intended that an only bus or a type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a random access memory (RAM), and may also include a non-volatile memory (NVM), for example, at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs any of the image processing methods described above.
An embodiment of the present invention further provides a computer application program which, when run on a computer, causes the computer to perform any of the image processing methods in the above embodiments.
The terms used in the embodiments of the present application are for the purpose of describing particular embodiments only and are not intended to limit the application. The singular forms "a", "said", and "the" used in the embodiments of the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used in the embodiments of the present application to describe various connection ports, identification information, and the like, these connection ports and identification information should not be limited by these terms. These terms are only used to distinguish them from one another. For example, without departing from the scope of the embodiments of the present application, a first connection port may also be called a second connection port, and similarly, a second connection port may also be called a first connection port.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
Through the above description of the embodiments, it will be clear to those skilled in the art that, for convenience and brevity of description, the division into the above functional modules is only used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the system, apparatus, and units described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place, or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or replacement that can readily occur to those familiar with the technical field within the technical scope disclosed by the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
detecting whether a face region exists in a target image;
if a face region exists, obtaining coordinate parameters of the face region according to coordinates of pixels in the face region in a preset coordinate system;
calculating, according to the coordinate parameters, an application point and an action range of a to-be-processed image region in the target image;
determining the to-be-processed image region according to the application point and the action range;
performing image processing on the to-be-processed image region according to a preset image processing mode with a preset action intensity, to obtain a processed target image.
2. The method according to claim 1, characterized in that the step of calculating, according to the coordinate parameters, the application point and the action range of the to-be-processed image region in the target image comprises:
determining the application point of the to-be-processed image region of the target image according to a first coordinate, a second coordinate, a third coordinate, and a fourth coordinate, wherein the first coordinate is the coordinate with the smallest abscissa value among the coordinate parameters, the second coordinate is the coordinate with the largest abscissa value among the coordinate parameters, the third coordinate is the coordinate with the largest ordinate value among the coordinates identifying the eyebrows in the coordinate parameters, and the fourth coordinate is the coordinate with the smallest ordinate value among the coordinates identifying the chin in the coordinate parameters;
determining the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate.
3. The method according to claim 2, characterized in that the to-be-processed image region comprises a first subregion and a second subregion;
the step of determining the application point of the to-be-processed image region of the target image according to the first coordinate, the second coordinate, the third coordinate, and the fourth coordinate comprises:
determining the abscissa of the first coordinate as the abscissa of a first application point of the first subregion;
determining the abscissa of the second coordinate as the abscissa of a second application point of the second subregion;
obtaining the ordinate of the first application point of the first subregion and the ordinate of the second application point of the second subregion using the following formula:
yZ1 = yZ2 = y4 - (y3 - y4)
wherein yZ1 is the ordinate of the first application point, yZ2 is the ordinate of the second application point, y3 is the ordinate of the third coordinate, and y4 is the ordinate of the fourth coordinate.
4. The method according to claim 3, characterized in that the step of determining the action range of the to-be-processed image region of the target image according to the first coordinate and the second coordinate comprises:
obtaining the distance between the first coordinate and the second coordinate on the x-axis using the following formula:
L = |x1 - x2|
wherein x1 is the abscissa of the first coordinate, x2 is the abscissa of the second coordinate, and L is the distance between the first coordinate and the second coordinate on the x-axis;
determining that the action range of the first subregion and the action range of the second subregion are each a circular region with L as its diameter.
5. The method according to claim 4, characterized in that the step of determining the to-be-processed image region according to the application point and the action range comprises:
taking the circular region with L as its diameter and the first application point as its center as the first subregion;
taking the circular region with L as its diameter and the second application point as its center as the second subregion.
6. The method according to claim 1, characterized in that after the step of determining the to-be-processed image region according to the application point and the action range, the method further comprises:
adjusting the to-be-processed image region;
and the step of performing image processing on the to-be-processed image region according to the preset image processing mode with the preset action intensity, to obtain the processed target image, comprises:
performing image processing on the adjusted to-be-processed image region according to the preset image processing mode, to obtain the processed target image.
7. The method according to claim 6, characterized in that the step of adjusting the to-be-processed image region comprises at least one of the following adjustment modes:
moving the to-be-processed image region to a target position;
adjusting the action range of the to-be-processed image region to a target action range;
adjusting the preset action intensity of the to-be-processed image region to a target action intensity.
8. An image processing apparatus, characterized in that the apparatus comprises:
a detection module, configured to detect whether a face region exists in a target image;
a first obtaining module, configured to, when the detection module detects that a face region exists in the target image, obtain coordinate parameters of the face region according to coordinates of pixels in the face region in a preset coordinate system;
a first calculation module, configured to calculate, according to the coordinate parameters, an application point and an action range of a to-be-processed image region in the target image;
a first determination module, configured to determine the to-be-processed image region according to the application point and the action range;
a processing module, configured to perform image processing on the to-be-processed image region according to a preset image processing mode with a preset action intensity, to obtain a processed target image.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-7 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method steps of any one of claims 1-7.
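The geometric construction in claims 2-5 can be exercised numerically. The sketch below is a hedged illustration: the symbol names (x1, x2, y3, y4, L) follow the claims, the membership test and dict representation are assumptions of this sketch, and nothing beyond the stated formulas is implied.

```python
import math

def subregions(x1, x2, y3, y4):
    """Per claims 3-5: both application points share the ordinate
    yZ1 = yZ2 = y4 - (y3 - y4), with abscissas x1 and x2; each subregion
    is a circular region of diameter L = |x1 - x2| centered on its point."""
    yz = y4 - (y3 - y4)
    L = abs(x1 - x2)
    first = {"center": (x1, yz), "diameter": L}
    second = {"center": (x2, yz), "diameter": L}
    return first, second

def in_subregion(region, x, y):
    """Point-membership test for a circular subregion of the given diameter."""
    cx, cy = region["center"]
    return math.hypot(x - cx, y - cy) <= region["diameter"] / 2.0
```

For example, with x1 = 0, x2 = 10, y3 = 8, and y4 = 2, both application points sit at ordinate -4 and each circle has diameter 10, so the two subregions touch at the midpoint (5, -4).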
CN201710527387.XA 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium Active CN107395958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710527387.XA CN107395958B (en) 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710527387.XA CN107395958B (en) 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107395958A true CN107395958A (en) 2017-11-24
CN107395958B CN107395958B (en) 2019-11-15

Family

ID=60335015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710527387.XA Active CN107395958B (en) 2017-06-30 2017-06-30 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107395958B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108346130A (en) * 2018-03-20 2018-07-31 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108364254A (en) * 2018-03-20 2018-08-03 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108389155A (en) * 2018-03-20 2018-08-10 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108399599A (en) * 2018-03-20 2018-08-14 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108447023A (en) * 2018-03-20 2018-08-24 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN109214317A (en) * 2018-08-22 2019-01-15 北京慕华信息科技有限公司 A kind of information content determines method and device
CN111476201A (en) * 2020-04-29 2020-07-31 Oppo广东移动通信有限公司 Certificate photo manufacturing method, terminal and storage medium
CN112966578A (en) * 2021-02-23 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113297641A (en) * 2020-11-26 2021-08-24 阿里巴巴集团控股有限公司 Stamp processing method, content element processing method, device, equipment and medium
TWI743843B (en) * 2019-12-25 2021-10-21 中國商北京市商湯科技開發有限公司 Image processing method, image processing device and storage medium thereof
CN113591710A (en) * 2021-07-30 2021-11-02 康佳集团股份有限公司 Image processing method, device, terminal and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2031868B1 (en) * 2007-08-31 2011-12-14 Casio Computer Co., Ltd. Apparatus for correcting the tone of an image
CN103632165A (en) * 2013-11-28 2014-03-12 小米科技有限责任公司 Picture processing method, device and terminal equipment
CN105512605A (en) * 2015-11-23 2016-04-20 小米科技有限责任公司 Face image processing method and device
CN106067167A (en) * 2016-06-06 2016-11-02 广东欧珀移动通信有限公司 Image processing method and device
CN106210522A (en) * 2016-07-15 2016-12-07 广东欧珀移动通信有限公司 A kind of image processing method, device and mobile terminal
CN106558040A (en) * 2015-09-23 2017-04-05 腾讯科技(深圳)有限公司 Character image treating method and apparatus
CN106846240A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method for adjusting fusion material, device and equipment


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364254B (en) * 2018-03-20 2021-07-23 北京奇虎科技有限公司 Image processing method, device and electronic device
CN108346130B (en) * 2018-03-20 2021-07-23 北京奇虎科技有限公司 Image processing method, device and electronic device
CN108389155A (en) * 2018-03-20 2018-08-10 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108399599A (en) * 2018-03-20 2018-08-14 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108447023A (en) * 2018-03-20 2018-08-24 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108346130A (en) * 2018-03-20 2018-07-31 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108399599B (en) * 2018-03-20 2021-11-26 北京奇虎科技有限公司 Image processing method and device and electronic equipment
CN108389155B (en) * 2018-03-20 2021-10-01 北京奇虎科技有限公司 Image processing method, device and electronic device
CN108364254A (en) * 2018-03-20 2018-08-03 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN108447023B (en) * 2018-03-20 2021-08-24 北京奇虎科技有限公司 Image processing method, device and electronic device
CN109214317A (en) * 2018-08-22 2019-01-15 北京慕华信息科技有限公司 A kind of information content determines method and device
CN109214317B (en) * 2018-08-22 2021-11-12 北京慕华信息科技有限公司 Information quantity determination method and device
TWI743843B (en) * 2019-12-25 2021-10-21 中國商北京市商湯科技開發有限公司 Image processing method, image processing device and storage medium thereof
US11734829B2 (en) 2019-12-25 2023-08-22 Beijing Sensetime Technology Development Co., Ltd. Method and device for processing image, and storage medium
CN111476201A (en) * 2020-04-29 2020-07-31 Oppo广东移动通信有限公司 Certificate photo manufacturing method, terminal and storage medium
CN113297641A (en) * 2020-11-26 2021-08-24 阿里巴巴集团控股有限公司 Stamp processing method, content element processing method, device, equipment and medium
CN112966578A (en) * 2021-02-23 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113591710A (en) * 2021-07-30 2021-11-02 康佳集团股份有限公司 Image processing method, device, terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN107395958B (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN107395958A (en) Image processing method and device, electronic equipment and storage medium
CN110210571B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN112669197B (en) Image processing method, device, mobile terminal and storage medium
CN108594997B (en) Gesture skeleton construction method, device, equipment and storage medium
CN104486552B (en) A kind of method and electronic equipment obtaining image
CN110175980A (en) Image definition recognition methods, image definition identification device and terminal device
CN104517265B (en) Intelligent grinding skin method and apparatus
US11308655B2 (en) Image synthesis method and apparatus
CN107204034B (en) An image processing method and terminal
CN109829456A (en) Image-recognizing method, device and terminal
CN106650615B (en) A kind of image processing method and terminal
CN108830186B (en) Text image content extraction method, device, equipment and storage medium
CN109684980A (en) Automatic marking method and device
EP3514724A1 (en) Depth map-based heuristic finger detection method
CN107622483A (en) A kind of image combining method and terminal
CN108564082A (en) Image processing method, device, server and medium
CN104463782B (en) Image processing method, device and electronic equipment
CN106341574A (en) Color gamut mapping method and color gamut mapping device
CN102567969B (en) Color image edge detection method
CN109815854A (en) It is a kind of for the method and apparatus of the related information of icon to be presented on a user device
CN110473281A (en) Threedimensional model retouches side processing method, device, processor and terminal
CN107426490A (en) A kind of photographic method and terminal
WO2007074844A1 (en) Detecting method and detecting system for positions of face parts
CN106331427A (en) Saturation enhancement method and device
CN115761207A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20201124

Address after: Room 115, area C, 1 / F, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing 100123

Patentee after: Beijing LEMI Technology Co.,Ltd.

Address before: 100085 Beijing City, Haidian District Road 33, two floor East Xiaoying

Patentee before: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230831

Address after: 3870A, 3rd Floor, Building 4, Courtyard 49, Badachu Road, Shijingshan District, Beijing, 100144

Patentee after: Beijing Jupiter Technology Co.,Ltd.

Address before: 100123 room 115, area C, 1st floor, building 8, yard 1, yaojiayuan South Road, Chaoyang District, Beijing

Patentee before: Beijing LEMI Technology Co.,Ltd.