CN107231529A - Image processing method, mobile terminal and storage medium - Google Patents
- Publication number
- CN107231529A CN107231529A CN201710531012.0A CN201710531012A CN107231529A CN 107231529 A CN107231529 A CN 107231529A CN 201710531012 A CN201710531012 A CN 201710531012A CN 107231529 A CN107231529 A CN 107231529A
- Authority
- CN
- China
- Prior art keywords
- photographed subject
- face
- region
- picture to be processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
Abstract
The invention discloses an image processing method, which includes: performing face recognition on a picture to be processed to determine the face region in the picture; calculating, according to the image depth information corresponding to the face region, a first shooting distance corresponding to that region; determining, according to the image depth information of the whole picture and the first shooting distance, the region occupied by the photographed subject; and performing image processing on the subject's region according to a preset image processing mode. The invention also discloses a mobile terminal and a storage medium. By distinguishing the photographed subject from the background scenery and finally applying image processing only to the subject's region, the invention solves the prior-art technical problem that a person in a photo cannot be effectively separated from the background scenery and processed individually.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method, a mobile terminal and a storage medium.
Background art
With the rapid development of digital technology over the past two or three decades, recording the moments of everyday life by taking photos has become fashionable. Nowadays more and more electronic devices can take photos, such as mobile terminals like mobile phones and tablet computers. When taking pictures with these mobile terminals, and in order to obtain satisfying photos, more and more users like to retouch their shots with photo-editing software, for example beautification ("beauty") apps.

As retouching software gains more features, users' expectations of it also rise: everyone wants the processed photo to show the most beautiful version of themselves. Existing solutions typically apply global blurring and hue adjustment to the whole picture, for instance to achieve a visual whitening or skin-smoothing effect. After such full-image processing, however, background detail is inevitably lost, and the colors of people and scenery in the picture's background can differ greatly from their true colors in reality.
In addition, some retouching software uses face recognition technology to apply image processing only to the face. Because this approach cannot identify other parts of the body, such as the neck, arms and shoulders, those parts are not processed at the same time. In the resulting photo, the skin tone of the face then visibly differs from that of the neck, arms and shoulders, which spoils the photo's appearance. In short, the prior art cannot effectively separate the person in a photo from the background scenery and perform image processing on the person individually.
Summary of the invention
A primary object of the present invention is to provide an image processing method, a mobile terminal and a storage medium, aiming to solve the prior-art technical problem that a person in a photo cannot be effectively separated from the background scenery and processed individually.
To achieve the above object, the present invention provides an image processing method, which includes:

performing face recognition on a picture to be processed to determine the face region in the picture;

calculating, according to the image depth information corresponding to the face region in the picture, a first shooting distance corresponding to the face region;

determining, according to the image depth information of the whole picture and the first shooting distance, the region occupied by the photographed subject in the picture, where the subject's region includes the face region;

performing image processing on the subject's region according to a preset image processing mode.
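The four steps above can be sketched in a few lines. This is a minimal illustration only, under the assumption of a depth map aligned pixel-for-pixel with the picture and a face bounding box already produced by a face detector; the function and parameter names are hypothetical, not taken from the patent.

```python
import numpy as np

def subject_mask(depth, face_box, threshold):
    """Return a boolean mask of the region occupied by the photographed subject.

    depth     -- HxW array of per-pixel shooting distances (image depth information)
    face_box  -- (x, y, w, h) of the detected face region
    threshold -- maximum allowed deviation from the face's shooting distance
    """
    x, y, w, h = face_box
    # First shooting distance: average depth over the face region's pixels.
    d1 = depth[y:y + h, x:x + w].mean()
    # A pixel belongs to the subject when |its shooting distance - d1| < threshold.
    return np.abs(depth - d1) < threshold

def process_subject_only(image, mask, effect):
    """Apply an image-processing effect only inside the subject region."""
    out = image.copy()
    out[mask] = effect(image[mask])
    return out
```

Background pixels, whose depth differs from the face's average depth by more than the threshold, are left untouched, which is exactly what separates the subject from the background scenery.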
Optionally, before the step of performing face recognition on the picture to be processed, the method further includes:

collecting a 3D image of the photographed subject with a preset camera, and taking the collected 3D image as the picture to be processed.
Optionally, the step of calculating the first shooting distance corresponding to the face region according to the image depth information corresponding to the face region includes:

extracting, for each pixel of the face region, the second shooting distance between the photographed object and the camera;

calculating the average of the second shooting distances corresponding to the pixels of the face region, and taking the calculated average as the first shooting distance.
Optionally, the step of determining the region occupied by the subject according to the overall image depth information of the picture and the first shooting distance includes:

extracting, for each pixel of the picture, the third shooting distance between the photographed object and the camera;

calculating, for each pixel of the picture, the absolute value of the difference between its third shooting distance and the first shooting distance;

determining the region occupied by the subject in the picture according to these absolute values.
Optionally, the step of determining the subject's region according to the absolute values includes:

selecting, according to the absolute values, the pixels corresponding to the subject in the picture, and determining the region composed of those pixels to be the region occupied by the subject.
Optionally, the step of selecting the pixels corresponding to the subject in the picture according to the absolute values includes:

when, for any pixel of the picture, the absolute value of the difference between its third shooting distance and the first shooting distance is less than a preset threshold, determining that pixel to be a pixel corresponding to the subject.
Optionally, the step of performing image processing on the subject's region according to a preset image processing mode includes:

if a beautification trigger instruction is detected, performing beautification processing on the subject's region according to the detected instruction.
Optionally, the beautification trigger instruction includes a one-key beautification instruction and a custom beautification instruction, and the step of performing beautification processing on the subject's region according to the detected instruction includes:

when a one-key beautification instruction is detected, performing beautification processing on the subject's region according to preset beautification effect parameters, the parameters including one or more of a face-slimming parameter, an eye-size adjustment parameter, a skin-smoothing and anti-acne parameter, a skin-whitening parameter, a tooth-whitening parameter and a blush parameter;

when a custom beautification instruction is detected, performing beautification processing on the subject's region according to the received beautification trigger operations.
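As a rough illustration only, the one-key branch could apply preset effect parameters inside the subject mask along the lines below. `whitening` and `smoothing` are hypothetical, heavily simplified stand-ins for two of the listed parameters, working on an 8-bit grayscale image for brevity; real beautification pipelines are far more involved.

```python
import numpy as np

def apply_beauty(image, mask, whitening=0.0, smoothing=0):
    """Apply simplified beautification effects only inside the subject mask.

    whitening -- 0..1, lifts masked pixels toward white
    smoothing -- box-blur radius applied to masked pixels (0 = off)
    """
    out = image.astype(np.float64)
    if whitening > 0:
        out[mask] += whitening * (255.0 - out[mask])
    if smoothing > 0:
        k = 2 * smoothing + 1
        pad = np.pad(out, smoothing, mode="edge")
        blurred = np.zeros_like(out)
        h, w = out.shape
        for i in range(h):
            for j in range(w):
                # Mean over a k x k neighbourhood (naive box blur).
                blurred[i, j] = pad[i:i + k, j:j + k].mean()
        out[mask] = blurred[mask]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the effects are only written back through `mask`, background pixels keep their original values, consistent with processing only the region occupied by the subject.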
In addition, to achieve the above object, the present invention also provides a mobile terminal, which includes a camera, a memory, a processor, and an image processing program stored in the memory and runnable on the processor; when the image processing program is executed by the processor, the steps of the image processing method described above are implemented.
In addition, to achieve the above object, the present invention also provides a storage medium storing an image processing program; when the image processing program is executed by a processor, the steps of the image processing method described above are implemented.
With the image processing method, mobile terminal and storage medium provided by the invention, face recognition is performed on the picture to be processed to determine its face region; then, according to the face region and the image depth information corresponding to the picture, the region occupied by the photographed subject is further determined, distinguishing the subject from the background scenery; finally, image processing is applied only to the subject's region. This both ensures that the skin tone of all parts of the subject's body remains consistent after processing and preserves, to the greatest extent, the authenticity of the background scenery in the picture, solving the prior-art technical problem that a person in a photo cannot be effectively separated from the background scenery and processed individually.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing embodiments of the invention;

Fig. 2 is an architecture diagram of a communications network system provided by embodiments of the invention;

Fig. 3 is a schematic flowchart of a first embodiment of the image processing method of the invention;

Fig. 4 is a schematic flowchart of the refinement of step S30 of Fig. 3 in a third embodiment of the image processing method of the invention;

Fig. 5 is a schematic diagram of the face region and the non-face region of a picture to be processed in the invention;

Fig. 6 is a schematic scene diagram of determining the region occupied by the subject in the invention;

Fig. 7 is a schematic structural diagram of the software runtime environment involved in the mobile terminal of the invention.
The realization, functional characteristics and advantages of the object of the invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves; "module", "part" and "unit" may therefore be used interchangeably.
A terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The subsequent description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements specially intended for mobile purposes, the construction according to the embodiments of the invention can also be applied to fixed-type terminals.
Referring to Fig. 1, a schematic diagram of the hardware structure of a mobile terminal implementing embodiments of the invention, the mobile terminal 100 may include components such as an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art will understand that the structure shown in Fig. 1 does not constitute a limitation on the mobile terminal: a mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the mobile terminal are introduced below with reference to Fig. 1:
The A/V input unit 104 is used to receive audio or video signals. It may include a graphics processing unit (GPU) 1041, which processes the image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106 and stored in the memory 109 (or another storage medium); received audio may likewise be processed into audio data.
The mobile terminal 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when static, the magnitude and direction of gravity; it can be used for applications that recognize the phone's posture (such as landscape/portrait switching, related games and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The phone may also be equipped with other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer and infrared sensor, which will not be detailed here.
The display unit 106 is used to display information input by the user or provided to the user. It may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example operations performed with a finger, stylus or any other suitable object or accessory on or near the touch panel 1071) and drives the corresponding connection devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes the commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse and a joystick; no specific limitation is made here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 realize the input and output functions of the mobile terminal as two independent components, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions; no specific limitation is made here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The interface unit 108 may be used to receive input (for example data information or power) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or to transmit data between the mobile terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. It may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the applications required by at least one function (such as a sound playback function or an image playback function), while the data storage area may store data created according to the use of the phone (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk storage device, flash memory device or other solid-state storage component.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the whole mobile terminal through various interfaces and lines, and performs the terminal's functions and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface and applications, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) supplying power to each component. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging, discharging and power-consumption management are realized through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be detailed here.
To facilitate understanding of the embodiments of the invention, the communications network system on which the mobile terminal of the invention is based is described below.
Referring to Fig. 2, an architecture diagram of a communications network system provided by an embodiment of the invention, the system is an LTE system of the universal mobile communications technology and includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator's IP services 204.
Specifically, the UE 201 may be the terminal 100 described above, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (for example an X2 interface), and the eNodeB 2021 is connected to the EPC 203 to provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035 and a PCRF (Policy and Charging Rules Function) 2036. The MME 2031 is the control node handling signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registers to manage functions such as a home location register (not shown) and stores user-specific information about service features, data rates and the like. All user data may be transmitted through the SGW 2034; the PGW 2035 may provide the UE 201 with IP address allocation and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, intranets, the IMS (IP Multimedia Subsystem) and other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the invention is not only applicable to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems; no limitation is made here.
Based on the above mobile terminal hardware structure and communications network system, the embodiments of the method of the invention are proposed.
The following embodiments of the invention provide an image processing method. The method performs face recognition on the picture to be processed to determine its face region; then, according to the face region and the image depth information corresponding to the picture, it further determines the region occupied by the photographed subject, distinguishing the subject from the background scenery; finally, image processing is applied only to the subject's region. This both ensures that the skin tone of all parts of the subject's body remains consistent after processing and preserves, to the greatest extent, the authenticity of the background scenery in the picture.
Referring to Fig. 3, a schematic flowchart of the first embodiment of the image processing method of the invention, in this first embodiment the image processing method includes:

Step S10: performing face recognition on the picture to be processed to determine the face region in the picture.
In this embodiment, before the step of performing face recognition on the picture to be processed, the method further includes:

collecting a 3D image of the subject with a preset camera, and taking the collected 3D image as the picture to be processed.

The pixels captured by the camera carry distance information, i.e. image depth information; preferably, a 3D camera may be used.
The camera may be mounted on the mobile terminal 100, which then collects the subject's 3D image directly through the camera; alternatively, another image capture device equipped with a camera may collect the subject's 3D image and send the collected 3D image to the mobile terminal 100 for processing.
Specifically, after the subject's 3D image is collected with the preset camera, face recognition is performed on that 3D picture as the picture to be processed, and the face region in it is determined.
Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. A face recognition system mainly includes four parts: face image detection, face image preprocessing, face image feature extraction, and matching and recognition.
In practice, face image detection is mainly used as the preprocessing step of face recognition, i.e. accurately calibrating the position and size of the face in the image. Face images contain very rich pattern features, such as histogram features, color features, template features and structural features. Face image detection picks out the useful information among these and uses these features to detect the face.
Face image preprocessing, based on the face detection result, processes the image so that it ultimately serves feature extraction. Because of various constraints and random disturbances, the original image usually cannot be used directly; it must first undergo image preprocessing such as gray-level correction and noise filtering. For a face image, preprocessing mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
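Of the preprocessing operations listed above, histogram equalization is easy to illustrate. The following is a minimal NumPy sketch for 8-bit grayscale images; the function name and array conventions are our own, not part of the specification:

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image (2-D uint8 array)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    total = gray.size
    # Map each gray level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (total - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```

The lookup-table approach maps the darkest occupied gray level to 0 and the brightest to 255, stretching the levels in between according to their cumulative frequency.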
Step S20: according to the image depth information corresponding to the face region in the picture to be processed, calculate the first shooting distance corresponding to the face region.
In the present embodiment, after the face region in the picture to be processed is determined, the first shooting distance corresponding to the face region is calculated according to the image depth information corresponding to the picture. Here, shooting distance mainly refers to the distance from each photosensitive element or photosensitive sensor in the camera to the actual photographed object, and the first shooting distance refers to the average distance from the face of the photographed person in the picture to be processed to the camera.
The image depth information corresponding to the picture to be processed includes the actual distance from the photosensitive sensor of the camera to the photographed person. Methods of obtaining image depth information mainly fall into monocular depth estimation and binocular depth estimation; monocular estimation is based on a single lens, binocular on two lenses. Specifically:
Monocular depth estimation infers depth information from a single image. Compared with binocular depth estimation it presents certain difficulties; it includes methods based on image content understanding, on focus, on defocus, and on shading changes, among others. Depth estimation based on image content understanding mainly classifies the objects in the image block by block, and then estimates the depth of each class of objects with a method suited to that class. Depth estimation based on focus mainly places the camera at the in-focus position relative to the measured point, after which the distance of the measured point from the camera can be obtained from the lens imaging formula.
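The lens imaging formula referred to above is the thin-lens relation 1/f = 1/u + 1/v, with f the focal length, u the object distance, and v the image distance. A sketch of solving it for the object distance (function name and units are our own):

```python
def object_distance(focal_length_m, image_distance_m):
    """Thin-lens imaging formula 1/f = 1/u + 1/v:
    solve for the object distance u given the focal length f
    and the image (sensor-side) distance v, all in meters."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / image_distance_m)
```

For example, with f = 50 mm and the lens focused so that v = 60 mm, the measured point lies about 0.3 m from the camera.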
Binocular depth estimation is depth estimation based on the parallax between two lenses: the scene is imaged with two cameras, and because there is a certain distance between the two cameras, the same object is imaged with a certain difference by the two lenses, i.e. a parallax. From this parallax information, the approximate depth of the object can be estimated.
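For rectified stereo cameras, the parallax-to-depth relation described here is commonly written depth = f · B / d. A sketch under that standard assumption (names are ours, not from the specification):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Standard stereo relation depth = f * B / d, where f is the focal
    length in pixels, B the baseline between the two cameras in meters,
    and d the disparity in pixels; returns the depth in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

Note that depth is inversely proportional to disparity: distant objects produce small disparities, which is why binocular estimation degrades at long range.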
Step S30: according to the image depth information of the whole picture to be processed and the first shooting distance, determine the region occupied by the photographed person in the picture to be processed, where the region occupied by the photographed person includes the face region.
In the present embodiment, after the first shooting distance corresponding to the face region is calculated, the region occupied by the photographed person in the picture to be processed is determined according to the image depth information and the first shooting distance.
It can be understood that, in the picture to be processed, the other parts of the photographed body, such as the neck, shoulders, arms and legs, are roughly in the same plane as the face of the photographed person. Therefore, when the camera captures the 3D picture of the photographed person, the difference between the distance from these other body parts to the camera and the distance from the face to the camera should fall within an interval (for example 0 cm to 50 cm), while the distance from the background scenery to the camera can be far greater than the distance from the face of the photographed person to the camera.
Therefore, according to the first shooting distance, the pixels whose shooting distance differs from the first shooting distance by no more than a certain range can be selected in the picture to be processed, and the region formed by the selected pixels can be regarded as the region occupied by the photographed person.
Step S40: perform image processing on the region occupied by the photographed person according to a preset image processing mode.
In the present embodiment, after the region occupied by the photographed person in the picture to be processed is determined, image processing is performed only on that region; the other regions outside it are left untouched.
The image processing includes processing of the face of the photographed person, such as whitening, face slimming, eye enhancement, and skin smoothing and acne removal; it also includes processing of other parts of the photographed person, such as whitening the neck, shoulders and arms; in addition, it may include processing of the photographed person's clothing, such as color brightening and adjustments of contrast, brightness and color temperature.
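As an illustration of applying such processing only to the region occupied by the photographed person rather than the whole picture, the following sketch whitens (blends toward white) only the pixels covered by a boolean mask; the blend-factor approach and the function name are our own illustrative choices, not the specification's algorithm:

```python
import numpy as np

def whiten_region(image, mask, strength=0.3):
    """Blend masked pixels toward white. image is an HxWx3 uint8 array,
    mask an HxW boolean array, strength a blend factor in [0, 1];
    pixels outside the mask are left untouched."""
    out = image.astype(np.float32)
    out[mask] = out[mask] * (1.0 - strength) + 255.0 * strength
    return np.clip(out, 0, 255).astype(np.uint8)
```

The same masked-assignment pattern extends to the other adjustments mentioned above (contrast, brightness, color temperature), each applied only where the mask is set.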
With the image processing method described in this embodiment, face recognition is performed on the picture to be processed to determine the face region; then, according to the face region and the image depth information corresponding to the picture, the region occupied by the photographed person is further determined, so that the photographed person and the background scenery in the picture are distinguished from each other; finally, image processing is performed only on the region occupied by the photographed person. This both ensures that the skin tone of the exposed parts of the photographed person's body remains consistent after image processing, and preserves the authenticity of the background scenery in the picture to the greatest extent, effectively solving the prior-art technical problem that the person in a photo cannot be distinguished from the background scenery so as to apply image processing to the person alone.
Further, based on the first embodiment of the image processing method of the present invention, a second embodiment of the image processing method is proposed. In the present embodiment, step S20 shown in Fig. 3, calculating the first shooting distance corresponding to the face region according to the image depth information corresponding to the face region in the picture to be processed, includes:
extracting, for each pixel in the face region, the second shooting distance between the photographed object at that pixel and the camera;
calculating the average value of the second shooting distances corresponding to the pixels of the face region, and using the calculated average value as the first shooting distance.
The shooting distance between the photographed object corresponding to a pixel and the camera mainly refers to the distance from each photosensitive element or photosensitive sensor in the camera to the actual photographed object.
It can be understood that, in the picture to be processed captured by the camera, the face region is not a plane, and the shooting distances corresponding to its pixels are not all equal; for example, the shooting distance corresponding to the nose is generally smaller than that corresponding to the eyes.
Therefore, the average value of the second shooting distances corresponding to the pixels of the face region can be used as the first shooting distance corresponding to the face region.
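The averaging described here can be sketched as follows, assuming the depth map is an H×W array of per-pixel distances and the face region is given as a rectangular bounding box; the (x, y, w, h) box layout is an illustrative convention, not mandated by the specification:

```python
import numpy as np

def first_shooting_distance(depth_map, face_box):
    """Average the per-pixel depth (second shooting distances) over a
    face bounding box to obtain the first shooting distance.
    depth_map is an HxW array of distances in meters; face_box is
    (x, y, w, h) as a face detector might produce."""
    x, y, w, h = face_box
    face_depths = depth_map[y:y + h, x:x + w]
    return float(face_depths.mean())
```

In practice the box could be replaced by any pixel mask of the detected face; the averaging step is unchanged.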
With the image processing method described in this embodiment, the second shooting distance between the photographed object at each pixel of the face region and the camera is extracted, the average of these second shooting distances is calculated, and the average is used as the first shooting distance corresponding to the face region. From this first shooting distance and the image depth information corresponding to the picture to be processed, the region occupied by the photographed person can be determined, so that image processing can be applied to that region alone.
Further, based on the first embodiment of the image processing method of the present invention, a third embodiment of the image processing method is proposed. In the present embodiment, referring to Fig. 4, Fig. 4 is a flow chart of the refinement of step S30 of Fig. 3 in the third embodiment. Step S30 shown in Fig. 3, determining the region occupied by the photographed person in the picture to be processed according to the image depth information of the whole picture and the first shooting distance, includes:
Step S31: extracting, for each pixel in the picture to be processed, the third shooting distance between the photographed object at that pixel and the camera;
Step S32: calculating, for each pixel in the picture to be processed, the absolute value of the difference between its third shooting distance and the first shooting distance;
Step S33: determining the region occupied by the photographed person in the picture to be processed according to the absolute values.
In the present embodiment, after the first shooting distance corresponding to the face region in the picture to be processed is obtained, the third shooting distance between the photographed object at each pixel and the camera is extracted, the absolute value of the difference between each pixel's third shooting distance and the first shooting distance is calculated, and the region occupied by the photographed person in the picture to be processed is determined according to these absolute values.
For a better understanding of the present invention, refer to Fig. 5, which is a schematic diagram of the face region and the non-face region of the picture to be processed.
Step S33, determining the region occupied by the photographed person in the picture to be processed according to the absolute values, includes:
selecting, according to the absolute values, the pixels corresponding to the photographed person in the picture to be processed, and determining the region formed by the pixels corresponding to the photographed person as the region occupied by the photographed person.
Further, selecting the pixels corresponding to the photographed person in the picture to be processed according to the absolute values includes:
when the absolute value of the difference between the third shooting distance corresponding to any pixel in the picture to be processed and the first shooting distance is less than a preset threshold value, determining that pixel as a pixel corresponding to the photographed person.
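The pixel-selection rule above can be sketched in NumPy as a single threshold on the depth difference, producing a boolean mask of candidate subject pixels (names are ours):

```python
import numpy as np

def subject_mask(depth_map, first_distance, threshold):
    """Mark as subject every pixel whose third shooting distance differs
    from the face's first shooting distance by less than the threshold
    (all distances in meters). Returns an HxW boolean mask."""
    return np.abs(depth_map - first_distance) < threshold
```

The comparison is vectorized over the whole picture, so all pixels are classified in one pass.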
The threshold value can be set according to the environment of the photographed person. For example, when the photographed person is in a relatively open environment (such as a square, a highway or a grassland), the threshold may be set to a larger value such as 1 m, 2 m or 3 m; when the photographed person stands in front of a building or scenery, the threshold may be set to about 0.5 m or 0.7 m; when the photographed person is in a narrow environment (such as a room), the threshold may be set to a smaller value such as 0.2 m or 0.3 m.
In addition, the threshold may also be set according to the actual distance of the photographed person from the camera. For example, for a selfie or a close-range shot, the threshold may be set to a smaller value such as 0.2 m or 0.3 m; for a long-range shot, the threshold may be set to a larger value such as 1 m, 2 m or 3 m.
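A distance-based threshold schedule following these examples might look as below; the specification gives only the example threshold values, so the 1.5 m cut-off between close-range and long-range shots is purely our own illustrative assumption:

```python
def pick_threshold(first_distance_m):
    """Illustrative threshold schedule: small thresholds for selfies and
    close-range shots, large ones for long-range shots, as in the
    specification's examples. The 1.5 m cut-off is our own assumption."""
    if first_distance_m < 1.5:   # selfie or close-range shot
        return 0.3
    return 2.0                   # long-range shot
```

A real implementation could interpolate between these values, or combine the distance-based rule with the environment-based rule described above.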
For a better understanding of the present invention, refer to Fig. 6, which is a schematic diagram of a scene in which the region occupied by the photographed person is determined. In Fig. 6, d1 denotes the first shooting distance corresponding to the face region, and d2 and d3 denote the third shooting distances between the photographed objects at any two pixels of the picture to be processed and the camera. The absolute value of the difference between d2 and d1 is less than the preset threshold, indicating that the corresponding pixel lies in the region occupied by the photographed person; the absolute value of the difference between d3 and d1 is greater than or equal to the preset threshold, indicating that the corresponding pixel lies in the background region.
It can further be understood that the pixels corresponding to the photographed person should be contiguous and adjacent. If the absolute value of the difference between some pixel's third shooting distance and the first shooting distance is less than the preset threshold, but that pixel is not contiguous with or adjacent to the other pixels corresponding to the photographed person, it is not taken as a pixel corresponding to the photographed person. Referring to Fig. 6, the shaded region in the figure can represent the region occupied by the photographed person.
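The contiguity requirement can be enforced by keeping only the connected component of the depth mask that contains the face, for example with a flood fill seeded at the face center. This is a sketch; the 4-connectivity choice and the names are our own:

```python
from collections import deque
import numpy as np

def keep_connected_to_face(mask, seed):
    """Keep only the 4-connected component of the boolean mask that
    contains the seed pixel (row, col), e.g. the face center; isolated
    pixels that merely match the depth criterion are dropped."""
    h, w = mask.shape
    kept = np.zeros_like(mask)
    if not mask[seed]:
        return kept
    queue = deque([seed])
    kept[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not kept[nr, nc]:
                kept[nr, nc] = True
                queue.append((nr, nc))
    return kept
```

Any background object that happens to stand at the same depth as the photographed person, but is not connected to the face, is thereby excluded from the subject region.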
With the image processing method described in this embodiment, the third shooting distance between the photographed object at each pixel of the picture to be processed and the camera is extracted, the absolute value of the difference between each pixel's third shooting distance and the first shooting distance is calculated, and from these absolute values the region occupied by the photographed person is determined. This makes it possible to apply image processing to that region alone, preserving the authenticity of the background scenery in the picture to the greatest extent.
Further, based on the first, second and third embodiments of the image processing method of the present invention, a fourth embodiment of the image processing method is proposed. In the present embodiment, step S40 shown in Fig. 3, performing image processing on the region occupied by the photographed person according to the preset image processing mode, includes:
if a beautification trigger instruction is detected, performing beautification processing on the region occupied by the photographed person according to the detected beautification trigger instruction.
After the region occupied by the photographed person in the picture to be processed is determined, the user can, according to personal preference, trigger the beautification menu on the display interface of the mobile terminal to apply beautification processing to the region occupied by the photographed person.
Specifically, the beautification trigger instruction includes a one-key beautification instruction and a custom beautification instruction, and performing beautification processing on the region occupied by the photographed person according to the detected beautification trigger instruction includes:
when the one-key beautification instruction is detected, performing beautification processing on the region occupied by the photographed person according to preset beautification effect parameters, the beautification effect parameters including one or more of a face-slimming parameter, an eye-size adjustment parameter, a skin-smoothing and acne-removal parameter, a skin-whitening parameter, a tooth-whitening parameter, and a blush parameter;
when the custom beautification instruction is detected, performing beautification processing on the region occupied by the photographed person according to the received beautification trigger actions.
Accordingly, in the present embodiment, after beautification processing is applied to the region occupied by the photographed person, the other regions outside it can also be processed separately, for example by adjusting their contrast, brightness and color temperature.
With the image processing method described in this embodiment, after the region occupied by the photographed person in the picture to be processed is determined, if a beautification trigger instruction is detected, beautification processing is applied to that region according to the detected instruction. This both ensures that the skin tone of the exposed parts of the photographed person's body remains consistent after beautification, and preserves the authenticity of the background scenery in the picture to the greatest extent.
The present invention also provides a mobile terminal. The mobile terminal includes a camera, a memory, a processor, and an image processing program stored on the memory and runnable on the processor; when the image processing program is executed by the processor, the following steps are implemented:
performing face recognition on the picture to be processed, and determining the face region in the picture to be processed;
calculating, according to the image depth information corresponding to the face region in the picture to be processed, the first shooting distance corresponding to the face region;
determining, according to the image depth information of the whole picture to be processed and the first shooting distance, the region occupied by the photographed person in the picture, where the region occupied by the photographed person includes the face region;
performing image processing on the region occupied by the photographed person according to a preset image processing mode.
Further, before the step of performing face recognition on the picture to be processed, the image processing program, when executed by the processor, can also implement the following step:
capturing a 3D image of the photographed person with the preset camera, and using the captured 3D image as the picture to be processed.
Further, the step of calculating the first shooting distance corresponding to the face region according to the image depth information corresponding to the face region in the picture to be processed includes:
extracting, for each pixel in the face region, the second shooting distance between the photographed object at that pixel and the camera;
calculating the average value of the second shooting distances corresponding to the pixels of the face region, and using the calculated average value as the first shooting distance.
Further, the step of determining the region occupied by the photographed person in the picture to be processed according to the image depth information of the whole picture and the first shooting distance includes:
extracting, for each pixel in the picture to be processed, the third shooting distance between the photographed object at that pixel and the camera;
calculating, for each pixel in the picture to be processed, the absolute value of the difference between its third shooting distance and the first shooting distance;
determining the region occupied by the photographed person in the picture to be processed according to the absolute values.
Further, the step of determining the region occupied by the photographed person in the picture to be processed according to the absolute values includes:
selecting, according to the absolute values, the pixels corresponding to the photographed person in the picture to be processed, and determining the region formed by the pixels corresponding to the photographed person as the region occupied by the photographed person.
Further, the step of selecting the pixels corresponding to the photographed person in the picture to be processed according to the absolute values includes:
when the absolute value of the difference between the third shooting distance corresponding to any pixel in the picture to be processed and the first shooting distance is less than a preset threshold value, determining that pixel as a pixel corresponding to the photographed person.
Further, the step of performing image processing on the region occupied by the photographed person according to the preset image processing mode includes:
if a beautification trigger instruction is detected, performing beautification processing on the region occupied by the photographed person according to the detected beautification trigger instruction.
Further, the beautification trigger instruction includes a one-key beautification instruction and a custom beautification instruction, and the step of performing beautification processing on the region occupied by the photographed person according to the detected beautification trigger instruction includes:
when the one-key beautification instruction is detected, performing beautification processing on the region occupied by the photographed person according to preset beautification effect parameters, the beautification effect parameters including one or more of a face-slimming parameter, an eye-size adjustment parameter, a skin-smoothing and acne-removal parameter, a skin-whitening parameter, a tooth-whitening parameter, and a blush parameter;
when the custom beautification instruction is detected, performing beautification processing on the region occupied by the photographed person according to the received beautification trigger actions.
For a better understanding of the present invention, refer to Fig. 7, which is a schematic structural diagram of the software runtime environment involved in the mobile terminal of the present invention. In the present embodiment, the mobile terminal may include: a processor 110 such as a CPU, a network interface 1004, a user interface 1003, a memory 109, and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between these components; the user interface 1003 may include the interface unit 108 and the display unit 106 shown in Fig. 1; the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface); the memory 109 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk memory; optionally, the memory 109 may also be a storage device independent of the aforementioned processor 110.
Those skilled in the art will understand that the structure shown in Fig. 7 does not constitute a limitation on the mobile terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
As shown in Fig. 7, the memory 109, as a kind of storage medium, may include an operating system, a network communication module, a user interface module, and an image processing program.
As shown in Fig. 7, the network interface 1004 is mainly used to connect to a background server and perform data communication with it; the user interface 1003 is mainly used to connect to a client (user terminal) and perform data communication with it; and the processor 110 can be used to call the image processing program stored in the memory 109 and perform the corresponding operations.
The mobile terminal described above can realize the following: face recognition is performed on the picture to be processed to determine the face region; then, according to the face region and the image depth information corresponding to the picture, the region occupied by the photographed person is further determined, so that the photographed person and the background scenery in the picture are distinguished from each other; finally, image processing is performed only on the region occupied by the photographed person. This both ensures that the skin tone of the exposed parts of the photographed person's body remains consistent after image processing, and preserves the authenticity of the background scenery to the greatest extent, effectively solving the prior-art technical problem that the person in a photo cannot be distinguished from the background scenery so as to apply image processing to the person alone.
The embodiments of the mobile terminal correspond essentially to the embodiments of the foregoing image processing method, and are therefore not repeated here.
The present invention also provides a storage medium on which an image processing program is stored; when the image processing program is executed by a processor, the following steps are implemented:
performing face recognition on the picture to be processed, and determining the face region in the picture to be processed;
calculating, according to the image depth information corresponding to the face region in the picture to be processed, the first shooting distance corresponding to the face region;
determining, according to the image depth information of the whole picture to be processed and the first shooting distance, the region occupied by the photographed person in the picture, where the region occupied by the photographed person includes the face region;
performing image processing on the region occupied by the photographed person according to a preset image processing mode.
Further, before the step of performing face recognition on the picture to be processed, the image processing program, when executed by the processor, can also implement the following step:
capturing a 3D image of the photographed person with the preset camera, and using the captured 3D image as the picture to be processed.
Further, the step of calculating the first shooting distance corresponding to the face region according to the image depth information corresponding to the face region in the picture to be processed includes:
extracting, for each pixel in the face region, the second shooting distance between the photographed object at that pixel and the camera;
calculating the average value of the second shooting distances corresponding to the pixels of the face region, and using the calculated average value as the first shooting distance.
Further, the step of determining the region occupied by the photographed person in the picture to be processed according to the image depth information of the whole picture and the first shooting distance includes:
extracting, for each pixel in the picture to be processed, the third shooting distance between the photographed object at that pixel and the camera;
calculating, for each pixel in the picture to be processed, the absolute value of the difference between its third shooting distance and the first shooting distance;
determining the region occupied by the photographed person in the picture to be processed according to the absolute values.
Further, the step of determining the region occupied by the photographed person in the picture to be processed according to the absolute values includes:
selecting, according to the absolute values, the pixels corresponding to the photographed person in the picture to be processed, and determining the region formed by the pixels corresponding to the photographed person as the region occupied by the photographed person.
Further, the step of selecting the pixels corresponding to the photographed person in the picture to be processed according to the absolute values includes:
when the absolute value of the difference between the third shooting distance corresponding to any pixel in the picture to be processed and the first shooting distance is less than a preset threshold value, determining that pixel as a pixel corresponding to the photographed person.
Further, the step of performing image processing on the region occupied by the photographed person according to the preset image processing mode includes:
if a beautification trigger instruction is detected, performing beautification processing on the region occupied by the photographed person according to the detected beautification trigger instruction.
Further, the beautification trigger instruction includes a one-key beautification instruction and a custom beautification instruction, and the step of performing beautification processing on the region occupied by the photographed person according to the detected beautification trigger instruction includes:
when the one-key beautification instruction is detected, performing beautification processing on the region occupied by the photographed person according to preset beautification effect parameters, the beautification effect parameters including one or more of a face-slimming parameter, an eye-size adjustment parameter, a skin-smoothing and acne-removal parameter, a skin-whitening parameter, a tooth-whitening parameter, and a blush parameter;
when the custom beautification instruction is detected, performing beautification processing on the region occupied by the photographed person according to the received beautification trigger actions.
The storage medium described above can realize the following: face recognition is performed on the picture to be processed to determine the face region; then, according to the face region and the image depth information corresponding to the picture, the region occupied by the photographed person is further determined, so that the photographed person and the background scenery in the picture are distinguished from each other; finally, image processing is performed only on the region occupied by the photographed person. This both ensures that the skin tone of the exposed parts of the photographed person's body remains consistent after image processing, and preserves the authenticity of the background scenery to the greatest extent, effectively solving the prior-art technical problem that the person in a photo cannot be distinguished from the background scenery so as to apply image processing to the person alone.
Wherein, the corresponding embodiment of the above-mentioned storage medium basic phase of each embodiment corresponding with foregoing image processing method
Together, thus will not be repeated here.
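As a rough illustrative sketch (not from the patent itself), the depth-based segmentation summarized above could look like the following; the function names, the bounding-box representation, and the use of a plain nested list as the depth map are assumptions:

```python
# Illustrative sketch of the claimed segmentation: average the per-pixel
# depths inside the face region to get the "first shooting distance", then
# mark every pixel whose depth differs from it by less than a threshold as
# belonging to the photographed subject. Names are hypothetical.

def first_shooting_distance(depth_map, face_box):
    """Average the per-pixel shooting distances inside the face bounding box."""
    x0, y0, x1, y1 = face_box
    depths = [depth_map[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(depths) / len(depths)

def subject_mask(depth_map, face_box, threshold):
    """Boolean mask of pixels assigned to the photographed subject."""
    d1 = first_shooting_distance(depth_map, face_box)
    return [[abs(d - d1) < threshold for d in row] for row in depth_map]
```

For example, with a subject standing at roughly 1.2 m in front of a background at roughly 3 m, a threshold of 0.5 m selects only the subject's pixels, so subsequent beautification never touches the background scenery.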
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware, but in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, or the part thereof that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and accompanying drawings of the present invention, whether applied directly or indirectly in other related technical fields, falls within the scope of protection of the present invention.
Claims (10)
1. An image processing method, characterized in that the image processing method comprises:
performing face recognition on a to-be-processed picture to determine a face region in the to-be-processed picture;
calculating a first shooting distance corresponding to the face region according to image depth information corresponding to the face region in the to-be-processed picture;
determining a region occupied by the photographed subject in the to-be-processed picture according to the overall image depth information of the to-be-processed picture and the first shooting distance, wherein the region occupied by the photographed subject comprises the face region;
performing image processing on the region occupied by the photographed subject according to a preset image processing mode.
2. The image processing method according to claim 1, characterized in that, before the step of performing face recognition on the to-be-processed picture, the method further comprises:
capturing a 3D image corresponding to the photographed subject with a preset camera, and using the captured 3D image as the to-be-processed picture.
3. The image processing method according to claim 2, characterized in that the step of calculating the first shooting distance corresponding to the face region according to the image depth information corresponding to the face region in the to-be-processed picture comprises:
extracting, for each pixel in the face region, the second shooting distance between the photographed subject and the camera;
calculating the average of the second shooting distances corresponding to the pixels of the face region, and using the calculated average as the first shooting distance.
4. The image processing method according to claim 2, characterized in that the step of determining the region occupied by the photographed subject in the to-be-processed picture according to the overall image depth information of the to-be-processed picture and the first shooting distance comprises:
extracting, for each pixel in the to-be-processed picture, the third shooting distance between the photographed subject and the camera;
calculating, for each pixel in the to-be-processed picture, the absolute value of the difference between the corresponding third shooting distance and the first shooting distance;
determining the region occupied by the photographed subject in the to-be-processed picture according to the absolute values.
5. The image processing method according to claim 4, characterized in that the step of determining the region occupied by the photographed subject in the to-be-processed picture according to the absolute values comprises:
selecting, according to the absolute values, the pixels corresponding to the photographed subject in the to-be-processed picture, and determining the region composed of the pixels corresponding to the photographed subject as the region occupied by the photographed subject.
6. The image processing method according to claim 5, characterized in that the step of selecting, according to the absolute values, the pixels corresponding to the photographed subject in the to-be-processed picture comprises:
when the absolute value of the difference between the third shooting distance corresponding to any pixel in the to-be-processed picture and the first shooting distance is less than a preset threshold, determining that pixel as a pixel corresponding to the photographed subject.
7. The image processing method according to any one of claims 1 to 6, characterized in that the step of performing image processing on the region occupied by the photographed subject according to the preset image processing mode comprises:
if a beautification trigger instruction is detected, performing beautification processing on the region occupied by the photographed subject according to the detected beautification trigger instruction.
8. The image processing method according to claim 7, characterized in that the beautification trigger instruction comprises a one-key beautification instruction and a custom beautification instruction, and the step of performing beautification processing on the region occupied by the photographed subject according to the detected beautification trigger instruction comprises:
when the one-key beautification instruction is detected, performing beautification processing on the region occupied by the photographed subject according to preset beautification effect parameters, where the beautification effect parameters include one or more of a face-slimming parameter, an eye-size adjustment parameter, a dermabrasion and acne-removal parameter, a skin-whitening parameter, a tooth-whitening parameter, and a blush parameter;
when the custom beautification instruction is detected, performing beautification processing on the region occupied by the photographed subject according to the received beautification trigger operation.
9. A mobile terminal, characterized in that the mobile terminal comprises: a camera, a memory, a processor, and an image processing program stored in the memory and executable on the processor, wherein the image processing program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 8.
10. A storage medium, characterized in that an image processing program is stored on the storage medium, and the image processing program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710531012.0A CN107231529A (en) | 2017-06-30 | 2017-06-30 | Image processing method, mobile terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107231529A true CN107231529A (en) | 2017-10-03 |
Family
ID=59956794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710531012.0A Pending CN107231529A (en) | 2017-06-30 | 2017-06-30 | Image processing method, mobile terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107231529A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103353935A (en) * | 2013-07-19 | 2013-10-16 | 电子科技大学 | 3D dynamic gesture identification method for intelligent home system |
CN105608699A (en) * | 2015-12-25 | 2016-05-25 | 联想(北京)有限公司 | Image processing method and electronic device |
CN106331492A (en) * | 2016-08-29 | 2017-01-11 | 广东欧珀移动通信有限公司 | An image processing method and terminal |
CN106530241A (en) * | 2016-10-31 | 2017-03-22 | 努比亚技术有限公司 | Image blurring processing method and apparatus |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109937434A (en) * | 2017-10-18 | 2019-06-25 | 腾讯科技(深圳)有限公司 | Image processing method, device, terminal and storage medium |
US11120535B2 (en) | 2017-10-18 | 2021-09-14 | Tencent Technology (Shenzhen) Company Limited | Image processing method, apparatus, terminal, and storage medium |
CN107767333B (en) * | 2017-10-27 | 2021-08-10 | 努比亚技术有限公司 | Method and equipment for beautifying and photographing and computer storage medium |
CN107767333A (en) * | 2017-10-27 | 2018-03-06 | 努比亚技术有限公司 | Method, equipment and the computer that U.S. face is taken pictures can storage mediums |
CN108040208A (en) * | 2017-12-18 | 2018-05-15 | 信利光电股份有限公司 | A kind of depth U.S. face method, apparatus, equipment and computer-readable recording medium |
CN108346128A (en) * | 2018-01-08 | 2018-07-31 | 北京美摄网络科技有限公司 | A kind of method and apparatus of U.S.'s face mill skin |
CN108346128B (en) * | 2018-01-08 | 2021-11-23 | 北京美摄网络科技有限公司 | Method and device for beautifying and peeling |
CN109491739A (en) * | 2018-10-30 | 2019-03-19 | 北京字节跳动网络技术有限公司 | A kind of theme color is dynamically determined method, apparatus, electronic equipment and storage medium |
CN109491739B (en) * | 2018-10-30 | 2023-04-07 | 北京字节跳动网络技术有限公司 | Theme color dynamic determination method and device, electronic equipment and storage medium |
CN109583385A (en) * | 2018-11-30 | 2019-04-05 | 深圳市脸萌科技有限公司 | Face image processing process, device, electronic equipment and computer storage medium |
CN109561215A (en) * | 2018-12-13 | 2019-04-02 | 北京达佳互联信息技术有限公司 | Method, apparatus, terminal and the storage medium that U.S. face function is controlled |
CN109859100A (en) * | 2019-01-30 | 2019-06-07 | 深圳安泰创新科技股份有限公司 | Display methods, electronic equipment and the computer readable storage medium of virtual background |
CN112396559A (en) * | 2019-08-19 | 2021-02-23 | 王艳苓 | Live broadcast picture orientation beauty platform |
CN112866555A (en) * | 2019-11-27 | 2021-05-28 | 北京小米移动软件有限公司 | Shooting method, shooting device, shooting equipment and storage medium |
CN112866555B (en) * | 2019-11-27 | 2022-08-05 | 北京小米移动软件有限公司 | Shooting method, shooting device, shooting equipment and storage medium |
CN110971827A (en) * | 2019-12-09 | 2020-04-07 | Oppo广东移动通信有限公司 | Portrait mode shooting method and device, terminal equipment and storage medium |
CN112492211A (en) * | 2020-12-01 | 2021-03-12 | 咪咕文化科技有限公司 | Shooting method, electronic equipment and storage medium |
CN112767241A (en) * | 2021-01-29 | 2021-05-07 | 北京达佳互联信息技术有限公司 | Image processing method and device |
CN113850165A (en) * | 2021-09-13 | 2021-12-28 | 支付宝(杭州)信息技术有限公司 | Face recognition method and device |
CN113850165B (en) * | 2021-09-13 | 2024-07-19 | 支付宝(杭州)信息技术有限公司 | Face recognition method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107231529A (en) | Image processing method, mobile terminal and storage medium | |
CN107767333B (en) | Method and equipment for beautifying and photographing and computer storage medium | |
CN108108704A (en) | Face identification method and mobile terminal | |
CN109167910A (en) | focusing method, mobile terminal and computer readable storage medium | |
CN108712603B (en) | Image processing method and mobile terminal | |
CN107172364A (en) | A kind of image exposure compensation method, device and computer-readable recording medium | |
CN107835367A (en) | A kind of image processing method, device and mobile terminal | |
CN108269230A (en) | Certificate photo generation method, mobile terminal and computer readable storage medium | |
CN108989678A (en) | An image processing method and a mobile terminal | |
CN107255813A (en) | Distance-finding method, mobile terminal and storage medium based on 3D technology | |
CN108076290A (en) | A kind of image processing method and mobile terminal | |
CN108063901A (en) | A kind of image-pickup method, terminal and computer readable storage medium | |
CN107231470A (en) | Image processing method, mobile terminal and computer-readable recording medium | |
CN107959795A (en) | A kind of information collecting method, equipment and computer-readable recording medium | |
CN107786811B (en) | A kind of photographic method and mobile terminal | |
CN108848268A (en) | Intelligent adjusting method, mobile terminal and the readable storage medium storing program for executing of screen intensity | |
CN107707751A (en) | Video playback electricity saving method and corresponding mobile terminal | |
CN109461124A (en) | A kind of image processing method and terminal device | |
CN108549853A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN109348137A (en) | Mobile terminal camera control method, device, mobile terminal and storage medium | |
CN107730433A (en) | One kind shooting processing method, terminal and computer-readable recording medium | |
CN109358831A (en) | A kind of display control method, mobile terminal and computer readable storage medium | |
CN109167914A (en) | A kind of image processing method and mobile terminal | |
CN107295269A (en) | A kind of light measuring method and terminal, computer-readable storage medium | |
CN109218527A (en) | screen brightness control method, mobile terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20171003 |