CN108363965A - Distributed facial state assessment method - Google Patents
Distributed facial state assessment method
- Publication number
- CN108363965A CN108363965A CN201810084750.XA CN201810084750A CN108363965A CN 108363965 A CN108363965 A CN 108363965A CN 201810084750 A CN201810084750 A CN 201810084750A CN 108363965 A CN108363965 A CN 108363965A
- Authority
- CN
- China
- Prior art keywords
- face
- detection zone
- user
- image
- skin
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention discloses a distributed facial state assessment method belonging to the field of skin detection. The method includes: capturing a facial image of the user's face with an image acquisition device, taking it as the image to be detected, and uploading the image to be detected to a control server remotely connected to the image acquisition device; the control server identifying all facial skin feature points in the image to be detected and positioning each first detection zone according to the facial skin feature points, and a distribution module sending the image of each positioned first detection zone, according to its label, to the corresponding execution server; the corresponding execution server performing a skin state assessment on the received image of the first detection zone and outputting the assessment result to the control server; and the control server delivering all assessment results to a user terminal remotely connected to the control server, for the user to view. The advantageous effects of the technical solution are that it provides the user with a fast and efficient skin test service, gives a good user experience, and spreads the resulting power consumption across servers.
Description
Technical field
The present invention relates to the field of skin detection, and more particularly to a distributed facial state assessment method.
Background art
With the improvement of people's quality of life, more and more people, especially women, have begun to pay attention to their own skin, and skin-care products occupy an increasingly important position on the market. Women in particular pay close attention to the skin of the face, for example whether there are wrinkles at the corners of the eyes or nasolabial folds on the face, and they choose different skin-care products according to these skin conditions.
Although some skin detection devices, such as skin analyzers, already exist on the market, they are expensive and complicated to operate, and are therefore not suitable for home use. Moreover, such devices cannot accurately distinguish the different regions of the skin so as to detect the skin problems specific to each region, so the detection results remain rather general and cannot accurately reflect the true state of the user's skin.
Some skin detection products can already produce relatively accurate and varied detection results, but their detection speed is slow and the user has to wait a long time, which greatly harms the user experience, and the power consumption of the product is concentrated in a single processing device.
Summary of the invention
In view of the above problems in the prior art, a facial state assessment method is now provided. It aims to supply the user with comprehensive and accurate facial state assessment results, to help the user keep track of the facial state at any time, and to make the detection and assessment process simple and free of professional equipment, thereby lowering the threshold of detection.
The above technical solution specifically includes:
A distributed facial state assessment method, wherein a plurality of facial skin feature points are arranged in the face region, and all the facial skin feature points are divided into a plurality of different first detection zones used for positioning; each first detection zone is used to assess one skin state of the face, and one execution server is provided for each first detection zone; each execution server is connected to the same control server, a distribution module is provided in the control server, and labels corresponding to the respective execution servers are stored in the distribution module. The method further includes:
Step S1: a facial image of the user's face is captured with an image acquisition device and taken as the image to be detected, and the image to be detected is uploaded to the control server remotely connected to the image acquisition device;
Step S2: the control server identifies all the facial skin feature points in the image to be detected and positions each first detection zone according to the facial skin feature points, and the distribution module sends the image of each positioned first detection zone, according to its label, to the corresponding execution server;
Step S3: the corresponding execution server performs a skin state assessment on the received image of the first detection zone and outputs the assessment result to the control server;
Step S4: the control server delivers all the assessment results to a user terminal remotely connected to the control server, for the user to view.
Preferably, in the facial state assessment method, the image acquisition device is arranged on a vanity mirror and is connected to a communication device provided in the vanity mirror;
the vanity mirror is remotely connected to the control server through the communication device, and the facial image collected by the image acquisition device is uploaded to the control server through the communication device.
Preferably, in the facial state assessment method, the first detection zones include an oil detection zone used to assess the oiliness of the skin of the user's face;
the oil detection zone further comprises:
the forehead region of the user's face; and/or
the left cheek region of the user's face; and/or
the right cheek region of the user's face; and/or
the chin region of the user's face.
Preferably, in the facial state assessment method, the first detection zones include a cleanliness detection zone used to assess the cleanliness of the skin of the user's face;
the cleanliness detection zone further comprises:
the nose region of the user's face; and/or
the full-face region of the user's face.
Preferably, in the facial state assessment method, the assessment result corresponding to the cleanliness detection zone includes:
a first assessment sub-result indicating the skin cleanliness of the nose region; and/or
a second assessment sub-result indicating whether the full-face region has makeup residue; and/or
a third assessment sub-result indicating whether the full-face region shows fluorescence.
Preferably, in the facial state assessment method, the first detection zones include an allergy detection zone used to assess the allergy state of the skin of the user's face;
the allergy detection zone further comprises:
the left cheek region of the user's face; and/or
the right cheek region of the user's face.
Preferably, in the facial state assessment method, the first detection zones include a pigmentation (color-spot) detection zone used to assess the pigmented-spot state of the skin of the user's face;
the pigmentation detection zone further comprises:
the full-face region of the user's face.
Preferably, in the facial state assessment method, each execution server includes an assessment model formed by training in advance;
the assessment model is trained with a deep neural network on a plurality of preset training data pairs;
each training data pair includes an image of the corresponding first detection zone and the assessment result for that image.
Preferably, the facial state assessment method further includes a second detection zone, which is used to assess the complexion (skin color) state of the user's face;
in step S3, while the corresponding execution server performs the skin state assessment on the received image of the first detection zone, the second detection zone is also assessed and the corresponding assessment result is output;
in step S4, the control server delivers all the assessment results output by the execution servers to the user terminal remotely connected to the control server, for the user to view;
the second detection zone further comprises:
the left cheek region and the right cheek region of the user's face.
Preferably, in the facial state assessment method, the process of assessing the second detection zone in step S3 specifically includes:
Step S31: obtaining the RGB value of each pixel of the left cheek region, and obtaining the RGB value of each pixel of the right cheek region;
Step S32: averaging the RGB values of the pixels of the left cheek region and the RGB values of the pixels of the right cheek region to obtain a skin color value;
Step S33: querying a preset skin color value comparison table with the skin color value to obtain and output an assessment result indicating the user's skin color.
The advantageous effect of the above technical solution is that it provides a facial state assessment method capable of supplying the user with a fast and efficient skin test service, with a good user experience and with the resulting power consumption spread across servers.
Description of the drawings
Fig. 1 is an overall flow diagram of a distributed facial state assessment method in a preferred embodiment of the present invention;
Figs. 2-7 are schematic diagrams of the different detection zones in the face region in preferred embodiments of the present invention;
Fig. 8 is a detailed flow diagram of assessing the second detection zone with the execution server in a preferred embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
It should be noted that, as long as there is no conflict, the embodiments of the present invention and the features in the embodiments may be combined with one another.
The present invention is further described below with reference to the drawings and specific embodiments, which are not intended to limit the invention.
In view of the above problems in the prior art, a distributed facial state assessment method is now provided. In this method, a plurality of facial skin feature points are first arranged in the face region, and all the facial skin feature points are divided into a plurality of different detection zones used for positioning. Each detection zone is used to assess one skin state of the face, one execution server is provided for each detection zone, and each execution server is connected to the same control server. A distribution module is provided in the control server, and labels corresponding to the respective execution servers are stored in the distribution module.
The method, as shown in Fig. 1, specifically includes:
Step S1: a facial image of the user's face is captured with an image acquisition device and taken as the image to be detected, and the image to be detected is uploaded to the control server remotely connected to the image acquisition device;
Step S2: the control server identifies all the facial skin feature points in the image to be detected and positions each first detection zone according to the facial skin feature points, and the distribution module sends the image of each positioned first detection zone, according to its label, to the corresponding execution server;
Step S3: the corresponding execution server performs a skin state assessment on the received image of the first detection zone and outputs the assessment result to the control server;
Step S4: the control server delivers all the assessment results to a user terminal remotely connected to the control server, for the user to view.
Specifically, in this embodiment, the skin state assessment begins with a skin information acquisition process, in which a facial image of the user's face is captured with an image acquisition device, taken as the image to be detected, and uploaded to the control server remotely connected to the image acquisition device. The facial image is an overall image of the face, more particularly an overall image of the frontal face; a frontal face image helps ensure the accuracy of the skin state assessment.
In this embodiment, after the control server has obtained the facial image, the facial image is identified according to preset facial skin feature points, and is then divided according to the identified facial skin feature points into a plurality of different first detection zones. Specifically, in the present invention there are 68 preset facial skin feature points, whose distribution is shown in Fig. 2. The control server identifies all the preset facial feature points in the facial image, forming the feature image composed of facial feature points shown in Fig. 2.
The control server then divides the whole feature image according to all the facial feature points located in it, forming a plurality of first detection zones, and the different first detection zones are supplied to different execution servers so that the corresponding skin states can be detected and assessed (a minimal landmark-location sketch is given below).
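As an illustration of how step S2 might be carried out, the sketch below locates 68 facial feature points and cuts rectangular detection zones out of the facial image. It assumes dlib's public 68-point landmark model; the landmark indices chosen for each zone are illustrative assumptions, since the patent only shows the point layout in Fig. 2 without listing indices.

```python
# Minimal sketch: locate 68 facial feature points and crop detection zones.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks_68(image_bgr):
    """Return the 68 (x, y) facial feature points of the largest detected face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        raise ValueError("no face found in the image to be detected")
    face = max(faces, key=lambda r: r.width() * r.height())
    shape = predictor(gray, face)
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]

def crop_zone(image_bgr, points, indices, pad=10):
    """Crop a rectangular detection zone around a subset of landmark points."""
    xs = [points[i][0] for i in indices]
    ys = [points[i][1] for i in indices]
    h, w = image_bgr.shape[:2]
    x0, x1 = max(min(xs) - pad, 0), min(max(xs) + pad, w)
    y0, y1 = max(min(ys) - pad, 0), min(max(ys) + pad, h)
    return image_bgr[y0:y1, x0:x1]

# Illustrative zone definitions (landmark index ranges are assumptions):
ZONES = {
    "left_cheek":  range(1, 6),
    "right_cheek": range(11, 16),
    "chin":        range(6, 11),
    "nose":        range(27, 36),
}
```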
After the first detection zones have been divided out of the facial image, the distribution module in the control server sends the image of each positioned first detection zone, according to its label, to the corresponding execution server. The execution server that receives an image assesses the image of its corresponding first detection zone and outputs the assessment result back to the control server. The control server then sends each assessment result to the user terminal remotely connected to the control server, completing the assessment of the facial skin state.
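A minimal sketch of such a label-based dispatch is given below. The label-to-server table and the "/assess" route are hypothetical; only the dispatch pattern (one execution server per labelled first detection zone, results collected back at the control server) follows the description.

```python
# Sketch of the distribution module on the control server.
import cv2
import requests

EXECUTE_SERVERS = {              # label -> execution server endpoint (assumed)
    "oil":         "http://exec-oil.internal:8000/assess",
    "cleanliness": "http://exec-clean.internal:8000/assess",
    "allergy":     "http://exec-allergy.internal:8000/assess",
    "pigment":     "http://exec-pigment.internal:8000/assess",
}

def dispatch(zone_images):
    """zone_images: dict mapping a zone label to its cropped BGR image."""
    results = {}
    for label, image in zone_images.items():
        ok, buf = cv2.imencode(".jpg", image)
        if not ok:
            continue
        resp = requests.post(
            EXECUTE_SERVERS[label],
            files={"image": (f"{label}.jpg", buf.tobytes(), "image/jpeg")},
            timeout=10,
        )
        results[label] = resp.json()   # assessment result returned to the control server
    return results
```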
In a preferred embodiment of the present invention, to make the skin assessment function convenient for the user, the image acquisition device is arranged on a vanity mirror. When the user uses the vanity mirror, the user's facial image can be captured by the image acquisition device.
Further, the image acquisition device may be a camera, that is, a camera mounted on the vanity mirror to photograph the user's face and obtain the facial image.
Further, in order to upload the facial image to the control server, a communication device is also provided inside the vanity mirror; the image acquisition device is connected to the communication device, and the facial image is uploaded to the control server through the communication device. Specifically, the communication device may be a wireless communication module built into the vanity mirror and connected to the remote control server through an indoor router, as sketched below.
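The following sketch shows step S1 as seen from the vanity mirror: the camera captures a frontal face image and the built-in communication module uploads it to the remote control server. The upload URL is an assumption, not something given in the patent.

```python
# Sketch of capture-and-upload from the vanity mirror side.
import cv2
import requests

CONTROL_SERVER_URL = "http://control-server.example.com/upload"  # assumed endpoint

def capture_and_upload(camera_index=0):
    cap = cv2.VideoCapture(camera_index)      # camera mounted on the mirror
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera capture failed")
    ok, buf = cv2.imencode(".jpg", frame)
    resp = requests.post(
        CONTROL_SERVER_URL,
        files={"image": ("face.jpg", buf.tobytes(), "image/jpeg")},
        timeout=10,
    )
    resp.raise_for_status()
```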
In preferred embodiments of the present invention, the different first detection zones correspond to different types of skin assessment, specifically as follows:
1) The first detection zones include an oil detection zone used to assess the oiliness of the skin of the user's face.
The oil detection zone further comprises one or more of the following:
the forehead region of the user's face;
the left cheek region of the user's face;
the right cheek region of the user's face;
the chin region of the user's face.
The oil detection zone is shown in Fig. 4, where region 1 is the forehead region, region 2 the left cheek region, region 3 the right cheek region and region 4 the chin region. The regions in Fig. 4 are the parts of the face that become oily most easily, so the oiliness of the facial skin can be assessed by detecting and assessing these regions.
Further, in actual detection, any one or several of the above regions may be selected to constitute the oil detection zone, or, for a higher accuracy of detection, all of the above regions may be selected, in order to detect and assess the oiliness of the facial skin.
2) The first detection zones include a cleanliness detection zone used to assess the cleanliness of the skin of the user's face.
The cleanliness detection zone further comprises one or more of the following:
the nose region of the user's face;
the full-face region of the user's face.
The cleanliness detection zone is shown in Fig. 5, where region 1 is the nose region; the full-face region is the overall facial image and is not marked in Fig. 5. The cleanliness of the facial skin can be assessed by detecting and assessing these regions in Fig. 5.
Further, in actual detection, any one of the above regions may be selected to constitute the cleanliness detection zone, or, for a higher accuracy of detection, all of the above regions may be selected, in order to detect and assess the cleanliness of the facial skin.
Further, the assessment result corresponding to the cleanliness detection zone includes a first assessment sub-result indicating the skin cleanliness of the nose region, and/or a second assessment sub-result indicating whether the full-face region has makeup residue, and/or a third assessment sub-result indicating whether the full-face region shows fluorescence.
Specifically, the cleanliness assessment is divided into three parts. The first part is the cleanliness assessment of the nose image, i.e. the first assessment sub-result. The second part is the makeup-residue detection of the full-face image, i.e. the second assessment sub-result; when the second assessment sub-result indicates makeup residue, the control server can raise an alert to the user terminal. The third part is the fluorescence detection of the full-face image, i.e. the third assessment sub-result; when the third assessment sub-result indicates that the face shows fluorescence, the control server can raise an alert to the user terminal.
Since the cleanliness detection is divided into three parts, the execution server corresponding to the cleanliness detection zone should also include three units, namely a first execution server unit corresponding to the first assessment sub-result, a second execution server unit corresponding to the second assessment sub-result, and a third execution server unit corresponding to the third assessment sub-result. The formation and operating principle of the first, second and third execution server units are the same as those of the other execution servers and are described in detail below.
3) The first detection zones include an allergy detection zone used to assess the allergy state of the skin of the user's face.
The allergy detection zone further comprises one or both of the following:
the left cheek region of the user's face;
the right cheek region of the user's face.
The allergy detection zone is shown in Fig. 6, where region 1 is the left cheek region and region 2 the right cheek region. The allergy state of the facial skin can be assessed by detecting and assessing these regions in Fig. 6.
Further, in actual detection, any one of the above regions may be selected to constitute the allergy detection zone, or, for a higher accuracy of detection, all of the above regions may be selected, in order to detect and assess the allergy state of the facial skin.
Further, in actual detection, the red-blood-vessel (couperose) images of the left cheek and/or the right cheek are examined in order to assess the allergy state; that is, the input data of the execution server corresponding to the allergy detection zone is the red-blood-vessel image of the left cheek region and/or the right cheek region.
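The patent only states that the allergy execution server works on red-blood-vessel images of the cheek regions, without giving an algorithm. Purely as a hedged illustration, the sketch below flags pixels whose red channel clearly dominates green and blue; the threshold is an assumption, not a value from the patent.

```python
# Illustrative redness heuristic for a cheek crop (assumed, not from the patent).
import numpy as np

def redness_ratio(cheek_bgr, red_margin=30):
    """Fraction of cheek pixels where R exceeds both G and B by at least red_margin."""
    b = cheek_bgr[..., 0].astype(int)
    g = cheek_bgr[..., 1].astype(int)
    r = cheek_bgr[..., 2].astype(int)
    mask = (r - g > red_margin) & (r - b > red_margin)
    return float(mask.mean())
```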
4) The first detection zones include a pigmentation (color-spot) detection zone used to assess the pigmented-spot state of the skin of the user's face.
The pigmentation detection zone further comprises the full-face region of the user's face.
Specifically, as shown in Fig. 7, the pigmentation detection zone includes the full-face region of the facial image, but its key detection areas are the parts below the eyes and above the cheekbones, i.e. regions 1 and 2 in Fig. 7. In other words, the detection and assessment results of regions 1 and 2 carry a relatively high weight in the overall pigmentation assessment result, while the rest of the full-face region carries a relatively low weight. Through this detection and assessment, the pigmented-spot state of the facial skin can be detected and assessed.
In preferred embodiments of the present invention, each execution server includes an assessment model formed by training in advance; the assessment model is trained with a deep neural network on a plurality of preset training data pairs; each training data pair includes an image of the corresponding first detection zone and the assessment result for that image.
Specifically, in this embodiment, the assessment result in a training data pair may be a manually annotated assessment score.
Taking the oiliness assessment as an example, each training data pair used to train the assessment model in the execution server corresponding to the oil detection zone includes an image of the oil detection zone and a manually annotated assessment score for that image, and the assessment model is finally formed by training on a plurality of such training data pairs.
Likewise, taking the allergy assessment as an example, each training data pair used to train the assessment model in the execution server corresponding to the allergy detection zone includes an image of the allergy detection zone and a manually annotated assessment score for that image, and the assessment model is finally formed by training on a plurality of such training data pairs.
In this embodiment, the assessment models in the first execution server unit, the second execution server unit and the third execution server unit corresponding to the cleanliness detection zone are also trained in the above way, specifically:
each training data pair of the first execution server unit includes an image of the nose region and a manually annotated assessment score indicating whether the nose part is clean;
each training data pair of the second execution server unit includes an image of the full-face region and a manually annotated assessment result indicating whether makeup residue is detected on the whole face; this assessment result may simply be a "yes" or "no" judgment and need not be expressed as a numeric score. Further, only when the assessment result output by the second execution server unit is "yes" does the control server deliver the result to the user terminal, that is, alert the user terminal;
each training data pair of the third execution server unit includes an image of the full-face region and a manually annotated assessment result of the full-face fluorescence detection; this result may likewise be a "yes" or "no" judgment and need not be expressed as a numeric score. Further, only when the assessment result output by the third execution server unit is "yes" does the control server deliver the result to the user terminal, that is, alert the user terminal.
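The patent only specifies that each assessment model is a deep neural network trained on (detection-zone image, manually annotated result) pairs. The sketch below is one possible realization under that assumption, using a small convolutional regressor trained with mean-squared error; the architecture, loss and hyperparameters are illustrative choices, not taken from the patent.

```python
# Hedged sketch of training one execution server's assessment model.
import torch
import torch.nn as nn

class ZoneScoreModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # predicted assessment score

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, loader, epochs=10, lr=1e-3):
    """loader yields (zone_image_batch, annotated_score_batch) training pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, scores in loader:
            opt.zero_grad()
            loss = loss_fn(model(images).squeeze(1), scores.float())
            loss.backward()
            opt.step()
```

For the yes/no sub-results (makeup residue, fluorescence), the same structure could be used with a binary classification head and loss instead of regression.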
In a preferred embodiment of the present invention, a second detection zone is also provided on the facial image. The second detection zone is used to assess the complexion (skin color) state of the user's face and specifically includes the left cheek region and the right cheek region of the user's face.
The second detection zone is likewise divided out according to the facial feature points obtained by the above identification; its formation principle is the same as that of the first detection zones and is not repeated here.
Specifically, the second detection zone coincides with the allergy detection zone shown in Fig. 6, i.e. region 1 is the left cheek region and region 2 the right cheek region; Fig. 6 therefore also represents the skin color detection zone.
The second detection zone is assessed by a second execution server: in step S3, while the corresponding execution server performs the skin state assessment on the received image of the first detection zone, the second detection zone is also assessed and the corresponding assessment result is output;
in step S4, the control server delivers the assessment results output by the first execution servers and the assessment result output by the second execution server to the user terminal remotely connected to the control server, for the user to view.
Further, in a preferred embodiment of the present invention, the process of assessing the second detection zone in step S3 is shown in Fig. 8 and includes:
Step S31: obtaining the RGB value of each pixel of the left cheek region, and obtaining the RGB value of each pixel of the right cheek region;
Step S32: averaging the RGB values of the pixels of the left cheek region and the RGB values of the pixels of the right cheek region to obtain a skin color value;
Step S33: querying a preset skin color value comparison table with the skin color value to obtain and output an assessment result indicating the user's skin color.
Specifically, in this embodiment, unlike the first execution servers, the second execution server does not examine the second detection zone with a trained assessment model; instead, it obtains the above assessment result by averaging the RGB values of the pixels of the left cheek region and the right cheek region, as sketched below.
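The following sketch follows steps S31-S33 directly: average the RGB values of the two cheek crops and look up the nearest entry in a preset skin color comparison table. The table entries and labels are placeholders, since the patent does not give concrete values.

```python
# Sketch of steps S31–S33 on the second execution server.
import numpy as np

SKIN_TONE_TABLE = {            # assumed reference values: (R, G, B) -> label
    (240, 220, 205): "fair",
    (220, 190, 165): "light",
    (195, 160, 130): "medium",
    (150, 110,  80): "tan",
}

def skin_tone(left_cheek_rgb, right_cheek_rgb):
    # Step S31/S32: per-pixel RGB values of both cheek regions, averaged together.
    pixels = np.concatenate([left_cheek_rgb.reshape(-1, 3),
                             right_cheek_rgb.reshape(-1, 3)], axis=0)
    mean_rgb = pixels.mean(axis=0)
    # Step S33: query the preset table for the closest skin color value.
    nearest = min(SKIN_TONE_TABLE,
                  key=lambda ref: np.linalg.norm(mean_rgb - np.array(ref)))
    return SKIN_TONE_TABLE[nearest], tuple(mean_rgb.round(1))
```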
In one embodiment of the present invention, as mentioned above, in step S4 the control server may choose to deliver to the user terminal all the assessment results output by the first execution servers together with the assessment result output by the second execution server, for the user to view.
In another embodiment of the present invention, in step S4 the control server may aggregate all the assessment results output by the first execution servers and deliver the aggregate, together with the assessment result output by the second execution server, to the user terminal for the user to view. In this embodiment, a weight may be set for the assessment result of each first execution server, so that an overall assessment result is obtained by a weighted calculation over the assessment results of all the first execution servers and delivered to the user terminal together with the assessment result output by the second execution server. It should be noted that the second and third assessment sub-results described above do not take part in the weighted calculation, since they are not numeric scores, and need to be delivered to the user terminal separately.
In yet another embodiment of the present invention, in step S4 the control server may also aggregate the assessment results output by the first execution servers and the assessment result output by the second execution server, likewise by weighted calculation, to obtain one overall assessment result that is delivered to the user terminal. Similarly, the second and third assessment sub-results do not take part in the weighted calculation, since they are not numeric scores, and need to be delivered to the user terminal separately.
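A minimal sketch of the weighted aggregation described above is given below. Numeric per-zone scores are combined with preset weights into one overall result, while the yes/no sub-results (makeup residue, fluorescence) are passed through separately as alerts. The weight values themselves are assumptions.

```python
# Sketch of the weighted aggregation on the control server (step S4).
ZONE_WEIGHTS = {"oil": 0.3, "cleanliness": 0.2, "allergy": 0.25, "pigment": 0.25}  # assumed

def aggregate(zone_scores, yes_no_results):
    """zone_scores: label -> numeric score; yes_no_results: label -> bool."""
    total_weight = sum(ZONE_WEIGHTS[k] for k in zone_scores)
    overall = sum(ZONE_WEIGHTS[k] * v for k, v in zone_scores.items()) / total_weight
    # Boolean sub-results are not weighted; when True they trigger a separate
    # alert delivered to the user terminal.
    alerts = [label for label, flagged in yes_no_results.items() if flagged]
    return {"overall_score": overall, "alerts": alerts}
```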
The foregoing are only preferred embodiments of the present invention and are not intended to limit the embodiments or the protection scope of the present invention. Those skilled in the art will appreciate that all schemes obtained by equivalent substitution and obvious variation on the basis of the description and drawings of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A distributed facial state assessment method, characterized in that a plurality of facial skin feature points are arranged in the face region, all the facial skin feature points are divided into a plurality of different first detection zones used for positioning, each first detection zone is used to assess one skin state of the face, one execution server is provided for each first detection zone, each execution server is connected to the same control server, a distribution module is provided in the control server, and labels corresponding to the respective execution servers are stored in the distribution module, the method further comprising:
step S1: capturing a facial image of the user's face with an image acquisition device as the image to be detected, and uploading the image to be detected to the control server remotely connected to the image acquisition device;
step S2: the control server identifying all the facial skin feature points in the image to be detected and positioning each first detection zone according to the facial skin feature points, and the distribution module sending the image of each positioned first detection zone, according to its label, to the corresponding execution server;
step S3: the corresponding execution server performing a skin state assessment on the received image of the first detection zone and outputting the assessment result to the control server;
step S4: the control server delivering all the assessment results to a user terminal remotely connected to the control server, for the user to view.
2. The facial state assessment method of claim 1, characterized in that the image acquisition device is arranged on a vanity mirror and is connected to a communication device provided in the vanity mirror;
the vanity mirror is remotely connected to the control server through the communication device, and the facial image obtained by the image acquisition device is uploaded to the control server through the communication device.
3. The facial state assessment method of claim 1, characterized in that the first detection zones include an oil detection zone used to assess the oiliness of the skin of the user's face;
the oil detection zone further comprising:
the forehead region of the user's face; and/or
the left cheek region of the user's face; and/or
the right cheek region of the user's face; and/or
the chin region of the user's face.
4. The facial state assessment method of claim 1, characterized in that the first detection zones include a cleanliness detection zone used to assess the cleanliness of the skin of the user's face;
the cleanliness detection zone further comprising:
the nose region of the user's face; and/or
the full-face region of the user's face.
5. The facial state assessment method of claim 4, characterized in that the assessment result corresponding to the cleanliness detection zone includes:
a first assessment sub-result indicating the skin cleanliness of the nose region; and/or
a second assessment sub-result indicating whether the full-face region has makeup residue; and/or
a third assessment sub-result indicating whether the full-face region shows fluorescence.
6. The facial state assessment method of claim 1, characterized in that the first detection zones include an allergy detection zone used to assess the allergy state of the skin of the user's face;
the allergy detection zone further comprising:
the left cheek region of the user's face; and/or
the right cheek region of the user's face.
7. The facial state assessment method of claim 1, characterized in that the first detection zones include a pigmentation (color-spot) detection zone used to assess the pigmented-spot state of the skin of the user's face;
the pigmentation detection zone further comprising:
the full-face region of the user's face.
8. The facial state assessment method of claim 1, characterized in that each execution server includes an assessment model formed by training in advance;
the assessment model is trained with a deep neural network on a plurality of preset training data pairs;
each training data pair including an image of the corresponding first detection zone and the assessment result for that image.
9. The facial state assessment method of claim 1, characterized in that it further includes a second detection zone, the second detection zone being used to assess the complexion (skin color) state of the user's face;
in step S3, while the corresponding execution server performs the skin state assessment on the received image of the first detection zone, the second detection zone is also assessed and the corresponding assessment result is output;
in step S4, the control server delivers all the assessment results output by the execution servers to the user terminal remotely connected to the control server, for the user to view;
the second detection zone further comprising:
the left cheek region and the right cheek region of the user's face.
10. The facial state assessment method of claim 9, characterized in that, in step S3, the process of assessing the second detection zone specifically includes:
step S31: obtaining the RGB value of each pixel of the left cheek region, and obtaining the RGB value of each pixel of the right cheek region;
step S32: averaging the RGB values of the pixels of the left cheek region and the RGB values of the pixels of the right cheek region to obtain a skin color value;
step S33: querying a preset skin color value comparison table with the skin color value to obtain and output an assessment result indicating the user's skin color.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810084750.XA | 2018-01-29 | 2018-01-29 | Distributed facial state assessment method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN108363965A | 2018-08-03 |
Family
ID=63007241
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810084750.XA | Distributed facial state assessment method (pending) | 2018-01-29 | 2018-01-29 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN108363965A (en) |
Patent Citations (10)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103152476A | 2013-01-31 | 2013-06-12 | 广东欧珀移动通信有限公司 | Mobile phone capable of detecting skin state and use method thereof |
| CN105101836A | 2013-02-28 | 2015-11-25 | 松下知识产权经营株式会社 | Makeup assistance device, makeup assistance method, and makeup assistance program |
| CN105120747A | 2013-04-26 | 2015-12-02 | 株式会社资生堂 | Skin darkening evaluation device and skin darkening evaluation method |
| CN104586364A | 2015-01-19 | 2015-05-06 | 武汉理工大学 | Skin detection system and method |
| CN104732214A | 2015-03-24 | 2015-06-24 | 吴亮 | Quantitative skin detection method based on facial image recognition |
| CN104887183A | 2015-05-22 | 2015-09-09 | 杭州雪肌科技有限公司 | Intelligent optics-based skin health monitoring and pre-diagnosis method |
| CN106388781A | 2016-09-29 | 2017-02-15 | 深圳可思美科技有限公司 | Method for detecting skin color and pigmentation condition of skin |
| CN107157447A | 2017-05-15 | 2017-09-15 | 精诚工坊电子集成技术(北京)有限公司 | Method for detecting skin surface roughness based on image RGB color |
| CN107184023A | 2017-07-18 | 2017-09-22 | 上海勤答信息科技有限公司 | Intelligent mirror |
| CN107437073A | 2017-07-19 | 2017-12-05 | 竹间智能科技(上海)有限公司 | Facial skin quality analysis method and system based on deep learning and generative adversarial networks |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110084815A | 2019-06-03 | 2019-08-02 | 上海孚锐思医疗器械有限公司 | Skin allergy determination system and skin allergy determination method |
| CN114494104A | 2020-11-13 | 2022-05-13 | 上海旁午智能科技有限公司 | Intelligent device with skin management analysis function |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180803 |