CN112749634A - Control method and device based on beauty equipment and electronic equipment
- Publication number: CN112749634A
- Application number: CN202011587624.XA
- Authority: CN (China)
- Prior art keywords: beauty, cosmetic, user, timing, action
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- A—HUMAN NECESSITIES
- A45—HAND OR TRAVELLING ARTICLES
- A45D—HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
- A45D44/00—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
- A45D2044/007—Devices for determining the condition of hair or skin or for selecting the appropriate cosmetic or hair treatment
Abstract
The application provides a control method and apparatus based on a beauty device, and an electronic device, and relates to the technical field of beauty equipment. The method comprises the following steps: acquiring at least one target face image; generating first guide data corresponding to a beauty tutorial according to the at least one target face image; and displaying the first guide data corresponding to the beauty tutorial on the at least one target face image. The application thereby improves the accuracy and interactivity of the beauty device during use, and makes the beauty instrument more convenient and interesting to use.
Description
Technical Field
The present application relates to the field of beauty equipment technologies, and in particular, to a control method and apparatus based on a beauty equipment, and an electronic device.
Background
A beauty device acts on the physiological functions of the human body to improve the body and the face. Classified by functional role, there are many types, such as whitening, skin tendering, spot removal, wrinkle removal, hair removal and weight reduction; for example, cleansing beauty instruments, anti-aging beauty instruments, spot-removing beauty instruments and whitening beauty instruments. Common technologies include ultrasonic introduction, photon skin rejuvenation, high-frequency electrotherapy, RF (radio frequency) instruments, electric-wave skin tightening, electronic speckle and nevus removal, E-light permanent hair removal and skin rejuvenation, and galvanic nutrient introduction and export. With any of these types, however, a user operating the beauty device must refer to textual or diagrammatic instructions for use. This often causes comprehension difficulties and increases the difficulty of use, and the lack of real-time interaction with the instrument means that the beauty operation cannot be visualized and the operation lacks interest.
Disclosure of Invention
The invention aims to provide a control method based on a beauty device, so as to solve the technical problems that the beauty operation cannot be visualized and the operation lacks interest.
In a first aspect, an embodiment of the present application provides a control method based on a cosmetic device, including:
acquiring at least one target face image;
generating first guide data corresponding to a beauty course according to the at least one target face image;
displaying first guide data corresponding to the beauty tutorial on the at least one target facial image.
In combination with the first aspect, an embodiment of the present invention provides another possible implementation manner of the first aspect, wherein the at least one target face image includes a target face video or a real-time continuous target face image.
With reference to the first aspect, the first guidance data includes at least one of a guidance graphic for guiding the cosmetic operation, a guidance animation for guiding the cosmetic operation, a guidance text for guiding the cosmetic operation, and a guidance voice for guiding the cosmetic operation.
With reference to the first aspect, before the generating of first guide data corresponding to a beauty tutorial from the at least one target face image, the method further includes: in response to at least one of a beauty mode selection operation, a beauty gear selection operation, a beauty area selection operation, a beauty technique selection operation, and a skin care product selection operation for the beauty tutorial, generating second guide data corresponding to the result of the selection operation; the second guide data includes at least one of:
an indication figure indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication animation indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication character indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication voice indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation.
With reference to the first aspect, the target facial image is divided into at least one preset region, and the at least one preset region includes: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the method further comprises the following steps:
recommending a corresponding beauty mode and/or a corresponding beauty gear to the user based on the preset area.
With reference to the first aspect, the target facial image is divided into at least one preset region, and the at least one preset region includes: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the method further comprises the following steps:
identifying at least one preset area, and extracting skin features of the at least one preset area;
determining a skin type corresponding to the skin characteristic;
and recommending the corresponding beauty mode and/or the corresponding beauty gear to the user based on the skin type.
In combination with the first aspect, the first guide data includes at least one of:
operation speed guide data of a cosmetic operation and/or operation direction guide data of a cosmetic operation for at least one of the preset regions;
at least one guide data of operation start position data of a cosmetic operation, operation end position data of a cosmetic operation, and intermediate operation stop position data between the operation start position data and the operation end position data for at least one of the preset regions;
operation force guide data of a cosmetic operation for at least one of the preset regions.
With reference to the first aspect, the acquiring of the target face image includes:
acquiring a current face image through a camera, and identifying the current face image to obtain a target face image; or,
photographing a human face through the camera, and identifying a picture obtained by photographing to obtain a target face image; or,
receiving a picture which is uploaded by a user and contains a human face, and identifying the picture to obtain a target face image; or,
receiving a virtual human face generated by a computer, and determining the virtual human face as the target face image.
In combination with the first aspect, the method further comprises:
controlling a cosmetic device to perform a cosmetic action based on the first guide data.
In combination with the first aspect, the method further comprises: controlling a cosmetic device to perform a cosmetic action based on the second guidance data.
With reference to the first aspect, the controlling the cosmetic device to perform a cosmetic action based on the first guide data includes:
prompting a user to perform a cosmetic operation on the cosmetic device according to the first guide data;
detecting whether the beauty operation is received on a button for executing beauty action;
if the beauty operation is received, determining that a first operation instruction aiming at the beauty operation is received;
and controlling the beauty device to execute a beauty action according to the first operation instruction.
In combination with the first aspect, the controlling the cosmetic device to perform a cosmetic action based on the second guidance data includes:
prompting a user to perform a cosmetic operation on the cosmetic device according to the second guide data;
detecting whether the beauty operation is received on a button for executing beauty action;
if the cosmetic operation is received, determining that a second operation instruction aiming at the cosmetic operation is received;
and controlling the beauty equipment to execute corresponding beauty actions according to the second operation instruction.
In combination with the first aspect, the controlling a cosmetic device to perform a cosmetic action based on the second guidance data includes:
generating a third operation instruction for correspondingly controlling the beauty equipment according to the second guide data;
and controlling the beauty equipment to execute corresponding beauty actions according to the third operation instruction.
In combination with the first aspect, the cosmetic device incorporates a motion sensing component; the method further comprises the following steps:
monitoring, by the motion sensing component, an actual motion of a user;
judging whether the actual action is consistent with the standard action;
and if the actual action is inconsistent with the standard action, prompting the user of an operation error.
In combination with the first aspect, the method further comprises:
collecting the beauty actions of a user to generate a plurality of images;
identifying a plurality of images and extracting action characteristics in the images;
determining the actual action of the user according to the action characteristics;
judging whether the actual action is consistent with a standard action;
if the actual action is inconsistent with the standard action, determining the actual action error of the user, and prompting the user of an operation error; or,
collecting the beauty actions of a user to generate a plurality of images;
determining whether the cosmetic action of the user is consistent with the standard action according to the plurality of images and the neural network;
and if the cosmetic action is inconsistent with the standard action, determining that the cosmetic action of the user is wrong, and prompting the user of an operation error.
In combination with the first aspect, the method further comprises:
collecting a beauty area of a user to generate a plurality of images;
identifying the images, and determining the actual beauty area of the user according to the face area characteristics;
judging whether the actual beauty area of the user is consistent with a preset beauty area or not;
if the actual beauty area is inconsistent with the preset beauty area, determining that the actual beauty area of the user is wrong, and prompting the user of an operation error; or,
collecting a beauty area of a user to generate a plurality of images;
determining whether the beauty area of the user is consistent with a preset beauty area or not according to the plurality of images and the neural network;
and if the actual cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of an operation error.
With reference to the first aspect, the prompting the user of the operation error includes:
prompting the user of the operation error by voice; or,
displaying the user's actual operation dynamically, and displaying a prompt message about the user's operation error.
In combination with the first aspect, the method further comprises:
judging whether the current environment corresponding to the current face image accords with a preset beauty environment or not through ambient light detection and/or distance detection;
and if the preset beauty environment is met, performing the operation of identifying the current face image to obtain a target face image.
With reference to the first aspect, the first guide data includes a beauty timing; the method further comprises the following steps:
sending timing reminding information based on the beauty timing;
wherein the beauty timing includes at least one of a beauty timing of an indication graphic for guiding the beauty operation, a beauty timing of an indication animation for guiding the beauty operation, a beauty timing of indication text for guiding the beauty operation, and a beauty timing of an indication voice for guiding the beauty operation, which are included in the first guide data.
With reference to the first aspect, the first guidance data and/or the second guidance data comprise a cosmetic timing; the method further comprises the following steps:
sending timing reminding information based on the beauty timing; wherein,
the beauty timing comprises at least one of beauty timing of an indication graph for guiding beauty operation, beauty timing of an indication animation for guiding beauty operation, beauty timing of an indication character for guiding beauty operation and beauty timing of an indication voice for guiding beauty operation, which are included in the first guide data;
the beauty timing further comprises at least one of beauty timing in response to a beauty mode selection operation, beauty timing of a beauty gear selection operation, and beauty timing of a beauty area selection operation for a beauty tutorial corresponding to the second guide data.
With reference to the first aspect, the beauty tutorial further includes at least one of a pause control, a previous step control, and a next step control; the method further comprises the following steps:
controlling the beauty treatment equipment to stop running in response to the selection operation of the pause control;
in response to the selection operation of the previous step control, controlling the beauty device to perform the beauty operation of the previous step;
and responding to the selection operation of the next step control, and controlling the beauty equipment to perform the next beauty operation.
In combination with the first aspect, the method further comprises:
displaying the target face image comprising the first guide data on the terminal.
In a second aspect, an embodiment of the present invention provides a control device based on a cosmetic apparatus, including:
an acquirer for acquiring at least one target face image;
the processor is used for generating first guide data corresponding to a beauty course according to the at least one target face image;
a display for displaying first guide data corresponding to the beauty tutorial on the at least one target facial image.
With reference to the second aspect, the at least one target face image includes a target face video or a real-time continuous target face image.
With reference to the second aspect, the first guidance data includes at least one of a guidance graphic for guiding the cosmetic operation, a guidance animation for guiding the cosmetic operation, a guidance text for guiding the cosmetic operation, and a guidance voice for guiding the cosmetic operation.
With reference to the second aspect, the apparatus further includes a responder, where the responder is configured to: before the processor generates first guide data corresponding to a beauty tutorial according to the at least one target face image, in response to at least one of a beauty mode selection operation, a beauty gear selection operation, a beauty area selection operation, a beauty technique selection operation and a skin care product selection operation for the beauty tutorial, generate second guide data corresponding to the result of the selection operation; the second guide data includes at least one of:
an indication figure indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication animation indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication character indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication voice indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation.
With reference to the second aspect, the target facial image is divided into at least one preset region, and the at least one preset region includes: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck;
the processor is further configured to:
recommending a corresponding beauty mode and/or a corresponding beauty gear to the user based on the preset area.
With reference to the second aspect, the target facial image is divided into at least one preset region, and the at least one preset region includes: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck;
the processor is further configured to:
identifying at least one preset area, and extracting skin features of the at least one preset area;
determining a skin type corresponding to the skin characteristic;
and recommending the corresponding beauty mode and/or the corresponding beauty gear to the user based on the skin type.
In combination with the second aspect, the first guide data includes at least one of:
operation speed guide data of a cosmetic operation and/or operation direction guide data of a cosmetic operation for at least one of the preset regions;
at least one guide data of operation start position data of a cosmetic operation, operation end position data of a cosmetic operation, and intermediate operation stop position data between the operation start position data and the operation end position data for at least one of the preset regions;
operation force guide data of a cosmetic operation for at least one of the preset regions.
In combination with the second aspect, the obtainer is configured to:
acquiring a current face image through a camera, and identifying the current face image to obtain a target face image; or,
photographing a human face through the camera, and identifying a picture obtained by photographing to obtain a target face image; or,
receiving a picture which is uploaded by a user and contains a human face, and identifying the picture to obtain a target face image; or,
receiving a virtual human face generated by a computer, and determining the virtual human face as the target face image.
With reference to the second aspect, the apparatus further includes a first controller configured to:
controlling a cosmetic device to perform a cosmetic action based on the first guide data.
With reference to the second aspect, the apparatus further comprises a second controller configured to:
controlling a cosmetic device to perform a cosmetic action based on the second guidance data.
With reference to the second aspect, the first controller is specifically configured to:
prompting a user to perform a cosmetic operation on the cosmetic device according to the first guide data;
detecting whether the beauty operation is received on a button for executing beauty action;
if the beauty operation is received, determining that a first operation instruction aiming at the beauty operation is received;
and controlling the beauty device to execute a beauty action according to the first operation instruction.
With reference to the second aspect, the second controller is specifically configured to:
prompting a user to perform a cosmetic operation on the cosmetic device according to the second guide data;
detecting whether the beauty operation is received on a button for executing beauty action;
if the cosmetic operation is received, determining that a second operation instruction aiming at the cosmetic operation is received;
and controlling the beauty equipment to execute corresponding beauty actions according to the second operation instruction.
With reference to the second aspect, the second controller is configured to:
generating a third operation instruction for correspondingly controlling the beauty equipment according to the second guide data;
and controlling the beauty equipment to execute corresponding beauty actions according to the third operation instruction.
In combination with the second aspect, further comprising a motion sensing component built into the cosmetic device; the motion sensing assembly is configured to:
monitoring, by the motion sensing component, an actual motion of a user;
the processor is further configured to:
judging whether the actual action is consistent with the standard action;
and if the actual action is inconsistent with the standard action, prompting the user of an operation error.
In combination with the second aspect, the obtainer is further configured to:
collecting the beauty actions of a user to generate a plurality of images;
the processor is further configured to:
identifying a plurality of images and extracting action characteristics in the images;
determining the actual action of the user according to the action characteristics;
judging whether the actual action is consistent with a standard action;
if the actual action is inconsistent with the standard action, determining the actual action error of the user, and prompting the user of an operation error; or, the obtainer is further configured to:
collecting the beauty actions of a user to generate a plurality of images;
the processor is further configured to:
determining whether the cosmetic action of the user is consistent with the standard action according to the plurality of images and the neural network;
and if the cosmetic action is inconsistent with the standard action, determining that the cosmetic action of the user is wrong, and prompting the user of an operation error.
In combination with the second aspect, the obtainer is further configured to:
collecting a beauty area of a user to generate a plurality of images;
the processor is further configured to:
identifying the images, and determining the actual beauty area of the user according to the face area characteristics;
judging whether the actual beauty area of the user is consistent with a preset beauty area or not;
if the actual beauty area is inconsistent with the preset beauty area, determining that the actual beauty area of the user is wrong, and prompting the user of an operation error; or, the obtainer is further configured to:
collecting a beauty area of a user to generate a plurality of images;
the processor is further configured to:
determining whether the beauty area of the user is consistent with a preset beauty area or not according to the plurality of images and the neural network;
and if the actual cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of an operation error.
With reference to the second aspect, the prompting the user for the operation error includes:
prompting the user of the operation error by voice; or,
displaying the user's actual operation dynamically, and displaying a prompt message about the user's operation error.
With reference to the second aspect, the apparatus further comprises a detector configured to:
ambient light detection and/or distance detection;
the processor is further configured to:
judging whether the current environment corresponding to the current face image accords with a preset beauty environment or not;
and if the preset beauty environment is met, performing the operation of identifying the current face image to obtain a target face image.
With reference to the second aspect, the first guide data includes a beauty timing; the processor is further configured to:
sending timing reminding information based on the beauty timing;
wherein the beauty timing includes at least one of a beauty timing of an indication graphic for guiding the beauty operation, a beauty timing of an indication animation for guiding the beauty operation, a beauty timing of indication text for guiding the beauty operation, and a beauty timing of an indication voice for guiding the beauty operation, which are included in the first guide data.
With reference to the second aspect, the first guidance data and/or the second guidance data includes a cosmetic timing; the processor is further configured to:
sending timing reminding information based on the beauty timing; wherein,
the beauty timing comprises at least one of beauty timing of an indication graph for guiding beauty operation, beauty timing of an indication animation for guiding beauty operation, beauty timing of an indication character for guiding beauty operation and beauty timing of an indication voice for guiding beauty operation, which are included in the first guide data;
the beauty timing further comprises at least one of beauty timing in response to a beauty mode selection operation, beauty timing of a beauty gear selection operation, and beauty timing of a beauty area selection operation for a beauty tutorial corresponding to the second guide data.
With reference to the second aspect, the beauty tutorial further includes at least one of a pause control, a previous step control, and a next step control; the processor is further configured to:
controlling the beauty treatment equipment to stop running in response to the selection operation of the pause control;
in response to the selection operation of the previous step control, controlling the beauty device to perform the beauty operation of the previous step;
and responding to the selection operation of the next step control, and controlling the beauty equipment to perform the next beauty operation.
In combination with the second aspect, the display is further configured to:
displaying the target face image comprising the first guide data on the terminal.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions, which, when invoked and executed by a processor, cause the processor to execute the method according to the first aspect.
The embodiment of the application brings the following beneficial effects:
the control method and device based on the beauty equipment and the electronic equipment provided by the embodiment of the application comprise the following steps: acquiring at least one target face image; generating first guide data corresponding to a beauty course according to the at least one target face image; displaying first guide data corresponding to the beauty tutorial on the at least one target facial image. In the scheme, first guide data corresponding to the beauty course is generated through the acquired target face image, specific first guide data corresponding to the beauty course can be displayed according to the target face image, a user can observe beauty guide conditions aiming at the face of the user, the technical problem that beauty operation cannot be visualized and operation interestingness is lacked is solved, moreover, the whole skin care process can be directly and dynamically presented to the user, the user is guided to accurately operate, the expected effect of beauty equipment is realized, any potential safety hazard caused by improper use is avoided, the interactivity and the accuracy of the whole beauty process are enhanced according to the guide data, and the beauty equipment is more convenient to use.
Drawings
In order to more clearly illustrate the detailed description of the present application or the technical solutions in the prior art, the drawings needed to be used in the detailed description of the present application or the prior art description will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flow chart of a control method based on a cosmetic device according to an embodiment of the present application;
FIG. 2 is a reference diagram of a UI design of application software provided by an embodiment of the present application;
fig. 3 is a conceptual diagram of a usage guide of a beauty treatment apparatus provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of a control device based on a cosmetic apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram illustrating an electronic device provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, a beauty camera in the prior art can recognize a human face and take photos or videos of it, but such cameras are applied only in the field of beauty photography and provide no guidance for the operations a user performs on a beauty device. Meanwhile, fitness coaching applications can provide follow-along exercises that guide a user through fitness actions performed by a coach, but the coach in such an exercise is a third person rather than the user himself, which greatly reduces the interest that fitness or beauty brings to the user and lowers the user's enthusiasm for, and frequency of, using the fitness or beauty equipment. Furthermore, in the use of beauty devices there is no animated guidance for the specific usage of the device, and because there is no countdown for use and no sound or picture reminder, the user cannot properly keep track of the device's usage.
Based on this, the embodiment of the application provides a control method and device based on a beauty device and an electronic device, by which the technical problems that the beauty operation cannot be visualized and the operation is lack of interest can be alleviated, so that the use process of the beauty device is simpler, more scientific and better in effect.
The first embodiment is as follows:
fig. 1 is a schematic flow chart of a control method based on a cosmetic device according to an embodiment of the present application. The method may be applied to a terminal (e.g., a mobile phone), and as shown in fig. 1, the method includes:
step S110, at least one target face image is acquired.
The terminal refers to a terminal having a shooting function and a display function, including but not limited to: a smart phone, a watch, a pad (tablet), a smart mirror, a computer, a vehicle-mounted display screen, a television with a camera, a refrigerator with a screen, a smart door with a screen, a smart speaker with a screen, and the like. In the following, such a terminal is used as the execution body.
Step S120, generating first guide data corresponding to the beauty course according to at least one target face image.
In this step, when the terminal acquires the target facial image, it may generate first guide data corresponding to the beauty tutorial. The first guide data is the portion containing guide information, for example dynamic guidance, static guidance and/or voice guidance for the target facial image. Specifically, the first guide data may be text guidance, voice guidance, a straight or curved arrow, a dynamic arrow, or icon guidance; the type of the first guide data is not limited here. The terminal thus guides the user to use the beauty device through the first guide data, so that the user can operate the beauty device without barriers, and the interest of using the beauty device is increased.
Step S130, displaying first guide data corresponding to the beauty course on at least one target face image.
By displaying the first guide data corresponding to the beauty tutorial, specific first guide data can be shown on the target face image, and the user can observe the beauty guidance for his or her own face. The beauty device can then execute beauty actions according to the guide data, making its use more efficient and alleviating the technical problems that the beauty operation cannot be visualized and lacks interest.
In practical applications, the user's real-time facial expression, previously acquired and recognized, is displayed on the display screen of the terminal, which alleviates the technical problems that the beauty operation cannot be visualized and lacks interest.
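For illustration only, the flow of steps S110 to S130 can be sketched in Python with OpenCV. The helper names and the fixed stand-in guide arrow below are assumptions, not the actual implementation of the disclosure:

```python
import cv2


def generate_first_guide_data(frame):
    # Stand-in for S120: in the disclosure this would come from the beauty
    # tutorial; here we return one fixed guide arrow and prompt text.
    h, w = frame.shape[:2]
    return {"start": (w // 4, h // 2), "end": (w // 4, h // 3),
            "text": "glide upward on the left cheek"}


def draw_guide_overlay(frame, guide):
    # S130: display the first guide data on the target face image.
    cv2.arrowedLine(frame, guide["start"], guide["end"], (0, 255, 0), 2)
    cv2.putText(frame, guide["text"], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame


def run_beauty_tutorial(camera_index=0):
    capture = cv2.VideoCapture(camera_index)  # S110: acquire face images
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            frame = draw_guide_overlay(frame, generate_first_guide_data(frame))
            cv2.imshow("beauty tutorial", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        capture.release()
        cv2.destroyAllWindows()
```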
In some embodiments, in the step of acquiring the target face image in step S110, as shown in fig. 2, at least one target face image includes a target face video or a real-time continuous target face image.
It should be noted that, from a single photo, multiple photos or a target face video uploaded or shot by the user, the terminal can perform recognition and three-dimensional modeling using algorithms such as the analysis-by-synthesis method, and reconstruct a 3D face model closely resembling the user's real face.
The embodiment can clearly identify the real-time facial expression of the user by identifying the target face video or the real-time continuous target face image, and increases the interestingness of the user in using the beauty equipment.
In some embodiments, the first guidance data includes at least one of a guidance graphic for guiding the cosmetic procedure, a guidance animation for guiding the cosmetic procedure, a guidance text for guiding the cosmetic procedure, and a guidance voice for guiding the cosmetic procedure.
Specifically, the indication graphic for guiding the beauty operation in the first guide data may include a straight or curved arrow as a guide, which directs the user to perform the beauty operation in a corresponding direction and at a corresponding speed in a specific target face region. For example, an upward curved arrow on the right cheek indicates an upward motion, and the speed of the beauty operation may be indicated by the speed at which the graphic changes; indicated buttons or options can also help the user operate the beauty device better. The indication animation for guiding the beauty operation may be a motion picture or a short video used to guide the user to perform the beauty operation in a specific target face region with the corresponding direction, force and speed. The indication text for guiding the beauty operation may prompt the user how to perform the beauty operation in the corresponding region of the target face, such as its direction, strength and speed; specifically, a prompt such as "please perform the beauty operation on the forehead, from left to right, with a certain strength and at a certain speed" may appear, and when the user reads it the beauty operation can be performed accordingly. The indication voice for guiding the beauty operation gives the same kind of prompt by voice for the corresponding region of the target face; specifically, a prompt such as "please perform the beauty operation on the left cheek, from bottom to top, with a certain strength and at a certain speed" may be played, and when the user hears it the corresponding beauty operation can be performed as prompted.
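As an illustrative data-structure sketch, the four kinds of first guide data described above might be represented as follows; all class and field names here are hypothetical, not taken from the disclosure:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class GuideKind(Enum):
    GRAPHIC = auto()    # straight or curved arrow over a face region
    ANIMATION = auto()  # motion picture / short video of the action
    TEXT = auto()       # written prompt (direction, strength, speed)
    VOICE = auto()      # spoken prompt


@dataclass
class GuideItem:
    kind: GuideKind
    region: str                      # e.g. "forehead", "left cheek"
    direction: Optional[str] = None  # e.g. "left to right"
    strength: Optional[str] = None   # e.g. "a certain strength"
    speed: Optional[str] = None      # e.g. "a certain speed"
    payload: Optional[str] = None    # prompt text, or a media path


# The forehead text prompt quoted above, expressed as a GuideItem:
forehead_prompt = GuideItem(
    kind=GuideKind.TEXT, region="forehead", direction="left to right",
    strength="a certain strength", speed="a certain speed",
    payload="please perform the beauty operation on the forehead "
            "from left to right with a certain strength and speed")
```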
In some embodiments, before step S120, the method further includes:
step a), in response to at least one of a beauty mode selection operation, a beauty gear selection operation, a beauty area selection operation, a beauty technique selection operation and a skin care product selection operation in the beauty tutorial, generating second guide data corresponding to the result of the selection operation; the second guide data includes at least one of:
an indication pattern indicating at least one of a mode selection operation, a shift selection operation, and a region selection operation;
an indication animation indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation;
an indication character indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation;
and an indication voice indicating at least one of a mode selection operation, a shift selection operation, and a region selection operation.
Note that the second guide data is also a portion containing guide information; for example, the second guide data may be dynamic guidance, static guidance and/or voice guidance indicating a mode selection operation, a gear selection operation or a region selection operation. Specifically, the indication graphic may include a straight or curved arrow as a guide; the indication animation may be a short video about how to use the beauty device, which the user may choose to play in order to quickly understand the steps of operating it; the indication text may prompt the user how to perform the next operation; and the indication voice may likewise prompt the user how to proceed. The type of the second guide data is not limited here.
In practical applications, the second guide data may include guidance content for a beauty mode, a beauty gear, a beauty timing, and the like. The beauty modes include a whitening mode, a quick lifting mode, and the like; the beauty gears include a first gear, a second gear and a third gear. When a beauty mode is selected, an introduction of the beauty procedure, its efficacy, and the skin types to which the mode or gear applies are displayed on the display screen of the terminal; after beauty is started by clicking, the other menus displayed on the screen are retracted.
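A minimal sketch of this selection flow follows. The tables are hypothetical: the disclosure names the whitening and quick-lift modes and three gears but does not define this data layout:

```python
# Hypothetical mode table and gear list for generating second guide data.
MODE_INFO = {
    "whitening": {"intro": "whitening procedure introduction",
                  "efficacy": "evens skin tone",
                  "suitable_skin": ["dry", "combination"]},
    "quick_lift": {"intro": "lifting procedure introduction",
                   "efficacy": "tightens facial contours",
                   "suitable_skin": ["oily", "combination"]},
}
GEARS = ("first", "second", "third")


def on_mode_selected(mode: str, gear: str) -> dict:
    """Generate second guide data for a mode/gear selection result."""
    if mode not in MODE_INFO or gear not in GEARS:
        raise ValueError("unknown beauty mode or gear")
    info = MODE_INFO[mode]
    return {"indication_text": f"{mode} mode, {gear} gear: {info['intro']}",
            "efficacy": info["efficacy"],
            "suitable_skin": info["suitable_skin"]}
```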
In some embodiments, the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the method further comprises the following steps:
and b), recommending a corresponding beauty mode and/or a corresponding beauty gear to the user based on the preset area.
It should be noted that, when recommending a beauty mode and/or beauty gear to the user, various information beneficial to the skin, such as care products or medicines, can also be recommended. Recommending a beauty mode and/or beauty gear to the user thus helps the user care for the skin better.
In some embodiments, the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the method further comprises the following steps:
step c1), identifying at least one preset area, and extracting skin characteristics of the at least one preset area;
step c2), determining the skin type corresponding to the skin characteristic;
step c3), recommending the corresponding beauty mode and/or the corresponding beauty gear to the user based on the skin type.
In the embodiment of the invention, the terminal can identify the target face image using image recognition technology. Specifically, the target face image is divided into a plurality of preset areas, the image of each preset area is identified, and the actual skin feature in each preset area is extracted. The actual skin feature is compared with a standard skin feature to obtain the differing part; the skin type corresponding to the skin feature is then determined, and the corresponding beauty mode and/or the corresponding beauty gear is recommended to the user accordingly.
Illustratively, the extracted actual skin feature may be the smoothness of the skin, or the skin color, and the like. Accordingly, the user's skin type may be determined from the smoothness of the skin or from the skin color; skin types mainly include at least one of dry skin, sensitive skin, oily skin or combination skin. The skin feature is not limited here.
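One possible sketch of steps c1) to c3) in Python is shown below; the features, threshold and recommendation table are illustrative assumptions, not values from the disclosure:

```python
import numpy as np


def skin_features(region_bgr: np.ndarray) -> dict:
    """Extract simple features from one preset region of the face image."""
    gray = region_bgr.mean(axis=2)
    smoothness = 1.0 / (1.0 + gray.std())            # flatter region: higher
    mean_color = region_bgr.reshape(-1, 3).mean(axis=0)
    return {"smoothness": float(smoothness), "mean_color": mean_color}


def classify_skin(features: dict) -> str:
    # Toy stand-in for comparing actual features with standard features.
    return "oily" if features["smoothness"] > 0.05 else "dry"


RECOMMENDATIONS = {  # hypothetical beauty mode / gear table
    "dry": ("moisturizing mode", "first gear"),
    "oily": ("cleansing mode", "second gear"),
}


def recommend(region_bgr: np.ndarray) -> tuple:
    """c1)-c3): features -> skin type -> recommended mode and gear."""
    return RECOMMENDATIONS[classify_skin(skin_features(region_bgr))]
```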
In some embodiments, the first guide data comprises at least one of:
operation speed guide data and/or operation direction guide data of the cosmetic operation for at least one preset area;
at least one guide data of operation start position data of the cosmetic operation, operation end position data of the cosmetic operation, and intermediate operation stay position data between the operation start position data and the operation end position data for at least one preset region;
operation force guide data for cosmetic operation of at least one preset region.
In the embodiment of the present invention, the first guide data may be operation speed guide data and/or operation direction guide data for a cosmetic operation on a preset area. For example, the first guide data may be operation speed guide data and operation direction guide data for the left cheek, where the operation speed guide data specifies one glide every 3 seconds and the operation direction guide data specifies the direction from the side of the left cheek near the nose toward the ear. The user may then perform the beauty operation on the left cheek according to these two kinds of guide data, that is, glide across the left cheek from the nose side toward the ear at a speed of one glide every 3 seconds. By prompting the direction and speed at which the beauty device should be used, the user can operate the device accurately while the interest of using it is increased.
The first guide data may also be operation start position data, operation end position data, and intermediate dwell position data between them for a preset area. For example, for the chin, the operation start position is above the chin near the mouth, the operation end position is below the chin, and the intermediate dwell position is midway between the two. The user may then perform the beauty operation on the chin according to these three kinds of guide data: start above the chin, move downward, pause at the middle position, and continue downward to the position below the chin. By prompting the position data for the beauty device, the user can operate it accurately while the interest of using it is increased.
The first guide data may also be operation force guide data for a preset area. For example, the first guide data may be operation force guide data for the chin specifying light force, so the user performs the beauty operation on the chin with light force. By prompting the operation force guide data, the user can operate the beauty device with accurate force while the interest of using it is increased.
Therefore, through one or more kinds of first guide data, the technical problems that the beauty operation cannot be visualized and lacks interest can be effectively alleviated. In addition, the whole skin care process can be presented directly and dynamically in front of the user, guiding the user to operate accurately, achieving the expected effect of the beauty device and avoiding any potential safety hazard caused by improper use. The guide data enhances the interactivity and accuracy of the whole beauty process and makes the beauty device more convenient to use.
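These per-region fields might be grouped as follows; the sketch uses hypothetical names and coordinates, with the chin example above filled in:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class RegionOperationGuide:
    """Per-region first guide data mirroring the fields listed above."""
    region: str
    speed_interval_s: Optional[float] = None     # e.g. one glide every 3 s
    direction: Optional[str] = None              # e.g. "nose side toward ear"
    start_pos: Optional[Tuple[int, int]] = None  # image coordinates
    dwell_pos: Optional[Tuple[int, int]] = None  # intermediate stop position
    end_pos: Optional[Tuple[int, int]] = None
    force: Optional[str] = None                  # e.g. "light"


# The chin example above: start near the mouth, dwell midway, end below.
chin_guide = RegionOperationGuide(
    region="chin", force="light",
    start_pos=(320, 400), dwell_pos=(320, 430), end_pos=(320, 460))
```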
In some embodiments, step S110 includes the steps of:
acquiring a current face image through a camera, and identifying the current face image to obtain a target face image; or,
the method comprises the steps of photographing a human face through a camera, and identifying a picture obtained by photographing to obtain a target face image; or,
receiving a picture containing a face uploaded by a user, and identifying the picture to obtain a target face image; or,
receiving a virtual human face generated by a computer, and determining the virtual human face as a target face image.
It should be noted that the front camera of the terminal may be used to acquire the face image. By identifying the current face image, it is judged whether a face exists in the image; if so, the position information in the face image is extracted, the size of the face and the positions of the main facial organs are determined, and modeling is performed based on this position information to obtain a more accurate target face image.
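As one common way to implement "identify the current face image to obtain a target face image", a Haar-cascade sketch with OpenCV is shown below; the disclosure does not mandate this particular detector:

```python
import cv2


def detect_target_face(image_bgr):
    """Return the largest detected face crop, or None if no face is found."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                   # no face in the image
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    return image_bgr[y:y + h, x:x + w]                # target face image
```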
In some embodiments, step S110 includes the steps of:
and identifying the current face image while following its moving position, based on a combination of motion and a model, to obtain the target face image.
Therefore, the terminal can follow the movement of the face of the user and move by identifying the moving position of the current face image, and the interest of using the beauty equipment is increased.
In some embodiments, the method further comprises:
and d), controlling the beauty device to execute the beauty action based on the first guide data.
Specifically, the user manually performs the relevant cosmetic operation according to the guidance of the first guidance data.
In some embodiments, the method further comprises:
and e), controlling the beauty device to execute the beauty action based on the second guide data.
Specifically, the user manually performs the related cosmetic action according to the guidance of the second guidance data.
In some embodiments, step d) may comprise the steps of:
step d1), prompting the user to perform the cosmetic operation on the cosmetic device according to the first guide data;
step d2), detecting whether a beauty operation is received on the button for executing the beauty action;
step d3), if the beauty treatment operation is received, determining that a first operation instruction for the beauty treatment operation is received;
step d4), controlling the beauty device to execute the beauty action according to the first operation instruction.
In the embodiment of the invention, the terminal can control the beauty device to execute the beauty action according to the user's manual operation. Specifically, the user is prompted to perform the beauty operation on the beauty device according to the first guide data; when the user, holding the device, performs the beauty operation on its buttons or options according to the first guide data, the terminal receives the beauty operation behavior and generates a corresponding first operation instruction from it, and finally executes the beauty action according to the first operation instruction. The first operation instruction may include the specific operation the user needs to perform, for example, a circular beauty operation on the forehead, or a bottom-to-top lifting action on the left cheek.
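A minimal sketch of steps d1) to d4) is given below; the device class and instruction fields are hypothetical stand-ins for the real device control:

```python
class BeautyDevice:
    """Hypothetical device wrapper; execute() stands in for real control."""

    def execute(self, instruction: dict) -> None:
        print("executing beauty action:", instruction)


def on_button_pressed(device: BeautyDevice, guide_step: dict) -> None:
    # d2)/d3): a received button press is treated as the first operation
    # instruction for the beauty operation prompted by the first guide data.
    first_instruction = {"region": guide_step["region"],
                         "motion": guide_step["motion"]}
    device.execute(first_instruction)  # d4): perform the beauty action


on_button_pressed(BeautyDevice(), {"region": "forehead", "motion": "looping"})
```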
In some embodiments, step e) may comprise the steps of:
step e1), prompting the user to execute the beauty treatment operation on the beauty treatment equipment according to the second guide data;
step e2), detecting whether a beauty operation is received on the button for executing the beauty action;
step e3), if the beauty treatment operation is received, determining that a second operation instruction for the beauty treatment operation is received;
and e4), controlling the beauty device to execute the corresponding beauty action according to the second operation instruction.
In the embodiment of the invention, the terminal can likewise control the beauty device to execute the beauty action according to the user's manual operation. Specifically, the user is prompted to perform the beauty operation on the beauty device according to the second guide data; when the user performs the beauty operation on the device's buttons or options according to the second guide data, the terminal receives the beauty operation behavior and generates a corresponding second operation instruction from it, and finally executes the beauty action according to the second operation instruction. Specifically, the second operation instruction may include a whitening operation, a cleansing beauty operation, or another beauty mode or beauty gear. In this embodiment, the user manually selects or sets the corresponding beauty mode or beauty gear, and after the beauty device receives the user's selection, the corresponding second operation instruction is executed.
In some embodiments, step e) may comprise the steps of:
step e5), generating a third operation instruction for correspondingly controlling the beauty equipment according to the second guide data;
step e6), controlling the beauty device to execute corresponding beauty action according to the third operation instruction.
In the embodiment of the present invention, the second guide data includes an indication graph indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation. Different from the above embodiments, in this embodiment the second guide data may be set or selected through another intelligent application such as an APP or applet, which selects or sets the beauty mode, beauty gear, beauty region, or other related content, sparing the user the trouble of manually selecting the beauty mode, beauty gear, or beauty region on the beauty device. When the user operates the indication graph corresponding to the region selection operation in the second guide data, a third operation instruction corresponding to the indication graph is generated; the third operation instruction is an operation control instruction sent to the beauty device through the APP or applet in the intelligent terminal, for example an instruction to perform the beauty operation on the selected region. Finally, the terminal controls the beauty device to execute the corresponding beauty action according to the third operation instruction.
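For illustration only, steps e5) and e6) could be sketched as below; the instruction payload format and the send_to_device() transport stub are assumptions, not the disclosed protocol:

```python
# Hypothetical sketch of steps e5)/e6): an APP/applet turns a region selection
# made on an indication graph into a "third operation instruction" and sends
# it to the beauty device. send_to_device() is an assumed transport stub.
def send_to_device(instruction: dict) -> None:
    print("sending to beauty device:", instruction)

def on_region_selected(region: str) -> dict:
    # step e5: build the third operation instruction from the second guide data
    instruction = {"type": "third_operation",
                   "action": "beauty_operation",
                   "region": region}
    send_to_device(instruction)  # step e6: the device executes the action
    return instruction

on_region_selected("left_cheek")
```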
In some embodiments, the cosmetic device houses a motion-sensing component; the method further comprises the following steps:
step f1), monitoring the actual action of the user through the motion sensing assembly;
step f2), judging whether the actual action is consistent with the standard action;
step f3), if the actual action is not consistent with the standard action, prompting the user of an operation error.
In the embodiment of the invention, the terminal can monitor the actual action of the user through the motion sensing assembly; the actual action includes, for example, the operation speed, operation direction, operation start position, operation end position, intermediate operation stay position, or operation force of the beauty operation. When the monitored actual action is inconsistent with the standard action, the user is reminded of the operation error. The motion sensing assembly thus allows the actual action of the user to be monitored in real time, so that operation errors can be flagged promptly, preventing the user from executing a wrong beauty operation and improving the beauty effect.
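The comparison of the sensed actual action against the standard action might, under assumed tolerances, look like the following sketch; the fields and thresholds are invented for the example:

```python
# Illustrative sketch, not the patented implementation: comparing an actual
# action sampled from a motion-sensing component against a standard action.
SPEED_TOL = 0.2   # assumed relative tolerance on operation speed
FORCE_TOL = 0.25  # assumed relative tolerance on operation force

def action_matches(actual: dict, standard: dict) -> bool:
    if actual["direction"] != standard["direction"]:
        return False
    if abs(actual["speed"] - standard["speed"]) > SPEED_TOL * standard["speed"]:
        return False
    return abs(actual["force"] - standard["force"]) <= FORCE_TOL * standard["force"]

def monitor(actual: dict, standard: dict) -> None:
    # steps f2)/f3): prompt the user on a mismatch
    if not action_matches(actual, standard):
        print("Operation error: please follow the guided action.")

monitor({"direction": "up", "speed": 1.0, "force": 0.5},
        {"direction": "up", "speed": 1.5, "force": 0.5})
```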
In some embodiments, the method further comprises:
step g1), collecting the beauty actions of the user to generate a plurality of images;
step g2), identifying the multiple images and extracting the action characteristics in the images;
step g3), determining the actual action of the user according to the action characteristics;
step g4), judging whether the actual action is consistent with the standard action;
step g5), if the actual action is inconsistent with the standard action, determining that the actual action of the user is wrong, and prompting the user of an operation error; or,
step g6), collecting the beauty actions of the user to generate a plurality of images;
step g7), determining whether the cosmetic action of the user is consistent with the standard action according to the plurality of images and the neural network;
step g8), if the cosmetic action is inconsistent with the standard action, determining that the cosmetic action of the user is wrong, and prompting the user of an operation error.
Therefore, the method and the device can identify the collected images with an image identification technology or a neural network model, monitor the actual action of the user, and remind the user when the actual cosmetic action is wrong.
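A minimal sketch of the neural-network variant (steps g6) to g8)) is given below, assuming PyTorch is available; the tiny architecture and the class indices are illustrative stand-ins, not the disclosed model:

```python
# Minimal sketch (PyTorch assumed available) of step g7): judging whether the
# captured frames show the standard cosmetic action. Architecture and class
# indices are assumptions made for this example.
import torch
import torch.nn as nn

class ActionClassifier(nn.Module):
    def __init__(self, num_actions: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W); average the prediction over all images
        return self.head(self.backbone(frames)).mean(dim=0)

def matches_standard(frames: torch.Tensor, model: ActionClassifier,
                     standard_idx: int) -> bool:
    with torch.no_grad():
        return int(model(frames).argmax()) == standard_idx  # step g8 check
```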
In some embodiments, the method further comprises:
step h1), collecting the beauty area of the user to generate a plurality of images;
step h2), recognizing the multiple images, and determining the actual beauty area of the user according to the face area characteristics;
step h3), judging whether the actual beauty area of the user is consistent with the preset beauty area;
step h4), if the actual cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of an operation error; or,
step h5), collecting the beauty area of the user to generate a plurality of images;
step h6), determining whether the beauty area of the user is consistent with the preset beauty area according to the plurality of images and the neural network;
step h7), if the cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of an operation error.
In the embodiment of the invention, the terminal can identify the acquired images with an image identification technology or a neural network model and judge whether the actual beauty area of the user is consistent with the preset beauty area; if they are inconsistent, it determines that the actual beauty area of the user is wrong and prompts the user of the operation error. Therefore, when the actual beauty area is inconsistent with the preset beauty area, the user is promptly reminded that the current beauty area is wrong, and can further be reminded of the correct preset beauty area, helping the user find it and improving the beauty efficiency.
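Steps h2) to h4) reduce to a region comparison; one hedged sketch follows, with the region names taken from the preset areas listed in this document:

```python
# Hypothetical sketch of steps h2)-h4): deciding whether the recognized actual
# beauty area matches the preset area, and prompting the user otherwise.
PRESET_AREAS = {"left_cheek", "right_cheek", "t_zone", "periocular",
                "perilabial", "chin", "forehead", "neck"}

def check_beauty_area(actual_area: str, preset_area: str) -> bool:
    assert preset_area in PRESET_AREAS
    if actual_area != preset_area:
        print(f"Operation error: please move to the {preset_area} area.")
        return False
    return True

check_beauty_area("forehead", "left_cheek")  # -> prompts an operation error
```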
In some embodiments, prompting the user for the operational error comprises:
prompting the user of an operation error by voice; or,
displaying the dynamic actual operation of the user and displaying a prompt message about the operation error of the user.
It should be noted that, when the user performs the cosmetic operation, the terminal may provide a real-time dynamic operation guide over the image of the user and synchronously display the use condition, for example, whether the user applies the correct manipulation in a certain area; if the required manipulation is a lifting motion but the user performs a sliding motion, the terminal prompts the user promptly so that the cosmetic instrument can be used more accurately. The user can thus be reminded by voice or by a prompt message displayed on the terminal.
Specifically, when the terminal reminds the user by voice, the user can adjust the volume, and if the user does not hear or remember the voice content the first time, the user can choose to play the voice again, so that accurate prompt information is obtained from the voice reminder.
In some embodiments, the method further comprises:
step i1), judging whether the current environment corresponding to the current face image accords with the preset beauty environment through ambient light detection and/or distance detection;
step i2), if the preset beauty environment is met, performing the operation of recognizing the current face image to obtain the target face image.
Through ambient light detection and distance detection, the terminal can more accurately judge whether the current environment of the user conforms to the preset beauty environment; for example, the distance between the terminal and the face of the user and/or the light of the environment where the user is located are detected, and the measured distance or light parameter is used to judge whether the current environment conforms to the preset beauty environment. If it does not, prompt information can be sent to inform the user; the prompt information may also include the requirements of the preset beauty environment, so that the user can look for a suitable environment accordingly. When the environment found conforms to the preset beauty environment, the current face image is identified and the target face image is obtained, yielding the real information of the face of the user. Therefore, by confirming that the current environment conforms to the preset beauty environment, real information such as the actual skin state of the face of the user can be obtained, and a corresponding beauty mode can then be recommended, improving the accuracy of the beauty recommendation and the user experience.
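As an assumed-threshold sketch of step i1), the ambient light and distance checks might be implemented as simple range tests; the numeric limits below are invented for illustration:

```python
# Illustrative only: step i1) as a threshold check on the measured ambient
# light and face distance. All numeric limits are assumptions for the sketch.
MIN_LUX, MAX_LUX = 100.0, 2000.0  # assumed acceptable ambient light range
MIN_CM, MAX_CM = 20.0, 60.0       # assumed acceptable face-to-terminal distance

def environment_ok(lux: float, distance_cm: float) -> bool:
    return MIN_LUX <= lux <= MAX_LUX and MIN_CM <= distance_cm <= MAX_CM

if not environment_ok(lux=50.0, distance_cm=80.0):
    print("Please move to a brighter spot and hold the terminal "
          "20-60 cm from your face.")
```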
In some embodiments, the first guidance data includes a cosmetic timing; the method further comprises the following steps:
step j1), sending timing reminding information based on the beauty timing;
the beauty timing includes at least one of: the beauty timing of the indication graph for guiding the beauty operation, the beauty timing of the indication animation for guiding the beauty operation, the beauty timing of the indication text for guiding the beauty operation, and the beauty timing of the indication voice for guiding the beauty operation, each included in the first guide data.
In practical application, the beauty timing includes a count-up and a countdown; the countdown may be, for example, a digital countdown or a progress-bar countdown. Through the timing function, the user knows in real time how much beauty time has been spent and how much remains, which greatly relieves the psychological burden of feeling that beauty treatment takes too long and makes the process more interesting and controllable.
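A toy sketch of such a timing function, rendering the countdown as a progress bar, might read:

```python
# Minimal sketch of the beauty timing described above: a countdown that is
# also rendered as a progress bar. Purely illustrative of the timing idea.
import time

def countdown(total_seconds: int) -> None:
    for remaining in range(total_seconds, 0, -1):
        done = total_seconds - remaining
        bar = "#" * done + "-" * remaining  # progress-bar style countdown
        print(f"\r[{bar}] {remaining}s left", end="")
        time.sleep(1)
    print("\rBeauty step finished." + " " * 40)

countdown(10)
```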
In some embodiments, the first guidance data and/or the second guidance data includes a cosmetic timing; the method further comprises the following steps:
step j2), sending timing reminding information based on the beauty timing; wherein,
the beauty timing includes at least one of the beauty timing of the indication graph for guiding the beauty operation, the beauty timing of the indication animation for guiding the beauty operation, the beauty timing of the indication text for guiding the beauty operation, and the beauty timing of the indication voice for guiding the beauty operation, included in the first guide data;
the beauty timing further includes, in correspondence with the second guide data, at least one of the beauty timing of the beauty mode selection operation for the beauty tutorial, the beauty timing of the beauty gear selection operation, and the beauty timing of the beauty area selection operation.
In an embodiment of the present invention, the first guide data and the second guide data each have a beauty timing, which includes a count-up and a countdown; the countdown may be, for example, a digital countdown or a progress-bar countdown. Whenever the user performs a cosmetic operation, the corresponding beauty timing is provided, so that the user is reminded of the beauty time in time; the user can select the count-up or the countdown, or choose to operate without the beauty timing function.
Specifically, when the user selects the count-up or the countdown, the user sends a voice instruction to the terminal, for example an instruction to select the countdown; upon receiving it, the terminal parses the keywords contained in the voice instruction, searches a preset instruction set for a preset instruction matching the keywords, and selects the countdown according to that preset instruction.
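The keyword-matching flow described above could be sketched as follows; the preset instruction set and its keywords are assumptions made for the example:

```python
# Hypothetical sketch of the voice-instruction handling described above:
# keywords parsed from the recognized speech are matched against a preset
# instruction set to select count-up or countdown timing.
from typing import Optional

PRESET_INSTRUCTIONS = {
    "countdown": "timer_countdown",
    "count up": "timer_forward",
    "no timer": "timer_off",
}

def handle_voice_command(recognized_text: str) -> Optional[str]:
    text = recognized_text.lower()
    for keyword, preset in PRESET_INSTRUCTIONS.items():
        if keyword in text:
            return preset  # e.g. select the countdown per the matched preset
    return None            # no matching preset instruction

print(handle_voice_command("please use a countdown"))  # -> timer_countdown
```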
In some embodiments, the beauty tutorial further comprises at least one of a pause control, a previous step control and a next step control; the method further comprises the following steps:
step k1), in response to the selection operation of the pause control, controlling the cosmetic device to stop running;
step k2), responding to the selection operation of the previous step control, and controlling the beauty device to perform the beauty operation of the previous step;
step k3), responding to the selection operation of the next step control, and controlling the beauty device to perform the next beauty operation.
In the embodiment of the invention, when using the beauty device, the user can, according to his or her actual situation, manually control the beauty device to stop running, to perform the previous beauty operation, or to perform the next beauty operation.
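Steps k1) to k3) amount to a small step controller; one illustrative sketch, with stubbed device calls:

```python
# Illustrative sketch of steps k1)-k3): a tutorial step controller wired to
# the pause / previous-step / next-step controls. Device calls are stubs.
class TutorialController:
    def __init__(self, steps: list[str]):
        self.steps, self.index = steps, 0

    def on_pause(self) -> None:
        print("device: stop running")                           # step k1

    def on_previous(self) -> None:
        self.index = max(0, self.index - 1)                     # step k2
        print("device: perform step", self.steps[self.index])

    def on_next(self) -> None:
        self.index = min(len(self.steps) - 1, self.index + 1)   # step k3
        print("device: perform step", self.steps[self.index])

ctrl = TutorialController(["cleanse", "lift left cheek", "lift right cheek"])
ctrl.on_next(); ctrl.on_pause(); ctrl.on_previous()
```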
In some embodiments, the method further comprises:
and displaying the target face image comprising the first guide data on the terminal.
In this embodiment, the target face image is acquired through the terminal, and after the user selects at least one of the beauty mode, the beauty gear, or the beauty area, the target face image including the first guide data is displayed on the terminal.
In some embodiments, as shown in fig. 3, the beauty tutorial further comprises a photographing control, and the method further comprises:
and step l), in response to the selection operation of the photographing control, acquiring the facial image at the moment corresponding to the selection operation.
During the guided use of the cosmetic device, the user can take a photograph at any time, and the photograph taken can be saved and/or shared.
Example two:
fig. 4 is a schematic structural diagram of a control device based on a cosmetic device according to an embodiment of the present application, where the control device may be applied to a terminal (e.g., a mobile phone), and as shown in fig. 4, the control device 400 based on a cosmetic device includes:
an acquirer 401 for acquiring at least one target face image;
a processor 402 for generating first guide data corresponding to a beauty tutorial from at least one target facial image;
a display 403 for displaying first guide data corresponding to the beauty tutorial on the at least one target face image.
For the acquirer 401, it should be noted that the terminal refers to a terminal having a shooting function and a display function, including but not limited to: a smart phone, a smart watch, a pad, a smart mirror, a computer, a vehicle-mounted display screen, a television with a camera, a refrigerator with a screen, an intelligent door with a screen, an intelligent sound box with a screen, and the like; any of the above terminals may serve as the execution body in the following.
For the processor 402, in this step, when the terminal acquires the target facial image, first guide data corresponding to the beauty tutorial may be generated. The first guide data is the portion containing guide information, for example a dynamic guide, a static guide, and/or a voice guide for the target facial image; specifically, it may be a text guide, a voice guide, a straight or curved arrow, a dynamic arrow, or an icon guide, and the type of the first guide data is not limited here. The terminal thus guides the user in using the beauty device through the first guide data, so that the user can operate the beauty device without barriers, and the interest of using the beauty device is increased.
For the display 403, by displaying the first guide data corresponding to the beauty tutorial, specific first guide data can be shown on the target face image, so that the user can observe the beauty guidance for his or her own face, and the beauty device can perform the beauty action according to the guide data. The beauty device is thereby used more efficiently, solving the technical problems that the beauty operation cannot be visualized and that the operation lacks interest.
In practical application, the real-time facial expression of the user, previously acquired and recognized, is displayed on the display screen of the terminal, so that the technical problems that the beauty operation cannot be visualized and that the operation lacks interest are solved.
In some embodiments, the at least one target facial image comprises a target facial video or a real-time continuous target facial image.
In the embodiment of the device, it should be noted that the terminal may perform recognition and three-dimensional modeling, with algorithms such as the analysis-synthesis method, from a single photo, multiple photos, or a target face video uploaded or shot by the user, and reconstruct a 3D face model closely resembling the real face of the user; because the 3D face model has an accurate contour size and lifelike color, the target face image is acquired conveniently and accurately. Alternatively, the terminal may track and re-create the facial expression through a three-dimensional deformable model and map it onto the reconstructed face or a virtual 3D cartoon face, thereby obtaining the target face image in real time.
The embodiment of the device can clearly identify the real-time facial expression of the user by identifying the target face video or the real-time continuous target face images, and increases the interest of the user in using the beauty device.
In some embodiments, the first guidance data includes at least one of a guidance graphic for guiding the cosmetic procedure, a guidance animation for guiding the cosmetic procedure, a guidance text for guiding the cosmetic procedure, and a guidance voice for guiding the cosmetic procedure.
In this embodiment of the apparatus, specifically, the indication graph for guiding the cosmetic operation in the first guide data may include a straight or curved arrow as a guide; the arrow guides the user to perform the cosmetic operation in the corresponding direction and at the corresponding speed in a specific target face region. For example, an upward curved arrow on the right cheek indicates an upward operation, and the speed of the cosmetic operation may be indicated by the speed at which the graph changes; highlighted buttons or options can likewise help the user operate the beauty device better. The indication animation for guiding the cosmetic operation may be a motion picture or a short video, used to guide the user to perform the cosmetic operation with the corresponding direction, force, and speed in a specific target face region. The indication text for guiding the cosmetic operation prompts the user on how to operate in the corresponding region of the target face, such as the direction, force, and speed of the cosmetic operation; a prompt may read, for example, "please perform the cosmetic operation on the forehead, from left to right, with a certain force and speed", and when the user reads it, the cosmetic operation can be performed accordingly. The indication voice for guiding the cosmetic operation gives the same guidance by voice; for example, the played prompt may be "please perform the cosmetic operation on the left cheek, from bottom to top, with a certain force and speed", and when the user hears it, the corresponding cosmetic operation can be performed as prompted.
In some embodiments, the apparatus further includes a responder which, before the processor acts, is configured to:
before the processor generates first guide data corresponding to a beauty tutorial according to the at least one target face image, generate second guide data corresponding to a selection operation result in response to at least one of a beauty mode selection operation, a beauty gear selection operation, a beauty area selection operation, a beauty technique selection operation, and a skin care product selection operation for the beauty tutorial; the second guide data includes at least one of:
an indication graph indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation;
an indication animation indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation;
an indication character indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation;
and an indication voice indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation.
In the embodiment of the present apparatus, it should be noted that the second guide data is likewise a portion containing guide information; for example, the second guide data is a dynamic guide, a static guide, and/or a voice guide indicating a mode selection operation, a gear selection operation, or a region selection operation. Specifically, the indication graph may include a straight or curved arrow as a guide; the indication animation may be a short video on how to use the beauty device, which the user can choose to play to quickly understand the steps for operating the device; the indication text or the indication voice may prompt the user on how to perform the next operation; the type of the second guide data is not limited here.
In practical applications, the second guide data may include guide contents for the beauty mode, the beauty gear, the beauty timing, and the like. The beauty mode includes a whitening mode, a quick lifting mode, and the like, and the beauty gear includes a first gear, a second gear, and a third gear. When a beauty mode is selected, an introduction to its beauty procedure, its beauty efficacy, and the skin to which the mode or gear applies are displayed on the display screen of the terminal, and after beauty is started by clicking, the other menus displayed on the screen are retracted.
In some embodiments, the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the processor is further configured to:
and recommending the corresponding beauty mode and/or the corresponding beauty gear to the user based on the preset area.
In the embodiment of the device, it should be noted that when this embodiment recommends a beauty mode and/or a beauty gear to the user, it can specifically recommend various information beneficial to the skin, such as a care product or a medicine. Therefore, recommending a beauty mode and/or a beauty gear helps the user care for the skin better.
In some embodiments, the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the processor is further configured to:
identifying at least one preset area, and extracting skin features of the at least one preset area;
determining skin types corresponding to the skin characteristics;
based on the skin type, a corresponding beauty mode and/or a corresponding beauty gear is recommended to the user.
In the embodiment of the device, the terminal can recognize the target face image with an image recognition technology. Specifically, the target face image is divided into a plurality of preset areas, the image of each preset area is recognized separately, and the actual skin features in each preset area are extracted; the actual skin features are compared with standard skin features to obtain the difference, the skin type corresponding to the skin features is then determined, and the corresponding beauty mode and/or beauty gear is recommended to the user accordingly.
Illustratively, the extracted actual skin feature may be the smoothness of the skin, the skin color, or the like; accordingly, the skin type of the user may be determined from the smoothness of the skin, from the skin color, and so on. The skin type mainly includes at least one of dry skin, sensitive skin, oily skin, or mixed skin, and the skin feature is not limited here.
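A toy, rule-based sketch of this skin-typing step is shown below; all features, thresholds, and mode/gear pairings are invented for illustration and are not the disclosed recommendation logic:

```python
# Invented rule-based sketch: mapping extracted skin features (smoothness,
# oiliness, redness in [0, 1]) to a skin type, then to a mode/gear pairing.
def skin_type(smoothness: float, oiliness: float, redness: float) -> str:
    if redness > 0.6:
        return "sensitive"
    if oiliness > 0.6:
        return "oily"
    if smoothness < 0.4 and oiliness < 0.3:
        return "dry"
    return "mixed"

RECOMMENDATION = {
    "sensitive": ("gentle mode", "gear 1"),
    "oily": ("cleaning mode", "gear 2"),
    "dry": ("moisturizing mode", "gear 1"),
    "mixed": ("whitening mode", "gear 2"),
}

print(RECOMMENDATION[skin_type(smoothness=0.3, oiliness=0.2, redness=0.1)])
```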
In some embodiments, the first guide data comprises at least one of:
operation speed guide data and/or operation direction guide data of the cosmetic operation for at least one preset area;
at least one guide data of operation start position data of the cosmetic operation, operation end position data of the cosmetic operation, and intermediate operation stay position data between the operation start position data and the operation end position data for at least one preset region;
operation force guide data for cosmetic operation of at least one preset region.
In the embodiment of the present apparatus, the first guide data may be operation speed guide data and/or operation direction guide data of the cosmetic operation for a preset region. For example, the first guide data may be operation speed guide data and operation direction guide data for the left cheek: the operation speed guide data specifies one operation every 3 seconds, and the operation direction guide data specifies the direction from the side of the left cheek near the nose toward the ear. The user can then perform the cosmetic operation on the left cheek based on these two kinds of guide data, specifically sliding across the left cheek from the side near the nose toward the ear once every 3 seconds. Therefore, by prompting the direction and speed of use of the beauty device, the user can operate the beauty device accurately while the interest of using it is increased.
The first guide data may also be operation start position data, operation end position data, and intermediate operation stay position data between them for a preset area. For example, the first guide data may be such data for the chin: the operation start position is above the chin, near the mouth; the operation end position is below the chin; and the intermediate operation stay position is midway between the two. The user can then perform the cosmetic operation on the chin according to these three kinds of guide data, specifically moving from above the chin downwards, pausing at the middle position, and continuing down to the position below the chin. Therefore, by prompting the position data for using the cosmetic device, the user can operate the cosmetic device accurately while the interest of using it is increased.
The first guide data may also be operation force guide data for a preset area; for example, the first guide data is operation force guide data for the chin specifying a light force, so that the user performs the cosmetic operation on the chin with light force according to this guide data. Therefore, by prompting the operation force guide data, the user can operate the cosmetic device with accurate force, and the interest of using the cosmetic device is increased.
Therefore, through one or more kinds of first guide data, the technical problems that the beauty operation cannot be visualized and that the operation lacks interest can be effectively solved. In addition, the whole skin care process can be presented directly and dynamically in front of the user, guiding accurate operation, achieving the expected effect of the beauty device, and avoiding any potential safety hazard caused by improper use; the guide data enhances the interactivity and accuracy of the whole beauty process and makes the beauty device more convenient to use.
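One possible in-memory shape for the first guide data fields enumerated above is sketched below; the field names and units are assumptions made for this sketch only:

```python
# Assumed data structure for the first guide data fields discussed above
# (speed, direction, start/end/stay positions, force). Illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstGuideData:
    region: str                        # e.g. "left_cheek", "chin"
    speed_interval_s: Optional[float]  # e.g. one slide every 3 seconds
    direction: Optional[str]           # e.g. "nose_to_ear", "top_to_bottom"
    start_pos: Optional[str]           # e.g. "above_chin"
    end_pos: Optional[str]             # e.g. "below_chin"
    stay_pos: Optional[str]            # e.g. "chin_middle"
    force: Optional[str]               # e.g. "light"

chin_guide = FirstGuideData("chin", None, "top_to_bottom",
                            "above_chin", "below_chin", "chin_middle", "light")
```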
In some embodiments, the obtainer is to:
acquiring a current face image through a camera, and identifying the current face image to obtain a target face image; or,
the method comprises the steps of photographing a human face through a camera, and identifying a picture obtained by photographing to obtain a target face image; or,
receiving a picture containing a face uploaded by a user, and identifying the picture to obtain a target face image; or,
receiving a virtual human face generated by a computer, and determining the virtual human face as a target face image.
In this embodiment of the apparatus, it should be noted that the front-facing camera of the terminal may be used to acquire the face image. By identifying the current face image, it is judged whether a face is present in the image; if so, the position information in the face image is extracted, the size of the face and the positions of the main facial organs are determined, and modeling is performed based on the position information to obtain a more accurate target face image.
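For illustration, the face-presence judgment could be prototyped with OpenCV's stock Haar cascade (a standard, pre-shipped detector); the modeling step that follows in the text is not reproduced here:

```python
# Hedged prototype of the acquisition step: judge whether a face exists in a
# captured frame and where it is, using OpenCV's bundled Haar cascade.
import cv2

def detect_face(frame):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None            # no face: keep capturing
    x, y, w, h = faces[0]      # position and size feed the later modeling
    return {"x": int(x), "y": int(y), "w": int(w), "h": int(h)}
```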
In some embodiments, the obtainer is to:
and identifying the current face image by following its moving position, based on a combination of motion tracking and the model, to obtain the target face image.
Therefore, by identifying the moving position of the current face image, the terminal can follow the movement of the face of the user, which increases the interest of using the beauty device.
In some embodiments, further comprising a first controller for:
and controlling the beauty treatment equipment to execute the beauty treatment action based on the first guide data.
Specifically, the user manually performs the relevant cosmetic operation according to the guidance of the first guidance data.
In some embodiments, the apparatus further comprises a second controller to:
and controlling the beauty device to execute the beauty action based on the second guide data.
Specifically, the user manually performs the related cosmetic action according to the guidance of the second guidance data.
In some embodiments, the first controller is specifically configured to:
prompting a user to execute a beauty operation on the beauty equipment according to the first guide data;
detecting whether a beauty operation is received on a button for executing a beauty action;
if the beauty operation is received, determining that a first operation instruction aiming at the beauty operation is received;
and controlling the beauty device to execute the beauty action according to the first operation instruction.
In the embodiment of the device, the terminal can control the beauty device to execute the beauty action according to the manual operation of the user. Specifically, the user is prompted, per the first guide data, to hold the beauty device and perform the beauty operation on it; when the user operates a button or option on the beauty device according to the first guide data, the terminal receives the beauty operation behavior and generates a corresponding first operation instruction from it, and finally the terminal controls the beauty device to execute the beauty action according to the first operation instruction. The first operation instruction may include the specific operation the user is required to perform, for example, a looping cosmetic operation on the forehead, or a bottom-to-top lifting action on the left cheek.
In some embodiments, the second controller is specifically configured to:
prompting the user to perform the beauty operation on the beauty equipment according to the second guide data;
detecting whether a beauty operation is received on a button for executing a beauty action;
if the cosmetic operation is received, determining that a second operation instruction aiming at the cosmetic operation is received;
and controlling the beauty equipment to execute the corresponding beauty action according to the second operation instruction.
In the embodiment of the device, the terminal can control the beauty device to execute the beauty action according to the manual operation of the user. Specifically, the user is prompted to perform the beauty operation on the beauty device according to the second guide data; when the user operates a button or option on the beauty device according to the second guide data, the terminal receives the beauty operation behavior and generates a corresponding second operation instruction from it, and finally the terminal controls the beauty device to execute the beauty action according to the second operation instruction. Specifically, the second operation instruction may include a whitening operation, a cleaning cosmetic operation, or another beauty mode or beauty gear; in this embodiment, the user manually selects or sets the corresponding beauty mode or beauty gear, and after the beauty device receives the selection, the corresponding second operation instruction is executed.

In some embodiments, the second controller is configured to:
generating a third operation instruction for correspondingly controlling the beauty equipment according to the second guide data;
and controlling the beauty equipment to execute corresponding beauty actions according to the third operation instruction.
In the embodiment of the apparatus, the second guide data includes an indication graph indicating at least one of a mode selection operation, a gear selection operation, and a region selection operation. Different from the above embodiments, in this embodiment the second guide data may be set or selected through another intelligent application such as an APP or applet, which selects or sets the beauty mode, beauty gear, beauty region, or other related content, sparing the user the trouble of manually selecting the beauty mode, beauty gear, or beauty region on the beauty device. When the user operates the indication graph corresponding to the region selection operation in the second guide data, a third operation instruction corresponding to the indication graph is generated; the third operation instruction is an operation control instruction sent to the beauty device through the APP or applet in the intelligent terminal, for example an instruction to perform the beauty operation on the selected region. Finally, the terminal controls the beauty device to execute the corresponding beauty action according to the third operation instruction.
In some embodiments, further comprising a motion sensing component built into the cosmetic device; the motion sensing assembly is configured to:
monitoring the actual motion of the user through a motion sensing component;
the processor is further configured to:
judging whether the actual action is consistent with the standard action;
and if the actual action is inconsistent with the standard action, prompting the user of an operation error.
In the embodiment of the device, the terminal can monitor the actual action of the user through the motion sensing assembly; the actual action includes, for example, the operation speed, operation direction, operation start position, operation end position, intermediate operation stay position, or operation force of the beauty operation. When the monitored actual action is inconsistent with the standard action, the user is reminded of the operation error; the motion sensing assembly thus allows the actual action of the user to be monitored in real time, so that operation errors can be flagged promptly, preventing the user from executing a wrong beauty operation and improving the beauty effect.
In some embodiments, the obtainer is further for:
collecting the beauty actions of a user to generate a plurality of images;
the processor is further configured to:
identifying a plurality of images, and extracting action characteristics in the images;
determining the actual action of the user according to the action characteristics;
judging whether the actual action is consistent with the standard action;
if the actual action is inconsistent with the standard action, determining that the actual action of the user is wrong, and prompting the user of an operation error; or, the obtainer is further configured to:
collecting the beauty actions of a user to generate a plurality of images;
the processor is further configured to:
determining whether the cosmetic action of the user is consistent with the standard action according to the plurality of images and the neural network;
and if the cosmetic action is inconsistent with the standard action, determining that the cosmetic action of the user is wrong, and prompting the user of the operation error.
In the embodiment of the device, the terminal can identify the acquired image according to an image identification technology or a neural network model, so as to monitor the actual action of the user and remind the user when the actual cosmetic action of the user is wrong.
In some embodiments, the obtainer is further for:
collecting a beauty area of a user to generate a plurality of images;
the processor is further configured to:
identifying a plurality of images, and determining the actual beauty area of the user according to the face area characteristics;
judging whether the actual beauty area of the user is consistent with a preset beauty area or not;
if the actual cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of the operation error; or, the obtainer is further configured to:
collecting a beauty area of a user to generate a plurality of images;
the processor is further configured to:
determining whether the beauty area of the user is consistent with a preset beauty area or not according to the plurality of images and the neural network;
and if the cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of the operation error.
In the embodiment of the device, the terminal can identify the acquired images with an image identification technology or a neural network model and judge whether the actual beauty area of the user is consistent with the preset beauty area; if they are inconsistent, it determines that the actual beauty area of the user is wrong and prompts the user of the operation error. Therefore, when the actual beauty area is inconsistent with the preset beauty area, the user is promptly reminded that the current beauty area is wrong, and can further be reminded of the correct preset beauty area, helping the user find it and improving the beauty efficiency.
In some embodiments, prompting the user for the operational error comprises:
prompting the user of an operation error by voice; or,
displaying the dynamic actual operation of the user and displaying a prompt message about the operation error of the user.
In the embodiment of the device, it should be noted that, when the user performs the cosmetic operation, the terminal may provide a real-time dynamic operation guide over the image of the user and synchronously display the use condition, for example, whether the user applies the correct manipulation in a certain area; if the required manipulation is a lifting motion but the user performs a sliding motion, the terminal prompts the user promptly so that the cosmetic instrument can be used more accurately. The user can thus be reminded by voice or by a prompt message displayed on the terminal.
Specifically, when the terminal reminds the user by voice, the user can adjust the volume, and if the user does not hear or remember the voice content the first time, the user can choose to play the voice again, so that accurate prompt information is obtained from the voice reminder.
In some embodiments, further comprising a detector for:
ambient light detection and/or distance detection;
the processor is further configured to:
judging whether the current environment corresponding to the current face image accords with a preset beauty environment or not;
and if the facial image accords with the preset beauty environment, performing the operation of identifying the current facial image to obtain the target facial image.
In the embodiment of the device, through ambient light detection and distance detection, the terminal can more accurately judge whether the current environment of the user conforms to the preset beauty environment; for example, the distance between the terminal and the face of the user and/or the light of the environment where the user is located are detected, and the measured distance or light parameter is used to judge whether the current environment conforms to the preset beauty environment. If it does not, prompt information can be sent to inform the user; the prompt information may also include the requirements of the preset beauty environment, so that the user can look for a suitable environment accordingly. When the environment found conforms to the preset beauty environment, the current face image is identified and the target face image is obtained, yielding the real information of the face of the user. Therefore, by confirming that the current environment conforms to the preset beauty environment, real information such as the actual skin state of the face of the user can be obtained, and a corresponding beauty mode can then be recommended, improving the accuracy of the beauty recommendation and the user experience.
In some embodiments, the first guidance data includes a cosmetic timing; the processor is further configured to:
sending timing reminding information based on the beauty timing;
the beauty timing includes at least one of: the beauty timing of the indication graph for guiding the beauty operation, the beauty timing of the indication animation for guiding the beauty operation, the beauty timing of the indication text for guiding the beauty operation, and the beauty timing of the indication voice for guiding the beauty operation, each included in the first guide data.
In the embodiment of the device, in practical application, the beauty timing includes a count-up and a countdown; the countdown may be, for example, a digital countdown or a progress-bar countdown. Through the timing function, the user knows in real time how much beauty time has been spent and how much remains, which greatly relieves the psychological burden of feeling that beauty treatment takes too long and makes the process more interesting and controllable.
In some embodiments, the first guidance data and/or the second guidance data includes a cosmetic timing; the processor is further configured to:
sending timing reminding information based on the beauty timing; wherein,
the beauty timing includes at least one of the beauty timing of the indication graph for guiding the beauty operation, the beauty timing of the indication animation for guiding the beauty operation, the beauty timing of the indication text for guiding the beauty operation, and the beauty timing of the indication voice for guiding the beauty operation, included in the first guide data;
the beauty timing further includes, in correspondence with the second guide data, at least one of the beauty timing of the beauty mode selection operation for the beauty tutorial, the beauty timing of the beauty gear selection operation, and the beauty timing of the beauty area selection operation.
In an embodiment of the apparatus, the first guide data and the second guide data each have a beauty timing, which includes a count-up and a countdown; the countdown may be, for example, a digital countdown or a progress-bar countdown. Whenever the user performs a cosmetic operation, the corresponding beauty timing is provided, so that the user is reminded of the beauty time in time; the user can select the count-up or the countdown, or choose to operate without the beauty timing function.
Specifically, when the user selects the count-up or the countdown, the user sends a voice instruction to the terminal, for example an instruction to select the countdown; upon receiving it, the terminal parses the keywords contained in the voice instruction, searches a preset instruction set for a preset instruction matching the keywords, and selects the countdown according to that preset instruction.
In some embodiments, the beauty tutorial further comprises at least one of a pause control, a previous step control and a next step control; the processor is further configured to:
controlling the beauty equipment to stop running in response to the selection operation of the pause control;
responding to the selection operation of the previous step control, and controlling the beauty equipment to perform the beauty operation of the previous step;
and responding to the selection operation of the next step control, and controlling the beauty device to perform the next beauty operation.
In the embodiment of the device, when using the beauty device, the user can, according to his or her actual situation, manually control the beauty device to stop running, to perform the previous beauty operation, or to perform the next beauty operation.
In some embodiments, the display is further configured to:
and displaying the target face image comprising the first guide data on the terminal.
In this embodiment, the target face image is acquired through the terminal, and after the user selects at least one of the beauty mode, the beauty gear, or the beauty area, the target face image including the first guide data is displayed on the terminal.
In some embodiments, the control device based on the beauty equipment further comprises a photographing control and is configured to:
and responding to the selection operation of the photographing control, and acquiring the facial image at the moment corresponding to the selection operation.
In the embodiment of the device, photographs can be taken at any time during the guided use of the beauty equipment, and the photographs taken can be saved and/or shared.
The control device based on the beauty equipment provided by the embodiment of the application has the same technical features as the control method based on the beauty equipment provided by the above embodiment, so it can solve the same technical problems and achieve the same technical effects.
Example three:
as shown in fig. 5, an electronic device 500 includes a memory 501 and a processor 502, where the memory stores a computer program that can run on the processor, and the processor executes the computer program to implement the steps of the method provided in the foregoing embodiment.
Referring to fig. 5, the electronic device further includes: a bus 503 and a communication interface 504, and the processor 502, the communication interface 504 and the memory 501 are connected through the bus 503; the processor 502 is configured to execute executable modules, such as computer programs, stored in the memory 501.
The memory 501 may include a high-speed Random Access Memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 504 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, and the like can be used.
Bus 503 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The memory 501 is used for storing a program, and the processor 502 executes the program after receiving an execution instruction, and the method performed by the apparatus defined by the process disclosed in any of the foregoing embodiments of the present application may be applied to the processor 502, or implemented by the processor 502.
The processor 502 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 502. The processor 502 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well established in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 501, and the processor 502 reads the information in the memory 501 and completes the steps of the method in combination with its hardware.
Example four:
corresponding to the control method based on the beauty treatment equipment, the embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores machine executable instructions, and when the computer executable instructions are called and executed by the processor, the computer executable instructions cause the processor to execute the steps of the control method based on the beauty treatment equipment.
The control device based on the beauty treatment equipment provided by the embodiment of the application can be specific hardware on the equipment or software or firmware installed on the equipment. The device provided by the embodiment of the present application has the same implementation principle and technical effect as the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments where no part of the device embodiments is mentioned. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the foregoing systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
For another example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the control method based on the beauty appliance according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numbers and letters refer to like items in the figures; thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions of some technical features, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the scope of the embodiments of the present application, and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (46)
1. A control method based on a cosmetic device, characterized by comprising:
acquiring at least one target face image;
generating first guide data corresponding to a beauty tutorial according to the at least one target face image;
displaying first guide data corresponding to the beauty tutorial on the at least one target facial image.
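By way of illustration only, the following minimal sketch shows one possible realization of the three steps of claim 1 in Python with OpenCV; the camera source, the overlay content (an arrow and a text hint) and all function names are assumptions, not part of the claimed method.

```python
# Hypothetical sketch of claim 1: acquire a face image, derive guide data,
# and display the guide data drawn over the image. Overlay content and
# names are invented for illustration.
import cv2

def acquire_target_face_images(camera_index=0, n_frames=1):
    """Step 1: acquire at least one target face image from a camera."""
    cap = cv2.VideoCapture(camera_index)
    frames = []
    for _ in range(n_frames):
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

def generate_first_guide_data(image):
    """Step 2: derive guide data (here, one massage stroke) from the image."""
    h, w = image.shape[:2]
    return {"arrow": ((w // 4, h // 2), (w // 4, h // 3)),        # stroke path
            "text": ("Glide upward, 3x", (w // 4, h // 3 - 10))}  # hint

def display_guide_on_image(image, guide):
    """Step 3: draw the guide data onto the target face image and show it."""
    cv2.arrowedLine(image, guide["arrow"][0], guide["arrow"][1], (0, 255, 0), 2)
    cv2.putText(image, guide["text"][0], guide["text"][1],
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("beauty tutorial", image)
    cv2.waitKey(0)

for img in acquire_target_face_images():
    display_guide_on_image(img, generate_first_guide_data(img))
```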
2. The cosmetic device-based control method according to claim 1, wherein the at least one target facial image comprises a target facial video or a real-time continuous target facial image.
3. The cosmetic device-based control method according to claim 1, wherein the first guidance data includes at least one of a guidance graphic for guiding a cosmetic operation, a guidance animation for guiding a cosmetic operation, a guidance text for guiding a cosmetic operation, and a guidance voice for guiding a cosmetic operation.
4. The cosmetic device-based control method according to claim 3, further comprising, before the generating first guide data corresponding to a beauty tutorial from the at least one target facial image: generating second guide data corresponding to a result of a selection operation in response to at least one of a beauty mode selection operation, a beauty gear selection operation, a beauty area selection operation, a beauty technique selection operation, and a skin care product selection operation for the beauty tutorial; the second guide data includes at least one of:
an indication graphic indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication animation indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication character indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication voice indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation.
5. The cosmetic-device-based control method according to claim 1, wherein the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the method further comprises the following steps:
and recommending a corresponding beauty mode and/or a corresponding beauty gear to the user based on the preset area.
6. The cosmetic-device-based control method according to claim 1, wherein the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck; the method further comprises the following steps:
identifying at least one preset area, and extracting skin features of the at least one preset area;
determining a skin type corresponding to the skin features;
and recommending the corresponding beauty mode and/or the corresponding beauty gear to the user based on the skin type.
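A minimal sketch of one possible pipeline for claim 6 follows: extract skin features per preset region, map them to a skin type, then look up a recommended beauty mode and gear. The feature definitions, thresholds and mode table are invented for illustration and are not prescribed by the claim.

```python
# Hypothetical recommendation pipeline for claim 6. Features, thresholds
# and the skin-type-to-mode table are assumptions.
import cv2
import numpy as np

SKIN_TYPE_MODES = {          # assumed mapping: skin type -> (mode, gear)
    "oily": ("deep-clean", 3),
    "dry": ("moisturize", 1),
    "normal": ("daily-care", 2),
}

def extract_skin_features(region_bgr):
    """Mean brightness and a crude shininess proxy for one preset region."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    brightness = float(np.mean(gray))
    shininess = float(np.mean(gray > 200))  # fraction of specular pixels
    return brightness, shininess

def determine_skin_type(brightness, shininess):
    if shininess > 0.05:
        return "oily"
    if brightness < 90:
        return "dry"
    return "normal"

def recommend(region_bgr):
    skin_type = determine_skin_type(*extract_skin_features(region_bgr))
    mode, gear = SKIN_TYPE_MODES[skin_type]
    return skin_type, mode, gear

# A synthetic patch stands in for a real preset region such as the T-zone.
patch = np.full((64, 64, 3), 120, dtype=np.uint8)
print(recommend(patch))  # ('normal', 'daily-care', 2)
```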
7. The cosmetic device-based control method according to claim 5 or 6, wherein the first guide data includes at least one of:
operation speed guide data of a cosmetic operation and/or operation direction guide data of a cosmetic operation for at least one of the preset regions;
at least one guide data of operation start position data of a cosmetic operation, operation end position data of a cosmetic operation, and intermediate operation stop position data between the operation start position data and the operation end position data for at least one of the preset regions;
operation force guide data of a cosmetic operation for at least one of the preset regions.
8. The cosmetic device-based control method according to claim 1, wherein the acquiring at least one target face image comprises:
acquiring a current face image through a camera, and identifying the current face image to obtain a target face image; or,
photographing a human face through the camera, and identifying a picture obtained by photographing to obtain a target face image; or,
receiving a picture which is uploaded by a user and contains a human face, and identifying the picture to obtain a target face image; or,
receiving a virtual human face generated by a computer, and determining the virtual human face as the target face image.
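As a sketch of the first branch of claim 8 (capture a frame from the camera and recognize the face in it), the snippet below uses OpenCV's Haar cascade; the claim does not prescribe a particular recognizer, so the detector choice is an assumption.

```python
# Hypothetical sketch of claim 8, first branch: grab a camera frame and
# identify the face region to obtain the target face image.
import cv2

def acquire_and_identify(camera_index=0):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None                      # no frame captured
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                      # no face recognized
    x, y, w, h = faces[0]
    return frame[y:y + h, x:x + w]       # the target face image
```

The other branches (uploaded pictures, computer-generated virtual faces) would feed the same recognition step from a different source.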
9. The cosmetic device-based control method according to claim 1, further comprising:
controlling a cosmetic device to perform a cosmetic action based on the first guideline data.
10. The cosmetic device-based control method according to claim 4, further comprising: controlling a cosmetic device to perform a cosmetic action based on the second guidance data.
11. The cosmetic device-based control method according to claim 9,
the controlling the cosmetic device to perform a cosmetic action based on the first guideline data comprises:
prompting a user to perform a cosmetic operation on the cosmetic device according to the first guide data;
detecting whether the cosmetic operation is received on a button for executing a cosmetic action;
if the cosmetic operation is received, determining that a first operation instruction for the cosmetic operation is received;
and controlling the beauty device to execute a beauty action according to the first operation instruction.
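The button-gated flow of claim 11 could look like the sketch below, with a console prompt standing in for the physical button on the beauty device; the prompt text and stubbed action are assumptions.

```python
# Hypothetical sketch of claim 11: prompt per the first guide data, wait
# for the (simulated) button, treat a press as the first operation
# instruction, then perform the cosmetic action.
def control_by_first_guide_data(guide_text):
    print(f"Guide: {guide_text}")                    # prompt the user
    pressed = input("Press ENTER to perform the cosmetic operation ") == ""
    if pressed:                                      # operation received
        print("First operation instruction received; executing action.")

control_by_first_guide_data("Glide the device upward along the left cheek.")
```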
12. The cosmetic device-based control method according to claim 10,
the controlling the cosmetic device to perform a cosmetic action based on the second guidance data includes:
prompting a user to perform a cosmetic operation on the cosmetic device according to the second guide data;
detecting whether the cosmetic operation is received on a button for executing a cosmetic action;
if the cosmetic operation is received, determining that a second operation instruction for the cosmetic operation is received;
and controlling the beauty equipment to execute corresponding beauty actions according to the second operation instruction.
13. The cosmetic device-based control method according to claim 10, wherein the controlling a cosmetic device to perform a cosmetic action based on the second guidance data comprises:
generating a third operation instruction for correspondingly controlling the beauty equipment according to the second guide data;
and controlling the beauty equipment to execute corresponding beauty actions according to the third operation instruction.
14. The cosmetic device-based control method according to claim 1, wherein a motion sensing component is built in the cosmetic device; the method further comprises the following steps:
monitoring, by the motion sensing component, an actual motion of a user;
judging whether the actual action is consistent with a standard action;
and if the actual action is inconsistent with the standard action, prompting the user of an operation error.
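A minimal sketch of claim 14's consistency check follows, with mocked sensor traces standing in for readings from the built-in motion sensing component (e.g. an IMU); the tolerance value is an assumption.

```python
# Hypothetical sketch of claim 14: compare the user's actual motion trace
# against a standard action trace, point by point.
import math

def actions_consistent(actual, standard, tol=0.2):
    """actual, standard: lists of (ax, ay, az) acceleration samples."""
    if len(actual) != len(standard):
        return False
    return all(math.dist(a, s) <= tol for a, s in zip(actual, standard))

standard = [(0.0, 0.0, 9.8)] * 3                     # assumed standard action
actual = [(0.0, 0.1, 9.8), (0.0, 0.0, 9.7), (0.3, 0.0, 9.8)]
if not actions_consistent(actual, standard):
    print("Operation error: please follow the guided motion.")  # prompt
```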
15. The cosmetic device-based control method according to claim 1, further comprising:
collecting the beauty actions of a user to generate a plurality of images;
identifying a plurality of images and extracting action characteristics in the images;
determining the actual action of the user according to the action characteristics;
judging whether the actual action is consistent with a standard action;
if the actual action is inconsistent with the standard action, determining that the actual action of the user is wrong, and prompting the user of an operation error; or,
collecting the beauty actions of a user to generate a plurality of images;
determining whether the cosmetic action of the user is consistent with the standard action according to the plurality of images and the neural network;
and if the cosmetic action is inconsistent with the standard action, determining that the cosmetic action of the user is wrong, and prompting the user of an operation error.
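For the neural-network branch of claim 15, the sketch below uses a tiny untrained network as a stand-in for whatever trained action classifier a real implementation would deploy; the input size, architecture and threshold are assumptions.

```python
# Hypothetical sketch of claim 15, second branch: pool features over the
# captured frames and let a (stand-in) neural network decide whether the
# cosmetic action matches the standard action.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 16)), rng.normal(size=(16, 1))  # untrained

def action_matches_standard(frames):
    """frames: list of 8x8 grayscale arrays from the collected images."""
    feats = np.stack([f.astype(np.float32).ravel() for f in frames]).mean(0)
    hidden = np.tanh(feats @ W1)
    score = 1.0 / (1.0 + np.exp(-(hidden @ W2)[0]))  # sigmoid output
    return score > 0.5

frames = [rng.random((8, 8)) for _ in range(5)]
if not action_matches_standard(frames):
    print("Operation error: action does not match the standard action.")
```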
16. The cosmetic device-based control method according to claim 1, further comprising:
collecting a beauty area of a user to generate a plurality of images;
identifying the images, and determining the actual beauty area of the user according to the face area characteristics;
judging whether the actual beauty area of the user is consistent with a preset beauty area or not;
if the actual beauty area is inconsistent with the preset beauty area, determining that the actual beauty area of the user is wrong, and prompting the user of an operation error; or,
collecting a beauty area of a user to generate a plurality of images;
determining whether the beauty area of the user is consistent with a preset beauty area or not according to the plurality of images and the neural network;
and if the actual cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of an operation error.
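Claim 16's area check could be realized by comparing the actual and preset beauty areas with intersection-over-union, as in the sketch below; the bounding boxes and the 0.5 threshold are assumptions.

```python
# Hypothetical sketch of claim 16: compare the region the user actually
# treats against the preset cosmetic area by intersection-over-union.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

actual, preset = (10, 60, 60, 110), (50, 60, 100, 120)  # assumed boxes
if iou(actual, preset) < 0.5:
    print("Operation error: please treat the preset cosmetic area.")
```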
17. The cosmetic device-based control method according to any one of claims 14 to 16, wherein the prompting of the user operation error comprises:
prompting the user of the operation error by voice; or,
displaying the dynamic actual operation of the user, and displaying a prompt message about the operation error of the user.
18. The cosmetic device-based control method according to claim 8, further comprising:
judging whether the current environment corresponding to the current face image accords with a preset beauty environment or not through ambient light detection and/or distance detection;
and if the preset beauty environment is met, performing the operation of identifying the current face image to obtain a target face image.
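One way to realize claim 18's environment gate is sketched below: ambient light is estimated from frame brightness and distance from the detected face size; both thresholds are assumptions, not values from the claim.

```python
# Hypothetical sketch of claim 18: check the current environment via
# ambient-light and distance proxies before running face identification.
import cv2
import numpy as np

def environment_ok(frame, face_box, min_brightness=60.0, min_face_frac=0.1):
    brightness = float(np.mean(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
    x, y, w, h = face_box
    face_frac = (w * h) / (frame.shape[0] * frame.shape[1])  # distance proxy
    return brightness >= min_brightness and face_frac >= min_face_frac

frame = np.zeros((480, 640, 3), dtype=np.uint8)          # dark synthetic frame
print(environment_ok(frame, (200, 100, 200, 240)))       # False: too dark
```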
19. The cosmetic device-based control method according to claim 1, wherein the first guide data includes a beauty timing; the method further comprises the following steps:
sending timing reminder information based on the beauty timing;
wherein the beauty timing includes at least one of a beauty timing of an indication graphic for guiding a beauty operation, a beauty timing of an indication animation for guiding a beauty operation, a beauty timing of an indication character for guiding a beauty operation, and a beauty timing of an indication voice for guiding a beauty operation, which are included in the first guide data.
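A minimal sketch of claim 19's timing reminders follows; the two-step tutorial and its durations are hypothetical.

```python
# Hypothetical sketch of claim 19: each guide step carries a beauty timing,
# and a reminder is sent when that timing elapses.
import time

TUTORIAL_STEPS = [("forehead glide", 5), ("cheek circles", 8)]  # (step, seconds)

def run_with_timing_reminders(steps):
    for name, seconds in steps:
        print(f"Start: {name} ({seconds}s)")
        time.sleep(seconds)                              # wait out the timing
        print(f"Reminder: {name} finished, move to the next step.")

run_with_timing_reminders(TUTORIAL_STEPS)
```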
20. The cosmetic device-based control method according to claim 4, wherein the first guide data and/or the second guide data includes a beauty timing; the method further comprises the following steps:
sending timing reminder information based on the beauty timing; wherein,
the beauty timing comprises at least one of a beauty timing of an indication graphic for guiding a beauty operation, a beauty timing of an indication animation for guiding a beauty operation, a beauty timing of an indication character for guiding a beauty operation, and a beauty timing of an indication voice for guiding a beauty operation, which are included in the first guide data;
the beauty timing further comprises at least one of a beauty timing of a beauty mode selection operation, a beauty timing of a beauty gear selection operation, and a beauty timing of a beauty area selection operation for the beauty tutorial corresponding to the second guide data.
21. The cosmetic device-based control method according to claim 1, wherein the beauty tutorial further comprises at least one of a pause control, a previous step control, and a next step control; the method further comprises the following steps:
controlling the beauty device to stop running in response to the selection operation of the pause control;
in response to the selection operation of the previous step control, controlling the beauty device to perform the beauty operation of the previous step;
and responding to the selection operation of the next step control, and controlling the beauty equipment to perform the next beauty operation.
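Claim 21's pause, previous-step and next-step controls can be modeled as a small state machine, as in the sketch below; the step names and device-control prints are stubs for whatever interface the beauty device exposes.

```python
# Hypothetical sketch of claim 21: tutorial playback controls driving the
# beauty device through its step sequence.
class TutorialController:
    def __init__(self, steps):
        self.steps, self.index, self.paused = steps, 0, False

    def pause(self):                 # pause control: device stops running
        self.paused = True
        print("Device stopped.")

    def previous_step(self):         # previous-step control
        self.index = max(0, self.index - 1)
        self.paused = False
        print(f"Performing: {self.steps[self.index]}")

    def next_step(self):             # next-step control
        self.index = min(len(self.steps) - 1, self.index + 1)
        self.paused = False
        print(f"Performing: {self.steps[self.index]}")

ctl = TutorialController(["cleanse", "massage", "moisturize"])
ctl.next_step()        # Performing: massage
ctl.pause()            # Device stopped.
ctl.previous_step()    # Performing: cleanse
```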
22. The cosmetic device-based control method according to claim 1, further comprising:
and displaying the target face image comprising the first guide data on the terminal.
23. A control device based on a cosmetic apparatus, comprising:
an acquirer for acquiring at least one target face image;
the processor is used for generating first guide data corresponding to a beauty tutorial according to the at least one target face image;
a display for displaying first guide data corresponding to the beauty tutorial on the at least one target facial image.
24. The cosmetic device-based control apparatus of claim 23, wherein the at least one target facial image comprises a target facial video or a real-time continuous target facial image.
25. The cosmetic device-based control apparatus according to claim 23, wherein the first guidance data includes at least one of a guidance graphic for guiding a cosmetic operation, a guidance animation for guiding a cosmetic operation, a guidance text for guiding a cosmetic operation, and a guidance voice for guiding a cosmetic operation.
26. The cosmetic device-based control apparatus of claim 25, further comprising a responder to:
before the processor generates the first guide data corresponding to a beauty tutorial according to the at least one target face image, generating, in response to at least one of a beauty mode selection operation, a beauty gear selection operation, a beauty area selection operation, a beauty technique selection operation and a skin care product selection operation for the beauty tutorial, second guide data corresponding to a result of the selection operation; the second guide data includes at least one of:
an indication graphic indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication animation indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication character indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation;
an indication voice indicating at least one of the mode selection operation, the gear selection operation, and the region selection operation.
27. The cosmetic device-based control apparatus according to claim 23,
the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck;
the processor is further configured to:
and recommending a corresponding beauty mode and/or a corresponding beauty gear to the user based on the preset area.
28. The cosmetic device-based control apparatus according to claim 23, wherein the target facial image is divided into at least one preset region, the at least one preset region comprising: at least one of the left cheek, right cheek, T-zone, periocular, perilabial, chin, forehead and neck;
the processor is further configured to:
identifying at least one preset area, and extracting skin features of the at least one preset area;
determining a skin type corresponding to the skin characteristic;
and recommending the corresponding beauty mode and/or the corresponding beauty gear to the user based on the skin type.
29. The cosmetic device-based control apparatus according to claim 27 or 28, wherein the first guidance data comprises at least one of:
operation speed guide data of a cosmetic operation and/or operation direction guide data of a cosmetic operation for at least one of the preset regions;
at least one guide data of operation start position data of a cosmetic operation, operation end position data of a cosmetic operation, and intermediate operation stop position data between the operation start position data and the operation end position data for at least one of the preset regions;
operation force guide data of a cosmetic operation for at least one of the preset regions.
30. The cosmetic device-based control apparatus of claim 23, wherein the acquirer is configured to:
acquiring a current face image through a camera, and identifying the current face image to obtain a target face image; or,
photographing a human face through the camera, and identifying a picture obtained by photographing to obtain a target face image; or,
receiving a picture which is uploaded by a user and contains a human face, and identifying the picture to obtain a target face image; or,
receiving a virtual human face generated by a computer, and determining the virtual human face as the target face image.
31. The cosmetic device-based control apparatus of claim 23, further comprising a first controller to:
controlling a cosmetic device to perform a cosmetic action based on the first guideline data.
32. The cosmetic device-based control apparatus of claim 26, further comprising a second controller to:
controlling a cosmetic device to perform a cosmetic action based on the second guidance data.
33. The cosmetic device-based control apparatus according to claim 31, wherein the first controller is specifically configured to:
prompting a user to perform a cosmetic operation on the cosmetic device according to the first guide data;
detecting whether the cosmetic operation is received on a button for executing a cosmetic action;
if the cosmetic operation is received, determining that a first operation instruction for the cosmetic operation is received;
and controlling the beauty device to execute a beauty action according to the first operation instruction.
34. The cosmetic device-based control apparatus according to claim 32, wherein the second controller is specifically configured to:
prompting a user to perform a cosmetic operation on the cosmetic device according to the second guide data;
detecting whether the cosmetic operation is received on a button for executing a cosmetic action;
if the cosmetic operation is received, determining that a second operation instruction for the cosmetic operation is received;
and controlling the beauty equipment to execute corresponding beauty actions according to the second operation instruction.
35. The cosmetic device-based control apparatus according to claim 32,
the second controller is to:
generating a third operation instruction for correspondingly controlling the beauty equipment according to the second guide data;
and controlling the beauty equipment to execute corresponding beauty actions according to the third operation instruction.
36. The cosmetic device-based control apparatus of claim 23, further comprising a motion sensing component built into the cosmetic device; the motion sensing component is configured to:
monitoring, by the motion sensing component, an actual motion of a user;
the processor is further configured to:
judging whether the actual action is consistent with the standard action;
and if the actual action is inconsistent with the standard action, prompting the user of an operation error.
37. The cosmetic device-based control apparatus of claim 23, wherein the acquirer is further configured to:
collecting the beauty actions of a user to generate a plurality of images;
the processor is further configured to:
identifying a plurality of images and extracting action characteristics in the images;
determining the actual action of the user according to the action characteristics;
judging whether the actual action is consistent with a standard action;
if the actual action is inconsistent with the standard action, determining that the actual action of the user is wrong, and prompting the user of an operation error; or, the acquirer is further configured to:
collecting the beauty actions of a user to generate a plurality of images;
the processor is further configured to:
determining whether the cosmetic action of the user is consistent with the standard action according to the plurality of images and the neural network;
and if the cosmetic action is inconsistent with the standard action, determining that the cosmetic action of the user is wrong, and prompting the user of an operation error.
38. The cosmetic device-based control apparatus of claim 23, wherein the acquirer is further configured to:
collecting a beauty area of a user to generate a plurality of images;
the processor is further configured to:
identifying the images, and determining the actual beauty area of the user according to the face area characteristics;
judging whether the actual beauty area of the user is consistent with a preset beauty area or not;
if the actual beauty area is inconsistent with the preset beauty area, determining that the actual beauty area of the user is wrong, and prompting the user of an operation error; or, the acquirer is further configured to:
collecting a beauty area of a user to generate a plurality of images;
the processor is further configured to:
determining whether the beauty area of the user is consistent with a preset beauty area or not according to the plurality of images and the neural network;
and if the actual cosmetic area is inconsistent with the preset cosmetic area, determining that the actual cosmetic area of the user is wrong, and prompting the user of an operation error.
39. The cosmetic device-based control apparatus according to any one of claims 36-38, wherein the prompting of the user operation error comprises:
prompting the user of the operation error by voice; or,
displaying the dynamic actual operation of the user, and displaying a prompt message about the operation error of the user.
40. The cosmetic device-based control apparatus of claim 30, further comprising a detector for:
ambient light detection and/or distance detection;
the processor is further configured to:
judging whether the current environment corresponding to the current face image accords with a preset beauty environment or not;
and if the preset beauty environment is met, performing the operation of identifying the current face image to obtain a target face image.
41. The cosmetic device-based control apparatus of claim 23, wherein the first guide data includes a beauty timing; the processor is further configured to:
sending timing reminder information based on the beauty timing;
wherein the beauty timing includes at least one of a beauty timing of an indication graphic for guiding a beauty operation, a beauty timing of an indication animation for guiding a beauty operation, a beauty timing of an indication character for guiding a beauty operation, and a beauty timing of an indication voice for guiding a beauty operation, which are included in the first guide data.
42. The cosmetic device-based control apparatus according to claim 26, wherein the first guide data and/or the second guide data includes a beauty timing; the processor is further configured to:
sending timing reminder information based on the beauty timing; wherein,
the beauty timing comprises at least one of a beauty timing of an indication graphic for guiding a beauty operation, a beauty timing of an indication animation for guiding a beauty operation, a beauty timing of an indication character for guiding a beauty operation, and a beauty timing of an indication voice for guiding a beauty operation, which are included in the first guide data;
the beauty timing further comprises at least one of a beauty timing of a beauty mode selection operation, a beauty timing of a beauty gear selection operation, and a beauty timing of a beauty area selection operation for the beauty tutorial corresponding to the second guide data.
43. The cosmetic device-based control apparatus of claim 23, wherein the cosmetic tutorial further comprises at least one of a pause control, a previous step control, and a next step control; the processor is further configured to:
controlling the beauty device to stop running in response to the selection operation of the pause control;
in response to the selection operation of the previous step control, controlling the beauty device to perform the beauty operation of the previous step;
and responding to the selection operation of the next step control, and controlling the beauty equipment to perform the next beauty operation.
44. The cosmetic device-based control apparatus of claim 23, wherein the display is further configured to:
and displaying the target face image comprising the first guide data on the terminal.
45. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method of any of claims 1 to 22 when executing the computer program.
46. A computer readable storage medium having stored thereon computer executable instructions which, when invoked and executed by a processor, cause the processor to execute the method of any one of claims 1 to 22.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011587624.XA CN112749634A (en) | 2020-12-28 | 2020-12-28 | Control method and device based on beauty equipment and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112749634A true CN112749634A (en) | 2021-05-04 |
Family
ID=75646439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011587624.XA Pending CN112749634A (en) | 2020-12-28 | 2020-12-28 | Control method and device based on beauty equipment and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112749634A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114299576A (en) * | 2021-12-24 | 2022-04-08 | 广州星际悦动股份有限公司 | Oral cavity cleaning area identification method, tooth brushing information input system and related device |
CN117651261A (en) * | 2022-09-29 | 2024-03-05 | 广州星际悦动股份有限公司 | Output control method, device, equipment and storage medium of beauty equipment |
WO2025000658A1 (en) * | 2023-06-30 | 2025-01-02 | 广东花至美容科技有限公司 | Beauty care device guiding method and apparatus, external device, and beauty care system |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011008397A (en) * | 2009-06-24 | 2011-01-13 | Sony Ericsson Mobilecommunications Japan Inc | Makeup support apparatus, makeup support method, makeup support program and portable terminal device |
US20170256084A1 (en) * | 2014-09-30 | 2017-09-07 | Tcms Transparent Beauty, Llc | Precise application of cosmetic looks from over a network environment |
US20170255478A1 (en) * | 2016-03-03 | 2017-09-07 | Perfect Corp. | Systems and methods for simulated application of cosmetic effects |
CN107427227A (en) * | 2015-06-15 | 2017-12-01 | 哈伊姆·埃米尔 | System and method for adapting skin treatment |
CN108009893A (en) * | 2017-12-21 | 2018-05-08 | 广东协达信息科技有限公司 | A kind of cosmetic apparatus |
CN108256500A (en) * | 2018-02-05 | 2018-07-06 | 广东欧珀移动通信有限公司 | Information recommendation method and device, terminal and storage medium |
CN108292418A (en) * | 2015-12-15 | 2018-07-17 | 日本时尚造型师协会 | Information provider unit and information providing method |
US20180268572A1 (en) * | 2015-12-25 | 2018-09-20 | Panasonic Intellectual Property Management Co., Ltd. | Makeup part generating apparatus, makeup part utilizing apparatus, makeup part generating method, makeup part utilizing method, non-transitory computer-readable recording medium storing makeup part generating program, and non-transitory computer-readable recording medium storing makeup part utilizing program |
CN109784281A (en) * | 2019-01-18 | 2019-05-21 | 深圳壹账通智能科技有限公司 | Products Show method, apparatus and computer equipment based on face characteristic |
CN110297720A (en) * | 2018-03-22 | 2019-10-01 | 卡西欧计算机株式会社 | Notification device, notification method, and medium storing notification program |
CN110575001A (en) * | 2018-06-11 | 2019-12-17 | 卡西欧计算机株式会社 | Display control device, display control method, and medium storing display control program |
CN111507478A (en) * | 2020-04-08 | 2020-08-07 | 秦国洪 | An AI makeup artist system |
CN111557644A (en) * | 2020-04-22 | 2020-08-21 | 深圳市锐吉电子科技有限公司 | Skin care method and device based on intelligent mirror equipment and skin care equipment |
CN111651040A (en) * | 2020-05-27 | 2020-09-11 | 华为技术有限公司 | Interaction method of electronic equipment for skin detection and electronic equipment |
CN111968248A (en) * | 2020-08-11 | 2020-11-20 | 深圳追一科技有限公司 | Intelligent makeup method and device based on virtual image, electronic equipment and storage medium |
2020-12-28: Application filed; CN CN202011587624.XA, patent CN112749634A (en), status: Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |