
CN119179915B - A method for identifying operation mode of facial massager - Google Patents


Info

Publication number
CN119179915B
CN119179915B (grant of application CN202411697522.1A)
Authority
CN
China
Prior art keywords
sensing data
data
value
output
skin
Prior art date
Legal status
Active
Application number
CN202411697522.1A
Other languages
Chinese (zh)
Other versions
CN119179915A (en)
Inventor
尚方明
张新萍
王慧裕
魏静静
尹雪兰
Current Assignee
5th Central Hospital Of Tianjin
Original Assignee
5th Central Hospital Of Tianjin
Priority date
Filing date
Publication date
Application filed by 5th Central Hospital Of Tianjin
Priority to CN202411697522.1A
Publication of CN119179915A
Application granted
Publication of CN119179915B
Legal status: Active

Classifications

    • G06F 18/23: Pattern recognition; analysing; clustering techniques
    • A61B 5/0033: Features or image-related aspects of imaging apparatus; arrangements of imaging apparatus in a room
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis
    • A61H 7/00: Devices for suction-kneading massage; devices for massaging the skin by rubbing or brushing not otherwise provided for
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06T 7/0012: Biomedical image inspection
    • G06V 10/764: Image or video recognition using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition using neural networks
    • A61H 2201/5007: Control means thereof, computer controlled
    • G06T 2207/30088: Skin; dermal


Abstract


The present invention proposes an operation mode recognition method for a facial massager, relating to the field of pattern recognition technology. A camera system generates a picture of the user's facial skin, and an image analysis module recognizes characteristic values of the skin state. Sensing data sets of each type are collected from the facial massager under different operation modes and clustered separately to form a pattern mapping list. The characteristic values of the skin state are input into a pattern recognition neural network model, which outputs optimal values for multiple types of sensing data; the operation mode corresponding to these optimal values is then queried in the pattern mapping list, identifying the optimal operation mode. The massage mode analysis process of the present invention is faster, the final massage mode identified corresponds accurately to the user's skin state, and the massage operation mode of the facial massager can be recognized automatically from the user's skin state to achieve the best massage effect.

Description

Operation mode identification method of facial massager
Technical Field
The invention provides an operation mode identification method of a face massager, and relates to the technical field of mode identification.
Background
The facial massager is a beauty treatment apparatus intended to improve blood circulation in the facial skin, relieve muscular tension, promote absorption of skin-care products, and improve the facial skin through various massage techniques. Its operation modes include vibration massage, intelligent sensing, intelligently controlled massage intensity and mode, and facial roller massage. Vibration massage uses high-frequency vibration to stimulate the facial muscles and skin, promoting blood circulation and lymphatic drainage; intelligent sensing combines sensors such as pressure and temperature sensors to meet the requirements of different skin states; and facial roller massage simulates traditional facial massage techniques, adopting a roller design to massage facial acupoints and promote blood circulation.
The intelligent sensing technology of the facial massager can adjust the massage mode according to the skin condition of the user. Skin detection sensors integrated in the massager (such as capacitance, pressure and temperature sensors) collect skin data in real time, including the moisture content, grease level, temperature, elasticity and pore size of the skin; the collected data are analysed to evaluate the skin condition and recommend a suitable massage mode. However, the analysis process of prior-art facial massagers is too complex and slow, resulting in a poor user experience, and the massage mode does not correspond accurately enough to the user's skin state, leading to over-massage or insufficient massage.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a method for identifying an operation mode of a facial massager, comprising the following steps:
S1, generating a facial skin picture of a user by an image pickup system, and identifying a characteristic value of a skin state by an image analysis module;
S2, collecting various sensing data sets of the facial massager in different operation modes, and respectively clustering the various sensing data sets to form a mode mapping list;
S3, inputting the characteristic value of the skin state into a pattern recognition neural network model, and outputting optimal values of multiple types of sensing data;
And S4, inquiring the operation modes corresponding to the optimal values of the multi-class sensing data according to the mode mapping list, and identifying the optimal operation mode.
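The four steps above can be sketched end to end as follows (illustrative Python with the camera, image analysis and trained network stubbed out; all names are hypothetical):

```python
def lookup(ranges, value):
    """S4 helper: find the operation mode whose value range contains `value`."""
    for mode, (lo, hi) in ranges.items():
        if lo <= value <= hi:
            return mode
    return None

def identify_operation_mode(skin_features, nn_forward, mode_map):
    """Glue for S3-S4: feed skin-state feature values to the trained pattern
    recognition network to obtain an optimal value per sensing-data type (S3),
    then query the mode mapping list for each optimal value (S4)."""
    optimal = nn_forward(skin_features)   # one optimal value per sensing-data type
    modes = [lookup(mode_map[t], v) for t, v in enumerate(optimal)]
    # If every sensing-data type points at the same mode it is selected directly;
    # otherwise a priority rule (described further below) must break the tie.
    return modes[0] if len(set(modes)) == 1 else None
```

When the sensing-data types disagree, the function returns no mode and a priority rule over the mapping-list ranges decides.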
Further, the step S2 includes:
S21, setting a maximum area radius and a point threshold value for each type of sensing data set respectively;
S22, traversing all data in the sensing data set, and marking all data as cluster core points or containing points according to the maximum area radius and the point threshold;
S23, dividing the sensing data set according to the marking result to generate a plurality of final divided areas and data point sets in the areas.
Further, the sensing data set is preliminarily segmented according to the marking result of step S22 to obtain a plurality of preliminary segmentation areas. The average, over all preliminary segmentation areas, of the maximum length from each area's cluster core point to the containing points in that area is taken as the cutoff length; boundary area point sets for every two preliminary segmentation areas are determined based on the cutoff length; the average of the local densities of the sub-point sets in all boundary area point sets is taken as the density critical value; and a plurality of final segmentation areas are generated.
Further, in the step S3, the characteristic values $\{tx_1, \dots, tx_i, \dots, tx_n\}$ of the skin state are input to the pattern recognition neural network model, where $n$ is the number of characteristic values of the skin state, and the output value of each neuron in the hidden layer of the pattern recognition neural network model is calculated as:

$$s_j = \sum_{i=1}^{n} w_{ji}\, tx_i - \theta_j$$

$$b_j = f(s_j)$$

where $s_j$ is the output intermediate value of the $j$-th hidden-layer neuron, $w_{ji}$ are its connection weights, $\theta_j$ is its threshold, $f$ is the activation function, $b_j$ is its output value, and $j = 1, 2, \dots, p$, with $p$ the number of hidden-layer neurons.
Further, in the step S3, the hidden-layer output sequence $\{b_1, \dots, b_j, \dots, b_p\}$ is input to the output layer of the pattern recognition neural network model, and the output of each neuron in the output layer is:

$$s_k = \sum_{j=1}^{p} v_{kj}\, b_j - \theta_k$$

$$y_k = f(s_k)$$

where $s_k$ is the output intermediate value of the $k$-th output-layer neuron, $\theta_k$ is its threshold, $y_k$ is its output value, the $p$ connection weights of the $k$-th output-layer neuron are $\{v_{k1}, \dots, v_{kj}, \dots, v_{kp}\}$, and $k = 1, 2, \dots, q$, with $q$ the number of sensing-data types.
Further, in the step S4, a priority value $F(c_m)$ for each type of sensing data is determined according to the overlap between the data range of that type and the data ranges of the other types of sensing data in the pattern mapping list:

$$F(c_m) = \frac{\sum_{h=1}^{N} g(c_h, c_m)}{D(c_m)}$$

where $c_m$ is the data range of the $m$-th type of sensing data in the mode mapping list, $N$ is the number of remaining sensing-data types (excluding the $m$-th type and the types whose priorities have already been determined), the function $g(c_h, c_m)$ is the number of data overlapping the data ranges $c_h$ and $c_m$ of the $h$-th and $m$-th types, and $D(c_m)$ is the difference between the maximum and minimum values in the data range of the $m$-th type.
Further, in the step S22, if a data point has at least Z other data points within the area of maximum area radius R, it is marked as a cluster core point; if it has no other data points within the area of maximum area radius R, it is marked as a noise point and deleted; and if a data point lies within the maximum-area-radius-R area of a cluster core point, it is marked as a containing point.
Further, the sensing data of the force sensor of the massage head comprise sensing data of massage force, speed and frequency.
Further, the characteristic values of the skin state include erythema index, texture parameter, moisture content of the horny layer, and skin oil content.
Compared with the prior art, the invention has the following beneficial technical effects:
The method generates a facial skin picture of the user through the image pickup system and identifies characteristic values of the skin state through the image analysis module; collects sensing data sets of each type from the facial massager in different operation modes and clusters them separately to form a mode mapping list; inputs the characteristic values of the skin state into the pattern recognition neural network model to output optimal values for multiple types of sensing data; and queries the operation mode corresponding to these optimal values in the mode mapping list to identify the optimal operation mode. Compared with the massage methods of prior-art facial massagers, the massage mode analysis process is quicker, the identified final massage mode corresponds accurately to the user's skin state, and the massage operation mode of the facial massager can be identified automatically from the user's skin state to achieve the optimal massage effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of a method for identifying an operation mode of a facial massager according to the present invention;
FIG. 2 is a schematic diagram of a pattern mapping list;
FIG. 3 is a schematic flow chart of clustering each type of sensing data set according to the present invention;
fig. 4 is a schematic diagram of the operation mode identification system of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the drawings of the specific embodiments of the present invention, the connection relationships between the parts of the device are shown in order to better and more clearly describe the working principle of each element; they only distinguish the relative positional relationships between the elements, and should not be construed as limiting the signal transmission direction, the connection sequence, or the size and shape of each part of the element or structure.
Example 1
Fig. 1 is a flow chart of a method for identifying an operation mode of the facial massager, which comprises the following steps:
S1, identifying a characteristic value of the skin state through an image analysis module according to a user facial skin picture generated by the image pickup system.
A large number of facial skin pictures of users are collected, covering different skin types and conditions. The collected pictures are subjected to standardized processing, including cropping, scaling and rotation correction, ensuring that all pictures have a uniform format and size.
And extracting characteristic values of the skin state from the skin picture after the standardized treatment, wherein the characteristic values comprise erythema index, texture parameters and the like.
In the preferred embodiment, the skin state can also be detected by combining a skin detector, the oil secretion state of the skin surface is detected, and the distribution state of the moisture of the stratum corneum at different depths is accurately analyzed by a microscope, so that the characteristic values of the moisture content of the stratum corneum, the skin oil content and the like are obtained.
In a preferred embodiment, other skin information (e.g., skin texture, age, sex, etc.) should be combined as other characteristic values of skin condition in addition to erythema index, texture parameters, stratum corneum moisture content, and skin oil content.
S2, collecting various sensing data sets of the face massager in different operation modes, and clustering the various sensing data sets respectively to form a mode mapping list.
The various types of sensing data include the sensing data of the force sensor of the massage head, the sensing data of the posture sensor, and the like. The sensing data of the force sensor of the massage head include massage force, speed, frequency and the like.
And collecting various sensing data under different operation modes to form corresponding type sensing data sets, such as a massage frequency data set, a massage force data set and the like, and respectively performing cluster analysis on the various sensing data sets to form a mode mapping list.
As shown in fig. 3, clustering each type of sensing data set specifically includes the following steps:
S21, setting a maximum area radius R and a point threshold Z for each type of sensing data set.
S22, traversing all data in the sensing data set, and marking all data as cluster core points or containing points.
If a data point has at least Z other data points within the area of maximum area radius R, it is marked as a cluster core point; if it has no other data points within the area of maximum area radius R, it is marked as a noise point and deleted; and if a data point lies within the maximum-area-radius-R area of a cluster core point, it can be contained by that cluster core point and is marked as a containing point.
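The marking rule of step S22 can be sketched on one-dimensional sensing values as follows (illustrative Python; the function name and sample data are not from the patent):

```python
def mark_points(data, R, Z):
    """Mark each sample as a cluster core point, a containing point, or noise,
    per step S22: >= Z neighbours within radius R -> core point; no neighbours
    at all -> noise (deleted); otherwise, any remaining point lying inside some
    core point's radius-R area -> containing point."""
    core, noise = set(), set()
    for i, x in enumerate(data):
        neighbours = [j for j, y in enumerate(data) if j != i and abs(x - y) <= R]
        if len(neighbours) >= Z:
            core.add(i)
        elif not neighbours:
            noise.add(i)
    containing = {i for i in range(len(data))
                  if i not in core and i not in noise
                  and any(abs(data[i] - data[c]) <= R for c in core)}
    return core, containing, noise
```

Applied to a tight group of readings plus one isolated reading, the isolated sample is discarded as noise while the fringe samples of the group become containing points.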
S23, dividing the sensing data set according to the marking result to generate a plurality of final divided areas and data point sets in the areas.
The sensing data set is preliminarily segmented according to the marking result of step S22 to obtain a plurality of preliminary segmentation areas. The average, over all preliminary segmentation areas, of the maximum length from each area's cluster core point to the containing points in that area is taken as the cutoff length. Boundary area point sets for every two preliminary segmentation areas are then determined based on the cutoff length: if the length between a containing point of one preliminary segmentation area and a containing point of the other is smaller than the cutoff length, the two points belong to the boundary area point set of those two areas. Finally, the average of the local densities of the sub-point sets in all boundary area point sets is taken as the density critical value, and a plurality of final segmentation areas are generated.
The method specifically comprises the following steps:
Step 231, performing preliminary segmentation on the sensing data set according to the marking result to obtain M preliminary segmentation regions, and taking as the cutoff length $d_c$ the average, over all preliminary segmentation regions, of the maximum length from each region's cluster core point to the containing points in that region:

$$d_c = \frac{1}{M} \sum_{B=1}^{M} \max_{J \in B} d(O_B, J)$$

where, among the M preliminary segmentation regions, $\max_{J \in B} d(O_B, J)$ is the maximum length from the cluster core point $O_B$ of preliminary segmentation region B to each containing point J in that region.
Step 232, determining a boundary region point set of each two preliminary divided regions based on the cutoff length, wherein if the length between the containing point of one of the two preliminary divided regions and the containing point of the other preliminary divided region is smaller than the cutoff length, the two containing points belong to the boundary region point sets of the two preliminary divided regions.
Let the boundary region point set $L_{mn}$ be:

$$L_{mn} = \{\, (I, J) \mid I \in B_m,\ J \in B_n,\ d(I, J) < d_c \,\}$$

where I and J are containing points belonging to the preliminary divided regions $B_m$ and $B_n$ respectively; when the length between the two containing points is smaller than the cutoff length $d_c$, the two points are considered to belong to the boundary region point set of $B_m$ and $B_n$.
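Steps 231 and 232 can be sketched for one-dimensional data as follows (hypothetical helper names; each region is given as its cluster core point plus its list of containing points):

```python
def cutoff_length(regions):
    """d_c: average, over all preliminary segmentation regions, of the maximum
    distance from the region's cluster core point to its containing points."""
    return sum(max(abs(core - j) for j in pts) for core, pts in regions) / len(regions)

def boundary_point_set(pts_m, pts_n, dc):
    """L_mn: pairs of containing points, one from each of two preliminary
    regions, whose distance is smaller than the cutoff length."""
    return {(i, j) for i in pts_m for j in pts_n if abs(i - j) < dc}
```

With two regions centred at 1.0 and 3.0, only the facing pair of containing points (closer together than the cutoff length) lands in the boundary region point set.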
Step 233, taking the average value of the local densities of the sub-point sets in all the boundary area point sets as a density critical value.
The local density $\rho_I$ of containing point I is:

$$\rho_I = \frac{1}{K} \sum_{J=1}^{K} \exp\!\left( -\left( \frac{d_{IJ}}{d_c} \right)^{2} \right)$$

where $d_{IJ}$ is the length between containing point I and another containing point J in the sub-point set, K is the number of other containing points in the sub-point set, and $d_c$ is the cutoff length.
The average value of the local densities of all the sub-point sets of all the boundary area point sets is calculated as a density critical value.
Step 234, taking the data points of any sub-point set whose local density is smaller than the density critical value in the boundary area point sets as discrete points; for each discrete point, finding among the M cluster core points the one nearest to it, and adding the discrete point to the segmentation area to which that cluster core point belongs, thereby generating M final segmentation areas.
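Steps 233 and 234 can be sketched as below; the Gaussian cutoff kernel used for the local density is an assumption (the patent names only d_IJ, K and the cutoff length), and all identifiers are illustrative:

```python
import math

def local_density(I, others, dc):
    """Local density of containing point I over the K other containing points
    of its sub-point set (Gaussian kernel with cutoff length dc, assumed)."""
    return sum(math.exp(-(abs(I - J) / dc) ** 2) for J in others) / len(others)

def density_threshold(sub_point_sets, dc):
    """Density critical value: mean local density over all sub-point sets
    of all boundary region point sets."""
    dens = [local_density(p, [q for q in sub if q != p], dc)
            for sub in sub_point_sets for p in sub]
    return sum(dens) / len(dens)

def reassign_discrete(discrete_pts, regions):
    """Step 234: each discrete point joins the final segmentation region of the
    cluster core point nearest to it; `regions` maps core point -> member list."""
    for p in discrete_pts:
        nearest = min(regions, key=lambda core: abs(p - core))
        regions[nearest].append(p)
    return regions
```

Each discrete point is thus absorbed by the region of its nearest core point, so no boundary data are lost when the final segmentation areas are formed.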
S24, clustering the sensing data sets of other categories sequentially according to the steps to obtain a plurality of final segmentation areas and data point sets in the areas under the sensing data sets of each category, and marking different operation modes for the data point sets in each final segmentation area.
Finally, the data point sets of the various sensing data sets that belong to the same mode are imported into the same column of the mode mapping list, as shown in fig. 2. For example, in the sensing data set formed by sensing data type A, the data subset with the numerical range [A1, A2] belongs to the first operation mode, the data subset with the numerical range [A3, A4] belongs to the second operation mode, the data subset with the numerical range [A5, A6] belongs to the third operation mode, and so on.
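The resulting pattern mapping list can be represented as a simple table keyed by sensing-data type; the numeric ranges below are hypothetical stand-ins for [A1, A2], [A3, A4], [A5, A6]:

```python
# One row per sensing-data type, one column (range) per operation mode.
# All numbers are invented for illustration.
MODE_MAP = {
    "massage_frequency": {"mode 1": (10, 20), "mode 2": (21, 35), "mode 3": (36, 50)},
    "massage_force":     {"mode 1": (1.0, 2.0), "mode 2": (2.1, 3.5), "mode 3": (3.6, 5.0)},
}

def lookup_mode(sensor_type, value, mode_map=MODE_MAP):
    """Return the operation mode whose value range contains the sensor value."""
    for mode, (lo, hi) in mode_map[sensor_type].items():
        if lo <= value <= hi:
            return mode
    return None
```

A value that falls in none of a type's ranges simply yields no mode for that type, which is where the priority rule of step S4 later comes in.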
S3, inputting the characteristic value of the skin state obtained in the step S1 into a pattern recognition neural network model, and outputting optimal values of multiple types of sensing data.
The characteristic values $\{tx_1, \dots, tx_i, \dots, tx_n\}$ of the skin state are input into the pattern recognition neural network model, where $n$ is the number of characteristic values of the skin state, and the output of each neuron of the hidden layer of the pattern recognition neural network model is calculated as:

$$s_j = \sum_{i=1}^{n} w_{ji}\, tx_i - \theta_j$$

$$b_j = f(s_j)$$

where $s_j$ is the output intermediate value of the $j$-th hidden-layer neuron, $w_{ji}$ are its connection weights, $\theta_j$ is its threshold, $f$ is the activation function, $b_j$ is its output value, and $j = 1, 2, \dots, p$, with $p$ the number of hidden-layer neurons.
The hidden-layer output sequence $\{b_1, \dots, b_j, \dots, b_p\}$ is input to the output layer of the pattern recognition neural network model, and the output of each neuron of the output layer is:

$$s_k = \sum_{j=1}^{p} v_{kj}\, b_j - \theta_k$$

$$y_k = f(s_k)$$

where $s_k$ is the output intermediate value of the $k$-th output-layer neuron, $\theta_k$ is its threshold, $y_k$ is its output value, the $p$ connection weights of the $k$-th output-layer neuron are $\{v_{k1}, \dots, v_{kj}, \dots, v_{kp}\}$, and $k = 1, 2, \dots, q$, with $q$ the number of sensing-data types.
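The two-layer forward pass can be written directly from these formulas; the logistic sigmoid activation f is an assumption, since the patent does not fix the activation function:

```python
import math

def mlp_forward(tx, W, theta_h, V, theta_o):
    """Forward pass: hidden s_j = sum_i w_ji*tx_i - theta_j, b_j = f(s_j);
    output s_k = sum_j v_kj*b_j - theta_k, y_k = f(s_k).
    f is taken to be the logistic sigmoid (an assumption)."""
    f = lambda s: 1.0 / (1.0 + math.exp(-s))
    b = [f(sum(w * x for w, x in zip(row, tx)) - th)       # hidden layer
         for row, th in zip(W, theta_h)]
    return [f(sum(v * bj for v, bj in zip(row, b)) - th)   # output layer
            for row, th in zip(V, theta_o)]
```

With one hidden and one output neuron whose pre-activations cancel to zero, each layer outputs f(0) = 0.5, which makes the wiring easy to check by hand.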
S4, inquiring the operation mode corresponding to the optimal value of the multi-type sensing data output in the step S3 according to the mode mapping list obtained in the step S2, and identifying the optimal operation mode.
The range into which the optimal value of each of the q types of sensing data falls is queried in the mode mapping list, and the corresponding operation mode is found from that range.
Preferably, if the operation modes corresponding to the optimal values of all types of sensing data are the same, that operation mode is selected directly; if they differ, the final operation mode is determined according to a priority rule.
In a preferred embodiment, the priority value of each type of sensing data is determined from the overlap between its data range and the data ranges of the other types of sensing data in the pattern mapping list:

$$F(c_m) = \frac{\sum_{h=1}^{N} g(c_h, c_m)}{D(c_m)}$$

where $c_m$ is the data range of the $m$-th type of sensing data in the mode mapping list, $N$ is the number of remaining sensing-data types (excluding the $m$-th type and the types whose priorities have already been determined), the function $g(c_h, c_m)$ is the number of data overlapping the data ranges $c_h$ and $c_m$ of the $h$-th and $m$-th types, $D(c_m)$ is the difference between the maximum and minimum values in the data range of the $m$-th type (for example, $A_2 - A_1$), and $F(c_m)$ is the calculated priority value of $c_m$.
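A sketch of the priority computation, reading g(c_h, c_m) as the count of type-h data points falling inside range c_m (one plausible reading; all names are illustrative):

```python
def priority_value(m, ranges, data):
    """F(c_m): overlap counts g(c_h, c_m) summed over the other sensing-data
    types h != m, divided by the range width D(c_m) = max - min."""
    lo, hi = ranges[m]
    overlap = sum(1 for h in range(len(ranges)) if h != m
                  for x in data[h] if lo <= x <= hi)
    return overlap / (hi - lo)
```

A narrower range with more overlapping data thus receives a higher priority value, so its operation mode wins the tie-break.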
Example 2
The invention also provides an operation mode recognition system for realizing the operation mode recognition method of the facial massager, as shown in fig. 4, wherein the operation mode recognition system comprises an image pickup system, an image analysis module, a sensing data processing unit, a mode recognition neural network model and a recognition unit.
The image analysis module is used for identifying the characteristic value of the skin state of the facial skin picture of the user generated by the image capturing system.
The image analysis module performs specific processing and analysis on skin images to indirectly evaluate changes in the moisture content of the stratum corneum, involving techniques such as color model conversion, region segmentation and image binarization.
The sensing data processing unit collects sensing data sets of each type from the facial massager in different operation modes and clusters them separately to form a mode mapping list. The collection of the sensing data sets is mainly a matter of compiling statistics on historical data; the more data collected, the better the effect of the later cluster analysis.
The input parameters of the pattern recognition neural network model are the characteristic values of the skin state, and the output parameters are the corresponding optimal values of the multiple types of sensing data. According to the complexity of the problem and the characteristics of the data, a suitable pattern recognition neural network model is selected, such as a multi-layer perceptron, a convolutional neural network or a recurrent neural network. In this embodiment, since the input consists of feature values rather than image or sequence data, a multi-layer perceptron neural network model is preferred.
The identification unit queries the mode mapping list for the operation modes corresponding to the optimal values of the multiple types of sensing data output by the neural network model, and thereby identifies the optimal operation mode.
Once the optimal mode of operation is identified, the facial massager system may adjust the parameters of the relevant sensors and the massage regimen to provide the most personalized skin massage experience. For example, high-frequency vibration is used for stimulating facial muscles and skin, promoting blood circulation and lymph to remove toxin, and the massage intensity is adjusted by combining sensors such as pressure, temperature and the like so as to adapt to the requirements of different skin states, and the modes such as facial acupoint massage, blood circulation promotion and the like are adopted.
In the above embodiments, the method may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in or transmitted via a computer-readable storage medium. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)).
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is defined by the claims.

Claims (5)

1. A method for identifying an operation mode of a face massager, comprising the steps of:
S1, generating a facial skin image of a user by an image pickup system, and identifying characteristic values of the skin state by an image analysis module;
S2, collecting various sensing data sets of the face massager in different operation modes, and respectively clustering the various sensing data sets to form a mode mapping list, wherein the method comprises the following steps:
s21, setting a maximum area radius and a point threshold value for each type of sensing data set respectively;
S22, traversing all data in the sensing data set and marking each datum as a clustering core point or a contour point according to the maximum area radius and the point number threshold: if a datum has at least Z other data within the area of maximum area radius R, it is marked as a clustering core point; if a datum has no other data within the area of maximum area radius R, it is marked as a noise point and deleted;
S23, dividing the sensing data set according to the marking result to generate a plurality of final divided areas and data point sets in the areas;
Performing preliminary segmentation on the sensing data set according to the marking result of step S22 to obtain a plurality of preliminary segmentation areas; taking the average value of the maximum lengths between the clustering core points of all the preliminary segmentation areas and all the points contained in those areas as a cut-off length; determining the boundary area point set of each pair of preliminary segmentation areas based on the cut-off length; and taking the average value of the local densities of the sub-point sets in all the boundary area point sets as a density critical value, so as to generate a plurality of final segmentation areas;
s24, clustering the sensing data sets of other categories sequentially according to the steps to obtain a plurality of final segmentation areas and data point sets in the areas under the sensing data sets of each category, and marking different operation modes for the data point sets in each final segmentation area;
Finally, the data point sets of all the sensing data sets belonging to the same mode are imported into the same column in the mode mapping list;
S3, inputting the characteristic value of the skin state into a pattern recognition neural network model, and outputting optimal values of multiple types of sensing data;
S4, querying the operation modes corresponding to the optimal values of the multiple types of sensing data according to the mode mapping list and identifying the optimal operation mode: if the operation modes corresponding to the optimal values of all the sensing data are the same, that operation mode is selected directly; if the operation modes corresponding to the optimal values of the sensing data differ, the final operation mode is determined according to the priority values of the various types of sensing data.
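The point-marking step S22 can be sketched in one dimension as follows. This is a minimal illustrative sketch under stated assumptions: the distance measure (absolute difference), the sample data, and the values of R and Z are not taken from the patent.

```python
def mark_points(data, R, Z):
    """Mark each value per step S22: 'core' if at least Z other points lie
    within radius R, 'noise' if no other point does (noise is then dropped),
    and 'contour' otherwise. Returns (labels by index, kept values)."""
    labels = {}
    for i, x in enumerate(data):
        neighbors = sum(1 for j, y in enumerate(data)
                        if j != i and abs(x - y) <= R)
        if neighbors >= Z:
            labels[i] = "core"
        elif neighbors == 0:
            labels[i] = "noise"
        else:
            labels[i] = "contour"
    kept = [x for i, x in enumerate(data) if labels[i] != "noise"]
    return labels, kept

# illustrative sensing values: a dense cluster, a sparse pair, an outlier
data = [1.0, 1.1, 1.2, 5.0, 5.1, 9.0]
labels, kept = mark_points(data, R=0.3, Z=2)
```

With these assumed parameters the dense triple is marked as core points, the sparse pair as contour points, and the isolated value 9.0 as noise and removed, mirroring the traverse-mark-delete logic of S22.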
2. The operation pattern recognition method according to claim 1, wherein in the step S3, the characteristic values {tx_1, …, tx_i, …, tx_n} of the skin state are input to the pattern recognition neural network model, where n represents the number of characteristic values of the skin state, and the output value of each neuron of the hidden layer of the pattern recognition neural network model is calculated:
s_j = Σ_{i=1}^{n} w_ji · tx_i − θ_j;
b_j = f(s_j);
wherein s_j represents the output intermediate value of the j-th neuron of the hidden layer, w_ji represents the connection weight between the i-th input and the j-th neuron of the hidden layer, θ_j represents the threshold value of the j-th neuron of the hidden layer, f is the activation function, b_j represents the output value of the j-th neuron of the hidden layer, and j = 1, 2, …, p, where p is the number of neurons of the hidden layer.
3. The method for identifying an operation mode according to claim 2, wherein in the step S3, the hidden layer output sequence {b_1, …, b_j, …, b_p} is input to the output layer of the pattern recognition neural network model, and then the output of each neuron of the output layer is:
s_k = Σ_{j=1}^{p} v_kj · b_j − θ_k;
y_k = f(s_k);
where s_k represents the output intermediate value of the k-th neuron of the output layer, θ_k represents the threshold value of the k-th neuron of the output layer, y_k represents the output value of the k-th neuron of the output layer, the p connection weights of the k-th neuron of the output layer are {v_k1, …, v_kj, …, v_kp}, and k = 1, 2, …, q, where q is the number of neurons of the output layer.
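The hidden-layer and output-layer computations of claims 2 and 3 can be sketched as a single forward pass. The sigmoid activation, the weights, the thresholds, and the layer sizes below are illustrative assumptions (the claims do not fix a particular activation function or network size).

```python
import math

def forward(tx, W, theta_h, V, theta_o):
    """MLP forward pass per claims 2-3:
    hidden layer: s_j = sum_i w_ji*tx_i - theta_j, b_j = f(s_j)
    output layer: s_k = sum_j v_kj*b_j - theta_k, y_k = f(s_k)"""
    f = lambda s: 1.0 / (1.0 + math.exp(-s))          # assumed sigmoid
    b = [f(sum(w_i * x for w_i, x in zip(w_j, tx)) - t)
         for w_j, t in zip(W, theta_h)]               # hidden outputs b_j
    y = [f(sum(v_j * b_j for v_j, b_j in zip(v_k, b)) - t)
         for v_k, t in zip(V, theta_o)]               # output values y_k
    return y

# tiny example: n=2 skin-state features, p=2 hidden neurons, one output
y = forward([0.5, 0.8],
            W=[[0.1, 0.4], [0.3, 0.2]], theta_h=[0.0, 0.1],
            V=[[0.6, 0.5]], theta_o=[0.2])
```

Each hidden neuron subtracts its threshold from the weighted input sum before activation, exactly as the equations above prescribe; the output layer repeats the same pattern on the hidden sequence {b_1, …, b_p}.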
4. The method for recognizing an operation mode according to claim 1, wherein the sensing data comprises sensing data of force sensors of the massage head and sensing data of posture sensors, and the sensing data of the force sensors of the massage head further comprises sensing data of massage force, speed and frequency.
5. The method for identifying an operation mode according to claim 1, wherein the characteristic values of the skin condition include erythema index and texture parameters.
CN202411697522.1A 2024-11-26 2024-11-26 A method for identifying operation mode of facial massager Active CN119179915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411697522.1A CN119179915B (en) 2024-11-26 2024-11-26 A method for identifying operation mode of facial massager

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411697522.1A CN119179915B (en) 2024-11-26 2024-11-26 A method for identifying operation mode of facial massager

Publications (2)

Publication Number Publication Date
CN119179915A CN119179915A (en) 2024-12-24
CN119179915B true CN119179915B (en) 2025-02-28

Family

ID=93902283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411697522.1A Active CN119179915B (en) 2024-11-26 2024-11-26 A method for identifying operation mode of facial massager

Country Status (1)

Country Link
CN (1) CN119179915B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111616941A (en) * 2020-04-30 2020-09-04 广东艾诗凯奇智能科技有限公司 Mode selection method, device, mobile terminal and computer readable storage medium
CN112560984A (en) * 2020-12-25 2021-03-26 广西师范大学 Differential privacy protection method for self-adaptive K-Nets clustering
CN117831717A (en) * 2023-12-27 2024-04-05 杭州时光机智能电子科技有限公司 Control method, control device, control equipment and storage medium of irradiation module

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN208081484U (en) * 2017-05-23 2018-11-13 广东协达信息科技有限公司 A kind of face massager
GB202009492D0 (en) * 2020-06-22 2020-08-05 Nicoventures Trading Ltd User feedback system and method
CN111752165B (en) * 2020-07-10 2024-08-27 广州博冠智能科技有限公司 Intelligent equipment control method and device of intelligent home system
CN113346957B (en) * 2021-06-01 2022-08-05 北京理工大学 Clustering nonlinear compensation method for OAM-QPSK transmission
CN117322966A (en) * 2023-09-25 2024-01-02 杭州海康慧影科技有限公司 Pneumoperitoneum machine control method, device and system
CN117379005B (en) * 2023-11-21 2024-05-28 欣颜时代(广州)技术有限公司 Skin detection control method, device, equipment and storage medium of beauty instrument
CN118294146A (en) * 2024-04-28 2024-07-05 苏州同元软控信息技术有限公司 Fault determination method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111616941A (en) * 2020-04-30 2020-09-04 广东艾诗凯奇智能科技有限公司 Mode selection method, device, mobile terminal and computer readable storage medium
CN112560984A (en) * 2020-12-25 2021-03-26 广西师范大学 Differential privacy protection method for self-adaptive K-Nets clustering
CN117831717A (en) * 2023-12-27 2024-04-05 杭州时光机智能电子科技有限公司 Control method, control device, control equipment and storage medium of irradiation module

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Digital Acquisition and Pattern Recognition of Electromyographic Signals"; Gu Xin; China Master's Theses Full-text Database, Information Science and Technology; 2012-03-15; abstract, pp. 39-47 *

Also Published As

Publication number Publication date
CN119179915A (en) 2024-12-24

Similar Documents

Publication Publication Date Title
CN116705337A (en) Health data acquisition and intelligent analysis method
CN115089139B (en) Personalized physiological parameter measurement method combined with biometric recognition
El_Rahman Multimodal biometric systems based on different fusion levels of ECG and fingerprint using different classifiers
CN107273726B (en) Equipment owner's identity real-time identification method and its device based on acceleration cycle variation law
CN103839033A (en) Face identification method based on fuzzy rule
WO2021031817A1 (en) Emotion recognition method and device, computer device, and storage medium
Bansal et al. Statistical feature extraction based iris recognition system
Chen et al. Patient emotion recognition in human computer interaction system based on machine learning method and interactive design theory
Shen et al. A classifier based on multiple feature extraction blocks for gait authentication using smartphone sensors
CN114595725A (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN114973308A (en) Method and system for finger vein recognition based on elastic weight solidification and multivariate similarity loss
CN113506274B (en) Detection system for human cognitive condition based on visual saliency difference map
CN119179915B (en) A method for identifying operation mode of facial massager
CN115188031A (en) Fingerprint identification method, computer program product, storage medium and electronic device
CN119207480A (en) Audio and video acquisition and processing system and method based on big data technology
Li et al. Measuring visual surprise jointly from intrinsic and extrinsic contexts for image saliency estimation
Liu et al. Category-preserving binary feature learning and binary codebook learning for finger vein recognition
CN104376320A (en) Feature extraction method for detection of artificial fingerprints
CN117942079A (en) Emotion intelligence classification method and system based on multidimensional sensing and fusion
CN109255318A (en) Based on multiple dimensioned and multireel lamination Fusion Features fingerprint activity test methods
CN116439706A (en) A recognition method and recognition system based on EEG and eye movement
Baek et al. CNN-based health model using knowledge mining of influencing factors
CN109977777A (en) Gesture identification method based on novel RF-Net model
Lu et al. User Emotion Recognition Method Based on Facial Expression and Speech Signal Fusion
Li et al. Finger vein recognition based on personalized discriminative bit map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant