CN119074532A - Facial acupuncture positioning guidance system - Google Patents
- Publication number
- CN119074532A (application CN202411552600.9A)
- Authority
- CN
- China
- Prior art keywords
- acupuncture
- facial
- patient
- points
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H39/00—Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
- A61H39/02—Devices for locating such points
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H39/00—Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
- A61H39/08—Devices for applying needles to such points, i.e. for acupuncture ; Acupuncture needles or accessories therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
Abstract
The application belongs to the technical field of acupuncture treatment equipment and discloses a positioning and guiding system for facial acupuncture. The system comprises a diagnosis and treatment information acquisition module, an image information acquisition module, an acupoint positioning module, and an acupoint marking module. The diagnosis and treatment information acquisition module acquires the patient's treatment prescription to determine the facial acupoints that need to be needled. The image information acquisition module captures a facial image of the patient before needling, and the acupoint positioning module compares this image with acupoint-marked facial images in a database to obtain the positions of the facial acupoints to be needled. The acupoint marking module then receives these positions and sequentially projects a light spot onto each acupoint on the patient's face to guide the acupuncture treatment. The technical scheme provided by the application ensures that each needle can be accurately placed at the corresponding acupoint during needling, enhancing the treatment effect.
Description
Technical Field
The application relates to the technical field of acupuncture treatment, in particular to a positioning and guiding system for facial acupuncture.
Background
Peripheral facial paralysis, also called facial neuritis or idiopathic facial palsy, is a disease caused by nonspecific inflammation of the facial nerve on one side. It is mainly marked by impaired function of the facial expression muscles, such as disappearance or shallowing of the forehead wrinkles and distortion of the corner of the mouth.
Acupoint stimulation is an effective treatment for facial paralysis. The acupoints can be stimulated with acupuncture needles or, alternatively, with electrical stimulation. In either case, the positions of the patient's facial acupoints must first be located accurately.
In clinical practice, a senior physician typically prescribes acupoint stimulation for a patient, and a junior physician then administers the acupuncture according to that prescription. Because each patient's facial condition differs, the junior physician, limited by experience, may fail to locate the corresponding acupoints accurately, so the treatment effect is often not ideal.
Disclosure of Invention
The summary of the application is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. The summary of the application is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
As a first aspect of the present application, in order to solve the technical problem of inaccurate acupoint positioning during needling, the present application provides a positioning and guiding system for facial acupuncture, comprising:
the diagnosis and treatment information acquisition module acquires a treatment prescription of a patient to obtain facial acupoints of the patient needing needle application;
the image information acquisition module acquires a facial image of a patient before needle application;
the acupuncture point positioning module is used for comparing the facial image before the needle application of the patient with the facial image marked with the acupuncture points in the facial form database to obtain the position information of the facial acupuncture points of the patient needing the needle application;
the acupoint marking module, which acquires the position information of the facial acupoints to be needled and then sequentially projects a visible-wavelength laser spot onto each facial acupoint to guide the acupuncture treatment.
To address the limited curative effect caused by inaccurate acupoint positioning in acupuncture treatment, the application first uses the diagnosis and treatment information acquisition module to obtain the acupoints that need to be needled, and then uses the acupoint positioning module to match the patient's facial image against database images containing preset acupoint marks, thereby determining the exact positions of the relevant acupoints on the current patient. The acupoint marking module then emits a laser that projects an indicating light spot onto the patient's face, so that the practitioner can accurately find the corresponding acupoint when applying each needle, improving the treatment effect.
Current acupoint guidance methods are limited to providing approximate acupoint diagrams or video guidance to help medical staff identify acupoint positions. In actual operation, however, the needling sequence and the specific manipulation at each acupoint must still be judged by the medical staff, which often results in inaccurate needling and in turn affects the treatment effect. To address this problem, the application provides the following technical scheme:
the diagnosis and treatment information acquisition module comprises:
the prescription information input unit, which is used for inputting a patient's treatment prescription and reading from it the acupoints to be needled, their needling sequence, and the needling interval times;
the input unit, which is used for adjusting the acupoints to be needled, their needling sequence, and the needling interval times.
The acupoint marking module marks the corresponding acupoints with light spots in sequence, according to the needling order and interval times.
With this technical scheme, the acupoints to be needled, their needling sequence, and the interval times can be obtained automatically from the preset treatment prescription before treatment. In addition, the input unit allows medical staff to flexibly adjust these parameters according to the actual situation. The acupoint marking module can then emit light-spot marks precisely in the set sequence and at the set intervals, guiding medical staff to apply each needle accurately and on time, and reducing the influence of acupoint positioning errors, needle insertion errors, and needle timing errors on the treatment effect.
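The sequencing logic described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the `NeedleStep` class, the field names, and the `build_marking_schedule` function are all hypothetical, and the patent does not specify how the marking module represents its schedule internally.

```python
from dataclasses import dataclass

@dataclass
class NeedleStep:
    acupoint: str      # acupoint name read from the prescription, e.g. "Yangbai"
    interval_s: float  # prescribed wait time before this point is marked, in seconds

def build_marking_schedule(steps):
    """Turn prescription steps into (elapsed_seconds, acupoint) marking events.

    `steps` is assumed to already be in the prescribed needling order;
    the marking module would drive the laser spot at each elapsed time.
    """
    schedule = []
    elapsed = 0.0
    for step in steps:
        elapsed += step.interval_s
        schedule.append((elapsed, step.acupoint))
    return schedule

plan = [NeedleStep("Yangbai", 0.0), NeedleStep("Sibai", 60.0), NeedleStep("Dicang", 60.0)]
print(build_marking_schedule(plan))
# [(0.0, 'Yangbai'), (60.0, 'Sibai'), (120.0, 'Dicang')]
```

Adjustments made through the input unit would simply rebuild this schedule before the marking module begins projecting spots.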
Acupoint recognition essentially relies on determining the point at a prescribed distance from a facial feature, as illustrated by the Yangbai acupoint being located 1 cun above the eyebrow. This process requires both accurate recognition of the facial features and accurate calculation of the distance between those features and the standard acupoint location. Current automatic acupoint recognition techniques mostly rely on facial feature similarity matching, for example comparing a patient's facial image with images labeled with the Yangbai acupoint in a database to find the region with the closest texture. However, this approach overemphasizes local feature similarity and ignores the variability of facial shape and facial texture between patients, leading to large errors when acupoints are located directly by similarity. To address these problems, the application provides the following technical scheme:
Further, the image information acquisition module includes:
a camera for acquiring a facial image of a patient;
a marker line, which is straightened and attached to the patient's face; its length is recorded in the acupoint positioning module in advance.
In this scheme, a marker line placed on the patient's face provides a length reference for the acquired image information. When identifying acupoints, the five sense organs can therefore serve as stable reference points, and the precise scale information provided by the marker line enables accurate acupoint identification and positioning, significantly improving positioning accuracy.
In order to further increase the positioning accuracy of the acupuncture points, the application provides the following technical scheme:
The acupuncture point positioning module comprises:
a face definition unit, which predefines a standard face for each of a plurality of face categories;
an image information storage unit, configured with a plurality of face databases, each corresponding to one standard face and storing the standard image information for that face;
the image information preliminary screening unit, which is in signal connection with the image information acquisition module, acquires the patient's facial image, and determines the patient's standard face shape from that image;
an image information dividing unit, which selects a face database according to the patient's standard face shape and divides the facial image into the corresponding five sense organ regions based on the standard image information in that database;
the acupoint positioning unit, which obtains the actual length on the patient's face represented by each pixel in the facial image and, taking the patient's five sense organs as references, locates each acupoint according to its prescribed distance from the boundary of the five sense organs.
The technical scheme provided by the application does not simply compare the patient's facial image with database images to find the most similar one and thereby complete acupoint positioning. Instead, it first accurately captures the positions of the patient's five sense organs and then locates the acupoints relative to those positions according to established acupoint positioning rules. Compared with the traditional method of relying on acupoint similarity matching, this significantly improves positioning accuracy and achieves more effective acupoint recognition. It also reduces the difficulty of collecting and labeling the standard image information: in existing schemes, the acupoints on every facial image in the standard image information must be labeled so that the recognition system can identify them, whereas in this scheme only the boundaries and outlines of the five sense organs need to be labeled, which is considerably easier. In addition, the scheme avoids errors caused by differing scales. The collected standard images cannot all be obtained at the same scale, so when the scale of the standard image information is unclear, errors arise during actual acupoint positioning. In this scheme, the standard image information is used only to obtain the boundaries of the five sense organs in the facial image, while lengths in the facial image are converted using the marker line, improving the accuracy of acupoint positioning.
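The positioning rule described above, placing an acupoint at a prescribed distance from a feature boundary using the pixel scale, can be sketched in a few lines. This is an illustrative sketch only: the function name, the flat-offset geometry, and the unit-vector direction argument are assumptions, since the patent does not disclose the exact positioning formula.

```python
def locate_acupoint(boundary_xy, offset_mm, direction, mm_per_pixel):
    """Locate an acupoint a fixed anatomical distance from a feature boundary.

    boundary_xy  : (x, y) pixel coordinates of the reference feature boundary
    offset_mm    : prescribed distance from that boundary, in millimetres
    direction    : unit vector (dx, dy) in image coordinates
    mm_per_pixel : real-world width of one pixel, derived from the marker line
    """
    offset_px = offset_mm / mm_per_pixel
    return (boundary_xy[0] + direction[0] * offset_px,
            boundary_xy[1] + direction[1] * offset_px)

# Example: a point 10 mm straight above a boundary pixel, at 0.1 mm per pixel
print(locate_acupoint((400, 300), 10.0, (0, -1), 0.1))  # (400.0, 200.0)
```

The same conversion applies to traditional proportional units such as the cun once a per-patient cun-to-millimetre value is known.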
Further, the image information preliminary screening unit obtains the edge contour line of the face in the patient's facial image, computes the similarity between the patient's upper, lower, and side contour segments and the corresponding segments of each face category, and selects the categories whose upper, lower, and side segments are closest to the patient's, thereby determining the patient's face type.
With this technical scheme, the patient's face shape is determined by directly comparing the facial contour from three different directions. The patient's face can thus be accurately matched to the nearest standard face shape, providing a more precise reference for subsequent acupoint positioning and significantly improving its accuracy.
In face shape recognition, accurate matching of the patient's face in fact depends on a moderately fuzzy matching strategy. Edge contour matching currently tends to rely on neural network models, which provide high matching accuracy but consume substantial resources. In practice, given the uniqueness of every face, excessively precise matching does not significantly improve accuracy and may instead cause the model to overfit, harming its generalization ability. Similarity matching with the Hausdorff distance, meanwhile, can fail to achieve accurate alignment if its precision is set incorrectly. Another common approach is to compute the minimum bounding rectangle (MBR) of the contour and match by comparing the size, position, and orientation of the bounding boxes; this is computationally cheap but ignores the fine shape of the contour edges, so it lacks accuracy for face matching.
To address the difficulty of balancing simplicity and accuracy in existing face matching techniques, the application provides a technical scheme intended to match the patient's face shape more efficiently and accurately.
The image information preliminary screening unit sets a number of equally spaced sampling points on the upper contour segment and fits them to a preset first function to obtain first fitting parameters;
it likewise sets equally spaced sampling points on the lower contour segment and fits them to a preset second function to obtain second fitting parameters;
and it sets equally spaced sampling points on the side contour segments and fits them to a preset third function to obtain third fitting parameters.
The unit then matches the face shape according to the similarity between these first, second, and third fitting parameters and the corresponding fitting parameters of each standard face shape.
With this technical scheme, the number of sampling points can be adjusted flexibly, so the computational load can be managed and computing resources allocated according to actual needs. With a suitable number of sampling points, the scheme ensures accurate face matching and precise alignment. More importantly, because similarity is evaluated on the fitting parameters of the sampling points, what is being matched is in essence the fine undulating shape of the contour edges, which markedly improves the accuracy of facial contour matching. The scheme therefore achieves accurate matching of the patient's face shape by a simple and efficient method, while controlling computational complexity and maintaining high matching precision.
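A minimal sketch of this fit-and-compare strategy is shown below. The patent does not specify the "preset functions", so a polynomial fitted with `numpy.polyfit` is used as an assumed stand-in, and the function names and the Euclidean distance on coefficient vectors are illustrative choices, not the patented method.

```python
import numpy as np

def segment_fit_params(points_xy, degree=2):
    """Fit equally spaced contour samples with a polynomial; return its coefficients."""
    pts = np.asarray(points_xy, dtype=float)
    return np.polyfit(pts[:, 0], pts[:, 1], degree)

def fit_similarity(params_a, params_b):
    """Distance between two coefficient vectors; smaller means more similar segments."""
    return float(np.linalg.norm(np.asarray(params_a) - np.asarray(params_b)))

def match_face_type(patient_params, standard_params_by_type):
    """Return the standard face type whose fitting parameters are closest."""
    return min(standard_params_by_type,
               key=lambda t: fit_similarity(patient_params, standard_params_by_type[t]))

# Example: an upper contour segment sampled from a parabola-like curve
samples = [(x, 0.05 * x * x) for x in range(-10, 11, 2)]
params = segment_fit_params(samples)
print(match_face_type(params, {"round": [0.05, 0.0, 0.0], "square": [0.01, 0.0, 5.0]}))
# round
```

Raising the polynomial degree or the sampling density trades computation for sensitivity to finer contour undulations, which mirrors the adjustable-sampling trade-off described above.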
Each patient's facial features are unique, and facial paralysis patients often show atypical changes in facial morphology, which greatly increases the difficulty of accurately locating acupoints from the existing facial features in the database. To overcome this challenge, the application proposes the following solution:
The image information dividing unit includes:
a preliminary five sense organ delineator, which outlines the preliminary boundary of each five sense organ region, the regions including the eyes and the nose;
an edge delineator, which extracts key points on the preliminary boundary, compares them with the standard image information in the face database, and judges whether each key point lies on the boundary of a five sense organ region: if so, the key point is recorded as an edge point; if not, it is deleted.
All edge points are then collected and connected to generate the boundaries of the five sense organ regions.
This technical scheme abandons the traditional method of directly comparing the patient's facial image with images in the face database. Instead, the edge delineator extracts facial key points, compares them with the standard image information in the database, and judges whether each key point forms part of a five sense organ boundary. The key points that belong to the boundary are then connected in sequence to outline the five sense organs precisely. The key advantage of this approach is that it exploits the stability of key points, which remain consistent under different illumination, rotation angles, and scale changes. These key points essentially correspond to edge regions where the facial features are most distinctive, so the five sense organ boundaries can be located efficiently and accurately by comparing only these points, significantly improving matching accuracy. This scheme effectively solves the acupoint positioning problems caused by facial deformation in facial paralysis patients and provides strong support for precise treatment.
Further, the acupoint marking module includes:
the angle-adjustable lasers, which generate laser light at visible wavelengths;
the control unit, which is in signal connection with the acupoint positioning module and the diagnosis and treatment information acquisition module respectively, acquires the positions of the acupoints on the patient's face, and controls the lasers to project light spots onto the corresponding acupoints in the needling sequence recorded by the diagnosis and treatment information acquisition module.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this specification. The drawings and their description are illustrative of the application and are not to be construed as unduly limiting the application.
In addition, the same or similar reference numerals denote the same or similar elements throughout the drawings. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
In the drawings:
Fig. 1 is a schematic view of a positioning and guiding system for facial acupuncture.
Fig. 2 is a schematic diagram of the division of the edge contour of a face.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the application have been illustrated in the accompanying drawings, it is to be understood that the application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the application are for illustration purposes only and are not intended to limit the scope of the present application.
It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings. Embodiments of the application and features of the embodiments may be combined with each other without conflict.
The application will be described in detail below with reference to the drawings in connection with embodiments.
Embodiment 1: Referring to fig. 1, a positioning and guiding system for facial acupuncture includes a diagnosis and treatment information acquisition module, an image information acquisition module, an acupoint positioning module, and an acupoint marking module. The acupoint marking module is in signal connection with the diagnosis and treatment information acquisition module and the acupoint positioning module respectively, and the image information acquisition module is in signal connection with the acupoint positioning module.
The diagnosis and treatment information acquisition module acquires the patient's treatment prescription to determine the facial acupoints to be needled. The image information acquisition module captures a facial image of the patient before needling. The acupoint positioning module compares this image with the acupoint-marked facial images in the face database to obtain the positions of the facial acupoints to be needled. The acupoint marking module acquires these positions and then sequentially projects visible-wavelength laser light onto the facial acupoints to guide the acupuncture treatment.
The above general technical solution of the positioning and guiding system for facial acupuncture provided by the present application is further described in the following technical solutions:
The diagnosis and treatment information acquisition module connects to the hospital's HIS system to obtain the patient's treatment prescription, or the prescription is entered manually. In this scheme the treatment prescription is an acupuncture treatment plan comprising the acupoints to be needled, the needling sequence, and the needling interval times.
Furthermore, the diagnosis and treatment information acquisition module comprises a prescription information input unit and an input unit. The prescription information input unit is used to enter the patient's treatment prescription and to read from it the acupoints, the needling sequence, and the needling interval times; the input unit is used to adjust these parameters.
The diagnosis and treatment information acquisition module is essentially an information processing terminal, such as a computer or a mobile phone. Staff can download treatment prescriptions from the hospital's HIS system through the module, or enter them directly using the input unit. The specific ways of entering and reading a treatment prescription are prior art and are not described further here.
The image information acquisition module acquires a facial image of the patient before needling. Specifically, the patient may lie in a designated position, and a movable or fixed camera captures an image of the patient's face. In practice, the facial image is image information for which the face region has already been delimited; extracting the facial image along the facial edge contour is prior art and is not described here.
The image information acquisition module comprises a camera and a marker line. The camera is fixed to a frame or held by a staff member; whichever mounting is used, in this scheme the camera must capture an image of the front of the patient's face. The marker line is a line of fixed length that is straightened and attached to the patient's face in use. The marker line must be kept away from the acupoints to be needled, so that recognition of the acupoints in that area is not obscured.
With the image information acquisition module, the application acquires a frontal facial image of the patient that contains a marker line of pre-recorded length. For example, if the marker line is 1 cm long and spans 1000 pixels in the facial image, then one pixel corresponds to 0.01 mm on the patient's face, and the distance between any two points on the face can be obtained simply by counting the pixels separating them.
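The marker-line calibration described above can be sketched as follows. This is a minimal illustration; the function names and example values are assumptions, not taken from the patent text:

```python
# Hypothetical sketch of the pixel-scale calibration described above.
# Assumes the marker line's physical length and pixel length are both known.

def pixel_scale_mm(marker_length_mm: float, marker_length_px: int) -> float:
    """Physical width of one pixel, in millimetres."""
    return marker_length_mm / marker_length_px

def distance_mm(p1, p2, scale_mm_per_px: float) -> float:
    """Physical distance between two facial points given in pixel coordinates."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    return (dx * dx + dy * dy) ** 0.5 * scale_mm_per_px

# The example in the text: a 1 cm (10 mm) marker line spanning 1000 pixels
scale = pixel_scale_mm(10.0, 1000)             # 0.01 mm per pixel
print(distance_mm((0, 0), (300, 400), scale))  # 500 px apart -> 5.0 mm
```

Any in-image distance then reduces to a pixel count times the scale, which is why the marker line only needs to appear once per image.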
In some embodiments, to reduce the amount of computation, the acupoint positioning module obtains in advance the acupoints to be needled from the diagnosis and treatment information acquisition module and positions only those acupoints, which reduces the computation and improves the feedback efficiency of the system.
The acupoint positioning module comprises a face-shape definition unit, an image information storage unit, an image information preliminary screening unit, an image information segmentation unit and an acupoint positioning unit. The face-shape definition unit divides the edge contour line of the face into an upper segment, a lower segment and side segments, and defines n1 types of upper segments, n2 types of lower segments and n3 types of side segments to generate N standard face shapes, where N = n1 × n2 × n3.
In this scheme, each standard face shape is divided into 3 parts, and each part is classified and summarised. After all standard face shapes are analysed, n1 types of upper segments, n2 types of lower segments and n3 types of side segments are obtained; selecting any one upper segment, lower segment and side segment from these and combining them yields one standard face shape, so the number of standard face shapes is indeed N = n1 × n2 × n3.
The image information storage unit is configured with a number of face-shape databases; each database corresponds to one standard face shape and stores the standard image information for that face shape. In this scheme, the stored standard image information consists of facial images that have already been annotated, that is, the boundary of each facial-feature region is marked in them. When a patient's facial image is stored, it is classified under the standard face shape it matches.
The image information preliminary screening unit is in signal connection with the image information acquisition module. It acquires the edge contour line of the face in the patient's facial image, calculates the similarity between the patient's upper, lower and side segments and each type of upper, lower and side segment, and screens out the closest type of each, thereby obtaining the face shape of the patient.
When positioning acupoints, similar facial image information should be grouped together to make positioning easier: broadly, the closer two faces are in shape, the closer their facial features and feature characteristics are, so the subsequent feature matching is more accurate and the marking accuracy increases.
Therefore, in this scheme, the image information preliminary screening unit makes a preliminary judgement of the patient's face shape. Specifically, the face shape is judged as follows:
S1: acquire the edge contour line of the face in the facial image.
In practice the patient lies on a plain background sheet, so in the acquired facial image the boundary between the patient's face and the background is clear and the edge contour line can be extracted directly. Alternatively, the edge contour line can be drawn manually. In other embodiments, an editable edge contour is generated automatically for most patients from the difference between skin colour and background colour, and a staff member adjusts it.
S2: divide the edge contour line into an upper segment, a lower segment and side segments.
The upper segment corresponds to the forehead, the lower segment to the chin, and the side segments to the two sides of the face, as shown in fig. 2, where the positions of the three segment types are marked.
The eyes and the corners of the mouth are marked in the facial image and horizontal lines are drawn through them; the contour between the two horizontal lines forms the side segments, the contour above them is the upper segment, and the contour below them is the lower segment.
S3: calculate the similarity between the patient's upper, lower and side segments and each type of upper, lower and side segment, and screen out the closest type of each, thereby obtaining the patient's standard face shape.
Because a standard face shape is determined by its types of upper, lower and side segments, judging the patient's standard face shape only requires computing the similarity between the upper segment of the patient's facial image and the upper segments of all standard face shapes and taking the most similar type, then screening out the most similar lower and side segments in the same way; combining the three screened types gives the patient's standard face shape.
The above is the face-shape matching method; its most critical part is the similarity calculation, whose specific scheme follows.
First, the image information preliminary screening unit places a number of sampling points at equal intervals on the upper segment and fits them to a preset first polynomial f1(x) to obtain the first fitting parameters.
The first polynomial is preset, with expression f1(x) = a1·x^m + a2·x^(m-1) + a3·x^(m-2) + … + am·x, where a1, a2, a3, …, am are the first fitting parameters, x is the variable and m, a positive integer, is the number of terms; for example, m = 3 gives a polynomial with 3 terms. The more terms, the higher the accuracy of the subsequent fit and the larger the amount of computation.
The number of sampling points is also preset; more sampling points mean more computation and a harder fit. For example, 50 sampling points (x, y) are taken; x and y are known for each sampling point, and the fit only has to make f1(x) as close as possible to y for every sampled x.
The upper segment is in fact a curve in a plane. It is moved into a plane rectangular coordinate system, the sampling points on it are selected and retained while the rest is discarded, and the retained sampling points are fitted to the first polynomial f1(x); once the fit is complete, the values of the first fitting parameters a1, a2, a3, …, am can be calculated.
This is how the first fitting parameters are obtained; the subsequent second and third fitting parameters follow the same principle.
When a segment is moved into the plane rectangular coordinate system, its leftmost point is taken as the reference point, and the reference point need only be placed at the same position in the coordinate system for every segment.
Second, the image information preliminary screening unit places a number of sampling points at equal intervals on the lower segment and fits them to a preset second polynomial f2(x) to obtain the second fitting parameters.
The second polynomial is preset, with expression f2(x) = b1·x^j + b2·x^(j-1) + b3·x^(j-2) + … + bj·x, where b1, b2, b3, …, bj are the second fitting parameters, x is the variable and j, a positive integer, is the number of terms.
Third, the image information preliminary screening unit places a number of sampling points at equal intervals on the side segments and fits them to a preset third polynomial f3(y) to obtain the third fitting parameters.
The third polynomial is preset, with expression f3(y) = c1·y^o + c2·y^(o-1) + c3·y^(o-2) + … + co·y, where c1, c2, c3, …, co are the third fitting parameters, y is the variable and o, a positive integer, is the number of terms. The third polynomial differs from the first two in that it takes the ordinate as the independent variable, whereas the first and second polynomials take the abscissa.
In this way, 3 corresponding groups of fitting parameters are obtained, which in effect describe the shape of the edges of the regions in the facial image.
Fourth, the image information preliminary screening unit matches the face shape according to the similarity of the patient's first, second and third fitting parameters to the first, second and third fitting parameters of each standard face shape.
In this scheme a number of standard face shapes are configured in advance, and for each of them the corresponding first, second and third fitting parameters are calculated by the method of the first to third steps, so the similarity of the fitting parameters can serve as the basis of the similarity calculation in S3.
For example, after acquiring the facial image of patient T, the facial contour in the image is divided into an upper, a lower and a side segment, and the first, second and third fitting parameters are calculated for them. The similarity between patient T's first fitting parameters and the first fitting parameters of each standard face shape's upper segment is computed to find the closest upper-segment type, and likewise the most similar lower and side segments are screened out, completing the matching of the standard face shape. The matching of facial images is thus completed with this scheme.
The similarity is calculated as cosine similarity: the first, second and third fitting parameters are each arranged in sequence as a vector, so cosine similarity can be computed directly.
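The fitting and matching steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: numpy's `polyfit` (which includes a constant term, a slight simplification of the expressions above) stands in for the unspecified fitting procedure, and for the side segment the points would be passed as (y, x) so the ordinate acts as the variable:

```python
import numpy as np

def fit_params(points, degree=3):
    """Fit sampled contour points (x, y) to a polynomial y = f(x).
    The coefficient vector plays the role of one group of fitting parameters."""
    pts = np.array(points, dtype=float)
    # Place the leftmost sampling point at the origin (the reference-point rule)
    origin = pts[pts[:, 0].argmin()].copy()
    pts = pts - origin
    return np.polyfit(pts[:, 0], pts[:, 1], degree)

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(patient_params, standard_params_list):
    """Index of the standard segment type most similar to the patient's segment."""
    sims = [cosine_similarity(patient_params, s) for s in standard_params_list]
    return int(np.argmax(sims))
```

Running `best_match` once per segment (upper, lower, side) and combining the three winning types reproduces the screening logic of S3.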
The image information segmentation unit mainly identifies the patient's facial features so as to segment them from the facial image. It comprises a preliminary feature sketcher and an edge sketcher. The preliminary feature sketcher outlines the preliminary boundary of each facial-feature region, the feature regions including the eyes, nose, mouth and so on. This part is marked manually: a staff member marks the approximate edges of features such as the eyes and nose in the image information. Precise outlining is not needed here; only the key regions such as the eyes, nose and mouth need to be roughly delimited.
The edge sketcher then performs the actual edge recognition of the feature regions based on these preliminary boundaries. It extracts key points within each preliminary boundary and compares them with the standard image information in the face-shape database to judge whether they lie on a feature boundary: a key point that matches a feature point in the database with high similarity is taken as an edge point, and one that cannot be so matched is not. All edge points are collected and connected to generate the boundaries of the facial features.
In the scheme provided by the application, the edge sketcher extracts the key points within the preliminary boundary, judges whether each is an edge point, and obtains the edge of the corresponding feature once all edge points are found. The edge sketcher thus outlines the feature-region edges precisely for the subsequent acupoint positioning.
The key points are extracted and matched as follows:
SO1: the edge sketcher obtains the pixel values within each preliminary boundary and converts them to grey values.
This step mainly converts colour image information into grey information; in some embodiments the image can also be filtered with a Gaussian filter.
SO2: a pixel Pi is selected from the preliminary boundary, where i, a positive integer, indexes the pixel, and a circular judgement area of preset radius is generated centred on Pi.
In this scheme, the diameter of the circular judgement area is set to 3 pixels.
SO3: a threshold t is preset, and the grey value of pixel Pi is compared with the grey values of the other pixels in the circular judgement area to judge whether Pi is a candidate corner point; Pi is a candidate corner point if it satisfies either of the following two conditions:
Condition 1: if there are h consecutive pixels in the circular judgement area whose grey values are greater than the grey value of Pi plus the threshold t, then Pi is a candidate corner point;
Condition 2: if there are h consecutive pixels in the circular judgement area whose grey values are less than the grey value of Pi minus the threshold t, then Pi is a candidate corner point.
Screening candidate corner points (extreme points) in fact checks whether pixel Pi has the smallest or largest grey value in the circular area: if the smallest, the grey values of h pixels exceed that of Pi; if the largest, the grey values of h pixels fall below it. In this scheme, to further increase the stability of the candidate corner points, the threshold t is added so that a candidate corner point must differ more markedly from the surrounding pixel values; h is an integer greater than zero.
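The test in SO2–SO3 resembles the FAST corner detector. The sketch below is an illustration under the assumption that the 3-pixel-diameter judgement area is the ring of 8 neighbours around Pi; all names and default values are illustrative, not from the patent:

```python
import numpy as np

def is_candidate_corner(gray, cx, cy, t=10, h=5):
    """Conditions 1 and 2 above: Pi at (cx, cy) is a candidate corner point
    if h consecutive ring pixels are all brighter than centre + t,
    or all darker than centre - t."""
    # The 8-neighbour ring, ordered so consecutive entries are adjacent
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = int(gray[cy, cx])
    ring = [int(gray[cy + dy, cx + dx]) for dy, dx in offsets]
    # Duplicate the ring so runs that wrap around the circle are counted
    brighter = [v > centre + t for v in ring] * 2
    darker = [v < centre - t for v in ring] * 2

    def max_run(bits):
        best = run = 0
        for b in bits:
            run = run + 1 if b else 0
            best = max(best, run)
        return min(best, len(bits) // 2)  # cap at the true ring length

    return max_run(brighter) >= h or max_run(darker) >= h
```

Traversing every pixel of the preliminary boundary with this test yields the candidate corner set A of SO4.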
SO4: all pixels within the preliminary boundary are traversed to obtain the candidate corner set A; each point in A is a key point.
In SO4, non-maximum suppression is performed on each candidate corner point: the candidate is compared with the response values of the surrounding pixels (computable from the grey-level difference), and if its response value is not the local maximum it is removed from the candidate corner set. In this scheme, the response value is the grey value.
SO1 to SO4 complete the collection of the key points; the following steps describe each key point in a suitable way. Specifically:
SO5: a rectangular window is preconfigured; for each key point in the candidate corner set A, the window is centred on the key point and m pairs of pixels are selected at random within it;
For each pair of pixels (p1, p2), their grey values I(p1) and I(p2) are compared; if I(p1) > I(p2) the corresponding descriptor bit is 1, otherwise 0, generating an m-bit binary descriptor, which is the descriptor of the key point.
For ease of understanding: comparing the grey values of each pair of pixels (p1, p2) yields one binary bit (0 or 1), called a "descriptor bit"; collecting all the descriptor bits yields the descriptor.
In this scheme, the rectangular window is preconfigured with a fixed size, so a window can be placed around each candidate corner point and m pairs of pixels selected within it; m may be 128 or 256.
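The descriptor of SO5 resembles a BRIEF descriptor. A sketch follows, assuming m = 128, a 16×16 window, and a fixed random sampling pattern drawn once and shared by all key points (so two key points are comparable bit by bit); these sizes and names are assumptions:

```python
import numpy as np

def brief_like_descriptor(gray, keypoint, pairs):
    """m-bit binary descriptor: bit k is 1 iff I(p1_k) > I(p2_k),
    where the pixel pairs are given as offsets from the key point."""
    y, x = keypoint
    bits = []
    for (dy1, dx1), (dy2, dx2) in pairs:
        bits.append(1 if gray[y + dy1, x + dx1] > gray[y + dy2, x + dx2] else 0)
    return np.array(bits, dtype=np.uint8)

# One fixed sampling pattern, reused for every key point:
rng = np.random.default_rng(0)
m, half = 128, 8                                 # 128 bits, 16x16 window (assumed)
pairs = rng.integers(-half, half, size=(m, 2, 2))
```

Because the pattern is fixed, the descriptor of a patient's key point and the descriptor of a database feature point are built over the same pixel pairs and can be compared directly.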
The key points are then compared with the standard image information in the face-shape database. In the scheme above, all key points within the preliminary boundary are extracted and compared with the corresponding boundary points (feature points) of the feature region in the face-shape database; if a key point's similarity exceeds a preset threshold, it is an edge point of the feature region.
For example, a key point F is extracted from the preliminary eye region of the facial image; if the similarity between F and some feature point of the eye edge in the face-shape database exceeds the preset value, F can be regarded as an edge point, whereas if no feature point in the face-shape database matches F above the preset value, F is not an edge point.
Thus the operation only requires that the facial images in the face-shape database have their corresponding edge contours accurately marked. For the similarity calculation, cosine similarity is chosen in this scheme; the specific formula is not elaborated here.
Of course, when judging whether a key point is an edge point, a count threshold may be set in addition to the similarity threshold. For example, for a key point F extracted from the preliminary eye region of a facial image, it may be required that at least 5 feature points in the face-shape database exceed the similarity threshold before F is recognised as a boundary point.
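The two-threshold decision just described (a similarity threshold plus an optional count threshold) can be sketched as follows; the function names and default values are illustrative assumptions:

```python
import numpy as np

def cos_sim(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def is_edge_point(candidate_desc, db_descs, sim_threshold=0.9, count_threshold=1):
    """A key point counts as an edge point when at least `count_threshold`
    database feature points match it with similarity >= `sim_threshold`."""
    matches = sum(1 for d in db_descs if cos_sim(candidate_desc, d) >= sim_threshold)
    return matches >= count_threshold
```

Raising `count_threshold` (e.g. to 5, as in the example above) trades recall for robustness against spurious single matches.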
The acupoint positioning unit obtains the actual length on the patient's face of each pixel in the facial image and, taking the patient's facial features as references, locates each acupoint by its recorded distance from a feature boundary. For example, if the medical literature records that a certain acupoint lies one inch below the eye, then since this scheme can precisely locate the eye contour and precisely obtain the side length of each pixel, the position one inch below the eye can naturally be located precisely.
The boundaries of the feature regions and the length of each pixel in the facial image have been established in the schemes above, so the acupoints can be positioned according to pre-entered rules; the specific calculations are not listed here.
Through the above scheme, the positions of the acupoints in the patient's facial image and their needling sequence are known exactly.
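The rule-based positioning step can be sketched as follows. It is a minimal illustration with assumed names; the "inch" in the example is assumed to have been converted to millimetres upstream (in acupuncture practice this is typically the proportional "cun" unit, whose conversion the patent does not specify):

```python
def locate_acupoint(anchor_px, offset_mm, scale_mm_per_px):
    """Pixel coordinates of a point at a given physical offset from an
    anchor point on a feature boundary.
    anchor_px: (x, y) pixel position on the feature edge;
    offset_mm: (right, down) physical offset in millimetres."""
    dx_px = offset_mm[0] / scale_mm_per_px
    dy_px = offset_mm[1] / scale_mm_per_px
    return (anchor_px[0] + dx_px, anchor_px[1] + dy_px)

# e.g. 25.4 mm straight below a point on the lower eye boundary,
# with a calibrated scale of 0.1 mm per pixel:
print(locate_acupoint((400, 300), (0.0, 25.4), 0.1))  # (400.0, 554.0)
```

The anchor comes from the feature-edge recognition above and the scale from the marker-line calibration, so the rule "distance d below feature F" reduces to simple pixel arithmetic.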
The acupoint marking module comprises a bracket, lasers and a control unit. The bracket is erected above the patient's face, and several lasers with adjustable angles are mounted on it.
The mounting structure of the lasers and the mechanism for adjusting their angles are prior art and are not described further in this scheme.
After a laser emits its beam, a light spot forms on the patient's face, guiding the staff to needle the patient according to the treatment prescription. The control unit is in signal connection with the acupoint positioning module and the diagnosis and treatment information acquisition module respectively; it acquires the positions of the acupoints on the patient's face and controls the lasers to project light spots onto the corresponding acupoints in the needling sequence recorded by the diagnosis and treatment information acquisition module.
In practice, the laser's angle adjustment is not perfectly accurate, and the patient's face may also shift relative to the laser. The laser therefore first projects its spot onto the patient's face, and whether the angle needs further adjustment is judged from whether the spot lies on the corresponding acupoint.
For this purpose, the following scheme is also provided:
The control unit obtains the acupoint currently to be needled according to the needling sequence, controls the corresponding laser to turn to the angle for that acupoint and emit its beam, judges the position of the spot currently projected on the patient's face relative to the acupoint position, and further adjusts the laser angle according to that relative position until the spot coincides with the acupoint. The acupoint positions are calculated by the acupoint positioning module, and the spot position can be identified directly from its RGB colour; the specific spot-recognition method is not discussed here.
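The spot-to-acupoint adjustment just described is a closed feedback loop. The sketch below illustrates one possible form of it; every interface here (`detect_spot`, `set_angles`, the proportional gain, the assumption that an angle change moves the spot roughly linearly in the image) is an assumption for illustration, not taken from the patent:

```python
def adjust_laser(detect_spot, set_angles, angles, target_px,
                 gain=0.01, tol_px=2.0, max_iters=100):
    """Closed-loop sketch: nudge the laser angles until the detected spot
    lies on the target acupoint.
    detect_spot() -> (x, y) of the spot in the camera image
    (e.g. found by RGB thresholding); set_angles applies (pan, tilt)."""
    for _ in range(max_iters):
        sx, sy = detect_spot()
        ex, ey = target_px[0] - sx, target_px[1] - sy
        if (ex * ex + ey * ey) ** 0.5 <= tol_px:
            return True  # spot coincides with the acupoint
        # Proportional correction toward the target
        angles = (angles[0] + gain * ex, angles[1] + gain * ey)
        set_angles(angles)
    return False
```

Because the loop keeps re-reading the camera, it also compensates for small shifts of the patient's face between iterations.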
While a laser is being adjusted, the control unit sets its spot colour to green; when the spot coincides with the acupoint position, the spot colour is switched to red.
The scheme provided by the application thus prevents a staff member from mistaking the spot's current position for the acupoint position while the laser is still being adjusted, which would misplace the needle and impair the treatment effect. The positioning and guidance system for facial acupuncture provided by the application can therefore guide the acupuncture process accurately.
In a further scheme, the control unit uses the laser colour to remind the staff to withdraw the needle, based on the pre-recorded needle retention time.
For example, if the treatment prescription records that acupoint No. 1 is to be needled for 50 seconds, then after the laser spot has illuminated acupoint No. 1 for 50 seconds the laser immediately switches to yellow light and an audible alarm sounds; the staff member can then withdraw the corresponding needle accurately according to the prompt.
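The timed sequence through the prescription can be sketched as one driver loop. The hardware callbacks (`set_spot`, `set_color`, `alarm`) and the prescription tuple format are assumed interfaces for illustration only:

```python
import time

def run_needling_sequence(prescription, set_spot, set_color, alarm,
                          clock=time.monotonic, sleep=time.sleep):
    """Drive the guidance sequence: point the spot at each acupoint in
    order, hold for its retention time, then flash yellow and sound the
    alarm so the needle can be withdrawn on time.
    prescription: list of (acupoint, retention_seconds, interval_seconds)."""
    for point, retention_s, interval_s in prescription:
        set_spot(point)        # laser guided onto the acupoint
        set_color("red")       # red = spot is on target
        end = clock() + retention_s
        while clock() < end:
            sleep(0.01)
        set_color("yellow")    # retention time is up
        alarm()                # audible prompt to withdraw the needle
        sleep(interval_s)      # wait out the prescribed interval
```

Injecting `clock` and `sleep` keeps the sketch testable without real-time waits.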
This scheme thus guides the entire needling process, prevents staff errors during needling, and increases needling accuracy.
The above description covers only a few preferred embodiments of the present application and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the application is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the application, such as solutions in which the above features are replaced with technical features of similar function disclosed in the embodiments of the present application.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411552600.9A CN119074532B (en) | 2024-11-01 | 2024-11-01 | Facial acupuncture positioning guidance system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119074532A true CN119074532A (en) | 2024-12-06 |
CN119074532B CN119074532B (en) | 2025-02-07 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |