
CN119074532A - Facial acupuncture positioning guidance system - Google Patents


Info

Publication number
CN119074532A
CN119074532A (application CN202411552600.9A)
Authority
CN
China
Prior art keywords
acupuncture
facial
patient
points
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411552600.9A
Other languages
Chinese (zh)
Other versions
CN119074532B (en)
Inventor
王康
谭新华
彭中娟
吴其荣
曹正柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Puzhuo Medical Equipment Co ltd
Second Affiliated Hospital to Nanchang University
Original Assignee
Jiangxi Puzhuo Medical Equipment Co ltd
Second Affiliated Hospital to Nanchang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Puzhuo Medical Equipment Co ltd, Second Affiliated Hospital to Nanchang University filed Critical Jiangxi Puzhuo Medical Equipment Co ltd
Priority to CN202411552600.9A priority Critical patent/CN119074532B/en
Publication of CN119074532A publication Critical patent/CN119074532A/en
Application granted granted Critical
Publication of CN119074532B publication Critical patent/CN119074532B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 39/00 Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
    • A61H 39/02 Devices for locating such points
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 39/00 Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
    • A61H 39/08 Devices for applying needles to such points, i.e. for acupuncture; Acupuncture needles or accessories therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Rehabilitation Therapy (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Urology & Nephrology (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Primary Health Care (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application belongs to the technical field of acupuncture treatment equipment and discloses a positioning and guiding system for facial acupuncture. The system comprises a diagnosis and treatment information acquisition module, an image information acquisition module, an acupoint positioning module and an acupoint marking module. The diagnosis and treatment information acquisition module acquires the patient's treatment prescription to obtain the facial acupoints that need to be needled; the image information acquisition module acquires a facial image of the patient before needling; the acupoint positioning module compares this facial image with facial images in a database on which the acupoints have already been marked, thereby obtaining the positions of the facial acupoints to be needled; the acupoint marking module, after acquiring these positions, sequentially projects light beams that produce visible spots on the patient's face onto the facial acupoints to guide the acupuncture treatment. The technical scheme provided by the application ensures that the needle can be accurately placed at the corresponding acupoint during needle application, enhancing the treatment effect.

Description

Positioning and guiding system for facial acupuncture
Technical Field
The application relates to the technical field of acupuncture treatment, in particular to a positioning and guiding system for facial acupuncture.
Background
Peripheral facial paralysis, also called facial neuritis or idiopathic facial paralysis, is a disorder caused by nonspecific inflammation of the facial nerve on one side; it is mainly marked by impaired function of the muscles of facial expression, such as disappearance or shallowing of the facial lines and distortion of the mouth angle.
Acupoint stimulation is an effective treatment for facial paralysis. The acupoints can be stimulated with acupuncture needles or, alternatively, with electrical stimulation. In either case, the positions of the patient's facial acupoints must be located accurately.
In clinical practice, a senior doctor typically writes an acupoint stimulation prescription for the patient, and a junior doctor then administers the acupuncture treatment according to that prescription. Because each patient's facial conditions differ and the junior doctor is limited by experience, the corresponding acupoints may not be found accurately for needling, so the treatment effect is often unsatisfactory.
Disclosure of Invention
The summary of the application is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. The summary of the application is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
As a first aspect of the present application, in order to solve the technical problem of inaccurate acupoint positioning during needle application, the present application provides a positioning and guiding system for facial acupuncture, comprising:
the diagnosis and treatment information acquisition module acquires a treatment prescription of a patient to obtain facial acupoints of the patient needing needle application;
the image information acquisition module acquires a facial image of a patient before needle application;
the acupuncture point positioning module is used for comparing the facial image before the needle application of the patient with the facial image marked with the acupuncture points in the facial form database to obtain the position information of the facial acupuncture points of the patient needing the needle application;
The acupuncture point marking module acquires the position information of the face acupuncture points of the patient needing to be needled, and then irradiates laser with the wavelength of visible light to the face acupuncture points in sequence to guide acupuncture treatment.
To address the problem of limited curative effect caused by inaccurate acupoint positioning in acupuncture treatment, the application first uses the diagnosis and treatment information acquisition module to obtain the acupoints that need to be needled, and then uses the acupoint positioning module to match the patient's image against facial images in the database that carry preset acupoint marks, thereby determining the exact positions of the relevant acupoints on the current patient. The acupoint marking module then emits a laser and projects indicating light spots onto the patient's face, so that the practitioner can accurately find the corresponding acupoints when applying the needles, improving the treatment effect.
Current acupoint guidance methods are limited to providing approximate acupoint charts or video guidance to help medical staff identify acupoint positions; in actual operation, the needling order and the specific manipulation of each acupoint still have to be judged by the medical staff, which often leads to inaccurate needling and in turn affects the treatment effect. To address this problem, the application provides the following technical scheme:
the diagnosis and treatment information acquisition module comprises:
The prescription information input unit is used for inputting a treatment prescription of a patient, reading acupoints, the needle application sequence of the acupoints and the interval time of needle application from the treatment prescription;
The input unit is used for adjusting the acupuncture points required to be needled, the needling sequence of the acupuncture points and the needling interval time;
The acupoint marking module marks corresponding acupoints by using light spots in sequence according to the acupoint needle application sequence and the interval time of needle application.
According to the technical scheme provided by the application, the acupoints required by the needle application, the needle application sequence of each acupoint and the interval time can be automatically obtained according to a preset treatment prescription before treatment. In addition, the scheme is also provided with an input unit, so that medical staff can flexibly adjust the parameters according to actual conditions. Therefore, the follow-up acupuncture point marking module can accurately emit facula marks according to the set sequence and time interval, guide medical staff to accurately and timely perform needle application, and reduce the influence of acupuncture point positioning errors, medical staff needle insertion errors and needle insertion time errors on the treatment effect.
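As an illustration of how the information handled by these units might be organized, the following is a minimal Python sketch of a treatment-prescription record. The field names (acupoint, order, interval, retention time) are hypothetical; the patent does not prescribe any particular data format or HIS interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AcupointStep:
    """One needling step read from the treatment prescription (illustrative fields)."""
    acupoint: str        # e.g. "Yangbai (GB14)" -- example value, not taken from the patent
    order: int           # position in the needling sequence
    interval_s: float    # waiting time before this needle is applied, in seconds
    retention_s: float   # how long the needle stays in place, in seconds

@dataclass
class TreatmentPrescription:
    patient_id: str
    steps: List[AcupointStep] = field(default_factory=list)

    def sequence(self) -> List[AcupointStep]:
        """Return the steps sorted by the prescribed needling order."""
        return sorted(self.steps, key=lambda s: s.order)
```

Under this sketch, the acupoint marking module would simply iterate over `prescription.sequence()` to drive the order and timing of the light spots.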
Acupoint recognition essentially relies on locating a point at a prescribed distance from a facial feature; for example, the Yangbai acupoint is located 1 cun above the eyebrow. This process therefore requires both accurate recognition of the facial features and accurate calculation of the distance between those features and the standard acupoint location. Current automatic acupoint recognition techniques mostly rely on facial feature similarity matching, for example comparing the patient's facial image with images in a database on which the Yangbai acupoint has been labeled, to find the region with the closest texture. However, this approach over-emphasizes local feature similarity and ignores the differences in face shape and facial texture between patients, so directly locating acupoints by similarity produces large errors. To address these problems, the application provides the following technical scheme:
Further, the image information acquisition module includes:
a camera for acquiring a facial image of a patient;
a marker line, attached straight to the patient's face; the length of the marker line is recorded in the acupoint positioning module in advance.
In the technical scheme provided by the application, a marker line placed on the patient's face provides a length reference for the acquired image information. When the acupoints are identified, the five sense organs can therefore serve as stable reference points, and the accurate scale information provided by the marker line is combined with them to identify and locate the acupoints, which significantly improves the positioning accuracy of the acupoints.
In order to further increase the positioning accuracy of the acupuncture points, the application provides the following technical scheme:
The acupuncture point positioning module comprises:
A face definition unit which defines a standard face of a plurality of face categories in advance;
An image information storage unit configured with a plurality of face databases, each face database corresponding to a standard face, each face database storing standard image information corresponding to the standard face;
the image information primary screening unit is in signal connection with the image information acquisition module and acquires a face image of a patient, and the standard face shape of the patient is acquired according to the face image;
an image information dividing unit for screening a face database according to the standard face of the patient, and dividing the face image into corresponding five sense organs based on the standard image information in the face database;
the acupoint positioning unit is used for acquiring the actual length of each pixel grid in the face image on the face of the patient, taking the five sense organs of the patient as a reference, and positioning the positions of the acupoints according to the length of the acupoints from the boundary of the five sense organs.
The technical scheme provided by the application does not simply compare the patient's facial image directly with database images to find the most similar face and thereby complete the acupoint positioning. Instead, the positions of the patient's five sense organs are captured accurately first, and the acupoints are then located precisely relative to those positions according to established acupoint positioning rules. Compared with the traditional method that relies on acupoint similarity matching, this scheme markedly improves positioning accuracy and achieves more precise and effective acupoint recognition. It also reduces the difficulty of collecting and labeling the standard image information: in existing schemes the acupoints must be marked on every face image in the standard image information so that the recognition system can identify them, whereas in this scheme only the boundaries and contours of the five sense organs need to be marked, which clearly lowers the labeling burden. In addition, the scheme avoids errors caused by differing scales. Because the collected standard image information cannot all be acquired at the same scale, acupoint positioning performed directly on it would be subject to error since its scale is unclear; in this scheme the standard image information is used only to obtain the boundaries of the five sense organs in the face image, and lengths in the face image are converted using the marker line, which improves the accuracy of acupoint positioning.
Further, the image information preliminary screening unit obtains an edge contour line of a face in a face image of the patient, calculates similarity between an upper part line segment, a lower part line segment and a side part line segment of the patient and each type of the upper part line segment, the lower part line segment and the side part line segment, screens out the type of the upper part line segment, the type of the lower part line segment and the type of the side part line segment which are closest to the patient, and accordingly obtains the face type of the face of the patient.
In the technical scheme provided by the application, the patient's face shape is determined by directly comparing the facial contour from three different directions. This allows the patient's face to be matched accurately to the nearest standard face shape, providing a more accurate reference for subsequent acupoint positioning and significantly improving positioning accuracy.
In the face shape recognition process, accurately matching the patient's face actually depends on a moderately fuzzy matching strategy. At present, edge contour matching generally relies on neural network models, which provide high matching accuracy but consume considerable resources. In practice, given that every face is unique, excessively precise matching does not significantly improve accuracy and may instead cause the model to over-fit and lose generalization ability. Alternatively, when the Hausdorff distance is used for similarity matching, accurate alignment can be difficult to achieve if the precision is set incorrectly. Another common approach is to compute the minimum bounding rectangle (MBR) of the contour and match by comparing the size, position and orientation of the bounding box; although this approach is computationally cheap, it ignores the fine shape of the contour edges and therefore lacks accuracy for face matching.
To address the difficulty that existing face matching techniques cannot balance simplicity and accuracy, the application provides a technical scheme intended to match the patient's face shape in a more efficient and accurate way.
The image information preliminary screening unit sets a number of sampling points at equal intervals on the upper segment and fits them to a preset first polynomial function to obtain the first fitting parameters;
the image information preliminary screening unit sets a number of sampling points at equal intervals on the lower segment and fits them to a preset second polynomial function to obtain the second fitting parameters;
the image information preliminary screening unit sets a number of sampling points at equal intervals on the side segments and fits them to a preset third polynomial function to obtain the third fitting parameters;
the image information preliminary screening unit matches the face shape according to the similarity between the first, second and third fitting parameters and the first, second and third fitting parameters of each standard face shape.
In the technical scheme provided by the application, the number of sampling points can be adjusted flexibly, so the amount of computation can be managed and computing resources allocated according to actual needs. With a suitably chosen number of sampling points, the scheme guarantees the accuracy of face matching and achieves precise alignment. More importantly, evaluating similarity on the fitting parameters of the sampling points essentially matches the fine undulations of the contour edges, which markedly improves accuracy when matching facial contours. The scheme therefore uses a simple and efficient method to accurately match the patient's face shape while keeping the computational complexity under control.
Each patient's facial features are unique, and facial paralysis patients often show atypical changes in facial morphology, which greatly increases the difficulty of accurately locating acupoints from the existing facial features in the database. To overcome this challenge, the application proposes the following solution:
The image information dividing unit includes:
the five sense organs preliminary sketching device is used for sketching the preliminary boundary of the five sense organs region, and the five sense organs region comprises eyes and nose;
The edge sketching device is used for extracting key points in the preliminary boundary, comparing the key points with standard image information in the facial form database, judging whether the key points are boundaries of the five sense organs area, if so, recording the key points as edge points, and if not, deleting the key points;
all edge points are collected and connected to generate boundaries of the five sense organ regions.
The technical scheme provided by the application abandons the traditional method of directly comparing the patient's facial image with images in the face shape database. Instead, the edge sketcher extracts key points of the face, compares them with the standard image information in the database, and judges whether each key point lies on the boundary of a five sense organ region. The key points that belong to the boundary are then connected in order, precisely outlining the contour of the five sense organs. The key advantage of this solution is that it exploits the stability of the key points: they remain stable under different illumination, rotation angles and scale changes, and they essentially correspond to the edge regions where the facial features are most salient, so the five sense organ boundaries can be located efficiently and accurately by analyzing only these key points, and the matching accuracy is significantly improved. This scheme effectively solves the acupoint positioning problem caused by the facial deformation of facial paralysis patients and provides strong support for precise treatment.
Further, the acupoint marking module includes:
the angle-adjustable lasers are used for generating laser with the wavelength in the visible light frequency band;
the control unit is respectively connected with the acupuncture point positioning module and the diagnosis and treatment information acquisition module in a signal manner, acquires the positions of all the acupuncture points on the face of the patient, and controls the laser to project the light spots to the corresponding acupuncture points according to the needle application sequence of all the acupuncture points recorded by the diagnosis and treatment information acquisition module so as to form the light spots.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this specification. The drawings and their description are illustrative of the application and are not to be construed as unduly limiting the application.
In addition, the same or similar reference numerals denote the same or similar elements throughout the drawings. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
In the drawings:
Fig. 1 is a schematic view of a positioning and guiding system for facial acupuncture.
Fig. 2 is a schematic diagram of the division of the edge contour of a face.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the application have been illustrated in the accompanying drawings, it is to be understood that the application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the application are for illustration purposes only and are not intended to limit the scope of the present application.
It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings. Embodiments of the application and features of the embodiments may be combined with each other without conflict.
The application will be described in detail below with reference to the drawings in connection with embodiments.
Embodiment 1 referring to fig. 1, a positioning and guiding system for facial acupuncture includes a diagnosis and treatment information acquisition module, an image information acquisition module, an acupuncture point positioning module, and an acupuncture point marking module. The acupoint marking module is respectively in signal connection with the diagnosis and treatment information acquisition module and the acupoint positioning module, and the image information acquisition module is in signal connection with the acupoint positioning module.
The system comprises a diagnosis and treatment information acquisition module, an image information acquisition module, an acupuncture point positioning module and an acupuncture point marking module, wherein the diagnosis and treatment information acquisition module is used for acquiring a treatment prescription of a patient to obtain facial acupuncture points of the patient to be subjected to acupuncture, the image information acquisition module is used for acquiring facial images of the patient before the patient is subjected to acupuncture, the acupuncture point positioning module is used for comparing the facial images of the patient before the patient is subjected to acupuncture with facial images marked with the acupuncture points in a facial form database to obtain position information of the facial acupuncture points of the patient to be subjected to acupuncture, and the acupuncture point marking module is used for acquiring the position information of the facial acupuncture points of the patient to be subjected to acupuncture, and then laser with visible light frequency is sequentially irradiated to the facial acupuncture points to guide acupuncture treatment.
The above general technical solution of the positioning and guiding system for facial acupuncture provided by the present application is further described in the following technical solutions:
the diagnosis and treatment information acquisition module is connected with the HIS system of the hospital to acquire the treatment prescription of the patient, or the treatment prescription of the patient is obtained through a manual input mode. The treatment prescription in the scheme is a treatment mode of acupuncture and moxibustion, and comprises acupuncture points needing to be applied with needles, the sequence of the needles applied with the needles and the interval time of the needles applied with the needles.
Furthermore, the diagnosis and treatment information acquisition module comprises a prescription information input unit and an input unit. The prescription information input unit is used for entering the patient's treatment prescription and reading from it the acupoints to be needled, the needling order of the acupoints and the needling interval time; the input unit is used for adjusting the acupoints to be needled, the needling order and the needling interval time.
The diagnosis and treatment information acquisition module is essentially an information processing terminal, such as a computer and a mobile phone. The staff can download the treatment prescriptions in the HIS system of the hospital by using the diagnosis and treatment information acquisition module or directly input the treatment prescriptions by using the input unit. The specific input mode of the treatment prescription and the reading mode of the treatment prescription are the prior art, and the application is not further described.
And the image information acquisition module acquires a facial image of the patient before needle application. Specifically, the patient may lie in a designated position prior to needle application, at which time a moving camera or a fixed camera may be used to capture an image of the patient's face. In practice, the face image is image information for which face range division has been completed. The approach of extracting facial images along the facial edge contours is prior art and will not be described here.
The image information acquisition module comprises a camera and a marker line. The camera is fixed on a frame, or held by a staff member; whichever way the camera is mounted, in this scheme it must capture a frontal facial image of the patient. The marker line is a line of fixed length that is straightened and attached to the patient's face when in use. It must be placed away from the acupoints that need to be needled, so that it does not interfere with recognition of the acupoints in that area.
Using the image information acquisition module, a frontal facial image of the patient can be acquired that contains a marker line of pre-recorded length. For example, if the marker line is 1 cm long and occupies 1000 pixel grids in the facial image, each pixel grid corresponds to 0.01 mm on the patient's face; the distance between any two points on the patient's face can then be calculated simply by counting how many pixel grids separate them.
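This scale calibration can be expressed in a few lines. The following Python sketch assumes the marker line's pixel length has already been measured in the image; the helper names are illustrative only.

```python
import math

def mm_per_pixel(marker_length_mm: float, marker_pixel_count: float) -> float:
    """Physical length represented by one pixel, derived from the marker line of known length."""
    return marker_length_mm / marker_pixel_count

def face_distance_mm(p1, p2, scale_mm_per_px: float) -> float:
    """Distance between two facial points given in pixel coordinates."""
    return math.dist(p1, p2) * scale_mm_per_px

# Example matching the text: a 1 cm (10 mm) marker line spanning 1000 pixels
scale = mm_per_pixel(10.0, 1000)                          # 0.01 mm per pixel
print(face_distance_mm((120, 340), (520, 340), scale))    # 400 px apart -> 4.0 mm
```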
In some embodiments, in order to reduce the calculation amount, the acupoint positioning module acquires the acupoints requiring needle application in the diagnosis and treatment information acquisition module in advance, and only relevant acupoints are positioned in the acupoint positioning module, so that the calculation amount is reduced, and the feedback efficiency of the system is improved.
The acupoint positioning module comprises a face shape definition unit, an image information storage unit, an image information preliminary screening unit, an image information segmentation unit and an acupoint positioning unit. The face shape definition unit divides the edge contour line of the face into an upper segment, a lower segment and side segments, and defines n1 types of upper segments, n2 types of lower segments and n3 types of side segments, generating standard face shapes for N face categories, where N = n1 × n2 × n3.
In this scheme, each standard face shape is divided into three parts and each part is classified and summarized, so after all standard face shapes have been analyzed, n1 types of upper segments, n2 types of lower segments and n3 types of side segments are obtained. Choosing any one upper segment, one lower segment and one side segment from these types and combining them gives one standard face shape, so the number of standard face shapes is in fact N = n1 × n2 × n3.
And the image information storage unit is configured with a plurality of face type databases, each face type database corresponds to one standard face type, and each face type database stores standard image information corresponding to the standard face type. In this scheme, the standard image information stored in the image information storage unit is a face image that has been marked, that is, a boundary of each five-sense organ region is marked therein. When storing the facial image of the patient, the facial image is classified according to the facial category of the standard face in this scheme.
The image information primary screening unit is in signal connection with the image information acquisition module, acquires the edge contour line of the face in the face image of the patient, calculates the similarity between the upper part line segment, the lower part line segment and the side part line segment of the patient and each type of the upper part line segment, the lower part line segment and the side part line segment, screens the type of the upper part line segment, the type of the lower part line segment and the type of the side part line segment which are closest to the patient, and accordingly obtains the face type of the face of the patient.
When positioning the acupoints, grouping similar facial image information together makes the positioning easier. Generally, the closer two people's face shapes are, the closer the positions and characteristics of their five sense organs are, so the subsequent matching of the five sense organs is more accurate and the marking accuracy is increased.
Therefore, in the scheme, the image information preliminary screening unit can preliminarily judge the face shape of the patient. Specifically, the method for judging the face shape is as follows:
s1, acquiring edge contour lines of a face in a face image.
In practice, the patient lies on a sheet of a single background color, so in the acquired face image the boundary between the patient's face and the background is clear and the edge contour line can be obtained directly. The edge contour line can also be drawn manually. In other embodiments, an editable edge contour can be generated automatically for most patients based on the difference between skin color and background color, and the staff then adjust the contour.
S2, dividing the edge contour line into an upper line segment, a lower line segment and a side line segment.
The upper line segment corresponds to the forehead, the lower line segment corresponds to the chin, and the side line segments correspond to the two sides of the face, as shown in fig. 2, and the corresponding positions of the upper line segment, the lower line segment, and the side line segments are marked in fig. 2.
The eye corners and mouth corners are marked in the facial image and a horizontal line is drawn through each. The part of the contour between the two horizontal lines forms the side segments, the contour above the upper horizontal line is the upper segment, and the contour below the lower horizontal line is the lower segment.
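A minimal sketch of this contour split, assuming the contour is available as a list of (x, y) pixel points and that the y-coordinates of the two horizontal lines are already known (image y grows downward); the function name and signature are illustrative.

```python
def split_contour(contour, y_eye_line, y_mouth_line):
    """Split a facial edge contour into upper, side and lower segments.

    contour: iterable of (x, y) pixel points on the facial edge
    y_eye_line / y_mouth_line: y-coordinates of the horizontal lines through
    the eye corners and the mouth corners (image y grows downward).
    """
    upper = [p for p in contour if p[1] < y_eye_line]                    # above the eye line: forehead
    side = [p for p in contour if y_eye_line <= p[1] <= y_mouth_line]    # between the lines: cheeks
    lower = [p for p in contour if p[1] > y_mouth_line]                  # below the mouth line: chin
    return upper, side, lower
```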
And S3, performing similarity calculation on the upper part line segment, the lower part line segment and the side part line segment of the patient and the upper part line segment, the lower part line segment and the side part line segment of each type respectively, and screening out the type of the upper part line segment, the type of the lower part line segment and the type of the side part line segment which are closest to the patient, thereby obtaining the standard face of the patient.
Because a standard face shape is defined by its type of upper segment, lower segment and side segment, judging a patient's standard face shape only requires computing the similarity between the upper segment in the patient's facial image and the upper segments of all standard face shapes and selecting the most similar upper segment type, then doing the same for the lower segment and the side segment; combining the three selected types gives the patient's standard face shape.
The above is a matching method of the face shape of the patient, the most critical part in the matching process is similarity calculation, and the following is a specific scheme of similarity calculation.
In the first step, the image information preliminary screening unit sets a number of sampling points at equal intervals on the upper segment and fits them to a preset first polynomial function f1(x) to obtain the first fitting parameters.
The first polynomial function is preset, with the expression f1(x) = a1·x^m + a2·x^(m-1) + a3·x^(m-2) + … + am·x, where a1, a2, a3, …, am are the first fitting parameters, x is the variable and m is the number of terms of the polynomial, m being a positive integer. If m = 3, the polynomial has 3 terms; the more terms, the higher the accuracy of the subsequent fit and the larger the amount of computation.
The number of sampling points is also preset; the more sampling points, the larger the amount of computation and the harder the calculation. For example, 50 sampling points (x, y) are taken; x and y are known for every sampling point, and the goal of the fit is that, for each sampled x substituted into f1(x), the value f1(x) is as close to y as possible.
The upper segment is in fact a curve segment in a plane. It is moved into a planar rectangular coordinate system, the sampling points are selected on it and retained while the rest is discarded, and the retained sampling points are fitted to the first polynomial function f1(x). Once the fit is complete, the values of the first fitting parameters a1, a2, a3, …, am in f1(x) can be calculated.
The above is the manner of obtaining the first fitting parameters. The principle of the subsequent second fitting parameter and the third fitting parameter is the same.
When a segment is moved into the planar rectangular coordinate system, its leftmost point is taken as the reference point; all that is required is that this reference point be placed at the same position in the coordinate system for every segment.
In the second step, the image information preliminary screening unit sets a number of sampling points at equal intervals on the lower segment and fits them to a preset second polynomial function f2(x) to obtain the second fitting parameters.
The second polynomial function is preset, with the expression f2(x) = b1·x^j + b2·x^(j-1) + b3·x^(j-2) + … + bj·x, where b1, b2, b3, …, bj are the second fitting parameters, x is the variable and j is the number of terms of the polynomial, j being a positive integer.
In the third step, the image information preliminary screening unit sets a number of sampling points at equal intervals on the side segments and fits them to a preset third polynomial function f3(y) to obtain the third fitting parameters.
The third polynomial function is preset, with the expression f3(y) = c1·y^o + c2·y^(o-1) + c3·y^(o-2) + … + co·y, where c1, c2, c3, …, co are the third fitting parameters, y is the variable and o is the number of terms of the polynomial, o being a positive integer. The third polynomial differs from the first and second in that it takes the ordinate y as the independent variable, whereas the first and second polynomials take the abscissa x as the independent variable.
In the manner described above, three groups of fitting parameters are obtained for each face; together they describe how the shape of each edge region of the facial image varies.
And step four, the image information preliminary screening unit matches the face shape according to the similarity of the first fitting parameter, the second fitting parameter and the third fitting parameter with the first fitting parameter, the second fitting parameter and the third fitting parameter of each standard face shape.
In this scheme, a plurality of standard face shapes are configured in advance, and for each standard face shape, the corresponding first fitting parameter, second fitting parameter and third fitting parameter are calculated by the method of the first step to the third step, so that the similarity of the first fitting parameter, the second fitting parameter and the third fitting parameter can be used as the calculation basis of the similarity in S3.
For example, the facial image of patient T is acquired and its facial contour is divided into an upper segment, a lower segment and side segments; the first fitting parameters of the upper segment, the second fitting parameters of the lower segment and the third fitting parameters of the side segments are then calculated. The similarity between patient T's first fitting parameters and the first fitting parameters of the upper segment of each standard face shape is computed to find the closest upper segment type, and the lower segment and side segment with the highest similarity are selected in the same way, completing the matching of the standard face shape. Matching of the facial image is thus completed with this scheme.
The similarity is calculated using cosine similarity: the first, second or third fitting parameters are arranged in order as a vector, and the cosine similarity between the patient's vector and the standard face shape's vector is computed.
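A possible NumPy sketch of steps one to four is shown below. Note that np.polyfit fits a polynomial that includes a constant term, which differs slightly from the term structure given above, and the equal spacing here is taken along the sampled point list rather than along arc length; both are simplifying assumptions, and the function names are illustrative.

```python
import numpy as np

def fit_params(points, degree=3, use_y_as_variable=False, n_samples=50):
    """Resample a contour segment and fit a polynomial, returning its coefficients."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.linspace(0, len(pts) - 1, n_samples).astype(int)]  # roughly equally spaced samples
    x, y = pts[:, 0], pts[:, 1]
    if use_y_as_variable:            # third function: the ordinate is the independent variable
        return np.polyfit(y, x, degree)
    return np.polyfit(x, y, degree)  # first and second functions: abscissa as the variable

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_segment(patient_params, standard_params_by_type):
    """Return the segment type whose fitting parameters are most similar to the patient's."""
    return max(standard_params_by_type,
               key=lambda t: cosine_similarity(patient_params, standard_params_by_type[t]))
```

Running match_segment once each for the upper, lower and side segments and combining the three winning types yields the patient's standard face shape, as described in the example above.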
The image information segmentation unit is mainly used to identify the patient's five sense organs so as to segment them from the facial image. It comprises a preliminary five sense organ sketcher and an edge sketcher. The preliminary sketcher outlines the preliminary boundaries of the five sense organ regions, which include the eyes, nose, mouth and so on. This step is performed manually: the staff mark the approximate edges of the five sense organs such as the eyes and nose in the image information; precise delineation is not needed, only a rough indication of the key regions such as the eyes, nose and mouth.
The edge sketcher then performs the actual edge recognition of the five sense organ regions based on these preliminary boundaries. It extracts key points within each preliminary boundary and compares them with the standard image information in the face shape database to judge whether each key point lies on the boundary of a five sense organ region: if the key point matches a feature point in the face shape database with high similarity, it is kept as an edge point; if no such match exists, it is not treated as an edge point. All edge points are then collected and connected to generate the boundaries of the five sense organ regions.
In the scheme provided by the application, the edge sketcher can extract the key points in the preliminary boundary, then judge whether the key points are edge points or not, and obtain the edges of the corresponding five sense organs after obtaining all the edge points. The edge sketcher is used for accurately sketching the edge of the five sense organs area for subsequent acupoint positioning.
The extraction method of the key points and the matching method of the key points comprise the following steps:
SO1, the edge sketcher obtains the pixel value in each preliminary boundary and converts the pixel value into a gray value.
This step is mainly to convert the image information with color into gray information, which in some embodiments can also be filtered with a gaussian filter.
SO2, a pixel Pi is randomly selected from the preliminary boundary, where i is the index of the pixel and i is a positive integer, and a circular judgment area of preset radius is generated with pixel Pi as its center.
In this scheme, the diameter of the circular determination area is set to 3 pixel points.
SO3, a threshold t is preset, and the gray value of pixel Pi is compared with the gray values of the other pixels in the circular judgment area to decide whether Pi is a candidate corner point; Pi is a candidate corner point if either of the following two conditions is met:
the first condition is that there are h consecutive pixels in the circular judgment area whose gray values are greater than the gray value of Pi plus the threshold t;
the second condition is that there are h consecutive pixels in the circular judgment area whose gray values are less than the gray value of Pi minus the threshold t.
Screening candidate corner points (extreme points) in fact checks whether pixel Pi is the darkest or the brightest point in the circular area: if Pi has the smallest gray value, the gray values of h consecutive pixels will be larger than that of Pi, and conversely if it has the largest. In this scheme, to further increase the stability of the candidate corners, the threshold t is added so that a candidate corner must differ even more from the surrounding pixel values; h is an integer greater than zero.
And SO4, traversing all pixel points in the preliminary boundary to obtain a candidate corner set A, wherein each point in the candidate corner set A is a key point.
In SO4, non-maximal suppression is performed for each candidate corner. That is, the candidate corner is compared with the response values of the surrounding pixels (the response values can be calculated according to the gray level difference), and if the response value of the candidate corner is not the local maximum value, the candidate corner is removed from the candidate corner set. In this scheme, the response value is the gray value.
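A minimal sketch of the candidate-corner test in SO2 to SO4 on a grayscale NumPy image. The ring of offsets approximating the circular judgment area and the default values of h and t are illustrative assumptions, and the pixel (r, c) is assumed to lie at least one pixel away from the image border.

```python
import numpy as np

# Offsets approximating a small circle around the center pixel (illustrative ring)
RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def is_candidate_corner(gray, r, c, t=10, h=5):
    """Return True if pixel (r, c) satisfies either brightness condition of SO3.

    gray is a 2-D uint8 array; (r, c) must be at least one pixel inside the border.
    """
    center = int(gray[r, c])
    ring = [int(gray[r + dr, c + dc]) for dr, dc in RING]
    ring = ring + ring  # wrap around so runs crossing the start of the ring are counted

    def longest_run(cond):
        best = run = 0
        for v in ring:
            run = run + 1 if cond(v) else 0
            best = max(best, run)
        return best

    brighter = longest_run(lambda v: v > center + t)  # condition one
    darker = longest_run(lambda v: v < center - t)    # condition two
    return brighter >= h or darker >= h
```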
Steps SO1 to SO4 complete the collection of the key points; the following steps describe each key point in a suitable way, specifically:
SO5, pre-configuring a rectangular window, and for each key point in the candidate corner point set A, determining the rectangular window by taking the key point as a center, and randomly selecting m pairs of pixel points in the rectangular window;
For each pair of pixels (p1, p2), their gray values I(p1) and I(p2) are compared; if I(p1) > I(p2), the corresponding descriptor bit is 1, otherwise 0, generating an m-bit binary descriptor, which is the descriptor of the key point.
For ease of understanding: comparing the gray values of each pair of pixels (p1, p2) yields one binary bit (0 or 1), called a descriptor bit, and collecting all the descriptor bits yields the descriptor.
In this scheme, the rectangular window is preconfigured and its size is fixed. Because the size is fixed, a rectangular window can be placed with each candidate corner as its center and m pairs of pixels selected inside it; m may be 128 or 256.
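A sketch of the descriptor generation in SO5. The window size, the fixed random seed used to pre-generate the m pixel pairs, and the function names are assumptions for illustration; the key point is assumed to lie far enough from the image border that the whole window fits.

```python
import numpy as np

def make_pair_pattern(window=15, m=128, seed=0):
    """Pre-generate m random pixel pairs inside a window x window patch (offsets from the center)."""
    rng = np.random.default_rng(seed)
    half = window // 2
    return rng.integers(-half, half + 1, size=(m, 2, 2))  # m pairs of (dr, dc) offsets

def binary_descriptor(gray, keypoint, pattern):
    """m-bit descriptor: bit k is 1 if I(p1) > I(p2) for the k-th pre-generated pixel pair."""
    r, c = keypoint
    bits = []
    for (dr1, dc1), (dr2, dc2) in pattern:
        bits.append(1 if gray[r + dr1, c + dc1] > gray[r + dr2, c + dc2] else 0)
    return np.array(bits, dtype=np.uint8)
```

Using the same pre-generated pattern for every key point keeps the descriptors comparable between the patient image and the standard image information.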
The key points are then compared with the standard image information in the face shape database. In the scheme above, all key points within the preliminary boundary are extracted and compared with the boundary points (feature points) that represent the five sense organ regions in the face shape database; if the similarity of a key point exceeds a preset threshold, the key point is an edge point of the five sense organ region.
For example, a key point F is extracted from the preliminary eye region of the facial image; if the similarity between F and some feature point on an eye edge in the face shape database exceeds the preset value, F can be regarded as an edge point, whereas if no feature point in the face shape database matches F above the preset value, F is not an edge point.
Therefore, it is only necessary to ensure that the edge contours in the facial images of the face shape database are accurately marked for this operation to work. Cosine similarity is chosen for the similarity calculation; the specific formula is not elaborated here.
Of course, when judging whether a key point is an edge point, a count threshold may be set in addition to the similarity threshold. For example, a key point F extracted from the preliminary eye region may be recognized as a boundary point only if its similarity to at least 5 feature points in the face shape database exceeds the threshold.
The acupoint positioning unit acquires the actual length that each pixel grid in the facial image represents on the patient's face, takes the patient's five sense organs as the reference, and locates each acupoint according to its distance from the boundary of the five sense organ regions. For example, if a medical text records that a certain acupoint lies one cun below the eye, then because this scheme can accurately locate the eye contour and accurately knows the side length of each pixel grid, the position one cun below the eye can naturally be located accurately.
The boundaries of the five sense organs regions and the length of each pixel grid in the facial image have been clarified in the above schemes. So the positioning of the acupoints can be completed according to the rules entered in advance. The specific calculation modes are not listed here.
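A sketch of how such a pre-entered rule might be evaluated, assuming the relevant boundary point, an offset direction and a cun-to-millimetre conversion are supplied by the caller; the value of one cun varies from patient to patient, so cun_mm here is only an assumed example, and the function name is illustrative.

```python
def locate_acupoint(boundary_point, direction, distance_cun, mm_per_px, cun_mm=20.0):
    """Offset an acupoint from a facial-feature boundary point by a rule distance.

    boundary_point: (x, y) pixel on the relevant feature boundary (e.g. the lower eye edge)
    direction: unit vector in image coordinates, e.g. (0, 1) for "straight down"
    distance_cun: rule distance in cun; cun_mm is an assumed per-patient conversion factor
    mm_per_px: scale obtained from the marker line
    """
    offset_px = distance_cun * cun_mm / mm_per_px
    x, y = boundary_point
    return (x + direction[0] * offset_px, y + direction[1] * offset_px)

# Example: one cun straight below a point on the lower eyelid boundary,
# with the 0.01 mm-per-pixel scale from the marker-line example above
print(locate_acupoint((640, 480), (0, 1), 1.0, 0.01))
```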
The positions of the acupoints in the facial image of the patient and the needle application sequence of the acupoints are clearly known through the scheme.
The acupoint marking module comprises a bracket, lasers and a control unit. The bracket is erected above the patient's face, several lasers are mounted on the bracket, and the angle of each laser is adjustable.
The arrangement structure of the lasers and the adjusting structure of the angles of the lasers are the prior art, and are not further described in the scheme.
After the laser emits the laser light, a spot of light can be formed on the patient's face, thus guiding the staff to needle the patient according to the treatment prescription. The control unit is respectively connected with the acupuncture point positioning module and the diagnosis and treatment information acquisition module in a signal manner, acquires the positions of all acupuncture points on the face of a patient, and controls the laser to project the light spots to the corresponding acupuncture points according to the needle application sequence of all the acupuncture points recorded by the diagnosis and treatment information acquisition module so as to form the light spots.
In practice, the angular adjustment of the laser is not perfectly accurate, and the patient's face may also be offset relative to the laser. The laser therefore first projects its spot onto the patient's face, and whether the laser angle needs further adjustment is then judged from whether the spot lies on the corresponding acupoint.
For this purpose, the following means are also provided in this solution:
The control unit obtains the acupoint currently to be needled according to the needling order, controls the corresponding laser to turn to the corresponding angle according to the position of that acupoint, and then emits the laser. It judges the position of the light spot currently projected on the patient's face relative to the acupoint position and keeps adjusting the laser angle according to this relative position until the spot coincides with the acupoint. The acupoint positions are calculated by the acupoint positioning module, and the spot position can be identified directly by its RGB color; the specific spot recognition method is not discussed here.
While the laser is being adjusted, the control unit sets the color of the laser spot to green, and when the spot coincides with the acupoint position it switches the color to red.
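A sketch of this spot-alignment loop. The laser driver (set_color, nudge) and the spot-detection callback are hypothetical placeholders, since the patent does not specify the hardware interface; the tolerance, step size and sign convention are likewise assumptions.

```python
import math
import time

def guide_to_acupoint(laser, detect_spot_px, target_px, tol_px=3, step_deg=0.1):
    """Nudge a laser until its spot coincides with the target acupoint pixel.

    laser: hypothetical driver exposing set_color(name) and nudge(dx_deg, dy_deg)
    detect_spot_px: callable returning the current spot center (x, y) in image pixels
    target_px: acupoint position (x, y) computed by the acupoint positioning module
    """
    laser.set_color("green")                       # green while the spot is still being adjusted
    while True:
        sx, sy = detect_spot_px()
        dx, dy = target_px[0] - sx, target_px[1] - sy
        if math.hypot(dx, dy) <= tol_px:           # spot and acupoint coincide
            laser.set_color("red")                 # red signals "needle here"
            return
        # small step toward the target; the angle sign convention is an assumption
        laser.nudge(step_deg * (1 if dx > 0 else -1 if dx < 0 else 0),
                    step_deg * (1 if dy > 0 else -1 if dy < 0 else 0))
        time.sleep(0.05)
```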
The scheme provided by the application thus prevents the situation in which, while the laser is still being adjusted, the staff mistake the current spot position for the acupoint position and needle the wrong place, which would affect the treatment effect. The positioning and guiding system for facial acupuncture provided by the application can therefore guide the acupuncture process accurately.
In a further scheme, the control unit can use the color of the laser to remind the staff to withdraw the needle, based on the pre-recorded needle retention time.
For example, if the treatment prescription records that the needle should remain in acupoint No. 1 for 50 seconds, then after the laser spot has illuminated acupoint No. 1 for 50 seconds the laser immediately switches to yellow light and an audible alarm is emitted; the staff can then withdraw the corresponding needle precisely according to the prompt.
In this way, the scheme guides the entire acupuncture process, prevents the staff from making mistakes during needle application, and increases the accuracy of needle application.
The above description is merely illustrative of some preferred embodiments of the present application and of the technical principles employed. Those skilled in the art will appreciate that the scope of the application is not limited to the specific combinations of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the application, for example solutions in which the above features are replaced by (but not limited to) technical features with similar functions disclosed in the embodiments of the present application.

Claims (7)

1. A positioning guidance system for facial acupuncture, characterized by comprising:
a diagnosis and treatment information acquisition module, which obtains the patient's treatment prescription and derives from it the facial acupoints at which the patient is to be needled;
an image information acquisition module, which acquires a facial image of the patient before needling;
an acupoint positioning module, which compares the patient's pre-needling facial image with facial images in a face-shape database on which the acupoints have already been marked, so as to obtain the position information of the facial acupoints to be needled;
an acupoint marking module, which obtains the position information of the facial acupoints to be needled and then irradiates those acupoints in sequence with laser light of a visible wavelength to guide the acupuncture treatment;
wherein:
the image information acquisition module includes a marking line attached straight to the patient's face, the length of the marking line being recorded in advance in the acupoint positioning module;
the acupoint positioning module includes:
a face-shape definition unit, which divides the edge contour of the face into an upper segment, a lower segment and side segments, and defines n1 types of upper segment, n2 types of lower segment and n3 types of side segment, so as to generate standard face shapes of N face categories, where N = n1 × n2 × n3;
an image information storage unit configured with a plurality of face-shape databases, each face-shape database corresponding to one standard face shape and storing the standard image information of that face shape;
an image information preliminary screening unit, signal-connected to the image information acquisition module, which extracts the edge contour of the face from the patient's facial image, computes the similarity of the patient's upper, lower and side segments to each type of upper, lower and side segment, and selects the types closest to the patient's segments, thereby determining the standard face shape to which the patient belongs;
an image information segmentation unit, which selects the face-shape database according to the patient's standard face shape and, based on the standard image information in that database, segments the facial image into the corresponding facial-feature regions;
an acupoint positioning unit, which obtains the actual length on the patient's face represented by each pixel of the facial image and, taking the patient's facial-feature regions as reference and the distance from each acupoint to the boundary of a facial-feature region as the basis, locates the position of each acupoint.

2. The positioning guidance system for facial acupuncture according to claim 1, characterized in that the diagnosis and treatment information acquisition module comprises:
a prescription information input unit for entering the patient's treatment prescription and reading from it the acupoints to be needled, the needling order of the acupoints and the interval between needlings;
an input unit for adjusting the acupoints to be needled, the needling order of the acupoints and the interval between needlings;
and in that the acupoint marking module marks the corresponding acupoints with light spots in sequence according to the needling order and the needling interval.

3. The positioning guidance system for facial acupuncture according to claim 1, characterized in that the image information acquisition module comprises a camera for acquiring the patient's facial image.

4. The positioning guidance system for facial acupuncture according to claim 1, characterized in that the edge contour of the face is defined as follows: the corners of the eyes and the corners of the mouth are marked in the facial image and a horizontal line is drawn through each; the contour between the two horizontal lines forms the side segments, the contour above the two horizontal lines forms the upper segment, and the contour below the two horizontal lines forms the lower segment.

5. The positioning guidance system for facial acupuncture according to claim 4, characterized in that:
the image information preliminary screening unit places a number of equally spaced sampling points on the upper segment and fits them to a preset first polynomial function to obtain first fitting parameters;
the image information preliminary screening unit places a number of equally spaced sampling points on the lower segment and fits them to a preset second polynomial function to obtain second fitting parameters;
the image information preliminary screening unit places a number of equally spaced sampling points on the side segments and fits them to a preset third polynomial function to obtain third fitting parameters;
the image information preliminary screening unit matches the face shape according to the similarity between these first, second and third fitting parameters and the first, second and third fitting parameters of each standard face shape.

6. The positioning guidance system for facial acupuncture according to claim 3, characterized in that the image information segmentation unit comprises:
a facial-feature preliminary delineator, which outlines the preliminary boundaries of the facial-feature regions, the facial-feature regions including the eyes and the nose;
an edge delineator, which extracts key points within the preliminary boundaries, compares each key point with the standard image information in the face-shape database and judges whether it lies on the boundary of a facial-feature region; if so, the key point is recorded as an edge point, otherwise it is discarded; all edge points are then collected and connected to generate the boundaries of the facial-feature regions.

7. The positioning guidance system for facial acupuncture according to claim 1, characterized in that the acupoint marking module comprises:
a number of angle-adjustable lasers for generating laser light with wavelengths in the visible band;
a control unit, signal-connected to the acupoint positioning module and to the diagnosis and treatment information acquisition module, which obtains the positions of the acupoints on the patient's face and, according to the needling order of the acupoints recorded by the diagnosis and treatment information acquisition module, controls the lasers to project light onto the corresponding acupoint so as to form a light spot there.
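A minimal Python sketch, illustrative only and not part of the claims, of the contour handling the claims describe: the contour split at the eye and mouth lines (claim 4), the equally spaced sampling and polynomial fitting of each segment (claim 5), the similarity matching against the stored parameters of the standard face shapes, and the pixel-to-length scale that the marking line of claim 1 provides. All function and variable names (split_contour, fit_segment, match_face_shape, mm_per_pixel, standard_shapes) are hypothetical, and the polynomial degree, sample count and Euclidean distance measure are assumptions rather than values taken from the patent.

    # Illustrative sketch under the assumptions stated above.
    import numpy as np

    def split_contour(contour, eye_y, mouth_y):
        """Split a face contour (N x 2 array of pixel coordinates, y increasing
        downward) at the horizontal lines through the eye corners and the mouth
        corners, giving the upper, side and lower segments."""
        upper = contour[contour[:, 1] < eye_y]                                  # above the eye line
        side = contour[(contour[:, 1] >= eye_y) & (contour[:, 1] <= mouth_y)]   # between the two lines
        lower = contour[contour[:, 1] > mouth_y]                                # below the mouth line
        return upper, side, lower

    def fit_segment(segment, degree=3, n_samples=20):
        """Take equally spaced sampling points on a segment and fit a preset
        polynomial, returning the fitting parameters."""
        idx = np.linspace(0, len(segment) - 1, n_samples).astype(int)
        pts = segment[idx]
        return np.polyfit(pts[:, 0], pts[:, 1], degree)

    def match_face_shape(params, standard_shapes):
        """Return the name of the standard face shape whose stored (upper,
        lower, side) fitting parameters are most similar, using the sum of
        Euclidean distances as the similarity measure."""
        def dist(a, b):
            return sum(np.linalg.norm(np.asarray(x) - np.asarray(y)) for x, y in zip(a, b))
        return min(standard_shapes, key=lambda name: dist(params, standard_shapes[name]))

    def mm_per_pixel(marker_length_mm, marker_length_px):
        """Pixel-to-length scale from the marking line of known physical length
        attached to the face."""
        return marker_length_mm / marker_length_px

    # Example use, with made-up numbers:
    # upper, side, lower = split_contour(contour, eye_y=210, mouth_y=330)
    # params = (fit_segment(upper), fit_segment(lower), fit_segment(side))
    # best_shape = match_face_shape(params, standard_shapes)
    # scale = mm_per_pixel(marker_length_mm=50.0, marker_length_px=180.0)

Under these assumptions, acupoint distances stored in the face-shape database in millimetres from the facial-feature boundaries could be converted into pixel offsets by dividing by the scale returned by mm_per_pixel, which is the role the marking line plays in claim 1.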
CN202411552600.9A 2024-11-01 2024-11-01 Facial acupuncture positioning guidance system Active CN119074532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411552600.9A CN119074532B (en) 2024-11-01 2024-11-01 Facial acupuncture positioning guidance system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411552600.9A CN119074532B (en) 2024-11-01 2024-11-01 Facial acupuncture positioning guidance system

Publications (2)

Publication Number Publication Date
CN119074532A true CN119074532A (en) 2024-12-06
CN119074532B CN119074532B (en) 2025-02-07

Family

ID=93666835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411552600.9A Active CN119074532B (en) 2024-11-01 2024-11-01 Facial acupuncture positioning guidance system

Country Status (1)

Country Link
CN (1) CN119074532B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105250136A (en) * 2015-10-28 2016-01-20 广东小天才科技有限公司 Method, device and equipment for intelligently reminding acupoint massage
CN110464633A (en) * 2019-06-17 2019-11-19 深圳壹账通智能科技有限公司 Acupuncture point recognition methods, device, equipment and storage medium
CN110585592A (en) * 2019-07-31 2019-12-20 毕宏生 Personalized electronic acupuncture device and generation method and generation device thereof
CN110801392A (en) * 2019-11-06 2020-02-18 北京地平线机器人技术研发有限公司 Method and device for marking predetermined point positions on human body and electronic equipment
KR102189405B1 (en) * 2020-04-10 2020-12-11 주식회사 센스비전 System for recognizing face in real-time video
CN113081796A (en) * 2021-04-09 2021-07-09 南通市第一人民医院 System and method for intelligently positioning acupuncture points
CN114187234A (en) * 2021-11-02 2022-03-15 上海市第五人民医院 Method and system for locating acupoints
WO2022267653A1 (en) * 2021-06-23 2022-12-29 北京旷视科技有限公司 Image processing method, electronic device, and computer readable storage medium
CN116942509A (en) * 2023-07-20 2023-10-27 中国科学院苏州生物医学工程技术研究所 Body surface automatic point searching method based on multi-stage optical positioning
CN117636446A (en) * 2024-01-25 2024-03-01 江汉大学 Face acupoint positioning method, acupuncture robot and storage medium

Also Published As

Publication number Publication date
CN119074532B (en) 2025-02-07

Similar Documents

Publication Publication Date Title
EP3654239A1 (en) Contact and non-contact image-based biometrics using physiological elements
JP7269711B2 (en) Biometric authentication system, biometric authentication method and program
US8768014B2 (en) System and method for identifying a person with reference to a sclera image
KR100629550B1 (en) Multiscale Variable Region Segmentation Iris Recognition Method and System
WO2017059591A1 (en) Finger vein identification method and device
DE69232024T2 (en) METHOD FOR PERSONAL IDENTIFICATION BY ANALYZING ELEMENTAL FORMS FROM BIOSENSOR DATA
JP5504928B2 (en) Biometric authentication device, biometric authentication method, and program
CN110464633A (en) Acupuncture point recognition methods, device, equipment and storage medium
EP3680794B1 (en) Device and method for user authentication on basis of iris recognition
PT1093633E (en) Iris identification system and method of identifying a person through iris recognition
CN117636446B (en) Facial acupuncture point positioning method, acupuncture method, acupuncture robot and storage medium
KR102162683B1 (en) Reading aid using atypical skin disease image data
TW202026945A (en) Identity recognition system and identity recognition method
Sabharwal et al. Facial marks for enhancing facial recognition after plastic surgery
CN110929570B (en) Iris rapid positioning device and positioning method thereof
CN119074532B (en) Facial acupuncture positioning guidance system
CN116458945B (en) Intelligent guiding system and method for children facial beauty suture route
US20240032856A1 (en) Method and device for providing alopecia information
Huang et al. Quantitative analysis of facial paralysis based on TCM acupuncture point identification
GB2576139A (en) Ocular assessment
CN114036970B (en) Ultrasonic equipment control method and system
JPH11113885A (en) Individual identification device and method thereof
KR20220003201A (en) Inclination measuring device, measuring method and computer program to measure the degree of tilt or rotation of the user's head through human body image analysis
KR102737067B1 (en) Semi-permanent makeup image recommendation and treatment system
CN109993754A (en) The method and system of skull segmentation is carried out from image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant