CN116919639A - Visual cleaning method and system and visual cleaner thereof - Google Patents
- Publication number: CN116919639A
- Application number: CN202310923585.3A
- Authority: CN (China)
- Prior art keywords: shooting, cleaning, module, lesion, areas
- Legal status: Pending (status assumed by Google Patents; not a legal conclusion)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a visual cleaning method, a visual cleaning system and a visual cleaner thereof, and in particular relates to the technical field of visual cleaning. It is intended to solve the problem that the quality of images currently shot in the oral cavity, nasal cavity, anus and similar areas is uneven, which seriously affects the subsequent analysis of the images. The system comprises a data processing module, and a data acquisition module, a shooting condition evaluation module, a shooting module, a cleaning module and a data storage module in communication connection with the data processing module. Shooting information and human body information are acquired, a shooting condition evaluation coefficient is calculated from the shooting information and the human body information, and whether the shooting conditions are reached is judged according to the shooting condition evaluation coefficient; this improves the quality of images shot in the oral cavity, nasal cavity, anus and similar areas and avoids errors in subsequent analysis and processing caused by poorly shot images. The concentration and temperature of the cleaning liquid are set according to the different lesion degrees, so that unnecessary damage to the oral cavity, nasal cavity, anus and similar areas is avoided.
Description
Technical Field
The invention relates to the technical field of visual cleaning, in particular to a visual cleaning method, a visual cleaning system and a visual cleaner thereof.
Background
Visual cleaning of the oral cavity means displaying the cleaning process inside the mouth on a screen in real time by means of modern technologies such as an intraoral camera and a high-brightness light source, so that the condition inside the oral cavity can be seen clearly and cleaning and treatment can be performed in a targeted manner.
In the current visual shooting process, the quality of images shot in the oral cavity, nasal cavity, anus and similar areas is uneven, which seriously affects the subsequent analysis of the images and makes it impossible to accurately judge the lesion degree of each area; moreover, the temperature and concentration of the cleaning liquid cannot be automatically adjusted according to the lesion degree of each area, which strongly harms the cleaning effect.
In order to solve the above problems, a technical solution is now provided.
Disclosure of Invention
In order to overcome the above-mentioned drawbacks of the prior art, embodiments of the present invention provide a visual cleaning method, a visual cleaning system and a visual cleaner thereof, so as to solve the above-mentioned problems in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a visual cleaning method comprising the steps of:
Step S1: acquiring shooting information and human body information, calculating a shooting condition evaluation coefficient from the shooting information and the human body information, and judging whether the shooting conditions are reached according to the shooting condition evaluation coefficient;
Step S2: after the system sends out the shooting-permitted signal, saving the image corresponding to the minimum shooting condition evaluation coefficient; performing image recognition processing on the image, determining key areas, and performing key shooting on the key areas;
Step S3: determining a region to be cleaned, dividing the region to be cleaned into a plurality of regions, quantifying the lesion degree of each region based on a deep-learning convolutional neural network, calculating a lesion degree value, and sending out lesion degree signals of different grades according to the lesion degree value;
Step S4: according to the lesion degree signals of different grades, automatically adjusting the temperature and concentration of the cleaning liquid for each region, and cleaning the plurality of regions respectively;
Step S5: after cleaning is finished, shooting the cleaned oral cavity, nasal cavity, anus and similar areas a second time, comparing the second-shot images with the images uploaded and stored in step S3, and calculating the similarity to judge the cleaning effect.
In a preferred embodiment, in step S1, the photographing information includes illumination information and camera shake information; the illumination information is represented by a brightness deviation value, and the camera vibration information is represented by a camera vibration amplitude;
The human body information includes human body physiological information and human body position information; the human physiological information is embodied by heart rate, and the human position information is embodied by relative distance deviation value.
In a preferred embodiment, the relative distance deviation value is calculated as: relative distance deviation value = [(off-center distance + 1) × distance deviation value between the camera and the region to be shot] / area of the region to be shot.
In a preferred embodiment, the brightness deviation value, the camera vibration amplitude, the heart rate and the relative distance deviation value are normalized and combined to calculate the shooting condition evaluation coefficient, with the expression:
P = α1 × Lp + α2 × Tf + α3 × Hp + α4 × [(Pl + 1) × Sl] / Dm
wherein P is the shooting condition evaluation coefficient; Lp, Tf, Hp, Pl, Sl and Dm are respectively the brightness deviation value, the camera vibration amplitude, the heart rate, the off-center distance, the distance deviation value between the camera and the region to be shot, and the area of the region to be shot; [(Pl + 1) × Sl] / Dm is the relative distance deviation value; α1, α2, α3 and α4 are the preset proportional coefficients of the brightness deviation value, the camera vibration amplitude, the heart rate and the relative distance deviation value respectively, and α1 > α4 > α3 > α2 > 0;
A shooting condition evaluation coefficient critical threshold is set; when the shooting condition evaluation coefficient is greater than the critical threshold, the system sends out a no-shooting signal;
when the shooting condition evaluation coefficient is less than or equal to the critical threshold, the system sends out a shooting-permitted signal.
In a preferred embodiment, in step S2, after the system sends out a signal for allowing shooting, the system automatically shoots the areas such as the oral cavity, the nasal cavity and the anus, and shoots a plurality of images at uniform intervals, calculates the shooting condition evaluation coefficient of each shot image, selects the image corresponding to the smallest shooting condition evaluation coefficient, and stores the image;
key areas are determined by performing image recognition processing on the image; the key areas include, but are not limited to, lesion areas and important position areas, and key shooting is performed on them.
In a preferred embodiment, in step S3, the area to be cleaned is divided into a plurality of block areas based on the convolutional neural network for deep learning, the lesion degree of each area is quantized, and the value of the lesion degree is calculated;
setting a first threshold value of the lesion degree and a second threshold value of the lesion degree;
when the lesion extent value is smaller than a first threshold value of the lesion extent, the system sends out a first-level lesion extent signal;
when the lesion degree value is greater than or equal to a first threshold value of the lesion degree and the lesion degree value is less than or equal to a second threshold value of the lesion degree, the system sends a secondary lesion degree signal;
when the lesion extent value is greater than the second threshold of the lesion extent, the system sends out a third-level lesion extent signal.
In a preferred embodiment, in step S4, when the lesion extent value is smaller than the first threshold of the lesion extent, cleaning is performed with a cleaning liquid of lower concentration, and the temperature of the cleaning liquid is the first-stage temperature;
when the lesion degree value is greater than or equal to the first threshold of the lesion degree and less than or equal to the second threshold of the lesion degree, cleaning is performed with a cleaning liquid of medium concentration, and the temperature of the cleaning liquid is the second-stage temperature;
when the lesion degree value is greater than the second threshold of the lesion degree, cleaning is performed with a cleaning liquid of higher concentration, and the temperature of the cleaning liquid is the third-stage temperature.
In a preferred embodiment, in step S5, after the cleaning is completed and the system sends out a shooting-permitted signal, the cleaned oral cavity, nasal cavity, anus and similar areas are shot a second time;
comparing the second-shot images of the oral cavity, nasal cavity, anus and similar areas with the images uploaded and stored in step S3, obtaining their similarity through image processing technology, and setting a similarity threshold; if the similarity is higher than the similarity threshold the cleaning effect is good, otherwise the cleaning effect is poor.
In a preferred embodiment, a visual cleaning system comprises a data processing module, and a data acquisition module, a shooting condition evaluation module, a shooting module, a cleaning module and a data storage module which are in communication connection with the data processing module;
the data acquisition module acquires shooting information and human body information, the shooting information and the human body information are sent to the data processing module, and the data processing module calculates to obtain shooting condition evaluation coefficients;
the shooting condition evaluation module receives the shooting condition evaluation coefficient calculated by the data processing module, and judges whether shooting conditions are reached or not according to comparison between the shooting condition evaluation coefficient and a shooting condition evaluation coefficient critical threshold;
when the shooting condition evaluation coefficient is smaller than or equal to the shooting condition evaluation coefficient critical threshold, the shooting module shoots for a plurality of times, and uploads an image corresponding to the smallest shooting condition evaluation coefficient to the data storage module;
the data processing module receives the images in the data storage module to process, judges key areas, sends the key areas to the shooting module, and the shooting module shoots the key areas;
the cleaning module divides the area to be cleaned into a plurality of areas, obtains a lesion degree value according to the processing of the data processing module, judges the lesion degrees of different areas according to the lesion degree value, and implements different cleaning methods for the different areas;
The data storage module stores all images and the signals sent by the system.
In a preferred embodiment, a visual cleaner includes, but is not limited to:
the camera is used for shooting the areas of the oral cavity, the nasal cavity, the anus and the like;
a light sensor for measuring brightness of a photographing process;
the distance sensor is used for measuring the distance between the camera and the area to be shot and calculating the offset center distance and the area of the area to be shot by combining an image processing technology;
an acceleration sensor, used for measuring the vibration amplitude of the camera;
a heart rate sensor for measuring a heart rate of a user;
the cleaner has the function of automatically adjusting the temperature and the concentration of the cleaning liquid;
the visual cleaner is in signal connection with a cloud processor, and the cloud processor serves as the data processing module of the visual cleaning system.
The invention relates to a visual cleaning method, a visual cleaning system and a visual cleaner thereof, which have the technical effects and advantages that:
1. the shooting condition evaluation coefficient is calculated through analysis of illumination information, camera vibration information, human body physiological information and human body position information, and shooting conditions are evaluated, so that the quality of images shot by areas such as an oral cavity, a nasal cavity and an anus is improved, and errors of subsequent analysis and processing caused by images with poor shooting quality are avoided.
2. The shooting condition evaluation coefficient of each image is calculated and the image corresponding to the minimum shooting condition evaluation coefficient is selected, so that subsequent diagnosis and treatment can be more accurate;
3. the Convolutional Neural Network (CNN) based on deep learning quantifies the lesion degree of the areas such as the oral cavity, the nasal cavity and the anus, and the automatic quantification of the lesion degree of the areas such as the oral cavity, the nasal cavity and the anus is realized by utilizing the computer vision and the deep learning technology, so that misjudgment and missed diagnosis of the lesion degree are reduced, and the accuracy is improved.
4. The concentration and the temperature of the cleaning liquid are set according to different lesion degrees so as to achieve the optimal cleaning effect, and the cleaning liquid is purposefully cleaned according to the conditions of different lesion degrees, so that unnecessary damage to the areas such as the oral cavity, the nasal cavity, the anus and the like is avoided.
5. The cleaning effect is judged by calculating the similarity, so that the accuracy of the cleaning effect can be improved, the quality of the cleaning effect can be intuitively seen by means of image comparison, and when a patient is treated, a reference is provided for diagnosis of a doctor, so that the efficiency is improved.
Drawings
FIG. 1 is a schematic diagram of a visual cleaning method according to the present invention;
fig. 2 is a schematic structural view of a visual cleaning system according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Fig. 1 shows a visual cleaning method of the present invention, which includes the steps of:
Step S1: acquiring shooting information and human body information, calculating the shooting condition evaluation coefficient from the shooting information and the human body information, and judging whether the shooting conditions are reached according to the shooting condition evaluation coefficient.
Step S2: after the system sends out the shooting-permitted signal, saving the image corresponding to the minimum shooting condition evaluation coefficient; performing image recognition processing on the image, determining key areas, and performing key shooting on the key areas.
Step S3: determining a region to be cleaned, dividing the region to be cleaned into a plurality of regions, quantifying the lesion degree of each region based on a deep-learning convolutional neural network, calculating a lesion degree value, and sending out lesion degree signals of different grades according to the lesion degree value.
Step S4: according to the lesion degree signals of different grades, automatically adjusting the temperature and concentration of the cleaning liquid for each region, and cleaning the plurality of regions respectively.
Step S5: after cleaning is finished, shooting the cleaned oral cavity, nasal cavity, anus and similar areas a second time, comparing the second-shot images with the images uploaded and stored in step S3, and calculating the similarity to judge the cleaning effect.
In step S1, shooting information and human body information are collected and used to judge whether the shooting conditions are met when shooting areas such as the oral cavity, nasal cavity and anus, so as to ensure the shooting effect and obtain the user's health condition more accurately.
The shooting information comprises illumination information and camera vibration information; the illumination information is represented by a brightness deviation value, and the camera vibration information is represented by a camera vibration amplitude.
The human body information includes human body physiological information and human body position information; the human physiological information is embodied by heart rate, and the human position information is embodied by relative distance deviation value.
The illumination information and the vibration information of the camera are collected, so that the stability and the light condition of a shooting environment can be evaluated, the illumination degree of the shooting environment is reflected by a brightness value, and the vibration information of the camera reflects the shaking degree of the camera in the shooting process; these factors affect the sharpness and stability of the captured image, and therefore the acquisition of illumination information and camera shake information is critical for capturing images of the oral cavity, nasal cavity, anus, etc.
Meanwhile, human physiological information and human body position information also greatly influence the shooting effect. For example, breathing and heartbeat cause small movements of the oral cavity, nasal cavity, anus and similar areas, which blur and distort the captured image; collecting human physiological information therefore helps determine whether the human body is in the best state for shooting, and collecting human body position information helps evaluate whether the posture is correct, tilted or jittering, all of which affect the quality of the shot image. In summary, collecting illumination information, camera vibration information, human physiological information and human body position information makes it possible to evaluate the stability and lighting of the shooting environment and to determine whether the optimal shooting state has been reached, so as to ensure that high-quality images of the oral cavity, nasal cavity, anus and similar areas are shot.
The brightness deviation value, the camera vibration amplitude, the heart rate and the relative distance deviation value are explained as follows:
Brightness deviation value: an optimal shooting brightness threshold is set and the real-time shooting brightness is acquired; shooting brightness that is too high or too low adversely affects the shooting result. The brightness deviation value is the deviation of the acquired real-time shooting brightness from the optimal shooting brightness threshold; the larger the brightness deviation value, the greater the adverse effect on the shooting result.
Camera vibration amplitude: reflects the stability of the camera when shooting the oral cavity, nasal cavity, anus and similar areas; the higher the vibration amplitude, the worse the stability during shooting, which can blur or shake the image and degrades the quality of the captured image.
Heart rate: reflects the physiological state of the human body while these areas are being shot; an elevated heart rate intensifies the small movements of the oral cavity, nasal cavity, anus and similar areas, making muscle changes in these areas more frequent during shooting and affecting the quality of the shot image, whereas a stable heart rate is favourable for shooting these areas.
Relative distance deviation value: relative distance deviation value = [(off-center distance + 1) × distance deviation value between the camera and the region to be shot] / area of the region to be shot. The larger the relative distance deviation value, the larger the off-center distance and the camera-to-region distance deviation, i.e. the region to be shot is too far from or too close to the camera; thus the larger the relative distance deviation value, the worse the quality of the images shot in the oral cavity, nasal cavity, anus and similar areas.
Off-center distance: the distance between the center point of the target area and the center point of the region to be shot; the target area is an area such as the oral cavity, nasal cavity or anus. For example, if the target object lies at the center point of the region to be shot, the off-center distance is 0; if it is 3 cm from that center point, the off-center distance is 3 cm.
Area of the region to be shot: the area of the region of the oral cavity, nasal cavity, anus, etc. that is to be shot.
Distance deviation value between the camera and the region to be shot: the camera-to-region distance is measured to the center point of the region to be shot; each region to be shot corresponds to an optimal camera-to-region distance, and the distance deviation value is the deviation of the measured distance from this optimal distance.
The determination of the center point of the target area, the center point of the area to be photographed and the area of the area to be photographed is determined according to an image processing technology, which is the prior art and will not be described here again.
It should be noted that each different region to be shot corresponds to an optimal camera-to-region distance; in general, the larger the area of the region to be shot, the larger the optimal camera-to-region distance should be.
The shooting condition evaluation coefficient is calculated by normalizing and combining the brightness deviation value, the camera vibration amplitude, the heart rate and the relative distance deviation value, with the expression:
P = α1 × Lp + α2 × Tf + α3 × Hp + α4 × [(Pl + 1) × Sl] / Dm
wherein P is the shooting condition evaluation coefficient; Lp, Tf, Hp, Pl, Sl and Dm are respectively the brightness deviation value, the camera vibration amplitude, the heart rate, the off-center distance, the distance deviation value between the camera and the region to be shot, and the area of the region to be shot; [(Pl + 1) × Sl] / Dm is the relative distance deviation value; α1, α2, α3 and α4 are the preset proportional coefficients of the brightness deviation value, the camera vibration amplitude, the heart rate and the relative distance deviation value respectively, and α1 > α4 > α3 > α2 > 0.
A shooting condition evaluation coefficient critical threshold is set. When the shooting condition evaluation coefficient is greater than the critical threshold, the system sends out a no-shooting signal, and correspondingly the visual cleaner does not shoot or upload; the shooting conditions are poor at this moment, so images of the oral cavity, nasal cavity, anus and similar areas would be of poor quality and would introduce large errors into the subsequent analysis, affecting processing efficiency.
When the shooting condition evaluation coefficient is less than or equal to the critical threshold, the system sends out a shooting-permitted signal; the shooting conditions are reached, and the visual cleaner automatically shoots and uploads.
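For illustration only, the evaluation logic above might be sketched in Python as follows; the proportional coefficient values and the critical threshold are assumed examples, since the invention leaves them to be preset by those skilled in the art:

```python
# Illustrative sketch of the step S1 evaluation; all numeric values are
# assumed examples, not values fixed by the patent.

def relative_distance_deviation(pl: float, sl: float, dm: float) -> float:
    """[(off-center distance Pl + 1) * camera-to-region distance deviation Sl]
    divided by the area Dm of the region to be shot."""
    return (pl + 1.0) * sl / dm

def shooting_condition_coefficient(lp: float, tf: float, hp: float,
                                   pl: float, sl: float, dm: float,
                                   a1: float = 0.4, a2: float = 0.1,
                                   a3: float = 0.2, a4: float = 0.3) -> float:
    """P = a1*Lp + a2*Tf + a3*Hp + a4*[(Pl+1)*Sl/Dm], with a1 > a4 > a3 > a2 > 0.
    Inputs are assumed to be pre-normalized sensor readings."""
    assert a1 > a4 > a3 > a2 > 0
    return a1 * lp + a2 * tf + a3 * hp + a4 * relative_distance_deviation(pl, sl, dm)

P_CRITICAL = 0.5  # assumed critical threshold of the evaluation coefficient

def may_shoot(p: float) -> bool:
    """Shooting-permitted signal when P <= critical threshold; no-shooting otherwise."""
    return p <= P_CRITICAL
```

A lower coefficient corresponds to better shooting conditions, which is why step S2 keeps the image with the smallest coefficient.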
The shooting condition evaluation coefficient is calculated through analysis of the illumination information, camera vibration information, human physiological information and human body position information, and the shooting conditions are evaluated by comparing the coefficient with its critical threshold. Shooting under insufficient light, camera vibration, human body position deviation and similar conditions can thus be avoided, which improves the quality of images shot in the oral cavity, nasal cavity, anus and similar areas and avoids errors in subsequent analysis and processing caused by poorly shot images.
In step S2, under the condition that the system sends out the shooting-allowed signal, the areas such as the oral cavity, the nasal cavity and the anus are automatically shot, a plurality of images are shot at uniform intervals, shooting condition evaluation coefficients of each shot image are respectively calculated, an image corresponding to the smallest shooting condition evaluation coefficient is selected, and the image is stored.
Image recognition processing is performed on the image to delimit a number of key areas, which include but are not limited to lesion areas and important position areas. After the key areas are determined, key shooting is performed: aimed at each specific key area, the camera's shooting position, angle, illumination and other factors are adjusted and multiple targeted shots are taken, so as to obtain clearer and more accurate images.
The images shot in this step are stored.
The image recognition process is typically performed using machine learning algorithms based on large samples collected historically. In particular, a large amount of labeled image data may be trained using deep learning techniques, such as convolutional neural networks (Convolutional Neural Network, CNN), so that the model can automatically extract features of the image and learn how to classify different diseases or regions.
In the image recognition processing of the oral cavity, nasal cavity, anus and similar areas, a large number of labeled images can be divided into different categories, such as healthy and abnormal (lesioned). These images are then used to train a machine learning model that accurately identifies the different classes; the trained model is then applied to new unknown images to automatically judge which areas in an image are key areas.
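As an illustration of the kind of model this recognition step might use, a minimal patch classifier in PyTorch is sketched below; the architecture, the 64×64 patch size and the decision threshold are assumptions, not details taken from the patent:

```python
import torch
import torch.nn as nn

class KeyAreaCNN(nn.Module):
    """Assumed minimal CNN: classifies 64x64 RGB patches of the saved image
    as key area (lesion / important position) vs. background."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = KeyAreaCNN()                        # in practice, trained on labeled images
patch = torch.randn(1, 3, 64, 64)           # one image patch (placeholder data)
probs = torch.softmax(model(patch), dim=1)  # per-class probabilities
is_key_area = probs[0, 1].item() > 0.5      # assumed decision threshold
```

In a real deployment the model would first be trained on the labeled image categories described above before being applied to new images.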
By calculating the shooting condition evaluation coefficient of each image and selecting the smallest, it is ensured that the selected image was obtained under the best shooting conditions, which improves image quality and makes subsequent diagnosis and treatment more accurate. Automatic shooting can rapidly and accurately acquire multiple images and select the optimal one by means of the evaluation coefficient, saving the time and labour cost of manual shooting and improving shooting efficiency and accuracy. Shooting the key areas with emphasis yields clearer and more accurate images, which helps the doctor diagnose the illness more accurately and improves the accuracy of diagnosis.
In step S3, after image processing is applied to the captured images, the region to be cleaned is automatically delimited, and the lesion degrees of the oral cavity, nasal cavity, anus and similar areas are quantified based on a deep-learning Convolutional Neural Network (CNN). Dividing the region to be cleaned into a plurality of regions and quantifying the lesion degree of each region comprises the following steps:
data preprocessing: the image of the area of the oral cavity, the nasal cavity, the anus and the like is preprocessed, and the operations of image normalization, clipping, scaling and the like are included, so that network training and testing are facilitated.
Training the network: the convolutional neural network is trained using the annotated image datasets of the oral cavity, nasal cavity, anus and similar areas. During training, the network's structure, hyperparameters, loss function, etc. need to be set.
Network test: the trained network is tested using the test dataset and the test results are evaluated and analyzed, such as accuracy, recall, etc.
Quantification of lesion extent: for images of the oral cavity, nasal cavity, anus and similar areas, the trained network is used to quantify the extent of lesions. Specifically, an image is input into the network, the network performs feature extraction on it, and a grading or classification result for the lesion degree is output. Result display: the quantified lesion degree result is displayed in a visual manner.
In a deep-learning Convolutional Neural Network (CNN), the model output is typically processed with a softmax or sigmoid function to convert the lesion level into a probability value between 0 and 1. This probability value can be understood as the confidence that the image belongs to a certain class (e.g. normal or abnormal); the more clearly the model recognizes the lesion extent of the region, the higher the probability value, and vice versa. This probability value can therefore be used as the quantified lesion level of the region: a value closer to 1 represents a higher lesion level, and a value closer to 0 a lower one.
The probability value is marked as a lesion extent value, the lesion extent value is equal to the probability value, and the higher the lesion extent value is, the higher the lesion extent of the region is.
And setting a first threshold value of the lesion degree and a second threshold value of the lesion degree according to the lesion degree value, wherein the first threshold value of the lesion degree is smaller than the second threshold value of the lesion degree.
When the lesion level value is smaller than a first threshold value of the lesion level, the system sends out a first-level lesion level signal, the lesion level of the area is lower, and no further processing is needed.
When the lesion extent value is greater than or equal to the first threshold value of the lesion extent and the lesion extent value is less than or equal to the second threshold value of the lesion extent, the system sends out a secondary lesion extent signal, and the lesion extent of the region is at a medium level, and further treatment, such as drug treatment or surgical treatment, can be considered.
When the lesion degree value is larger than the second threshold value of the lesion degree, the system sends out a third-level lesion degree signal, and the lesion degree of the region is higher and needs to be treated as soon as possible.
The lesion severity indicated by the third-level lesion degree signal is greater than that indicated by the second-level signal, which in turn is greater than that indicated by the first-level signal.
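A sketch of this quantification and grading, assuming the network emits one raw score (logit) per region; the two threshold values below are examples that merely respect the required ordering (first threshold < second threshold):

```python
import torch

FIRST_THRESHOLD = 0.3   # assumed first threshold of the lesion degree
SECOND_THRESHOLD = 0.7  # assumed second threshold (must exceed the first)

def lesion_degree_value(logit: torch.Tensor) -> float:
    """Sigmoid maps the raw network output to [0, 1]; values near 1 mean a
    higher lesion degree of the region, values near 0 a lower one."""
    return torch.sigmoid(logit).item()

def lesion_degree_signal(value: float) -> str:
    """Grade the lesion degree value into the three signals described above."""
    if value < FIRST_THRESHOLD:
        return "first-level lesion degree signal"
    if value <= SECOND_THRESHOLD:
        return "second-level lesion degree signal"
    return "third-level lesion degree signal"
```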
The Convolutional Neural Network (CNN) based on deep learning quantifies the lesion degrees of the areas such as the oral cavity, the nasal cavity and the anus, and the automatic quantification of the lesion degrees of the areas such as the oral cavity, the nasal cavity and the anus is realized by utilizing the computer vision and the deep learning technology. The method ensures that the evaluation of the pathological change degree is more objective, accurate and standardized, realizes the efficient processing and evaluation of a large amount of data, ensures that doctors and users can more intuitively know the pathological change, is favorable for making a more proper treatment scheme and improves the treatment effect. The Convolutional Neural Network (CNN) based on deep learning has strong feature extraction and classification capability, can automatically identify and quantify the pathological changes of areas such as oral cavity, nasal cavity and anus, reduces misjudgment and missed diagnosis on the pathological changes, and improves accuracy.
In step S4, for three scenes of the primary lesion level signal, the secondary lesion level signal and the tertiary lesion level signal sent by the system, the concentration and the temperature of the cleaning liquid are set according to the difference of the lesion levels.
It is noted that the composition of the cleaning solution is different for the oral cavity, nasal cavity, anus and other areas.
When the lesion degree value is smaller than the first threshold of the lesion degree, the lesion degree of the region is low; cleaning is performed with a cleaning liquid of lower concentration, and the temperature of the cleaning liquid is the first-stage temperature, generally room temperature.
When the lesion degree value is greater than or equal to the first threshold and less than or equal to the second threshold, the lesion degree of the region is moderate; cleaning is performed with a cleaning liquid of medium concentration, and the temperature of the cleaning liquid is the second-stage temperature, generally 30-40 °C.
When the lesion degree value is greater than the second threshold, the lesion degree of the region is high; cleaning is performed with a cleaning liquid of higher concentration, and the temperature of the cleaning liquid is the third-stage temperature, generally 50-60 °C. It should be noted that the concentration and temperature of the cleaning liquid should not be too high, to avoid causing damage.
The concentration and temperature of the cleaning liquid for the oral cavity, nasal cavity, anus and similar areas are set with reference to known medical practice.
The third-stage temperature is higher than the second-stage temperature, which in turn is higher than the first-stage temperature.
The concentration and temperature of the cleaning liquid are set according to the different lesion degrees so as to achieve the best cleaning effect. Using a lower concentration and temperature on lightly lesioned areas avoids unnecessary damage to the oral cavity, nasal cavity, anus and similar areas, while areas with a higher lesion degree are cleaned with a higher concentration and temperature to better clean the lesioned parts. In this way, cleaning is targeted to the lesion degree of each area, the cleaning effect is improved, bacteria in these areas are removed, aggravation of the lesions is prevented, and damage to the oral cavity, nasal cavity, anus and similar areas is reduced.
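To make the mapping concrete, the following sketch assumes room temperature is about 25 °C and uses the midpoints of the ranges stated above as targets; none of these numbers is prescribed by the invention beyond the stated ranges:

```python
from dataclasses import dataclass

@dataclass
class CleaningParams:
    concentration: str    # "low" / "medium" / "high" cleaning liquid concentration
    temperature_c: float  # target temperature of the cleaning liquid in degrees C

def cleaning_params(signal: str) -> CleaningParams:
    """Select per-region cleaning liquid settings from the lesion degree signal."""
    if signal == "first-level lesion degree signal":
        return CleaningParams("low", 25.0)     # first-stage: room temperature
    if signal == "second-level lesion degree signal":
        return CleaningParams("medium", 35.0)  # second-stage: within 30-40 C
    return CleaningParams("high", 55.0)        # third-stage: within 50-60 C
```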
In step S5, after the cleaning in step S4 is completed and the system sends out a shooting-permitted signal, the cleaned oral cavity, nasal cavity, anus and similar areas are shot a second time.
Comparing the image of the area of the oral cavity, the nasal cavity, the anus and the like shot for the second time with the image uploaded and saved in the step S3, and obtaining the similarity of the image of the area of the oral cavity, the nasal cavity, the anus and the like shot for the second time and the image uploaded and saved in the step S3 through an image processing technology to judge the cleaning effect, wherein the method comprises the following steps of:
Preprocessing the images shot for the first time and the second time, including image denoising, contrast enhancement and other processing, so as to improve the accuracy of subsequent image comparison.
Similarity comparison is performed using an image processing algorithm; methods based on feature extraction and matching, such as the SIFT, SURF or ORB algorithm, can be used. These algorithms extract key feature points of the two images and match them, and the similarity between the images is calculated from the matches.
According to the calculation result of the similarity, the evaluation of the cleaning effect can be obtained. Setting a similarity threshold, and if the similarity is higher than the similarity threshold, considering that the cleaning effect is good, otherwise, considering that the cleaning effect is poor; the similarity threshold is set according to the actual situation and different algorithms.
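A sketch of the comparison using the ORB algorithm named above, via OpenCV; the 0.75 Lowe-ratio and the similarity threshold are assumed values to be tuned per algorithm and use case:

```python
import cv2

def image_similarity(path_before: str, path_after: str) -> float:
    """Fraction of ORB keypoints with a good match between the two images."""
    img1 = cv2.imread(path_before, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_after, cv2.IMREAD_GRAYSCALE)
    if img1 is None or img2 is None:
        return 0.0
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # ratio test
    return len(good) / max(min(len(kp1), len(kp2)), 1)

SIMILARITY_THRESHOLD = 0.6  # assumed; cleaning effect is good if exceeded

if __name__ == "__main__":
    sim = image_similarity("before.png", "after.png")  # placeholder file names
    print("cleaning effect:", "good" if sim > SIMILARITY_THRESHOLD else "poor")
```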
Judging the cleaning effect by calculating the similarity improves the accuracy of the assessment and avoids the subjective error that manual inspection may introduce, raising the level of automation; the quality of the cleaning result can be seen intuitively through image comparison, which makes it convenient for the user to evaluate the result and, when treating a patient, provides a reference for the doctor's diagnosis, improving efficiency.
Example 2
Embodiment 2 of the present invention differs from embodiment 1 in that this embodiment is described with respect to a visual cleaning system.
Fig. 2 shows a schematic structural diagram of a visual cleaning system according to the present invention, which includes a data processing module, and a data acquisition module, a shooting condition evaluation module, a shooting module, a cleaning module, and a data storage module communicatively connected to the data processing module.
The data acquisition module acquires shooting information and human body information, the shooting information and the human body information are sent to the data processing module, and the data processing module calculates to obtain shooting condition evaluation coefficients.
The shooting condition evaluation module receives the shooting condition evaluation coefficient calculated by the data processing module, and judges whether the shooting condition is reached or not according to comparison of the shooting condition evaluation coefficient and a shooting condition evaluation coefficient critical threshold.
And when the shooting condition evaluation coefficient is smaller than or equal to the shooting condition evaluation coefficient critical threshold, the shooting module shoots for a plurality of times, and uploads an image corresponding to the smallest shooting condition evaluation coefficient to the data storage module.
The data processing module receives the images in the data storage module to process, judges key areas, sends the key areas to the shooting module, and the shooting module shoots the key areas.
The cleaning module divides the area to be cleaned into a plurality of areas, obtains a lesion degree value according to the processing of the data processing module, judges the lesion degrees of different areas according to the lesion degree value, and implements different cleaning methods for the different areas.
The data storage module stores all images and the signals sent by the system.
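For orientation only, the module interaction of Fig. 2 might be wired together as in the following sketch; the class and method names are illustrative assumptions, not identifiers from the patent:

```python
class VisualCleaningSystem:
    """Assumed wiring of the modules; each constructor argument is an object
    supplying the behaviour described above for the corresponding module."""
    def __init__(self, acquisition, evaluator, camera, cleaner, storage, processor):
        self.acquisition = acquisition  # data acquisition module
        self.evaluator = evaluator      # shooting condition evaluation module
        self.camera = camera            # shooting module
        self.cleaner = cleaner          # cleaning module
        self.storage = storage          # data storage module
        self.processor = processor      # data processing module (cloud processor)

    def run_once(self) -> None:
        info = self.acquisition.collect()                # shooting + human body info
        p = self.processor.evaluation_coefficient(info)  # compute the coefficient
        if not self.evaluator.may_shoot(p):              # compare with threshold
            return                                       # no-shooting signal
        image = self.camera.best_of_burst()              # image with the smallest p
        self.storage.save(image)
        for region in self.processor.key_areas(image):   # image recognition
            self.camera.focus_shoot(region)              # key shooting
        for region, value in self.processor.lesion_values(image):
            self.cleaner.clean(region, value)            # per-region liquid settings
```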
Example 3
A visual cleaner for implementing the visual cleaning method and the visual cleaning system includes, but is not limited to:
the camera is used for shooting the areas of the oral cavity, the nasal cavity, the anus and the like.
And a light sensor for measuring brightness of the photographing process to calculate a brightness deviation value.
And the distance sensor is used for measuring the distance between the camera and the area to be shot and calculating the offset center distance and the area of the area to be shot by combining an image processing technology.
An acceleration sensor, used for measuring the vibration amplitude of the camera.
And the heart rate sensor is used for measuring the heart rate of the user.
The cleaner has the function of automatically adjusting the temperature and the concentration of the cleaning liquid; based on the image recognition results, the cleaner uses different cleaning liquid temperatures and concentrations in different areas.
The visual cleaner is in signal connection with a cloud processor, the cloud processor is embodied as a data processing module in a visual cleaning system, and the cloud processor has the algorithm function and the image processing technology mentioned in embodiment 1, which are not described herein again.
It should be noted that this embodiment describes the functions of the visual cleaner that relate to the visual cleaning method and the visual cleaning system; the specific arrangement of its components is not described again, as it serves to support the visual cleaning method and the visual cleaning system.
The rest of the related prior art is mature and will not be described in detail here.
The above formulas are all dimensionless, numerically evaluated formulas; they were fitted by software simulation over a large amount of collected data to approximate the real situation, and the preset parameters and threshold choices in the formulas are set by those skilled in the art according to the actual situation.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system, apparatus and module may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Finally: the foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and principles of the application are intended to be included within the scope of the application.
Claims (10)
1. A visual cleaning method, comprising the steps of:
step S1: acquiring shooting information and human body information, calculating a shooting condition evaluation coefficient through the shooting information and the human body information, and judging whether shooting conditions are reached or not according to the shooting condition evaluation coefficient;
step S2: after the system sends out the shooting permission signal, saving the image corresponding to the minimum shooting condition evaluation coefficient; performing image recognition processing on the image, determining a key area, and performing key shooting on the key area;
step S3: determining a region to be cleaned, dividing the region to be cleaned into a plurality of regions, quantifying the lesion degree of each region based on a deep-learning convolutional neural network, calculating a lesion degree value, and sending out lesion degree signals of different grades according to the lesion degree value;
step S4: according to the lesion degree signals of different grades, automatically adjusting the temperature and the concentration of the cleaning liquid for each region, and cleaning the plurality of regions respectively;
step S5: after cleaning is finished, shooting the cleaned oral cavity, nasal cavity, anus and similar areas a second time, comparing the second-shot images with the images uploaded and stored in step S3, and calculating the similarity to judge the cleaning effect.
2. A visual cleaning method according to claim 1, wherein: in step S1, the photographing information includes illumination information and camera vibration information; the illumination information is represented by a brightness deviation value, and the camera vibration information is represented by a camera vibration amplitude;
the human body information includes human body physiological information and human body position information; the human physiological information is embodied by heart rate, and the human position information is embodied by relative distance deviation value.
3. A visual cleaning method according to claim 2, characterized in that: the relative distance deviation value is calculated as: relative distance deviation value = [(off-center distance + 1) × distance deviation value between the camera and the region to be shot] / area of the region to be shot.
4. A visual cleaning method according to claim 3, characterized in that: the shooting condition evaluation coefficient is calculated by normalizing the brightness deviation value, the camera vibration amplitude, the heart rate and the relative distance deviation value, with the expression:
P = α1 × Lp + α2 × Tf + α3 × Hp + α4 × [(Pl + 1) × Sl] / Dm
wherein P is the shooting condition evaluation coefficient; Lp, Tf, Hp, Pl, Sl and Dm are respectively the brightness deviation value, the camera vibration amplitude, the heart rate, the off-center distance, the distance deviation value between the camera and the region to be shot, and the area of the region to be shot; [(Pl + 1) × Sl] / Dm is the relative distance deviation value; α1, α2, α3 and α4 are the preset proportional coefficients of the brightness deviation value, the camera vibration amplitude, the heart rate and the relative distance deviation value respectively, and α1 > α4 > α3 > α2 > 0;
setting a critical threshold for the shooting condition evaluation coefficient; when the shooting condition evaluation coefficient is greater than the critical threshold, the system sends out a no-shooting signal;
when the shooting condition evaluation coefficient is less than or equal to the critical threshold, the system sends out a shooting permission signal.
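Reading claim 4 as a weighted score, the computation and the threshold decision can be sketched as below. This is a minimal sketch only: the weight values and the critical threshold are placeholders, since the patent constrains only their ordering and gives no concrete numbers.

```python
# Hypothetical weights obeying the claimed ordering a1 > a4 > a3 > a2 > 0,
# and a placeholder critical threshold (the patent fixes neither value).
ALPHA_1, ALPHA_2, ALPHA_3, ALPHA_4 = 0.4, 0.1, 0.2, 0.3
CRITICAL_THRESHOLD = 1.0

def shooting_condition_coefficient(lp, tf, hp, pl, sl, dm):
    """Combine the normalized factors Lp, Tf, Hp and the relative distance
    offset value derived from Pl, Sl and Dm into the coefficient P."""
    relative_offset = (pl + 1) * sl / dm
    return (ALPHA_1 * lp + ALPHA_2 * tf + ALPHA_3 * hp
            + ALPHA_4 * relative_offset)

def shooting_permitted(p: float) -> bool:
    """Shooting is permitted when P does not exceed the critical threshold."""
    return p <= CRITICAL_THRESHOLD
```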
5. A visual cleaning method according to claim 4, wherein: in step S2, after the system sends out a shooting permission signal, areas such as the oral cavity, the nasal cavity and the anus are shot automatically; a plurality of images are taken at uniform intervals, the shooting condition evaluation coefficient of each image is calculated, and the image with the smallest coefficient is selected and saved;
the key areas are determined by performing image recognition processing on the saved image; the key areas include, but are not limited to, lesion areas and important position areas, and these areas are shot with emphasis.
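The burst-and-select step of claim 5 amounts to a minimum search over the evaluation coefficients; a minimal sketch, assuming each frame arrives paired with its precomputed coefficient:

```python
def best_frame(frames):
    """From a burst of (image, coefficient) pairs taken at uniform intervals,
    keep the frame with the smallest shooting condition evaluation coefficient."""
    image, _ = min(frames, key=lambda pair: pair[1])
    return image
```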
6. A visual cleaning method according to claim 5, wherein: in step S3, the region to be cleaned is divided into a plurality of areas, the lesion degree of each area is quantified based on a deep learning convolutional neural network, and a lesion degree value is calculated;
setting a first threshold value of the lesion degree and a second threshold value of the lesion degree;
when the lesion degree value is smaller than the first threshold, the system sends out a first-level lesion degree signal;
when the lesion degree value is greater than or equal to the first threshold and less than or equal to the second threshold, the system sends out a second-level lesion degree signal;
when the lesion degree value is greater than the second threshold, the system sends out a third-level lesion degree signal.
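The three-way grading of claim 6 maps directly to a threshold comparison; a minimal sketch with hypothetical parameter names:

```python
def lesion_signal(value: float, first_threshold: float, second_threshold: float) -> int:
    """Map a lesion degree value to a signal grade per claim 6."""
    if value < first_threshold:
        return 1   # first-level lesion degree signal
    if value <= second_threshold:
        return 2   # second-level lesion degree signal
    return 3       # third-level lesion degree signal
```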
7. A visual cleaning method according to claim 6, wherein: in step S4, when the lesion degree value is smaller than the first threshold, cleaning is performed with a cleaning liquid of lower concentration at the first-grade temperature;
when the lesion degree value is greater than or equal to the first threshold and less than or equal to the second threshold, cleaning is performed with a cleaning liquid of medium concentration at the second-grade temperature;
when the lesion degree value is greater than the second threshold, cleaning is performed with a cleaning liquid of higher concentration at the third-grade temperature.
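Claim 7 pairs each lesion grade with a cleaning liquid setting; a minimal lookup-table sketch, with placeholder labels since the patent fixes only the ordering of concentrations and temperature grades:

```python
# Placeholder settings: the claim specifies only lower/medium/higher
# concentration and first/second/third grade temperature.
CLEANING_SETTINGS = {
    1: ("lower concentration", "first-grade temperature"),
    2: ("medium concentration", "second-grade temperature"),
    3: ("higher concentration", "third-grade temperature"),
}

def settings_for_grade(grade: int):
    """Look up the cleaning liquid settings for an area's lesion grade."""
    return CLEANING_SETTINGS[grade]
```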
8. A visual cleaning method according to claim 7, wherein: in step S5, after the cleaning is finished and the system sends out a shooting permission signal, the cleaned areas such as the oral cavity, the nasal cavity and the anus are shot a second time;
the second-shot images are compared with the images uploaded and saved in step S3, and their similarity is obtained through an image processing technology; a similarity threshold is set, and if the similarity is higher than the threshold the cleaning effect is judged to be good, otherwise the cleaning effect is judged to be poor.
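The patent does not name the image processing technology used for the similarity; as one plausible choice, structural similarity (SSIM) from scikit-image yields a score (1 for identical images) that can be compared against the threshold. A minimal sketch, with a placeholder threshold value:

```python
from skimage.metrics import structural_similarity

SIMILARITY_THRESHOLD = 0.8  # placeholder; the patent does not fix a value

def cleaning_effect_is_good(saved_gray, second_shot_gray) -> bool:
    """Compare the second-shot image with the image saved in step S3 and
    judge the cleaning effect against the similarity threshold."""
    score = structural_similarity(saved_gray, second_shot_gray)
    return score > SIMILARITY_THRESHOLD
```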
9. A visual cleaning system for implementing a visual cleaning method according to any one of claims 1-8, characterized in that: the system comprises a data processing module together with a data acquisition module, a shooting condition evaluation module, a shooting module, a cleaning module and a data storage module, all of which are in communication connection with the data processing module;
the data acquisition module acquires the shooting information and the human body information and sends them to the data processing module, which calculates the shooting condition evaluation coefficient;
the shooting condition evaluation module receives the shooting condition evaluation coefficient calculated by the data processing module and judges whether the shooting conditions are met by comparing the coefficient with its critical threshold;
when the shooting condition evaluation coefficient is less than or equal to the critical threshold, the shooting module shoots a plurality of times and uploads the image corresponding to the smallest coefficient to the data storage module;
the data processing module processes the images in the data storage module, determines the key areas and sends them to the shooting module, which then shoots the key areas;
the cleaning module divides the region to be cleaned into a plurality of areas, obtains the lesion degree value from the data processing module, judges the lesion degree of each area according to that value, and applies a different cleaning method to each area;
the data storage module stores all images and the signals sent by the system.
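The data flow of claim 9 can be summarized as a wiring skeleton; every class and method name below is illustrative only, not taken from the patent:

```python
class VisualCleaningSystem:
    """Skeleton of the claimed module layout: peripheral modules exchange
    data with the data processing module."""

    def __init__(self, acquisition, evaluator, camera, cleaner, storage, processor):
        self.acquisition = acquisition  # data acquisition module
        self.evaluator = evaluator      # shooting condition evaluation module
        self.camera = camera            # shooting module
        self.cleaner = cleaner          # cleaning module
        self.storage = storage          # data storage module
        self.processor = processor      # data processing module

    def run_once(self):
        info = self.acquisition.collect()              # shooting + human body info
        p = self.processor.evaluate(info)              # evaluation coefficient
        if self.evaluator.condition_met(p):            # compare with threshold
            image = self.camera.capture_best()         # min-coefficient frame
            self.storage.save(image)
            key_areas = self.processor.find_key_areas(image)
            self.camera.capture(key_areas)             # emphasis shooting
            grades = self.processor.grade_regions(image)
            self.cleaner.clean(grades)                 # per-area settings
```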
10. A visual cleaner, characterized in that: the visual cleaner is used for implementing the visual cleaning method and the visual cleaning system described above, and includes, but is not limited to:
a camera for shooting areas such as the oral cavity, the nasal cavity and the anus;
a light sensor for measuring the brightness during shooting;
a distance sensor for measuring the distance between the camera and the area to be shot and, in combination with an image processing technology, calculating the off-center distance and the area of the area to be shot;
an acceleration sensor for measuring the vibration amplitude of the camera;
a heart rate sensor for measuring the heart rate of the user;
a cleaner with the function of automatically adjusting the temperature and the concentration of the cleaning liquid;
the visual cleaner is in signal connection with a cloud processor, and the cloud processor is embodied as the data processing module of the visual cleaning system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310923585.3A CN116919639A (en) | 2023-07-26 | 2023-07-26 | Visual cleaning method and system and visual cleaner thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116919639A true CN116919639A (en) | 2023-10-24 |
Family
ID=88378800
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310923585.3A Pending CN116919639A (en) | 2023-07-26 | 2023-07-26 | Visual cleaning method and system and visual cleaner thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116919639A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118395094A (en) * | 2024-06-27 | 2024-07-26 | 威海东舟医疗器械股份有限公司 | Automatic adjusting system of nasal cavity cleaning equipment |
CN120052789A (en) * | 2025-04-28 | 2025-05-30 | 珠海微视医用科技有限公司 | Anal fistula cleaning device and system based on visual identification |
CN120052789B (en) * | 2025-04-28 | 2025-07-22 | 珠海微视医用科技有限公司 | Anal fistula cleaning device and system based on visual identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||