CN113012112A - Evaluation method and system for thrombus detection - Google Patents
- Publication number
- CN113012112A (application CN202110222887.9A)
- Authority
- CN
- China
- Prior art keywords
- limb
- image
- unit
- light projection
- light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1072—Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1075—Measuring physical dimensions, e.g. size of the entire body or parts thereof for measuring dimensions by non-invasive methods, e.g. for determining thickness of tissue layer
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1077—Measuring of profiles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1079—Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
Abstract
The invention relates to an evaluation method for thrombus detection. The system comprises at least a light projection unit and an imaging unit. The light projection unit directionally projects distance-measuring light onto the limb to be measured and, on receiving a projection operation instruction, projects structured light over a target area on the limb surface. The imaging unit, arranged at a certain distance from the light projection unit and at an included angle relative to it, collects images of the limb surface marked by the projected light; in response to an instruction that the projection operation is complete, it records a first image of at least part of the surface of the limb to be measured. A pre-trained image-processing network is then retrieved to output estimated positions in the one or more first images, and a three-dimensional model of the limb together with limb parameters is generated by triangulation.
Description
Technical Field
The invention relates to the technical field of thrombus detection, in particular to a thrombus detection evaluation method and system.
Background
Hospitals currently treat many patients with a history of venous thromboembolism, quadriplegia, hip or knee joint replacement, spinal cord injury, stroke, and similar conditions. Prolonged bed rest slows blood flow in the limbs and causes symptoms such as limb swelling and calf itching and pain, which readily lead to deep venous thrombosis of the limbs; a detached deep venous thrombus can in turn cause pulmonary embolism, seriously endangering or even ending the patient's life. The approach clinicians generally adopt is to observe regularly whether the patient's limbs are swollen and whether skin temperature and color have changed. Stroke patients in particular often develop marked lower-limb swelling due to venous thrombosis, so the patient's limb circumference must be measured: by comparing circumference data taken at a designated position on the limb at different time points, the progression of the patient's condition can be tracked effectively, allowing effective preventive care measures to be taken before the condition worsens further.
When measuring the circumference of a patient's leg, a measuring point must first be determined. In clinical practice the patella is usually chosen as the reference point, and the fixed-point circumference is measured at positions 10 cm to 15 cm above and below the patella.
Chinese patent CN108937945A discloses a portable multifunctional limb measurement and recording evaluation device, comprising a limb circumference measuring device, a measuring tool, a record-card clip box, a device for measuring limb length and angle, and a device for measuring skin wheals, pressure sores, and the length, width, and depth of wounds, these components being connected in sequence and movably joined to one another. The multifunctional device can measure limb circumference, limb length, limb range of motion, the diameter of allergy-test skin wheals, and the dimensions and area of pressure ulcers and wounds; it also supports pupil observation, pain assessment, graded pressure-sore evaluation, and muscle-strength assessment for deep venous thrombosis, providing an accurate basis for medical staff in diagnosing and monitoring disease.
Although that device combines various measuring tools to a certain degree and provides an auxiliary positioning structure for marking, it cannot select an accurate measuring position on demand, nor can measurements be carried out quickly and effectively. A method is therefore needed that effectively acquires and stores data on limb-thickness changes at set positions above and below the patella without moving the patient's limb, so that medical staff can evaluate, from the thickness change over a period of time, whether the patient has a thrombus or is at risk of a thrombotic event, and respond promptly and effectively to changes in the patient's condition.
Furthermore, owing on the one hand to differences in understanding among persons skilled in the art, and on the other hand to the fact that the inventor studied a large number of documents and patents in making this invention but could not, for reasons of space, list all of their details, the present invention is by no means devoid of these prior-art features; on the contrary, it builds on them, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides a thrombus detection evaluation method. The system comprises at least a light projection unit and an imaging unit. The light projection unit directionally projects ranging light onto the limb to be measured and, on receiving a projection operation instruction, projects structured light over a target area on the limb surface. The imaging unit, arranged at a certain distance from the light projection unit and at an included angle relative to it, collects images of the limb surface marked by the projected light; in response to an instruction that the projection operation is complete, it records a first image of at least part of the surface of the limb to be measured. A pre-trained image-processing network is retrieved to output data comprising estimated positions in the one or more first images, and a three-dimensional model of the limb together with limb parameters is generated by triangulation.
According to a preferred embodiment, the imaging unit transmits the one or more first images of the same light-scattering area of the limb, acquired within the same time period, to the processing unit. The processing unit compares the acquired first images against the images held in the storage unit, which stores a number of reference images containing reference points together with their image parameters, and thereby generates rough parameter data for the set part of the limb to be measured.
According to a preferred embodiment, the imaging unit corrects the first image it records by capturing an image with the patella of the limb as the reference point and calibrating its imaging data, so that the acquired first image records a contour that includes the limb's reference point and a clear outline of the limb.
According to a preferred embodiment, the first image recorded by the imaging unit is the one taken at the first recorded measurement. The processing unit generates a three-dimensional model of the initially set part of the limb, together with its parameters, from the reference points in the first images and the estimated positions in the one or more first images; once at least one first image has been captured, triangulation is performed using parameter values of the imaging unit and the light projection unit, including the physical geometry of their mutual orientation and displacement. The processing unit then transmits the processed three-dimensional limb model and parameters to the analysis platform.
According to a preferred embodiment, the processing unit can segment the limb picture elements related to the target position from the other picture elements, so that the limb picture elements acquired within the same time period can be processed and the corresponding three-dimensional limb models generated.
According to a preferred embodiment, the light projection unit and the imaging unit acquire the second image by performing a further image acquisition of the patella and of the limb above and below it over several time periods during the observation period; the processing unit performs image processing and picture-element segmentation on the second images acquired within the same time period, generating a three-dimensional limb model and parameters corresponding to the second image data.
According to a preferred embodiment, the processing module transmits the three-dimensional limb models and parameters obtained by processing the second images to the analysis platform. The analysis platform collects the stored model and parameters of the first image acquired in the initial period together with those of the second images acquired sequentially in the subsequent periods, and the stored data can be displayed via the display module.
According to a preferred embodiment, the analysis platform compares the three-dimensional limb models and parameters of the sequentially acquired second images against those of the first image from the initial period and/or those of the second image from the preceding period, so as to determine the change in the limb above and below the patient's patella.
According to a preferred embodiment, before or as an initial step of recording the first image, and again before or as an initial step of recording the second image, the respective scanner settings are applied and the corresponding operating conditions are set for one or both of the light projection unit and the imaging unit, which then operate under those conditions; the operating conditions are the same at least during the recording of the first image and during the recording of the second image.
The application also provides an evaluation system for thrombus detection, in which the light projection unit projects structured light onto a target area on the surface of the limb to be measured; an imaging unit, arranged at a distance from the light projection unit and at an included angle relative to it, records a first image of at least part of the limb surface; and a pre-trained image-processing network is retrieved to output data comprising estimated positions in the one or more first images, after which a three-dimensional model of the limb and limb parameters are generated by triangulation.
Drawings
FIG. 1 is a logical schematic diagram of a preferred embodiment of the evaluation method of the present invention as applied to thrombus detection.
List of reference numerals
- 1: light projection unit
- 2: imaging unit
- 3: processing unit
- 4: analysis platform
- 5: display module
Detailed Description
Example 1
An evaluation method applied to thrombus detection measures the fixed-point limb size of the lower limb at 10 cm to 15 cm above and below the patella of a bedridden patient, in order to judge whether the adverse phenomenon of lower-limb swelling has occurred. The method judges whether lower-limb swelling caused by venous thrombosis is present from the changes in limb size measured sequentially over several spaced time periods, so that deep venous thrombosis that may occur, or has already occurred, can be treated promptly and in a targeted manner, deterioration of the bedridden patient's condition can be avoided, the progression of the disease can be followed in time, and a reasonable and effective staged treatment plan can be drawn up accordingly.
According to a specific embodiment, as shown in fig. 1, in order to monitor effectively whether a bedridden patient has developed lower-limb venous thrombosis, a light projection unit 1 and an imaging unit 2 are arranged to acquire images of the limb within a set target area that the structured light can cover. The light projection unit, positioned at a certain distance and angle from the limb to be measured, directionally projects ranging light onto it, and the imaging unit 2 acquires images of the target area marked by the light projected by the light projection unit 1, thereby obtaining images carrying information about the limb. Preferably, the imaging unit 2 is arranged at a certain distance from the light projection unit 1 and at an included angle relative to it, so that once the light projection unit 1 has projected structured light onto the target area of the limb, the imaging unit 2 is started and acquires images of that area. In addition, when the projection-imaging device is moved, the imaging unit 2 acquires images of the same limb part from different angles and regions as the light projection unit 1 moves. The images acquired within the same time period are processed by a pre-trained image-processing network to obtain estimated positions of the limb in the images; from the resulting sets of position data, triangulation generates a three-dimensional model of that section of the limb and the actual limb parameters the model represents, from which the limb size at the positions a set distance above and below the patella at a given time can be read off.
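The triangulation step described above can be sketched with textbook laser-triangulation geometry. This is a minimal illustration under assumed calibration data, not the patent's actual computation: it takes the projection angle, the observation angle (both measured from the baseline joining projector and camera), and the baseline length as known.

```python
import math

def triangulate_depth(baseline_mm: float, alpha_rad: float, beta_rad: float) -> float:
    """Depth of a surface point from the baseline, by the law of sines.

    The projector sees the point at angle alpha from the baseline and the
    camera sees it at angle beta; the projector-camera-point triangle then
    yields the perpendicular distance (depth) of the point from the baseline.
    """
    # Law of sines gives the projector-to-point distance; multiplying by
    # sin(alpha) projects it onto the direction perpendicular to the baseline.
    return (baseline_mm * math.sin(alpha_rad) * math.sin(beta_rad)
            / math.sin(alpha_rad + beta_rad))

# With both angles at 45 degrees and a 100 mm baseline, the surface point
# sits exactly 50 mm from the baseline.
depth = triangulate_depth(100.0, math.radians(45), math.radians(45))
```

Repeating this for every marked pixel along the projected line yields the set of 3D surface points from which the limb model is assembled.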
Specifically, the imaging unit 2 transmits the one or more images of the same light-scattering area of the limb, acquired within the same time period, to the processing unit 3. The processing unit 3 compares the acquired images against the storage unit, which holds a number of reference images containing reference calibration points and the parameters corresponding to those images, and thereby generates rough parameter data for the set part of the limb; from this rough data and its correspondence with the calibrated reference images, the processing unit 3 further generates a three-dimensional limb model whose feature points can be related to the reference images. Preferably, the imaging unit 2 captures the position of the patella of the limb as the reference point and corrects the images it records by calibrating its imaging data, so that each acquired image records a contour including the limb's reference point and a clear outline of the limb. When the patient's designated limb is imaged repeatedly over several periods, the three-dimensional limb models and parameters processed by the processing unit 3 are all transmitted to the analysis platform 4, which tabulates and comparatively analyses the sets of data gathered over the repeated acquisitions. Medical staff can thus see how the limb size at the set positions above and below the patella has changed over a period of time, judge conveniently whether venous thrombosis has occurred or may occur in the bedridden patient, and use this as a basis for treatment plans suited to individual patients.
In addition, the three-dimensional models obtained over repeated acquisitions help medical staff appreciate the actual change in the patient's limb size more intuitively through overlay comparison. This avoids judging the condition from size data at the set position alone, and reduces the influence on measurement accuracy of limb deformation caused by the patient's posture, external compression, and the like.
The first image recorded by the imaging unit 2 is the one taken when the first recorded measurement is performed on the patient's limb, and the three-dimensional model and parameters generated from it serve as reference data for the patient's initial state. The processing module generates the three-dimensional model of the set part of the initial limb and its parameters from the reference points in the first images and the estimated positions in the one or more first images; once at least one first image has been captured, triangulation is performed using parameter values of the light projection unit and the imaging unit, including the physical geometry of their mutual orientation and displacement. The processing unit 3 can segment the limb picture elements related to the target position from the other picture elements, so that the limb picture elements acquired within the same time period can be processed and a corresponding three-dimensional limb model generated. The processing unit 3 then transmits the processed model and parameters to the analysis platform 4.
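The "limb parameters" read off a cross-section of the three-dimensional model can be as simple as a circumference. The patent does not specify this computation; as an illustration only, the perimeter of a closed ring of cross-section points approximates the circumference at that height:

```python
import math

def ring_circumference(points):
    """Perimeter of a closed polygon given as an ordered list of (x, y)
    points, e.g. one horizontal slice through the 3D limb model (units: cm)."""
    total = 0.0
    # Sum edge lengths, wrapping from the last point back to the first.
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total

# A dense ring sampled on a 6 cm-radius circle approaches 2*pi*r ~ 37.7 cm.
ring = [(6.0 * math.cos(2 * math.pi * k / 360),
         6.0 * math.sin(2 * math.pi * k / 360)) for k in range(360)]
circumference = ring_circumference(ring)
```

Slicing the model at the set distances above and below the patella and applying this to each slice gives the fixed-point circumferences the clinicians track.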
The projection-imaging assembly comprising the light projection unit 1 and the imaging unit 2 acquires the second image by performing further image acquisitions of the patella and of the limb above and below it over several time periods during the observation period. The processing unit 3 performs image processing and picture-element segmentation on the second images acquired within the same time period, generating a three-dimensional model and parameters for that section of the limb corresponding to the second image data. The limb models and parameters obtained from the second images processed by the processing unit 3 are transmitted to the analysis platform 4, which summarizes the model and parameters of the first image from the initial period together with those of the second images acquired sequentially in the subsequent periods; the stored data can be displayed via the display module 5. The analysis platform 4 can also compare the models and parameters of the sequentially acquired second images against those of the first image from the initial period and/or those of the second image from the preceding period, and from this judge the change in the limb above and below the patient's patella.
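The comparative analysis on the analysis platform can be sketched as a simple time-series check against the first-image baseline. The 3 cm threshold below is a hypothetical illustration, not a clinical value taken from the patent:

```python
def flag_circumference_change(baseline_cm, followups_cm, threshold_cm=3.0):
    """Compare each follow-up circumference against the initial (first-image)
    value and flag any increase exceeding the threshold.

    Returns a list of (measurement, delta, flagged) tuples in acquisition order.
    """
    results = []
    for value in followups_cm:
        delta = value - baseline_cm
        results.append((value, delta, delta > threshold_cm))
    return results

# Baseline 38.0 cm below the patella; the second follow-up shows a 4 cm
# increase and would be flagged for review by medical staff.
report = flag_circumference_change(38.0, [38.5, 42.0])
```

In the patented system this role is played by the analysis platform 4, which tabulates the models and parameters of the first and second images and presents the change to medical staff via the display module 5.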
Before or as an initial step of recording the first image, and again before or as an initial step of recording the second image, the respective scanner settings are applied and the corresponding operating conditions are set for one or both of the light projection unit and the imaging unit, which then operate under those conditions; the operating conditions are the same at least during the recording of the first image and during the recording of the second image.
Example 2
An evaluation system capable of determining whether a bedridden patient has limb venous thrombosis includes a projection-imaging scanning and analysis system that can be integrated into a single device within one housing. The light projection unit 1 projects its structured light onto the surface of the set position of the limb from a certain angle and distance. The imaging unit 2 may be arranged at a distance of 20 mm to 500 mm or more from the light projection unit 1 and moves together with it, so that images of the limb surface can be recorded from different angles and positions.
After image registration is completed, an image-processing network configured during training is retrieved to output estimated positions in one or more images related to the target position, including the surface of the physical object. The estimated positions of the target position on the object's surface in the one or more first images are then fed into the subsequent triangulation. Because these position estimates can be provided with improved accuracy, the trained network improves the triangulation input and enables accurate computation of a point cloud or three-dimensional model of the surface of the limb under test. With a suitably trained image-processing network, the system can scan and acquire data accurately and effectively even when the outline of the limb is indistinct, and can scan object surfaces whose features vary considerably.
The system may, on receiving user input, perform a scanning operation to generate a partial or complete computer-readable point cloud or three-dimensional model of the object's surface under the operating conditions previously set by the user for one or both of the light projection unit and the imaging unit. The user therefore need not repeat the scanning process over and over before achieving a satisfactory result. The trained image-processing network may generalize within and/or beyond its training data, which increases the chance that the user obtains better scan results than would be achievable in at least the first few attempts at scanning a particular limb. This matters because the scanning process may take considerable time to complete, for example 5 to 30 minutes or more. Further, the system may receive user input via a user interface to perform digital reconstruction operations comprising one or more of: retrieving the trained image-processing network, applying it, and generating a partial or complete computer-readable point cloud or three-dimensional model of the object's surface. Since both the scanning operation and the digital reconstruction operation can be time-consuming, and since they may run on separate hardware components, it can be advantageous to perform either in response to user input. In either case the user is freed from the very time-consuming task of adjusting the operating conditions by trial and error across both operations.
Preferably, the image processing network may be configured during training to suppress the undesired effects of light scattering areas, which are a source of erroneous shifts in the estimate of the target position. The trained image processing network may use a combination of linear and non-linear operators.
Preferably, the structured light is projected onto the surface of the object to be measured substantially at target positions that are gradually displaced across that surface. This may be achieved by a motorized rotation stage supporting angular movement of the light projection unit, so that the structured light can be aimed at a target position on the object to be measured. When observing the object on which the structured light impinges, the structured light appears most intense at the target position and falls off in an approximately Gaussian manner along the direction orthogonal to the structured light, which may be a line. Light scattering areas illuminated by structured light projected substantially at the target position on the surface of the physical object may appear as one or more regularly or irregularly illuminated areas, symmetric or asymmetric about the target position. The target position may correspond to the geometric center or "center of gravity" of the structured light (e.g., the center of the substantially Gaussian distribution), disregarding the light scattering areas at least to some extent.
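The "center of gravity" of such a substantially Gaussian line profile can be estimated with an intensity-weighted centroid. The following Python sketch is illustrative only and is not taken from the patent; in particular, the `noise_floor` parameter, which discards weak scattered light before the centroid is computed, is an assumption:

```python
def line_center(intensities, noise_floor=0.1):
    """Estimate the sub-pixel center of a structured-light line from a 1-D
    intensity profile sampled orthogonally to the line.

    Samples below noise_floor * peak are treated as ambient/scattered light
    and excluded, so the centroid tracks the Gaussian core of the line."""
    peak = max(intensities)
    threshold = noise_floor * peak
    num = den = 0.0
    for i, value in enumerate(intensities):
        if value >= threshold:
            num += i * value
            den += value
    return num / den  # intensity-weighted centroid, in (sub-)pixel units
```

For a symmetric profile such as `[0, 1, 4, 9, 4, 1, 0]` the centroid lands exactly on the peak at index 3, but an asymmetric tail from a scattering area would pull an unthresholded centroid away from it.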
Preferably, the light projection unit 1 may include a light source (such as an LED or laser) and may include one or more of an optical lens and a prism. The light projection unit 1 may comprise a "fan laser", i.e., a laser emitting a beam and a laser line generator lens converting the beam into a uniform straight line. The laser line generator lens may be configured as a cylindrical lens or a rod lens that focuses the laser beam along one axis to form the light line. The structured light may be configured as a dot, an array or matrix of dots, a cloud of dots, a single ray, or a plurality of parallel or intersecting lines. The structured light may be, for example, white, red, green, or infrared light, or a combination thereof. Structured light arranged as a line appears as a straight line on the object from the perspective of the light projection unit 1; from a viewpoint different from that of the light projection unit 1, the line appears curved wherever the object is curved. This curvature is observable in an image captured by a camera arranged at a distance from the light projection unit 1. The target position may alternatively correspond to an edge of the structured light. The edge may be defined by, for example, a statistical criterion such as the point where the light intensity falls to approximately half of the intensity at the center of the structured light, which may have a substantially Gaussian profile. Other criteria for detecting edges may also be used. The target position may correspond to a left edge, right edge, upper edge, lower edge, or a combination thereof. An advantage of using a target position at the edge of the structured light is that the resolution of the 3D scan can be improved.
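The half-of-peak edge criterion described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; the linear interpolation between the two samples straddling the half-maximum is one simple way to obtain a sub-pixel edge position:

```python
def half_max_edge(profile, side="left"):
    """Locate the 'left' or 'right' edge of a roughly Gaussian line profile,
    defined as the sub-pixel position where the intensity first crosses
    half of the peak value. Linear interpolation between the two samples
    straddling the crossing gives sub-pixel precision."""
    samples = list(profile)
    if side == "right":
        # Reuse the left-edge search on the mirrored profile.
        return len(samples) - 1 - half_max_edge(samples[::-1], "left")
    half = max(samples) / 2.0
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if lo < half <= hi:
            return (i - 1) + (half - lo) / (hi - lo)  # interpolated crossing
    raise ValueError("no half-maximum crossing found")
```

For the profile `[0, 2, 8, 10, 8, 2, 0]` (peak 10, half-maximum 5), the left edge falls at 1.5 and the right edge at 4.5, i.e., between pixel centers, which is the resolution gain the passage refers to.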
Preferably, the integrated light projection-imaging device of the system may have one or two wired interfaces (for example according to the USB standard) or wireless interfaces (for example according to the Wi-Fi or Bluetooth standard) for transmitting the image sequence to a computer hosting the analysis platform 4 (for example a desktop, laptop, tablet or smartphone). The sequence of images may conform to an image or video format (e.g., the JPEG standard). Retrieving the trained image processing network, processing the images, and generating a partial or complete computer-readable point cloud or three-dimensional model of the object surface may be performed by this computer, which may receive the sequence of images via a data communication link. The imaging unit 2 may be configured as a color camera, for example an RGB camera, or as a gray-tone camera. The term triangulation should be construed to include any type of triangulation, i.e., determining the location of a point by forming triangles to it from known points, including, but not limited to, triangulation as used in one or more of epipolar geometry, photogrammetry, and stereo vision.
Preferably, the image recorded by the imaging unit 2 may be a monochrome image or a color image. The imaging unit 2 may be configured with a camera sensor that outputs an image having columns and rows of monochrome or color pixel values in a matrix format. The 2D representation may be in the form of a list of 2D coordinates, for example referenced to the column and row indices of the first image. In some aspects, the 2D representation is obtained with sub-pixel precision. The 2D representation can alternatively or additionally be output in the form of a 2D image, e.g. a binary image having only two possible pixel values. The 2D representation thus encoded can be used to prepare the input for triangulation. The target position in at least one of the first images is obtained by processing with the trained image processing network of the processing unit 3. The parameter values representing the physical geometry of the light projection unit 1 and the imaging unit 2 may comprise one or more of a mutual orientation and a 2D or 3D displacement. The parameter values may comprise at least a first value which remains fixed during the scanning of a particular 3D object and a second value which varies during that scan. The second value may be read or sensed by a sensor, or provided by a controller that controls the scanning of the 3D object. The sensor may, for example, sense rotation of the light projection unit or a component thereof.
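As an illustration of how such geometry parameters feed the triangulation, the following sketch intersects a camera ray with a laser ray in a simplified planar geometry: the camera sits at the origin looking along +z, and the laser is offset by a baseline along x and tilted toward the optical axis. The function names and this particular geometry are assumptions for illustration, not the patent's notation:

```python
import math

def triangulate_depth(u_px, baseline_m, laser_angle_rad, focal_px):
    """Depth z of a laser spot seen at horizontal pixel offset u_px from
    the principal point. The camera ray is x = (u/f) * z; the laser at
    (baseline, 0, 0) emits the ray x = baseline - z * tan(angle).
    Setting the two equal and solving for z gives the depth."""
    return baseline_m * focal_px / (u_px + focal_px * math.tan(laser_angle_rad))

def triangulate_point(u_px, v_px, baseline_m, laser_angle_rad, focal_px):
    """Back-project the pixel to a full 3-D point on the laser ray."""
    z = triangulate_depth(u_px, baseline_m, laser_angle_rad, focal_px)
    return (u_px / focal_px * z, v_px / focal_px * z, z)
```

With a 10 cm baseline, a 45-degree laser angle, and a 1000-pixel focal length, a spot observed on the optical axis (u = 0) triangulates to a depth of 0.1 m; here the baseline plays the role of the fixed first parameter value and the laser angle that of the varying, sensed second value.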
When performing segmentation classification of picture elements, a picture element that is distinguished from the other picture elements as a target position constitutes an estimate of the target position on the surface of the physical object. Such picture elements may be encoded with one or more unique values (e.g., as binary values in a binary image). The trained image processing network may receive an image of the image sequence as an input image and provide a segmentation in which certain picture elements are distinguished as target positions in the output image. The output image may have a higher resolution than the input image, which allows sub-pixel-accurate estimation of the target position that is input to the triangulation. Alternatively, the output image may have a lower resolution, for example for triangulation-related processing. The higher-resolution image may be generated by upsampling as known in the art, for example using bicubic interpolation (e.g., via a bicubic filter); in some embodiments, other types of upsampling are used. The upsampling factor may be, for example, eight, four, or another value. Further, the segmentation may correspond to an estimated target position identified by an image processor performing image processing in response to parameters controlled by a human operator, for example parameters related to one or more of: image filter type, image filter kernel size, and intensity threshold. The operator-controlled parameters may be set by trial and error, the operator using visual inspection to tune the parameters toward an accurate segmentation indicating the target position.
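A minimal upsampling sketch follows, using bilinear rather than the bicubic interpolation mentioned above for brevity. It shows how an integer upsampling factor yields a finer grid on which a target position can be located between the original pixel centers, i.e. with sub-pixel precision; the function and its clamping behavior at the borders are illustrative assumptions:

```python
def upsample_bilinear(img, factor):
    """Upsample a 2-D grid of values by an integer factor using bilinear
    interpolation (a simpler stand-in for a bicubic filter). Coordinates
    beyond the last source sample are clamped to the border value."""
    h, w = len(img), len(img[0])
    out = []
    for Y in range(h * factor):
        y = min(Y / factor, h - 1)          # source row coordinate
        y0 = int(y); y1 = min(y0 + 1, h - 1); fy = y - y0
        row = []
        for X in range(w * factor):
            x = min(X / factor, w - 1)      # source column coordinate
            x0 = int(x); x1 = min(x0 + 1, w - 1); fx = x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Upsampling the one-row grid `[[0, 4]]` by a factor of 2 produces the intermediate value 2.0 between the two source samples, which is exactly the kind of in-between position a sub-pixel target estimate can occupy.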
Preferably, the trained image processing network is a convolutional neural network, such as a deep convolutional neural network. A convolutional neural network provides a highly accurate estimate of the target position even under sub-optimal operating conditions of the light projection unit, for example where light scattering areas significantly distort the light pattern on the surface of the physical object and its spatial definition (e.g., due to ambient light or the surface texture and reflectance of the physical object). A properly trained convolutional neural network provides superior segmentation results for accurate estimation of target positions. In some embodiments, the trained image processing network comprises a support vector machine. The trained image processing network may also be a deep convolutional network with a u-net architecture, including downsampling and upsampling operators. Such a network provides a good balance between computational burden and accuracy, even with a relatively small set of training data. A convolutional network with a u-net architecture is advantageous when the network outputs the segmentation map in the form of an image.
Example 3
A method for evaluating whether venous thrombosis is occurring or has occurred in a bedridden patient through measurement of limb dimensions.
After the patient is admitted, the light projection unit 1 projects the structured light onto the region from 10 cm to 15 cm above and below the patella of the patient's lower limb, so that the scattering range of the projected light stays within this limited region. The camera of the imaging unit 2 then records images of the part of the limb surface within the structured-light scattering range to obtain a first image; by rotating or moving the projection unit, the camera of the imaging unit 2, which moves with the projection unit, records and stores images of the entire limb surface within the region 10 cm to 15 cm above and below the patella. Next, the acquired first image is segmented and processed by the processing unit 3 to obtain a three-dimensional model of the limb under measurement and the dimension data of the modeled limb. Finally, the processed data are transmitted to the analysis platform 4 for storage and analysis and are displayed through the display module 5, which also serves as the user operation terminal where an operator can set parameters and perform related control operations.
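One plausible way to turn the scanned point cloud near the patella into limb dimension data is to measure the circumference of a thin horizontal cross-section. This sketch is not from the patent: it orders the (x, y) samples of one slice by angle around their centroid and sums the edges of the resulting closed polygon:

```python
import math

def cross_section_circumference(points):
    """Approximate the limb circumference at one height from the (x, y)
    coordinates of point-cloud samples in a thin horizontal slice.
    The points are sorted by angle around their centroid and treated as
    a closed polygon; the perimeter approximates the circumference."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    ordered = sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    return sum(
        math.dist(ordered[i], ordered[(i + 1) % len(ordered)])
        for i in range(len(ordered))
    )
```

Repeating this at several heights within the 10 cm to 15 cm band above and below the patella yields a circumference profile of the limb that can be stored on the analysis platform for later comparison.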
After the first image is acquired, the medical staff repeatedly image the same limb region of the patient at intervals to follow the condition of the bedridden patient's limb over time. The limb dimension data obtained from these repeated second images are compared with those obtained from the first image to judge whether the patient may develop, or has developed, lower-limb venous thrombosis. The comparison includes an overlay comparison of the three-dimensional models obtained from image processing and a direct comparison of the dimension data. The processing unit 3 transmits the processed data of each second image to the analysis platform 4, which compares the initially acquired first-image data with the subsequently acquired second-image data and judges whether thrombosis is occurring from the change in the limb dimension data. If the patient's limb dimension data show a gradual enlargement that exceeds a preset threshold several times, the analysis platform 4 transmits an early-warning signal to the display module to alert the medical staff that the patient may have venous thrombosis.
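The analysis platform's warning condition, i.e. repeated measurements that grow progressively and exceed a preset threshold several times, could be sketched as follows. The 3 cm default threshold and the required count of exceedances are illustrative assumptions, not values stated in the text:

```python
def thrombosis_alert(baseline_cm, follow_ups_cm, threshold_cm=3.0, min_exceed=2):
    """Return True when the follow-up circumferences at the same limb site
    (a) never decrease between measurements, i.e. the limb is gradually
    enlarging, and (b) exceed baseline + threshold_cm at least min_exceed
    times. Mirrors the early-warning condition described above."""
    exceed = sum(1 for c in follow_ups_cm if c - baseline_cm > threshold_cm)
    increasing = all(a <= b for a, b in zip(follow_ups_cm, follow_ups_cm[1:]))
    return increasing and exceed >= min_exceed
```

A steadily swelling limb (e.g. 38 cm at baseline, then 39, 41.5, 42 cm) would trigger the warning, while small fluctuations around the baseline would not.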
It should be noted that the above-described embodiments are exemplary, and that those skilled in the art, having the benefit of the present disclosure, may devise various arrangements that embody the principles of the invention and fall within its scope. It should also be understood that the specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110222887.9A CN113012112B (en) | 2021-02-26 | 2021-02-26 | Evaluation system for thrombus detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113012112A true CN113012112A (en) | 2021-06-22 |
CN113012112B CN113012112B (en) | 2024-07-02 |
Family
ID=76386847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110222887.9A Active CN113012112B (en) | 2021-02-26 | 2021-02-26 | Evaluation system for thrombus detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113012112B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114452459A (en) * | 2022-03-01 | 2022-05-10 | 上海璞慧医疗器械有限公司 | Thrombus aspiration catheter monitoring and early warning system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5497787A (en) * | 1994-08-05 | 1996-03-12 | Nemesdy; Gabor | Limb monitoring method and associated apparatus |
EP0993805A1 (en) * | 1998-10-15 | 2000-04-19 | Medical concept Werbeagentur GmbH | System providing prophylaxis against thromboembolism |
US20150302594A1 (en) * | 2013-07-12 | 2015-10-22 | Richard H. Moore | System and Method For Object Detection Using Structured Light |
US20160235354A1 (en) * | 2015-02-12 | 2016-08-18 | Lymphatech, Inc. | Methods for detecting, monitoring and treating lymphedema |
DE102016118073A1 (en) * | 2016-09-26 | 2018-03-29 | Comsecura Ag | Measuring device for determining and displaying the exact size of a thrombosis prophylaxis hosiery and method for determining and displaying the size of the thrombosis prophylaxis hosiery |
US20180317772A1 (en) * | 2015-07-03 | 2018-11-08 | Universite De Montpellier | Device for biochemical measurements of vessels and for volumetric analysis of limbs |
CN110542390A (en) * | 2018-05-29 | 2019-12-06 | 环球扫描丹麦有限公司 | 3D object scanning method using structured light |
CN212489887U (en) * | 2020-04-30 | 2021-02-09 | 厦门中翎易优创科技有限公司 | Limb swelling monitoring device |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |