
CN113012112A - Evaluation method and system for thrombus detection - Google Patents

Evaluation method and system for thrombus detection

Info

Publication number
CN113012112A
CN113012112A (application CN202110222887.9A)
Authority
CN
China
Prior art keywords
limb
image
unit
light projection
light
Prior art date
Legal status
Granted
Application number
CN202110222887.9A
Other languages
Chinese (zh)
Other versions
CN113012112B (en)
Inventor
高兰
郭桂丽
郭然
张和艳
徐倩
田红敏
Current Assignee
Xuanwu Hospital
Original Assignee
Xuanwu Hospital
Priority date
Filing date
Publication date
Application filed by Xuanwu Hospital
Priority to CN202110222887.9A
Publication of CN113012112A
Application granted
Publication of CN113012112B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B 5/1072 Measuring distances on the body, e.g. measuring length, height or thickness
    • A61B 5/1075 Measuring dimensions by non-invasive methods, e.g. for determining thickness of tissue layer
    • A61B 5/1077 Measuring of profiles
    • A61B 5/1079 Measuring physical dimensions using optical or photographic means

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Dentistry (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to an evaluation method for thrombus detection that uses at least a light projection unit and an imaging unit. The light projection unit can directionally project ranging light onto the limb to be measured, and the imaging unit can collect images of the part of the limb surface marked by the projected light. On receiving a projection operation instruction, the light projection unit projects structured light substantially within a target area on the surface of the limb to be measured. The imaging unit is arranged at a certain distance from the light projection unit and at a certain included angle relative to it and, in response to an instruction that the light projection unit has completed the projection operation, records a first image of at least part of the surface of the limb to be measured. A pre-set trained image processing network is retrieved to output estimated data including the position in the one or more first images, and a three-dimensional model of the limb and limb parameters are generated by triangulation.

Description

Evaluation method and system for thrombus detection
Technical Field
The invention relates to the technical field of thrombus detection, in particular to a thrombus detection evaluation method and system.
Background
At present, hospitals treat many patients with a history of venous thromboembolism, quadriplegia, hip or knee joint replacement, spinal cord injury, stroke and the like. Because blood flow in the limbs slows during prolonged bed rest, symptoms such as limb swelling and calf itching and pain appear, deep venous thrombosis of the limbs develops readily, and a dislodged deep venous thrombus can cause pulmonary embolism, seriously endangering and even ending the patient's life. The approach clinicians generally adopt is to observe regularly whether the patient's limbs are swollen and whether skin temperature and colour have changed. Stroke patients in particular often develop marked lower-limb swelling due to venous thrombosis, so the limb circumference must be measured: by comparing circumference data measured at a designated position on the limb at different time points, the progression of the patient's condition can be monitored effectively, effective preventive care measures can be taken, and further deterioration can be prevented.
When the circumference of a patient's leg is measured, a measuring point is first determined; in clinical practice the patella is usually chosen as the reference point, and the circumference is measured at fixed points 10 cm to 15 cm above and below the patella.
Chinese patent CN108937945A discloses a portable multifunctional limb measurement, recording and evaluation device comprising a limb circumference measuring device, a measuring tool, a record-sheet clip box, a device for measuring limb length and angle, and a device for measuring skin wheals, pressure sores, and the length, width and depth of wounds, these parts being connected in sequence; the measuring device is arranged at the right side of the measuring tool and the record-sheet clip box and is movably connected with them. This multifunctional device can measure limb circumference, limb length, limb motion angle, the diameter of allergy-test wheals, the size and area of pressure-sore ulcers and wounds, and pupil observations, and can also perform pain assessment, graded pressure-sore evaluation and deep-vein-thrombosis muscle strength assessment, providing an accurate basis for medical staff in diagnosis, observation and treatment. However, although it combines various measuring tools to a certain degree and provides an auxiliary positioning and marking structure, it cannot select an accurate measuring position as required, nor can it measure quickly and effectively. A method is therefore needed that can acquire and store data on the thickness change of the limb at set positions above and below the patella without moving the patient's limb, so that medical staff can evaluate, from the thickness change over a certain time, whether the patient has a thrombus or how high the risk of thrombosis is, and respond promptly and effectively to changes in the patient's condition.
Furthermore, on the one hand there are differences in understanding among persons skilled in the art; on the other hand, the inventors studied a large number of documents and patents in making the present invention, and space does not allow all details and contents to be listed above. This by no means implies that the present invention lacks these prior-art features; on the contrary, the present invention may be provided with all of the features of the prior art, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention provides an evaluation method for thrombus detection that uses at least a light projection unit and an imaging unit. The light projection unit can directionally project ranging light onto the limb to be measured, and the imaging unit can collect images of the part of the limb surface marked by the light projected by the light projection unit. On receiving a projection operation instruction, the light projection unit projects structured light substantially within a target area on the surface of the limb to be measured. The imaging unit is arranged at a certain distance from the light projection unit and at a certain included angle relative to it and, in response to an instruction that the light projection unit has completed the projection operation, records a first image of at least part of the surface of the limb to be measured. A pre-set trained image processing network is retrieved to output data comprising an estimate of the position in the one or more first images, and a three-dimensional model of the limb and limb parameters are generated by triangulation.
According to a preferred embodiment, the imaging unit transmits one or more first images, acquired within the same time period, of the same light scattering area of the limb to be measured to the processing unit. The processing unit compares the acquired first images against images in a storage unit that holds a plurality of reference images containing reference points together with their image parameters, so as to generate general parameter data for the set part of the limb to be measured.
According to a preferred embodiment, the imaging unit corrects the first image it records by capturing images with the patella of the limb to be measured as a reference point and calibrating its imaging data accordingly, so that the acquired first image records both the limb reference point and a clear outline of the limb.
According to a preferred embodiment, the first image recorded by the imaging unit is taken at the time of the first recorded measurement. The processing unit generates a three-dimensional model of the initially set part of the limb, together with its parameters, from the reference points in the first images and the estimated position data in the one or more first images; where at least one first image is captured, triangulation is performed using parameter values describing the physical geometry of the imaging unit and the light projection unit, including their mutual orientation and displacement. The processing unit can transmit the processed three-dimensional limb model and parameters to the analysis platform.
According to a preferred embodiment, the processing unit can segment limb picture elements related to the target position from other picture elements, so that a plurality of limb picture elements acquired within the same time period can be processed and corresponding three-dimensional limb models generated.
According to a preferred embodiment, the light projection unit and the imaging unit acquire second images by performing further image acquisition of the patella and the limb segments above and below it during a plurality of time periods within the observation period. The processing unit can perform image processing and image element segmentation on second images acquired within the same time period, so as to generate a three-dimensional model corresponding to the second image data together with the parameters of that limb segment.
According to a preferred embodiment, the processing module transmits the three-dimensional limb models and parameters obtained by processing the plurality of second images to the analysis platform. The analysis platform can collate the stored three-dimensional limb model and parameters of the first image acquired in the initial period with those of the second images acquired sequentially in a plurality of subsequent periods, and the stored data can be displayed through the display module.
According to a preferred embodiment, the analysis platform can compare the three-dimensional limb models and parameters of the sequentially acquired second images with those of the first image acquired in the initial period and/or those of the second image acquired in the previous period, and thereby determine changes in the limb segments above and below the patient's patella.
According to a preferred embodiment, before or as an initial step of recording the first image, and again before or as an initial step of recording the second image, respective scanner settings are applied and respective operating conditions are set for one or both of the light projection unit and the imaging unit, which then operate in accordance with those conditions; the respective operating conditions are the same at least during recording of the first image and during recording of the second image.
The application also provides an evaluation system for thrombus detection, in which structured light is projected by the light projection unit onto a target area on the surface of the limb to be measured; a first image of at least part of the surface of the limb to be measured is recorded using an imaging unit arranged at a distance from the light projection unit and at an included angle relative to it; and a pre-set trained image processing network is retrieved to output data comprising an estimate of the position in the one or more first images, and a three-dimensional model of the limb and limb parameters are generated by triangulation.
Drawings
FIG. 1 is a logical schematic diagram of a preferred embodiment of an assessment method of the present invention as applied to thrombus detection.
List of reference numerals
1: light projection unit; 2: imaging unit; 3: processing module; 4: analysis platform; 5: display module
Detailed Description
Example 1
An evaluation method applied to thrombus detection measures the fixed-point limb size of the lower limb 10 cm to 15 cm above and below the patella of a bedridden patient, in order to judge whether the patient shows the adverse sign of lower-limb swelling. The method judges whether lower-limb swelling caused by venous thrombosis is present from the change in limb size measured sequentially over a number of spaced time periods, so that deep venous thrombosis of the limbs that may occur or has occurred can be treated promptly and specifically, deterioration of the bedridden patient's condition can be avoided, the progression of the patient's illness can be followed in time, and a reasonable and effective treatment plan can be drawn up in stages according to the condition.
According to a specific embodiment, as shown in FIG. 1, in order to monitor effectively whether a bedridden patient has developed lower-limb venous thrombosis, a light projection unit 1 and an imaging unit 2 are arranged to acquire images of the limb within a set target area that the structured light can cover. The light projection unit, positioned at a certain distance from and at a certain angle to the limb to be measured, directionally projects ranging light onto the limb, and the imaging unit 2 acquires images of the target area marked by the light projected by the light projection unit 1, thereby obtaining images carrying information about the limb to be measured. Preferably, the imaging unit 2 is arranged at a certain distance from the light projection unit 1 and at a certain included angle relative to it, so that after the light projection unit 1 has projected the structured light onto the target area of the limb, the imaging unit 2 is started and acquires images of that area. In addition, when the light projection-imaging device is moved, the imaging unit 2 can acquire images of the same limb part from different angles and of different areas as the light projection unit 1 moves. The several images acquired within the same time period are processed by a pre-set trained image processing network to obtain estimated data on the position of the limb to be measured in the images; from the resulting groups of position data, a three-dimensional model of that limb segment, together with the actual limb parameters it represents, is generated by triangulation, so that the limb size at positions a certain distance above and below the patella at a specific time can be read from the three-dimensional model. Specifically, the imaging unit 2 transmits the one or more images of the same light scattering area of the limb acquired within the same time period to the processing unit 3. The processing unit 3 compares the acquired images with images in the storage unit 4, which stores a number of reference images containing reference calibration points together with their corresponding parameters, and thereby generates rough parameter data for the set part of the limb to be measured; from these rough parameter data and the correspondence with the reference images bearing calibration points, the processing unit 3 can further generate a three-dimensional limb model whose feature point positions can be related to the reference images. Preferably, the imaging unit 2 captures the position of the patella of the limb to be measured as a reference point and corrects the images it records by calibrating its imaging data accordingly, so that the acquired image records both the limb reference point and a clear outline of the limb.
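The triangulation step just described can be illustrated with a minimal laser-plane sketch: the camera ray through each pixel on the detected light line is intersected with the known plane of the projected structured light, whose offset and tilt relative to the camera correspond to the distance and included angle between the light projection unit and the imaging unit. This is a sketch under simplified pinhole-camera assumptions; the function name, coordinate conventions and numerical values are illustrative and not taken from the patent.

```python
# Minimal laser-plane triangulation sketch (assumed geometry: pinhole camera at
# the origin looking along +Z, projector offset by `baseline` along +X emitting
# a vertical light plane tilted by `theta` back toward the optical axis).
import numpy as np

def triangulate_point(u, v, fx, fy, cx, cy, baseline, theta):
    """Intersect the camera ray through pixel (u, v) with the laser plane.

    Returns the 3D point in camera coordinates (metres).
    """
    # Ray direction through the pixel for a pinhole camera.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Laser plane: passes through (baseline, 0, 0); it is spanned by the Y axis
    # and the beam direction (-sin(theta), 0, cos(theta)), so its normal is:
    normal = np.array([np.cos(theta), 0.0, np.sin(theta)])
    p0 = np.array([baseline, 0.0, 0.0])
    t = normal.dot(p0) / normal.dot(ray)      # ray parameter at the intersection
    return t * ray

# Hypothetical sub-pixel line detections (one column estimate per image row)
# turned into a 3D surface profile of the limb.
line_pixels = [(412.3, row) for row in range(200, 210)]
profile = np.array([
    triangulate_point(u, v, fx=1400.0, fy=1400.0, cx=640.0, cy=360.0,
                      baseline=0.12, theta=np.radians(25.0))
    for u, v in line_pixels
])
print(profile[:2])   # a few reconstructed points, roughly 0.4 m from the camera
```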
When data are acquired from the patient's designated limb several times over several time periods, the three-dimensional limb models and parameters processed by the processing unit 3 are all transmitted to the analysis platform 4, which tabulates and comparatively analyses the groups of data collected in the successive acquisitions. The change in limb size at the set distances above and below the patient's patella over a certain time can thus be shown to medical staff, making it easy for them to judge whether a bedridden patient has developed, or is likely to develop, venous thrombosis, and to devise a treatment plan suited to the individual patient on that basis. In addition, the three-dimensional models obtained from the repeated acquisitions can be overlaid and compared, helping medical staff understand the actual change in the patient's limb size more intuitively; this avoids judging the condition from size data at the set position alone and reduces the influence on measurement accuracy of limb deformation caused by the patient's posture, external compression and the like.
The first image recorded by the imaging unit 2 is the image taken when the first recorded measurement is made on the patient's limb, and the three-dimensional limb model and parameters generated from it also serve as reference data for the patient's initial state. The processing module generates a three-dimensional model of the initially set limb part and its parameters from the reference points in the first images and the estimated position data in the one or more first images; where at least one first image is captured, triangulation is performed using parameter values describing the physical geometry of the light projection unit and the imaging unit, including their mutual orientation and displacement. The processing unit 3 can segment limb picture elements related to the target position from other picture elements, so that a plurality of limb picture elements acquired within the same time period can be processed and a corresponding three-dimensional limb model generated. The processing unit 3 can transmit the processed three-dimensional limb model and parameters to the analysis platform 4.
The projection-imaging assembly comprising the light projection unit 1 and the imaging unit 2 acquires second images by performing further image acquisition of the patella and the limb segments above and below it during a plurality of time periods within the observation period. The processing unit 3 can perform image processing and image element segmentation on second images acquired within the same time period, thereby generating a three-dimensional model corresponding to the second image data and the parameters of that limb segment. The three-dimensional limb models and parameters obtained from the several second images processed by the processing module 3 are transmitted to the analysis platform 4. The analysis platform 4 can collate the three-dimensional limb model and parameters of the first image acquired in the initial period with those of the second images acquired sequentially in a plurality of subsequent periods, and the stored data can be displayed through the display module 5. The analysis platform 4 can also compare the three-dimensional limb models and parameters of the sequentially acquired second images with those of the first image acquired in the initial period and/or those of the second image acquired in the previous period, and thereby judge changes in the limb segments above and below the patient's patella.
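One way to turn such a reconstructed model into the limb parameter that the analysis platform 4 compares across sessions is to slice the point cloud at a fixed offset from the patella reference point and measure the perimeter of that cross-section. The sketch below assumes the limb axis lies roughly along the Z axis and approximates the circumference with a convex-hull perimeter; the function name, slab thickness and the synthetic cylinder used for the demonstration are illustrative assumptions, not details from the patent.

```python
# Circumference of the limb cross-section at a set offset from the patella,
# read from a 3D point cloud (a sketch, assuming the limb axis is along Z).
import numpy as np
from scipy.spatial import ConvexHull

def circumference_at(points, z_ref, offset, slab=0.005):
    """Perimeter (m) of the cross-section `offset` metres above `z_ref`."""
    z_target = z_ref + offset
    mask = np.abs(points[:, 2] - z_target) < slab      # thin slab around the target height
    slice_xy = points[mask, :2]                        # project the slab onto the XY plane
    ring = slice_xy[ConvexHull(slice_xy).vertices]     # hull vertices in CCW order
    edges = np.roll(ring, -1, axis=0) - ring
    return float(np.linalg.norm(edges, axis=1).sum())

# Synthetic demonstration: a cylinder of radius 6 cm standing in for a limb.
theta = np.random.uniform(0.0, 2.0 * np.pi, 20000)
z = np.random.uniform(0.0, 0.40, 20000)
cloud = np.column_stack([0.06 * np.cos(theta), 0.06 * np.sin(theta), z])
print(circumference_at(cloud, z_ref=0.20, offset=0.12))   # about 0.377 m, i.e. 2*pi*0.06

# Comparing two sessions then reduces to comparing the returned values, e.g.
# swelling_cm = (circumference_at(second_cloud, z0, 0.12)
#                - circumference_at(first_cloud, z0, 0.12)) * 100.0
```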
Before or as an initial step of recording the first image, and again before or as an initial step of recording the second image, respective scanner settings are applied and respective operating conditions are set for one or both of the light projection unit and the imaging unit, which then operate in accordance with those conditions; the respective operating conditions are the same at least during recording of the first image and during recording of the second image.
Example 2
An assessment system capable of determining whether a bedridden patient has limb venous thrombosis includes a projection-imaging scanning and analysis system that can be integrated in a single device within one housing. The light projection unit 1 of the system projects the structured light it emits onto the surface of the set position of the limb to be measured from a certain angle and a certain distance. The imaging unit 2 may be arranged at a distance of 20 mm to 500 mm or more from the light projection unit 1. The imaging unit 2 can thus move together with the light projection unit, so that images of the surface of the limb to be measured can be recorded from different angles and positions.
After image registration is completed, a trained image processing network configured during training is retrieved to output estimated data for the position, in one or more images, related to the target position on the surface of the physical object. The estimate of the position of the target position on the surface of the physical object in the one or more first images is then input to the subsequent triangulation. Because this estimate can be provided with improved accuracy, the trained image processing network improves the input to triangulation, enabling accurate computation of a point cloud or three-dimensional model of the surface of the limb under test. With a properly trained image processing network, the system can scan and acquire data accurately and effectively even when the outline of the limb to be measured is not distinct, and can scan object surfaces whose surface characteristics vary considerably.
The system may then, on receiving user input, perform a scanning operation to generate a partial or complete computer-readable point cloud or three-dimensional model of the surface of the physical object, following the aforementioned user input that adjusts the operating conditions of one or both of the light projection unit and the imaging unit. The user therefore does not need to repeat the scanning process over and over before achieving a satisfactory result. The trained image processing network may be able to generalise within and/or beyond its training data, which increases the chance that the user obtains better scan results than could be achieved in at least the first few attempts at scanning a particular limb. This matters because the scanning process may take a significant amount of time to complete, for example 5 to 30 minutes or more. Further, the system may need to receive user input via a user interface to perform digital reconstruction operations comprising one or more of: retrieving the trained image processing network, using the trained image processing network, and generating a partial or complete computer-readable point cloud or three-dimensional model of the surface of the physical object. Since both the scanning operation and the digital reconstruction operation may be time consuming, and since they may run on separate hardware components, it can be advantageous to perform either operation in response to user input. In either case, the user is freed from the very time-consuming task of adjusting the operating conditions by trial and error involving both the scanning operation and the digital reconstruction operation. Preferably, the trained image processing network is configured during training to suppress the undesired effects of light scattering areas, which would otherwise erroneously shift the estimate of the target position. The trained image processing network may use a combination of linear and non-linear operators.
Preferably, the structured light is projected onto the surface of the object to be measured substantially at target positions that are gradually displaced across that surface. This may be achieved by a motorised rotation that supports angular movement of the light projection unit, so that the structured light can be aimed at a target location on the object to be measured. When the object struck by the structured light is observed, the light appears concentrated at the target location (high intensity) and, at approximately the target location, follows a substantially Gaussian distribution along the direction orthogonal to the structured light, which may be a line. The light scattering areas illuminated by structured light projected substantially at the target location may appear as one or more regularly or irregularly illuminated regions, symmetric or asymmetric about the target location. The target location may correspond to the geometric centre or "centre of gravity" of the structured light (for example the centre of the substantially Gaussian distribution), disregarding the light scattering region at least to some extent.
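The "centre of gravity" reading of the line can be sketched as a per-row intensity-weighted centroid, one common way of exploiting the substantially Gaussian profile described above. The normalisation and the background threshold used to suppress scattered light are illustrative assumptions, not parameters given in the patent.

```python
# Sub-pixel laser-line localisation by centre of gravity (sketch): for a
# roughly vertical line, each image row is a 1-D intensity profile and the
# intensity-weighted centroid gives a sub-pixel column estimate.
import numpy as np

def line_center_of_gravity(image, min_level=0.2):
    """Return one sub-pixel column position per row (NaN where no line found)."""
    img = image.astype(np.float64)
    img = img - img.min()
    img = img / (img.max() + 1e-12)      # normalise to [0, 1]
    img[img < min_level] = 0.0           # crude suppression of background and scatter
    cols = np.arange(img.shape[1], dtype=np.float64)
    weight = img.sum(axis=1)
    weight = np.where(weight > 0.0, weight, np.nan)
    return (img * cols).sum(axis=1) / weight
```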
Preferably, the light projection unit 1 may include a light source (such as an LED or LASER) and may include one or more of an optical lens and a prism. The light projection unit 1 may comprise a "fan laser". The light projection unit 1 may include a laser emitting a laser beam and a laser line generator lens converting the laser beam into a uniform straight line. The laser line generator lens may be configured as a cylindrical lens or a rod lens to focus the laser beam along an axis to form the light line. The structured light may be configured as 'dots', 'arrays of dots', 'matrices of dots' or 'clouds of dots', a single ray, as a plurality of parallel lines or intersecting lines. The structured light may be, for example, 'white light', 'red light', 'green light' or 'infrared light' or a combination thereof. The structured light arranged as a line appears as a line on the object from the perspective of the light projection unit 1. When the angle of view is different from that of the light projection unit 1, if the object is bent, the line appears to be bent. This curve is observable in an image captured by a camera arranged at a distance from the light projection unit 1. The target location corresponds to an edge of the structured light. The edges may be defined according to, for example, statistical criteria corresponding to light intensity that is approximately half of the light intensity at the center of structured light that may have a significant gaussian distribution. Another criterion for detecting edges or other criteria may also be used. The target location may correspond to a 'left edge', 'right edge', 'upper edge', 'lower edge', or a combination thereof. An advantage of using the target position at the edge of the structured light is that the resolution of the 3D scan can be improved.
Preferably, the integrated light projection-imaging device of the system may have one or two wired interfaces (for example according to the USB standard) or wireless interfaces (for example according to the Wi-Fi or Bluetooth standard) for transmitting the image sequence to a computer provided with the analysis platform 4 (for example a desktop, laptop, tablet or smartphone). The image sequence may follow an image or video format (e.g. the JPEG standard). Retrieving one or more trained image processing networks, processing the images, and generating a partial or complete computer-readable point cloud or three-dimensional model of the object surface are performed by the computer, which may receive the image sequence via a data communication link. The imaging unit 2 may be configured as a colour camera, for example an RGB camera, or as a grey-tone camera. The term triangulation should be construed to include any type of triangulation, including determining the location of a point by forming a triangle to it from known points; it includes, for example, but is not limited to, triangulation as used in epipolar geometry, photogrammetry and stereo vision.
Preferably, the image recorded by the imaging unit 2 may be a monochrome image or a colour image. The imaging unit 2 may be configured with a camera sensor that outputs an image having columns and rows of monochrome or colour pixel values in matrix form. The 2D representation may take the form of a list of 2D coordinates, for example referenced to the column and row indices of the first image; in some aspects the 2D representation is obtained with sub-pixel precision. The 2D representation can alternatively or additionally be output as a 2D image, e.g. a binary image having only two possible pixel values. The 2D representation is thus encoded and can be used to prepare the input for triangulation. The target position in at least one of the first images is obtained by processing with the trained image processing network of the processing unit 3. The parameter values representing the physical geometry of the light projection unit 1 and the imaging unit 2 may comprise one or more of a mutual orientation and a 2D or 3D displacement. The parameter values may comprise at least a first value that remains fixed during the scanning of a specific 3D object and a second value that varies during that scan; the second value may be read or sensed by a sensor, or provided by a controller that controls the scanning of the 3D object. The sensor may, for instance, sense rotation of the light projector or a component thereof.
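A possible encoding of this 2D representation is a coordinate list of (row, sub-pixel column) pairs kept alongside an optional binary mask image: the rounded mask is convenient for visualisation and binary output, while the coordinate list keeps the sub-pixel precision needed as input for triangulation. The helper below is a hypothetical illustration, not an interface defined by the patent.

```python
# Encoding the detected line as a coordinate list plus a binary image (sketch).
import numpy as np

def encode_line(rows, cols, shape):
    """Return (coordinate list with sub-pixel columns, binary mask image)."""
    coords = [(int(r), float(c)) for r, c in zip(rows, cols) if not np.isnan(c)]
    mask = np.zeros(shape, dtype=np.uint8)               # only two possible pixel values
    for r, c in coords:
        mask[r, min(int(round(c)), shape[1] - 1)] = 1    # rounding loses the sub-pixel part
    return coords, mask

coords, mask = encode_line(rows=range(3), cols=[10.4, np.nan, 11.7], shape=(3, 32))
print(coords)        # [(0, 10.4), (2, 11.7)]
print(mask.sum())    # 2
```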
When the image elements are segmented and classified, a picture element distinguished from the other picture elements as a target position is an estimate of a target position on the surface of the physical object. Picture elements distinguished as target locations may be encoded with one or more unique values (e.g. as binary values in a binary image). The trained image processing network may receive images of the image sequence as input images and provide a segmentation in which certain picture elements are distinguished as target positions in the output image. The output image may have a higher resolution than the input image, which allows the target location that is input to triangulation to be estimated with sub-pixel accuracy; alternatively, the output image may have a lower resolution, for example for triangulation-related processing. Higher-resolution images may be generated by upsampling as known in the art, for example using bicubic interpolation (e.g. via a bicubic filter); in some embodiments other types of upsampling are used, and the upsampling factor may be, for example, eight, four or another value. Further, the segmentation may correspond to an estimated target location identified by an image processor performing image processing in response to parameters controlled by a human operator, for example parameters relating to one or more of image filter type, image filter kernel size and intensity threshold. The operator-controlled parameters may be set by trial and error, the operator using visual inspection to tune the parameters until the segmentation indicates the target location accurately.
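A minimal sketch of the upsampling idea follows: the network's segmentation map is enlarged with a cubic interpolation filter (standing in here for the bicubic filter mentioned above), and the per-row peak of the enlarged map is converted back to original pixel units, giving a finer-grained column estimate for triangulation. The factor of four and the use of SciPy's spline-based zoom are assumptions made for illustration.

```python
# Upsampling a segmentation map to refine the per-row line position (sketch).
import numpy as np
from scipy.ndimage import zoom

def upsampled_line_positions(seg_map, factor=4):
    """Per-row column estimates, in original pixel units, from an upsampled map."""
    big = zoom(seg_map.astype(np.float64), factor, order=3)   # cubic interpolation
    rows = np.arange(0, big.shape[0], factor)                 # one sample row per original row
    return big[rows].argmax(axis=1) / factor                  # back to original pixel units

seg = np.zeros((4, 16))
seg[:, 7] = 0.6
seg[:, 8] = 1.0                          # peak lies between columns 7 and 8
print(upsampled_line_positions(seg))     # sub-pixel estimates near column 8 (e.g. 7.75-8.25)
```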
Preferably, the trained image processing network is a convolutional neural network, such as a deep convolutional neural network. A convolutional neural network can provide a highly accurate estimate of the target position even under sub-optimal operating conditions of the light projector, for instance where light scattering regions significantly distort the light pattern on the surface of the physical object and its spatial definition (e.g. because of ambient light or the colour and surface properties of the physical object). A properly trained convolutional neural network provides superior segmentation results for accurate estimation of target locations. In some embodiments the trained image processing network comprises a support vector machine. The trained image processing network may also be a deep convolutional network with a u-net architecture comprising downsampling and upsampling operators; such a network offers a good balance between computational cost and accuracy, even with a relatively small training data set, and outputs the segmentation map in the form of an image.
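A toy u-net-style network of the kind described could look like the sketch below: one downsampling stage, one upsampling stage and a skip connection, ending in a per-pixel score for how likely each pixel belongs to the projected line. Channel counts, depth and the single encoder/decoder stage are illustrative assumptions, this is not the network actually used in the patent, and PyTorch is used here only as an example framework.

```python
# Tiny u-net-style segmentation network (sketch) with downsampling and
# upsampling operators plus a skip connection.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)                   # encoder at full resolution
        self.down = nn.MaxPool2d(2)                    # downsampling operator
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, 2)     # upsampling operator
        self.dec = conv_block(32, 16)                  # decoder after skip concatenation
        self.head = nn.Conv2d(16, 1, kernel_size=1)    # per-pixel logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))   # skip connection
        return torch.sigmoid(self.head(d))                # segmentation map as an image

net = TinyUNet()
seg_map = net(torch.rand(1, 1, 128, 128))   # one grayscale frame in, one map out
print(seg_map.shape)                        # torch.Size([1, 1, 128, 128])
```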
Example 3
A method for evaluating whether venous thrombosis occurs or exists in a bedridden patient through measurement of limb sizes.
After the patient is admitted, the light projection unit 1 projects structured light onto the region 10 cm to 15 cm above and below the patella of the patient's lower limb, so that the scattering range of the projected light lies within this limited range. The camera of the imaging unit 2 then records images of the part of the limb surface within the structured-light scattering range to obtain a first image; by rotating or moving the projection unit, the camera of the imaging unit 2, moving with it, records and stores images of the whole limb surface within the range 10 cm to 15 cm above and below the patella. Next, the acquired first image is segmented and processed by the processing module 3 to obtain a three-dimensional model of the limb to be measured and the size data of the modelled limb. Finally, the processed data are transmitted to the analysis platform 4 for storage and analysis and are displayed through the display module 5, which also serves as the user operating terminal on which an operator can set parameters and perform related control operations.
After the first image has been acquired, medical staff acquire images of the same limb part of the patient repeatedly at intervals, so as to follow the condition of the bedridden patient's limb over time, and the second images acquired on these occasions are compared with the limb size data obtained from the first image to judge whether the patient may develop, or has developed, lower-limb venous thrombosis. The image comparison includes an overlay comparison of the three-dimensional models obtained from image processing and a direct comparison of the size data. The processing module 3 transmits the processed data of each second image to the analysis platform 4; the analysis platform 4 compares the initially acquired first-image data with the second-image data acquired on the subsequent occasions and judges whether thrombosis has occurred from the change shown in the limb size data. If the patient's limb data show a gradual increase and exceed the preset threshold several times, the analysis platform 4 sends an early-warning signal to the display module to alert medical staff that the patient may have venous thrombosis.
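The early-warning rule in this paragraph can be sketched as a comparison of circumference readings from successive sessions against the baseline, raising an alert when the increase exceeds a threshold on several consecutive measurements. The 1.0 cm threshold and the requirement of three consecutive exceedances are illustrative assumptions; the patent only states that a preset threshold is exceeded a plurality of times.

```python
# Early-warning sketch: flag the patient when follow-up circumferences exceed
# the baseline by more than a threshold on several consecutive sessions.
from dataclasses import dataclass
from typing import List

@dataclass
class Measurement:
    timestamp: str
    circumference_cm: float   # limb size at the set position above/below the patella

def needs_alert(baseline: Measurement, follow_ups: List[Measurement],
                threshold_cm: float = 1.0, consecutive: int = 3) -> bool:
    """True when the last `consecutive` readings all exceed baseline + threshold."""
    recent = follow_ups[-consecutive:]
    if len(recent) < consecutive:
        return False
    return all(m.circumference_cm - baseline.circumference_cm > threshold_cm
               for m in recent)

# Example: baseline on admission, three later sessions showing steady swelling.
base = Measurement("2021-03-01", 36.2)
later = [Measurement("2021-03-03", 37.5),
         Measurement("2021-03-05", 37.8),
         Measurement("2021-03-07", 38.1)]
print(needs_alert(base, later))   # True -> early-warning signal to the display module
```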
It should be noted that the above-mentioned embodiments are exemplary, and that those skilled in the art, having benefit of the present disclosure, may devise various arrangements that are within the scope of the present disclosure and that fall within the scope of the invention. It should be understood by those skilled in the art that the present specification and figures are illustrative only and are not limiting upon the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. An evaluation method for thrombus detection, comprising at least a light projection unit (1) and an imaging unit (2), wherein the light projection unit (1) is capable of directionally projecting ranging light onto the limb to be measured and the imaging unit (2) is capable of collecting images of the part of the limb surface marked by the light projected by the light projection unit (1), characterized in that: the light projection unit (1) receives a projection operation instruction and projects structured light substantially within a target area on the surface of the limb to be measured; the imaging unit (2) is arranged at a certain distance from the light projection unit (1) and at a certain included angle relative to the light projection unit (1) and records, in response to an instruction that the light projection unit (1) has completed the projection operation, a first image of at least part of the surface of the limb to be measured; and a pre-set trained image processing network is retrieved to output data including an estimate of the position in one or more first images, and a three-dimensional model of the limb and limb parameters are generated by triangulation.
2. The evaluation method for thrombus detection according to claim 1, characterized in that the imaging unit (2) transmits one or more acquired first images of the same light scattering area of the limb to be measured within the same time period to the processing unit (3), and the processing unit (3) compares the acquired first images against images in a storage unit (4) storing a plurality of reference images containing reference points and image parameters, so as to generate general parameter data for the set part of the limb to be measured.
3. The evaluation method for thrombus detection according to claim 2, characterized in that the imaging unit (2) corrects the first image it records by capturing images using the patella of the limb to be measured as a reference point and calibrating the imaging data of the imaging unit (2) accordingly, so that the acquired first image records both the limb reference point and a clear outline of the limb.
4. The evaluation method for thrombus detection according to claim 3, characterized in that the first image recorded by the imaging unit (2) is taken at the time of the first recorded measurement; the processing unit (3) generates a three-dimensional model of the initially set part of the limb and its parameters from the reference points in the first images and the estimated position data in the one or more first images, and, where at least one first image is captured, triangulation is performed using parameter values describing the physical geometry of the imaging unit and the light projection unit, including their mutual orientation and displacement; the processing unit (3) is capable of transmitting the processed three-dimensional limb model and parameters to the analysis platform (4).
5. The evaluation method for thrombus detection according to claim 2, characterized in that the processing unit (3) is capable of segmenting limb picture elements related to the target position from other picture elements, so that a plurality of limb picture elements acquired within the same time period can be processed and a corresponding three-dimensional limb model generated.
6. The evaluation method for thrombus detection according to claim 4, characterized in that the light projection unit (1) and the imaging unit (2) acquire second images by performing further image acquisition of the patella of the limb and the limb segments above and below it during a plurality of time periods within the observation period, and the processing unit (3) is capable of performing image processing and image element segmentation on second images acquired within the same time period, so as to generate a three-dimensional model corresponding to the second image data and the parameters of that limb segment.
7. The evaluation method for thrombus detection according to claim 6, characterized in that the processing module (3) transmits the three-dimensional limb models and parameters obtained by processing the several second images to the analysis platform (4); the analysis platform (4) is capable of collating the stored three-dimensional limb model and parameters of the first image acquired in the initial period with the three-dimensional limb models and parameters of the second images acquired sequentially in a plurality of subsequent periods, and the stored data can be displayed by the display module (5).
8. The evaluation method for thrombus detection according to claim 7, characterized in that the analysis platform (4) is capable of comparing and analysing the three-dimensional limb models and parameters of the sequentially acquired second images against the three-dimensional limb model and parameters of the first image acquired in the initial period and/or the three-dimensional limb model and parameters of the second image acquired in the previous period, so as to judge changes in the limb segments above and below the patient's patella.
9. The evaluation method for thrombus detection according to claim 6, characterized in that, before or as an initial step of recording the first image and before or as an initial step of recording the second image: respective scanner settings are applied for recording the first image and, before or as an initial step of recording the second image, respective operating conditions are set for one or both of the light projection unit and the imaging unit so that they operate in accordance with the respective operating conditions; wherein the respective operating conditions are the same at least during recording of the first image and during recording of the second image.
10. An evaluation system for thrombus detection, characterized in that structured light is projected by a light projection unit (1) substantially within a target area on the surface of the limb to be measured; a first image of at least part of the surface of the limb to be measured is recorded using an imaging unit (2) arranged at a certain distance from the light projection unit (1) and at a certain included angle relative to it; and a pre-set trained image processing network is retrieved to output data including an estimate of the position in one or more first images, and a three-dimensional model of the limb and limb parameters are generated by triangulation.
CN202110222887.9A, filed 2021-02-26: Evaluation system for thrombus detection (Active, granted as CN113012112B)

Priority Applications (1)

Application Number: CN202110222887.9A; Priority/Filing Date: 2021-02-26; Title: Evaluation system for thrombus detection; granted as CN113012112B (en)

Applications Claiming Priority (1)

Application Number: CN202110222887.9A; Priority/Filing Date: 2021-02-26; Title: Evaluation system for thrombus detection; granted as CN113012112B (en)

Publications (2)

Publication Number Publication Date
CN113012112A: 2021-06-22
CN113012112B CN113012112B (en) 2024-07-02

Family

ID=76386847

Family Applications (1)

Application Number: CN202110222887.9A; Title: Evaluation system for thrombus detection; Status: Active; granted as CN113012112B (en); Priority/Filing Date: 2021-02-26

Country Status (1)

Country: CN; Publication: CN113012112B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5497787A (en) * 1994-08-05 1996-03-12 Nemesdy; Gabor Limb monitoring method and associated apparatus
EP0993805A1 (en) * 1998-10-15 2000-04-19 Medical concept Werbeagentur GmbH System providing prophylaxis against thromboembolism
US20150302594A1 (en) * 2013-07-12 2015-10-22 Richard H. Moore System and Method For Object Detection Using Structured Light
US20160235354A1 (en) * 2015-02-12 2016-08-18 Lymphatech, Inc. Methods for detecting, monitoring and treating lymphedema
US20180317772A1 (en) * 2015-07-03 2018-11-08 Universite De Montpellier Device for biochemical measurements of vessels and for volumetric analysis of limbs
DE102016118073A1 (en) * 2016-09-26 2018-03-29 Comsecura Ag Measuring device for determining and displaying the exact size of a thrombosis prophylaxis hosiery and method for determining and displaying the size of the thrombosis prophylaxis hosiery
CN110542390A (en) * 2018-05-29 2019-12-06 环球扫描丹麦有限公司 3D object scanning method using structured light
CN212489887U (en) * 2020-04-30 2021-02-09 厦门中翎易优创科技有限公司 Limb swelling monitoring device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114452459A (en) * 2022-03-01 2022-05-10 上海璞慧医疗器械有限公司 Thrombus aspiration catheter monitoring and early warning system
CN114452459B (en) * 2022-03-01 2022-10-18 上海璞慧医疗器械有限公司 Monitoring and early warning system for thrombus aspiration catheter

Also Published As

Publication number Publication date
CN113012112B (en) 2024-07-02


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant