
WO2025049680A1 - System and method for non-invasive heart pressure measurement - Google Patents


Info

Publication number
WO2025049680A1
Authority
WO
WIPO (PCT)
Prior art keywords
body region
blood vessel
sequence
images
subject
Application number
PCT/US2024/044332
Other languages
French (fr)
Inventor
Michele Esposito
Ramsey WEHBE
Original Assignee
MUSC Foundation for Research Development
Application filed by MUSC Foundation for Research Development
Publication of WO2025049680A1

Classifications

    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • A61B 5/021: Measuring pressure in heart or blood vessels
    • A61B 5/103: Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H 40/67: ICT specially adapted for the operation of medical equipment or devices for remote operation
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • G16H 50/70: ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • 25% of HF patients are readmitted after 30 days while 50% are readmitted within 6 months of discharge.
  • Better tools to assess congestion outside of the hospital could improve these readmission rates by pre-empting worsening HF, thereby improving patients’ quality of life and reducing medical costs.
  • several implantable device-based diagnostics have failed to yield significant results, including intrathoracic impedance, percentage of biventricular pacing, and arrhythmia burden, among others. (van Veldhuisen DJ, et al. Circulation 2011;124(16):1719-26.; Brachmann J, et al. Eur J Heart Fail 2011;13(7):796-804.; Morgan JM, et al. Eur Heart J 2017;38(30):2352-60.; Boriani G, et al. Eur J Heart Fail 2017;19(3):416-25.)
  • a critical barrier to these strategies is the lack of hemodynamic data available.
  • the gold standard to assess congestion is direct measurement of intravascular filling pressures via a right heart catheterization (RHC).
  • This invasive procedure carries a risk of peri-procedural adverse events and is not feasible for some patients or scalable.
  • the only FDA-approved remote monitoring device that has been shown to replicate invasive hemodynamics is a permanently-implanted pressure sensor that is deployed into the pulmonary artery. Similar to an RHC, this requires an invasive procedure that may not be appropriate for all patients.
  • JVP jugular venous pressure
  • JVP is visually assessed at the bedside by a physician, who can then utilize that information to direct fluid management of the patient.
  • this subjective assessment has been shown to be inconsistently correlated with right atrial pressure, as measured by the gold standard of cardiac catheterization.
  • PPG photoplethysmography
  • prior optical approaches have included contact PPG, skin displacement algorithms, specular reflection imaging, and Eulerian video magnification, among others.
  • video recordings of the necks of 50 subjects were obtained, and an Eulerian magnification algorithm was applied to these videos. Physicians were then able to subjectively assess the enhanced videos, and their JVP estimates were compared against the results of a cardiac catheterization.
  • the investigators showed that visual assessment of JVP resulted in improved correlation with invasively-obtained pressures when using the amplified videos compared to no amplification.
  • the remainder of these studies were small, conducted in healthy normal subjects, and were not correlated with the gold standard of cardiac catheterization. Collectively they showed that reproducible venous waveforms were able to be obtained from videos of the necks of the study subjects.
  • a method of calculating an estimated blood vessel pressure of a subject comprises illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel, acquiring a sequence of images of the surface of a body region of the subject while illuminating the surface, performing at least one image processing step on the sequence of images to produce processed image data, measuring a distance from the top of the blood vessel pulsation to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images, and calculating an estimated blood vessel pressure from the sequence of images and the measured distance.
  • the illuminating step comprises applying a structured illumination to the surface of the body region.
  • the structured illumination comprises illuminating the surface of the body region with a laser via a diffractive optical element.
  • the at least one blood vessel comprises a jugular vein.
  • the sequence of images are acquired at a framerate of at least 10 fps.
  • the at least one image processing step comprises image segmentation, thresholding, Fourier transformation, motion magnification, optical flow analysis, or combinations thereof.
  • the at least one image processing step comprises applying a machine learning algorithm to at least one image of the sequence of images.
  • the distance is measured with a LiDAR-based measurement.
  • the method further comprises the step of treating the subject with diuretic medication when the estimated blood vessel pressure is above a threshold.
  • a system for calculating an estimated blood vessel pressure of a subject comprises an imaging device, an illumination device configured to illuminate a surface of a body region of the subject, a display, a processor communicatively connected to the imaging device and the illumination device, and a non-transitory computer readable medium with instructions stored thereon, which when executed by a processor perform steps comprising illuminating the surface of a body region of a subject with the illumination device, the body region comprising at least one blood vessel, acquiring a sequence of images of the surface of a body region of the subject with the imaging device while illuminating the surface, performing at least one image processing step on the sequence of images to produce processed image data, measuring a distance from the top of the blood vessel to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images, calculating an estimated blood vessel pressure from the sequence of images and the measured distance, and displaying the calculated blood vessel pressure on the display.
  • the illumination device comprises a structured illumination device.
  • the structured illumination device comprises a laser and a diffractive optical element.
  • the illumination device comprises a controllable driver communicatively connected to the processor, and the instructions further comprise the step of activating the illumination device during the image acquisition step.
  • the at least one image processing step is image segmentation, thresholding, Fourier transformation, motion magnification, optical flow analysis, or combinations thereof.
  • a method of training a machine learning model to calculate an estimated blood vessel pressure comprises illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel, acquiring a set of image sequences of the surface of the body region while illuminating the surface of the body region, performing at least one image processing step on at least one image in the sequence of images, acquiring at least one additional measurement of blood vessel pressure in the body region of the subject, and training a machine learning model with the processed image sequence and the corresponding set of additional measurements to infer an estimated blood vessel pressure from the processed image sequence.
  • the illumination step comprises applying a structured illumination to the surface of the body region.
  • the structured illumination comprises illuminating the surface of the body region with a laser via a diffractive optical element.
  • the sequence of images is acquired at a framerate of at least 10 fps.
  • the additional measurement comprises a LiDAR or ultrasound image sequence of the body region of the subject.
  • the ultrasound image sequence is acquired at a framerate of at least 10 fps.
  • Fig. 1 depicts a flow diagram showing an exemplary method of calculating an estimated blood vessel pressure of a subject.
  • Fig. 2 depicts a schematic showing an exemplary method of calculating an estimated blood vessel pressure of a subject.
  • Fig. 3 depicts an exemplary illumination device.
  • Fig. 4 depicts an exemplary structured illumination generated by the illumination device.
  • Fig. 5 depicts an exemplary computing environment in which aspects of the present invention may be practiced.
  • Fig. 6 depicts a flow diagram showing an exemplary method of training a machine learning model to calculate an estimated blood vessel pressure of a subject.
  • Relative terms such as “horizontal”, “vertical”, “up”, “down”, “top”, and “bottom” as well as derivatives thereof (e.g. “horizontally”, “downwardly”, “upwardly”, etc.) should be construed to refer to the orientation as then described or shown in the drawing figure under discussion. These relative terms are for convenience of description and normally are not intended to require a particular orientation. Terms including “inwardly” versus “outwardly”, “longitudinal” versus “lateral” and the like are to be interpreted relative to one another or relative to an axis of elongation, or an axis or center of rotation, as appropriate.
  • various aspects of the invention can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6, etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, 6, and any whole and partial increments therebetween. This applies regardless of the breadth of the range.
  • the term “about” in reference to a measurable value is meant to encompass variations of plus or minus 20%, plus or minus 10%, plus or minus 5%, plus or minus 1%, and plus or minus 0.1% of the specified value, as such variations are appropriate.
  • patient refers to any animal amenable to the systems, devices, and methods described herein.
  • patient, subject, or individual may be a mammal, and in some instances, a human.
  • the present invention provides a method for calculating an estimated blood vessel pressure for assessment of a heart condition.
  • the method described herein is non-invasive and may be configured for at-home or clinical use.
  • Method 100 generally comprises the steps of illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel (step 102), acquiring a sequence of images of the surface of a body region of the subject while illuminating the surface (step 104), performing at least one image processing step on the sequence of images to produce processed image data (step 106), measuring a distance from the top of the blood vessel pulsation to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images (step 108), and calculating an estimated blood vessel pressure from the sequence of images and the measured distance (step 110).
  • Fig. 2 depicts an exemplary schematic of the method 100.
  • the blood vessel may be the jugular vein.
  • the estimated blood vessel pressure is the jugular venous pressure (JVP), and the body region imaged is the right side of the neck.
  • step 102 comprises applying a structured illumination to the surface of the body region.
  • the structured illumination is performed by illuminating the surface of the body region via a laser.
  • the laser passes through a diffractive optical element to create a structured illumination as depicted in Fig. 3.
  • the laser may emit light having wavelengths ranging between 700 nm and 1 mm, between 710 nm and 90 μm, between 720 nm and 75 μm, between 730 nm and 50 μm, between 740 nm and 3000 nm, or between 750 nm and 2500 nm.
  • the laser may emit light having wavelengths in the visible spectrum range, the near-infrared range, or the infrared range.
  • the diffractive optical element may be a diffraction grating, one or more lenses, beam splitters, diffusers, prisms, and the like.
  • the laser light may further pass through any other suitable optical component known to one of skill in the art. For example, lenses, collimators, beam splitters, reflective gratings, mirrors, polarizers, and the like may be used.
  • the structured illumination may have any suitable pattern.
  • the pattern may be a grid, horizontal lines, vertical lines, circles, concentric circles, or combinations thereof.
  • the structured illumination may be a square grid as depicted in Fig. 4.
  • the square grid may be a grid having between 25 and 100,000 squares.
  • the square grid may be a 10x10 square grid, a 20x20 square grid, a 30x30 square grid, a 40x40 square grid, a 50x50 square grid, a 60x60 square grid, a 70x70 square grid, an 80x80 square grid, a 90x90 square grid, or a 100x100 square grid.
  • the structured illumination may have any suitable brightness such that the structured illumination pattern is visible and detectable by an imaging system or device regardless of lighting conditions.
  • each individual square or shape in the repeating pattern may have a width of less than 1 cm, less than 90 mm, less than 80 mm, less than 70 mm, less than 60 mm, less than 50 mm, less than 40 mm, less than 30 mm, less than 20 mm, less than 15 mm, less than 10 mm, less than 5 mm, or less than 2 mm.
  • Each individual square or shape in the repeating pattern may have a height of less than 1 cm, less than 90 mm, less than 80 mm, less than 70 mm, less than 60 mm, less than 50 mm, less than 40 mm, less than 30 mm, less than 20 mm, less than 15 mm, less than 10 mm, less than 5 mm, or less than 2 mm.
  • the structured illumination may be semi-structured or may comprise pseudorandom patterns.
  • the pseudorandom patterns may include speckle patterns, blue noise patterns, Hadamard patterns, Bayer patterns, and the like.
  • the pseudorandom patterns may facilitate image processing by making the at least one image processing step more robust to background noise.
  • step 104 comprises utilizing an imaging device or system to capture one or more sequences of images.
  • the imaging device may be any imaging device known to one of skill in the art.
  • a digital camera may be used, including cameras integrated into any consumer grade mobile device (such as a smartphone).
  • image sequences may be captured in 8K, 4K, 2K, or 1080p resolutions.
  • the sequence of images may be acquired at a framerate of at least 10 fps, at least 20 fps, at least 30 fps, at least 40 fps, at least 50 fps, at least 60 fps, at least 70 fps, at least 80 fps, at least 90 fps, at least 100 fps, between 10 fps and 100 fps, between 20 fps and 90 fps, between 30 fps and 80 fps, between 40 fps and 70 fps, between 45 fps and 65 fps, or between 50 fps and 60 fps.
  • the imaging device may have an integrated LiDAR probe or sensor. Alternatively, a separate LiDAR probe or sensor may be utilized in conjunction with the imaging device.
  • one or more sequences of images may be acquired while the body region of the subject is being illuminated. In some embodiments, one or more sequences of images are acquired at different subject orientations. For example, the sequence of images may be acquired while the subject is supine, at an incline, while the subject is seated upright, or any combinations thereof. In some embodiments, the patient may be inclined at any angle between 0° and 90°, 0° being the supine position and 90° being the upright position. In some embodiments, step 104 further comprises acquiring LiDAR data via the LiDAR probe or sensor. In some embodiments, step 104 further comprises acquiring electrocardiogram (ECG) measurements using a diagnostic single lead ECG monitoring device.
  • step 104 further comprises acquiring ultrasound images from the body region.
  • the ultrasound images may be utilized to locate one or more anatomical landmarks within the body region of the subject, which may then be marked on the corresponding location on the patient’s skin using a fluorescent marker.
  • additional sets of image sequences may be captured with the fluorescent marker in place in order to aid in image processing and manual labeling for training and evaluation of the imaging system.
  • ultrasound images may be captured using a conventional ultrasound probe, a hand-held portable ultrasound probe, or any ultrasound imaging device known to one of skill in the art.
  • step 106 comprises at least one image processing step.
  • the at least one image processing step comprises video stabilization.
  • video stabilization is performed via feature point tracking and warping, or via deep learning-based or optical flow-based image stabilization.
  • the at least one image processing step comprises image segmentation. Image segmentation may be performed using a Gaussian mixture model as described in Abnousi et al. (Abnousi F, et al. Npj Digit Med 2019;2(1):1-6.)
  • the at least one image processing step may comprise contouring.
  • contouring may be performed using standard computer vision techniques of adaptive thresholding, morphological operations such as opening and closing, and contour detection using topological structural analysis of the resulting binarized images.
  • the at least one image processing step may comprise measuring the change in distance between grid vertices between consecutive frames of the one or more sequences of images.
  • the at least one image processing step may comprise measuring the change in area of individual squares in the grid between consecutive frames of the one or more sequences of images.
  • the change in distance between grid vertices or change in area of the individual grid squares may be used to create a spatial motion map of depth changes across frames in the set of image sequences.
  • the spatial motion map may be transformed into the frequency domain using a Fourier transform. In some embodiments, the spatial motion map passes through a band-pass filter to ensure only movements within the desired frequency range are captured. In some embodiments, the desired frequency range may be between 0.5 and 2.5 times the heart rate measured by the ECG. In some embodiments, the frequency band for the band pass filter is determined by the heart rate derived from the ECG measurements.
  • the at least one image processing step comprises motion magnification.
  • the motion of deformations of the structured illumination pattern may be magnified.
  • motion magnification is performed via phase-based Eulerian motion magnification using Riesz pyramids.
  • the step of motion magnification may comprise machine learning algorithms. For example, deep learning models based on an encoder-decoder convolutional neural network architecture may be used. The models may be pretrained models.
  • the at least one image processing step comprises power spectral analysis of the contoured image data.
  • power spectral analysis is used to reconstruct the waveform pulsation associated with the blood vessel pressure for morphological characterization.
  • the characterization reveals the characteristic phases of the cardiac cycle reflected by the right atrium (a, c, and v waves, and x and y descent).
  • a bandpass filter of approximately 0.5-2.5 times the measured heart rate, or between 1 and 4 Hz, may be used to extract the contours most likely to represent the jugular venous pressure (JVP).
  • optical sensing approaches such as those described in Amelard et al. (Amelard R, Hughson RL, Greaves DK, et al. Non-contact hemodynamic imaging reveals the jugular venous pulse waveform. 2016) may be utilized. These approaches observed an inverse relationship between the peak arterial signal and peak jugular venous signal. Patch-based spatiotemporal filtering may be used to remove all optical flow signals except those with a strong inverse relationship to the simultaneously recorded ECG signal.
  • the at least one image processing step comprises optical flow analysis.
  • Optical flow analysis may be used as a method to detect the magnified movements of the blood vessel and segment the area of these blood vessel pulsations.
  • non-learning methods such as those described in Baker et al. (Baker S, et al. Int J Comput Vis 2004;56(3):221-55.) and Farneback et al. (Farneback G. In: Bigun J, Gustavsson T, editors. Image Analysis. Berlin, Heidelberg: Springer; 2003. p. 363-70), or pre-trained deep learning optical flow models, may be used for optical flow analysis.
  • the jugular venous pulsations may be subject to the aperture effect since the lack of color contrast in the area of the jugular vein may affect the ability for optical flow to perform optimal tracking.
  • deep learning models, for example models that have been pre-trained to detect camouflaged animals via optical flow, may be used.
  • step 108 comprises measuring a distance within at least one frame of the sequence of images via the LiDAR.
  • the distance measured is the distance from the top of the blood vessel pressure pulsation to an anatomical landmark.
  • the anatomical landmark is the sternal angle.
  • the anatomical landmark is the sternoclavicular junction.
  • a 3D reconstruction of the body region is obtained via the LiDAR and the optical flow derived contour. In some embodiments, the 3D reconstruction is used to automate the distance measurement via 3D template matching.
  • step 110 comprises estimating the vertical component of the distance measured in step 108 via the LiDAR sensor integrated into the imaging device.
  • Estimating the vertical component of the distance may comprise measuring the vertical distance from the midaxillary line to the anatomical landmark, assuming that the right atrium sits in the midaxillary line.
  • other imaging modes, for example CT or MRI, may be used.
  • the jugular venous pressure is calculated by adding a standard estimate of the distance between the sternal angle or sternoclavicular junction and the right atrium to the vertical component of the distance. In some embodiments, the standard estimate is 5 cm.
  • the step of estimating venous pressure may comprise providing one or more collected images of the subject to a machine learning algorithm trained with images collected from other subjects and inferring an estimated venous pressure from the one or more collected images.
  • additional data about the subject may be provided to the machine learning algorithm in order to infer the estimated venous pressure. Examples of additional data that may be provided to the machine learning algorithm include, but are not limited to, heart rate data, pulse data, RR interval data, age, height, weight, body mass index (BMI), sex, smoking status, pulse oxygenation, heart rhythm data (e.g. presence of arrhythmias), measures of left ventricular systolic function (e.g. ejection fraction), measures of right ventricular systolic function, presence of valvular disease, respiratory rate, chest circumference, or left/right ventricular diastolic function grade.
  • some of the data provided to the machine learning algorithm may be preprocessed or manipulated prior to being provided to the machine learning algorithm, for example image processing steps may be applied to the one or more images, or signal filtering may be applied to any timeseries data. Transforms, for example from time-domain to frequency-domain, may be applied to some or all of the data provided.
  • Suitable machine learning algorithms include but are not limited to convolutional neural networks (CNN), recurrent neural networks including long short-term memory (LSTM) networks, generative adversarial networks, transformer (self-attention feedforward) neural networks, and combinations thereof (e.g. CNN-LSTM). An illustrative CNN-LSTM training sketch is provided following this list.
  • aspects of the present invention relate to a system for calculating an estimated blood vessel pressure of a subject.
  • the system comprises an imaging device, an illumination device configured to illuminate a surface of a body region of the subject, a display, a processor communicatively connected to the imaging device and the illumination device, and a non-transitory computer readable medium with instructions stored thereon, which when executed by a processor, perform steps comprising: illuminating the surface of a body region of a subject with the illumination device, the body region comprising at least one blood vessel, acquiring a sequence of images of the surface of a body region of the subject with the imaging device while illuminating the surface, performing at least one image processing step on the sequence of images to produce processed image data, measuring a distance within at least one of the sequence of images referenced to at least one anatomical landmark, calculating an estimated blood vessel pressure from the sequence of images and the measured distance, and displaying the calculated estimated blood vessel pressure on the display.
  • the calculated estimated blood vessel pressure may be calculated and displayed or recorded as a pressure value. In other embodiments, the calculated estimated blood vessel pressure may be calculated or displayed as a rough value, for example high, medium, or low, or above or below one or more thresholds.
  • the system may further record other metrics simultaneously with the image acquisition, for example heart rate, heart rhythm data (e.g. arrhythmias), respiratory rate, or RR interval. Measurement of these other metrics may in some embodiments improve the quality of the collected data and the calculations resulting therefrom, for example by filtering out heart beats from measured data to determine venous pressure. Heart rate, heart rhythm data, respiratory rate, and/or RR interval may be measured for example using a fitness watch, chest strap, or any other suitable means.
  • the imaging device comprises a camera.
  • the illumination device comprises a structured illumination device.
  • the structured illumination device comprises a light source and a diffractive optical element.
  • the light source may be any suitable light source, for example a class I laser or a class II laser.
  • the laser and the diffractive optical element may be positioned in a housing.
  • the illumination device and the imaging device may be positioned in a housing.
  • the illumination device comprises a controllable driver communicatively connected to the processor, and the instructions further comprise the step of activating the illumination device during the image acquisition step.
  • the controllable driver may be communicatively connected to an interface device, configured to receive an input from a user, wherein the input directs the processor to activate the illumination device.
  • the step of acquiring the sequence of images is automated by the system.
  • the sequence of images is acquired at a framerate of at least 10 fps.
  • the sequence of images comprises one or more video streams of images.
  • the system is configured to acquire the one or more video streams of images sequentially.
  • the system is configured to only acquire the sequence of images when the structured illumination is projected onto the surface of the body region of the subject.
  • the system is configured to automatically turn off the structured illumination once the acquisition of the sequence of images is completed.
  • the system may be configured to transmit the calculated estimated blood vessel pressure to an external device.
  • the system may be configured to transmit the calculated estimated blood vessel pressure to a cloud-based storage.
  • the system may be configured to alert a user if the calculated estimated blood vessel pressure is above a threshold value.
  • Method 200 generally comprises illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel (step 202), acquiring a set of image sequences of the surface of the body region while illuminating the surface of the body region (step 204), performing at least one image processing step on at least one image in the sequence of images (step 206), acquiring at least one additional measurement of blood vessel pressure in the body region of the subject (step 208); and training a machine learning model with the processed image sequence and the corresponding set of additional measurements to infer an estimated blood vessel pressure from the processed image sequence (step 210).
  • the illumination comprises a structured illumination, formed via a laser and a diffractive optical element.
  • the set of image sequences are acquired at a framerate of at least 10 fps.
  • the additional measurement comprises a LiDAR or ultrasound image sequence of the body region of the subject.
  • the ultrasound image sequence has a framerate of at least 10 fps.
  • the machine learning algorithm may comprise supervised learning, unsupervised learning, or reinforcement learning.
  • step 210 may further comprise establishing a ground truth for fine-tuning of the model.
  • each frame in the acquired set of image sequences may be labeled via segmentation of the jugular vein pulsation in a semiautomated fashion, using a combination of the optical flow-derived contours and the videos with fluorescent markings obtained from ultrasound assessment to aid a board-certified cardiologist in identifying the true jugular vein contours. These contours can then be propagated throughout the video clip and adjusted frame by frame to establish a ground truth for both evaluation of the rules-based components and pre-trained deep learning models, as well as further fine-tuning of the deep learning optical flow models.
  • data acquired from the one or more sequences of images using the structured illumination may be used to train a machine learning model to detect jugular venous pulsations and estimate the jugular venous pressure directly from one or more sequences of images acquired without structured illumination.
  • data acquired with structured illumination is used as labels.
  • software executing the instructions provided herein may be stored on a non-transitory computer-readable medium, wherein the software performs some or all of the steps of the present invention when executed on a processor.
  • aspects of the invention relate to algorithms executed in computer software. Though certain embodiments may be described as written in particular programming languages, or executed on particular operating systems or computing platforms, it is understood that the system and method of the present invention is not limited to any particular computing language, platform, or combination thereof.
  • Software executing the algorithms described herein may be written in any programming language known in the art, compiled, or interpreted, including but not limited to C, C++, C#, Objective-C, Java, JavaScript, MATLAB, Python, PHP, Perl, Ruby, or Visual Basic.
  • elements of the present invention may be executed on any acceptable computing platform, including but not limited to a server, a cloud instance, a workstation, a thin client, a mobile device, an embedded microcontroller, a television, or any other suitable computing device known in the art.
  • parts of this invention are described as communicating over a variety of wireless or wired computer networks.
  • the words “network”, “networked”, and “networking” are understood to encompass wired Ethernet, fiber optic connections, wireless connections including any of the various 802.11 standards, cellular WAN infrastructures such as 3G, 4G/LTE, or 5G networks, Bluetooth®, Bluetooth® Low Energy (BLE) or Zigbee® communication links, or any other method by which one electronic device is capable of communicating with another.
  • elements of the networked portion of the invention may be implemented over a Virtual Private Network (VPN).
  • FIG. 5 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. While the invention is described above in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a computer, those skilled in the art will recognize that the invention may also be implemented in combination with other program modules.
  • program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types.
  • program modules may be located in both local and remote memory storage devices.
  • Fig. 5 depicts an illustrative computer architecture for a computer 300 for practicing the various embodiments of the invention.
  • the computer architecture shown in Fig. 5 illustrates a conventional personal computer, including a central processing unit 350 (“CPU”), a system memory 305, including a random access memory 310 (“RAM”) and a read-only memory (“ROM”) 315, and a system bus 335 that couples the system memory 305 to the CPU 350.
  • the computer 300 further includes a storage device 320 for storing an operating system 325, application/program 330, and data.
  • the storage device 320 is connected to the CPU 350 through a storage controller (not shown) connected to the bus 335.
  • the storage device 320 and its associated computer-readable media provide non-volatile storage for the computer 300.
  • computer-readable media can be any available media that can be accessed by the computer 300.
  • Computer-readable media may comprise computer storage media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
  • the computer 300 may operate in a networked environment using logical connections to remote computers through a network 340, such as TCP/IP network such as the Internet or an intranet.
  • the computer 300 may connect to the network 340 through a network interface unit 345 connected to the bus 335.
  • the network interface unit 345 may also be utilized to connect to other types of networks and remote computer systems.
  • the computer 300 may also include an input/output controller 355 for receiving and processing input from a number of input/output devices 360, including a keyboard, a mouse, a touchscreen, a camera, a microphone, a controller, a joystick, or other type of input device. Similarly, the input/output controller 355 may provide output to a display screen, a printer, a speaker, or other type of output device.
  • the computer 300 can connect to the input/output device 360 via a wired connection including, but not limited to, fiber optic, Ethernet, or copper wire or wireless means including, but not limited to, Wi-Fi, Bluetooth, Near-Field Communication (NFC), infrared, or other suitable wired or wireless connections.
  • a wired connection including, but not limited to, fiber optic, Ethernet, or copper wire or wireless means including, but not limited to, Wi-Fi, Bluetooth, Near-Field Communication (NFC), infrared, or other suitable wired or wireless connections.
  • a number of program modules and data files may be stored in the storage device 320 and/or RAM 310 of the computer 300, including an operating system 325 suitable for controlling the operation of a networked computer.
  • the storage device 320 and RAM 310 may also store one or more applications/programs 330.
  • the storage device 320 and RAM 310 may store an application/program 330 for providing a variety of functionalities to a user.
  • the application/program 330 may comprise many types of programs such as a word processing application, a spreadsheet application, a desktop publishing application, a database application, a gaming application, internet browsing application, electronic mail application, messaging application, and the like.
  • the application/program 330 comprises a multiple functionality software application for providing word processing functionality, slide presentation functionality, spreadsheet functionality, database functionality and the like.
  • the computer 300 in some embodiments can include a variety of sensors 365 for monitoring the environment surrounding and the environment internal to the computer 300.
  • sensors 365 can include a Global Positioning System (GPS) sensor, a photosensitive sensor, a gyroscope, a magnetometer, thermometer, a proximity sensor, an accelerometer, a microphone, biometric sensor, barometer, humidity sensor, radiation sensor, or any other suitable sensor.
  • JVPro aims to be the first non-invasive remote monitoring solution for managing congestion in patients with heart failure (HF).
  • challenges that JVPro plans to address include: 1) recognizing and amplifying subtle skin displacements from the jugular venous pulse; 2) differentiating jugular venous pulsations from carotid artery pulsations and other inherent noise in video acquisitions; and 3) translating detection of the jugular venous pulsation/waveform into an estimation of jugular venous pressure.
  • JVPro's hardware and software design are intended to overcome these challenges and provide a non-invasive measurement of JVP.
  • the app guides a user to acquire video of the patient’s neck from the appropriate distance and angle.
  • video captured by the smartphone is processed to estimate JVP by extracting waveforms from jugular venous pulsations at the skin surface.
  • the software includes a collection of non-learning algorithms and machine learning (ML) models to identify a patient’s clavicle, locate the peak of the jugular venous pulsation, and finally quantify the jugular venous pressure. Structured illumination is used to identify subtle pulsations of the neck veins that would not be otherwise visible across different lighting conditions.
  • JVPro uses a series of non-learning and machine learning based computer vision algorithms to contour, extract, and filter signals of the JVP.
  • the system then extracts the internal jugular (IJ) waveform using a power spectral analysis of the magnified and isolated jugular venous pulsations in the neck.
  • the depth sensor, using LiDAR (light detection and ranging) technology, allows for an absolute measurement of distance from the clavicle to the peak of the jugular venous pulsation, the same measure as performed during a visual JVP exam.
  • the benchmark for remote HF monitoring is the CardioMEMS™ HF System. It is the only FDA-approved device for remote measurement of hemodynamics. Other competitors in remote HF monitoring do not measure hemodynamics and have never shown a reduction in HF hospitalizations. These include Sensinel (Analog Devices, Inc, Wilmington, MA), Bodyport (San Francisco, CA), and HearO (Cordio Medical, Tel Aviv, Israel). These products rely on amalgamations of biometric data (i.e. electrocardiographic signals, weight) or noncardiac data (voice changes) to diagnose worsening HF. This is problematic for several reasons including the increased likelihood of detecting false positives and non-specificity for cardiac dysfunction in several of the variables (i.e. weight, voice changes).
  • Example 2: Developing an AI-based Computer Vision System for Non-Invasive Estimation of JVP
  • the study population consisted of patients 18 and older scheduled to undergo right heart catheterization for any reason. A full list of the inclusion/exclusion criteria used is provided in Table 1. For this specific aim, a total of 40 patients were recruited. All patients provided informed consent prior to image acquisition.
  • a 50x50 green square grid pattern was generated using a laser pointer and a diffractive optical element, together referred to as the Projection System.
  • the 532 nm laser is rated as a Class 3R product in accordance with IEC 60825 (FDA Class IIIa).
  • the diffractive optical element, or DOE (DE-R 256, Holoeye Photonics AG), transformed the laser beam into a 50x50 square grid pattern (Fig. 4).
  • the purpose of the Projection System was to generate a bright visible grid on the patient's neck that enables a standard smartphone camera to detect subtle pulsations at the skin surface and is agnostic of ambient lighting conditions.
  • Fig. 3 diagrams the entire Projection System.
  • a hand-held, portable ultrasound probe (Butterfly IQ+, Butterfly Network, Inc.) was applied to the neck and the jugular vein identified in each of the 3 positions.
  • the peak collapse point of the jugular vein in long-axis images was recorded according to previously validated methods (Sathish N, et al. Ann Card Anaesth 2016;19(3):405-9.) and was marked on the patient's skin using a washable fluorescent marker (if apparent in a given position).
  • An additional 30 second clip was captured with this fluorescent mark present in order to aid in post processing and manual labeling for training and evaluation of the computer vision system.
  • Step 1 Structured illumination and video stabilization: As described above, videos contained frames acquired at 60 frames per second of the patient's neck, obtained at an oblique angle and with a structured illumination grid pattern projected onto the skin. In order to eliminate translational motion and perspective transforms from camera movement that may affect subsequent processing steps, video stabilization was performed using standard methods of feature point tracking and warping. An illustrative stabilization sketch is provided following this list.
  • Step 2 Grid projection system contouring and waveform extraction: Segmentation of the skin of the neck was performed using a Gaussian mixture model as described in Abnousi F, et al. Npj Digit Med 2019;2(1):1-6. Contouring of the grid was performed using standard computer vision techniques of adaptive thresholding, morphological operations such as opening and closing, and contour detection using topological structural analysis of the resulting binarized images. (Abnousi F, et al. Npj Digit Med 2019;2(1):1-6.) An illustrative contouring sketch is provided following this list. The change in distance between grid vertices resulting from one frame to the next was then used to create a spatial motion map of depth changes across frames of the video clip. This spatial motion map was then transformed into the frequency domain using a Fourier transformation.
  • the frequency band for the band pass filter was determined by the heart rate derived from the wearable ECG sensor to ensure only movements within the desired frequency range were magnified. An illustrative sketch of this frequency-domain filtering step is provided following this list.
  • Step 3 Detection and segmentation of JVP via spectral filtering and optical flow: Using power spectral analysis of the previously contoured representation of the jugular venous pulsations in the neck, the jugular venous waveform was reconstructed for morphological characterization (to reveal the characteristic phases of the cardiac cycle reflected by the right atrium: a, c, and v waves, x and y descent). A bandpass filter (Butterworth filter) of approximately 1-4 Hz (within the range of twice the heart rate and dictated by the simultaneous heart rate recordings, given two positive deflections of the jugular venous pulsation per cardiac cycle) was used to extract those contours most likely to represent the JVP. An illustrative waveform-extraction sketch is provided following this list.
  • Optical flow analysis was also tested as a method to detect the magnified movements of the jugular vein and segment the area of these jugular venous pulsations.
  • non-learning methods were tested (for example, the methods described in Baker S, et al. Int J Comput Vis 2004;56(3):221-55.; Farneback G. In: Bigun J, Gustavsson T, editors. Image Analysis. Berlin, Heidelberg: Springer; 2003. p. 363-70), as well as existing, pre-trained deep learning optical flow models (e.g. FlowNet (Dosovitskiy A, et al. In: 2015 IEEE International Conference on Computer Vision (ICCV). 2015. p. 2758-66.), FastFlowNet (FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation. 2022.), PWC-Net (Sun D, et al. 2018), and RAFT (Teed Z, et al. Cham: Springer International Publishing; 2020. p. 402-19.)) for this step. An illustrative optical flow sketch is provided following this list.
  • the jugular venous pulsations may be subject to the aperture effect (as the lack of color contrast in the area of the jugular vein may affect the ability for optical flow to perform optimal tracking)
  • deep learning models that have been pre-trained to detect camouflaged animals via optical flow were also tested. (Lamdouar H, et al. In: Ishikawa H, Liu C-L, Pajdla T, Shi J, editors. Computer Vision - ACCV 2020. Cham: Springer International Publishing; 2021. p. 488-503.)
  • Step 4 LiDAR-based measurement of JVP height: Using the 3D reconstruction of the neck anatomy derived from the mobile device's LiDAR sensor and the peak height of the optical flow derived JVP contours described above, the measurement of the top of the jugular vein pulsation from the sternoclavicular junction (as an approximation of the sternal angle) was automated using 3D template matching. The vertical component of this distance was then estimated, using the mobile device's built-in accelerometer to determine the vertical axis. This measurement (in cm) plus an additional 5 cm (as a standard estimate of the distance from the sternal angle to the right atrium) served as the system's estimation of the JVP. An illustrative sketch of this calculation is provided following this list.
  • Model Validation and Fine-Tuning: Two independent clinicians qualitatively rated the results of the non-learning and deep learning-based motion magnification methods on clinical interpretability to determine the most robust approach to motion augmentation. Additionally, downstream performance of optical flow analysis was considered in determining the desired approach to motion magnification. Performance of JVP segmentation with optical flow analysis was assessed using the Dice similarity coefficient and the intersection over union metrics (an illustrative sketch of these metrics is provided following this list). Adjustment of parameters for non-learning components, any hyperparameter tuning for deep learning components during fine-tuning of pretrained models, and model selection were performed using 5-fold cross-validation. Final testing of the system in a hold-out test set is detailed in Example 3.
  • the right atrial pressure waveforms derived from invasive cardiac catheterization served as the ground truth estimate of central venous pressure (CVP).
  • the right atrial pressure was determined from the reported values in the cardiac catheterization report and confirmed by a board-certified heart failure cardiologist blinded to the pre-catheterization assessment of JVP via review of the waveforms.
  • invasive CVP estimates were categorized into clinically meaningful thresholds based on expert opinion of <5 mmHg (normal), 5-10 mmHg (mildly elevated), and >10 mmHg (significantly elevated).
  • Patient data was split into model training/development (40 patients from Example 2) and testing/validation (20 additional patients from Example 3) sets in order to simulate a prospective evaluation of the system. Data from these 20 patients were not used in the development of the computer vision system and were only used for a final test of the performance of the system. Any additional fine-tuning of the non-learning components and deep learning models for JVP detection and measurement was performed based on feedback from the invasively derived CVP using cross-validation on the training/development set only.
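
The sketches below are illustrative only and do not reproduce the disclosed implementation. They assume Python (one of the languages contemplated above) with the OpenCV, NumPy, SciPy, and PyTorch libraries, and all function names, parameters, thresholds, and input formats are hypothetical. First, a minimal video-stabilization sketch in the spirit of Step 1 (feature point tracking and warping), assuming `frames` is a list of 8-bit grayscale frames:

```python
import cv2
import numpy as np

def stabilize(frames):
    """Warp every frame onto the coordinate system of the first frame."""
    ref = frames[0]
    # Detect trackable feature points in the reference frame.
    ref_pts = cv2.goodFeaturesToTrack(ref, maxCorners=200, qualityLevel=0.01, minDistance=10)
    stabilized = [ref]
    for frame in frames[1:]:
        # Track the reference points into the current frame (Lucas-Kanade).
        pts, status, _ = cv2.calcOpticalFlowPyrLK(ref, frame, ref_pts, None)
        ok = status.flatten() == 1
        # Estimate a similarity transform and warp the frame back onto the reference.
        m, _ = cv2.estimateAffinePartial2D(pts[ok], ref_pts[ok])
        stabilized.append(cv2.warpAffine(frame, m, (ref.shape[1], ref.shape[0])))
    return stabilized
```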
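
A minimal sketch of the grid contouring step referenced above (adaptive thresholding, morphological opening and closing, and topological contour detection), assuming a single grayscale frame containing the projected grid; the block size and kernel size are illustrative choices:

```python
import cv2

def grid_contours(frame_gray):
    """Binarize the projected laser grid and extract its contours."""
    # Adaptive thresholding isolates the bright grid regardless of ambient lighting.
    binary = cv2.adaptiveThreshold(frame_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 21, 2)
    # Opening removes speckle noise; closing bridges small gaps in the grid lines.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # findContours performs topological structural analysis of the binarized image.
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```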
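
A sketch of turning tracked grid-vertex positions into a spatial motion map and keeping only the motion inside a cardiac band of 0.5-2.5 times the measured heart rate; the `(frames, vertices, xy)` input array is a hypothetical output of a vertex-tracking step, not a disclosed data format:

```python
import numpy as np

def cardiac_band_motion_map(vertex_tracks, fps, heart_rate_hz):
    """vertex_tracks: float array of shape (T, N, 2) with per-frame vertex positions.
    Returns one pulsatility score per vertex (spectral power inside the cardiac band)."""
    # Frame-to-frame displacement magnitude of each grid vertex.
    disp = np.linalg.norm(np.diff(vertex_tracks, axis=0), axis=-1)   # shape (T-1, N)
    disp = disp - disp.mean(axis=0, keepdims=True)
    # Move to the frequency domain and keep 0.5x-2.5x the measured heart rate.
    spectrum = np.abs(np.fft.rfft(disp, axis=0)) ** 2
    freqs = np.fft.rfftfreq(disp.shape[0], d=1.0 / fps)
    band = (freqs >= 0.5 * heart_rate_hz) & (freqs <= 2.5 * heart_rate_hz)
    return spectrum[band].sum(axis=0)
```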
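
A sketch of the waveform-extraction idea in Step 3: a Butterworth bandpass of roughly 1-4 Hz (or 0.5-2.5 times the heart rate) followed by power spectral analysis; `signal` is a hypothetical per-frame skin-displacement time series and the filter order is illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def extract_jvp_waveform(signal, fps, low_hz=1.0, high_hz=4.0):
    """Bandpass the displacement signal and return the waveform plus its spectrum."""
    b, a = butter(N=4, Wn=[low_hz, high_hz], btype="bandpass", fs=fps)
    waveform = filtfilt(b, a, np.asarray(signal, dtype=float))   # zero-phase filtering
    freqs, psd = welch(waveform, fs=fps, nperseg=min(len(waveform), 256))
    return waveform, freqs, psd
```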
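
A sketch of segmenting the pulsating region with dense (Farneback) optical flow, keeping pixels whose motion energy concentrates in the cardiac band; the flow parameters and the fraction of pixels kept are illustrative, and this is only one of the optical flow approaches mentioned above:

```python
import cv2
import numpy as np

def pulsation_mask(frames, fps, heart_rate_hz, keep_fraction=0.05):
    """frames: list of grayscale frames (e.g. after motion magnification)."""
    mags = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        # Dense flow; positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=-1))
    mags = np.stack(mags)                                    # shape (T-1, H, W)
    spectrum = np.abs(np.fft.rfft(mags, axis=0)) ** 2
    freqs = np.fft.rfftfreq(mags.shape[0], d=1.0 / fps)
    band = (freqs >= 0.5 * heart_rate_hz) & (freqs <= 2.5 * heart_rate_hz)
    score = spectrum[band].sum(axis=0)                       # per-pixel cardiac-band energy
    return (score >= np.quantile(score, 1.0 - keep_fraction)).astype(np.uint8)
```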
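
A sketch of the final arithmetic in Step 4: take the vertical component of the measured vector from the anatomical landmark to the top of the pulsation (projected onto the accelerometer-derived vertical axis) and add the standard 5 cm estimate from the sternal angle to the right atrium; the 3D points here are hypothetical coordinates in centimeters:

```python
import numpy as np

def estimate_jvp_cm(pulsation_top_xyz, landmark_xyz, vertical_unit_vector,
                    right_atrium_offset_cm=5.0):
    """Estimated JVP (in cm) from two 3D points and the device's vertical axis."""
    delta = np.asarray(pulsation_top_xyz, dtype=float) - np.asarray(landmark_xyz, dtype=float)
    # Project the landmark-to-pulsation vector onto the vertical axis.
    vertical_cm = abs(float(np.dot(delta, vertical_unit_vector)))
    return vertical_cm + right_atrium_offset_cm

# Example: a 4.2 cm vertical height above the landmark yields an estimate of 9.2 cm.
print(estimate_jvp_cm([2.0, 1.0, 4.2], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))
```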
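
A minimal sketch of one of the model families mentioned above (a CNN-LSTM) being trained to regress a pressure value from processed image sequences. This assumes PyTorch; the architecture, tensor shapes, and random stand-in data are purely illustrative and are not the disclosed model:

```python
import torch
import torch.nn as nn

class JVPRegressor(nn.Module):
    """Per-frame CNN encoder, an LSTM over time, and a regression head."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),           # -> (batch*frames, 32)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                     # estimated pressure

    def forward(self, clips):                                # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1]).squeeze(-1)                # (B,) pressure estimates

# Hypothetical training loop with stand-in data; real training would pair processed
# image sequences with invasively measured reference pressures.
model = JVPRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
clips = torch.randn(8, 30, 1, 64, 64)            # 8 clips of 30 frames, 64x64 pixels
pressures = torch.rand(8) * 15.0                 # stand-in reference pressures
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(clips), pressures)
    loss.backward()
    optimizer.step()
```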
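
Finally, a sketch of the Dice similarity coefficient and intersection-over-union metrics used to assess the JVP segmentation, for hypothetical binary masks:

```python
import numpy as np

def dice_and_iou(pred_mask, true_mask):
    """Dice coefficient and IoU between two binary segmentation masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    dice = 2.0 * inter / (pred.sum() + true.sum() + 1e-9)
    iou = inter / (np.logical_or(pred, true).sum() + 1e-9)
    return dice, iou
```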

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Cardiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physiology (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Business, Economics & Management (AREA)
  • Vascular Medicine (AREA)
  • General Business, Economics & Management (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

A method and system for calculating an estimated blood vessel pressure of a subject, comprising illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel; acquiring a sequence of images of the surface of a body region of the subject while illuminating the surface; performing at least one image processing step on the sequence of images to produce processed image data; measuring a distance from the top of the blood vessel to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images; and calculating an estimated blood vessel pressure from the sequence of images and the measured distance. A method of training a machine learning model to calculate an estimated blood vessel pressure of a subject is also disclosed.

Description

SYSTEM AND METHOD FOR NON-INVASIVE HEART PRESSURE MEASUREMENT
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 63/579,332 filed August 29, 2023, the contents of which are incorporated by reference herein in its entirety.
BACKGROUND
Over 6 million adults in the US are currently living with heart failure (HF). (Virani SS, et al. Heart Disease and Stroke Statistics — 2020 Update. 2020.) Direct medical costs for this patient population are projected to rise to $70 billion by 2030, with hospitalizations contributing the majority of these costs. (Jackson SL, et al. Circ Heart Fail 2018;11(12):e004873.; Urbich M, et al. PharmacoEconomics 2020;38(11):1219-36.) Among all HF hospitalizations, the majority are due to persistent intravascular congestion, and even at the time of discharge about half of these patients remain partially congested, (Lala A, et al. Circ Heart Fail 2015;8(4):741-8.; Ambrosy AP, et al. Eur Heart J 2013;34(11):835-43.) contributing to high readmission rates among this population. In fact, 25% of HF patients are readmitted within 30 days and 50% are readmitted within 6 months of discharge. Better tools to assess congestion outside of the hospital could improve these readmission rates by pre-empting worsening HF, thereby improving patients’ quality of life and reducing medical costs.
Heart failure remains a leading public health problem in the United States. Despite medical advancements, the number of HF hospitalizations has been increasing over the last two decades, with recent reports indicating that over 1 million hospitalizations list HF as the leading diagnosis. (Fang J, et al. J Am Coll Cardiol 2008;52(6):428-34.; Hall MJ, et al. Natl Health Stat Rep 2010;(29):1-20, 24.) Additionally, HF continues to be a major driver of overall healthcare costs, with projections of a 100% increase in total medical costs from 2013 by the year 2030. (Jackson SL, et al. Circ Heart Fail 2018;11(12):e004873.) Of all HF hospitalizations, it has been shown that approximately 90% are due to pulmonary congestion. (Adams KF, et al. Am Heart J 2005;149(2):209-16.; Krum H, et al. The Lancet 2009;373(9667):941-55.) Even after discharge, about half of these patients remain at least partially congested at the 60-day follow-up. (Lala A, et al. Circ Heart Fail 2015;8(4):741-8.) These data suggest that our current paradigm to reduce the burden of HF hospitalizations is inadequate, and there is a need for better tools to monitor congestion in these patients.
Prior attempts to monitor the development of congestion at home have been unsuccessful in reaching an endpoint of reduced HF hospitalizations. These methodologies have included monitoring daily weights, logging patient-reported signs and symptoms, and nurse communications with the patient. (Chaudhry SI, et al. J Card Fail 2007;13(9):709-14.; Koehler F, et al. Eur J Heart Fail 2010;12(12):1354-62.; Cleland JGF, et al. J Am Coll Cardiol 2005;45(10):1654-64.; Ong MK, et al. JAMA Intern Med 2016;176(3):310-8.; Angermann CE, et al. Circ Heart Fail 2012;5(1):25-35.) Additionally, several implantable device-based diagnostics have failed to yield significant results, including intrathoracic impedance, percentage of biventricular pacing, and arrhythmia burden, among others. (van Veldhuisen DJ, et al. Circulation 2011;124(16):1719-26.; Brachmann J, et al. Eur J Heart Fail 2011;13(7):796-804.; Morgan JM, et al. Eur Heart J 2017;38(30):2352-60.; Boriani G, et al. Eur J Heart Fail 2017;19(3):416-25.) A critical barrier to these strategies is the lack of hemodynamic data available. Indeed, the only FDA-approved device for remote measurement of hemodynamics is the CardioMEMS™ HF System (Abbott Laboratories, Chicago, IL), which relies on a pressure sensor that is permanently implanted into the pulmonary artery. Its pivotal randomized prospective trial demonstrated a 37% annual reduction in HF hospitalizations. (Drazner MH, et al. Circ Heart Fail 2008;1(3):170-7.) However, the device and its implantation are costly, carry a risk of peri-procedural adverse events, and are not an appropriate option for all patients. Therefore, to date, while there have been numerous attempts to measure the signs and symptoms of congestion outside of clinical settings, there have been no at-home, non-invasive interventions that have been shown to reduce HF hospitalizations.
The gold standard to assess congestion is direct measurement of intravascular filling pressures via a right heart catheterization (RHC). This invasive procedure carries a risk of peri-procedural adverse events, is not feasible for some patients, and is not scalable. The only FDA-approved remote monitoring device that has been shown to replicate invasive hemodynamics is a permanently-implanted pressure sensor that is deployed into the pulmonary artery. Similar to an RHC, this requires an invasive procedure that may not be appropriate for all patients. Thus, there is an unmet need to develop a better surrogate for assessing congestion that can translate to an ambulatory or home setting, which would allow providers to identify and intervene upon an impending HF exacerbation. Of all physical exam findings, an elevated jugular venous pressure (JVP) correlates best with directly measured intracardiac pressures, including, for example, centrally measured right atrial pressure. (Butman SM, et al. J Am Coll Cardiol 1993;22(4):968-74.; Drazner MH, et al. Circ Heart Fail 2008;1(3):170-7.) However, this visual assessment, performed by a physician, not only requires significant expertise but may also be subjective.
Traditionally, JVP is visually assessed at the bedside by a physician, who can then utilize that information to direct fluid management of the patient. However, this subjective assessment has been shown to be inconsistently correlated with right atrial pressure, as measured by the gold standard of cardiac catheterization. (Brennan JM, et al. Am J Cardiol 2007;99(11):1614-6.; Kircher BJ, et al. Am J Cardiol 1990;66(4):493-6.) To date, there have been several attempts to non-invasively measure JVP using various forms of camera-based video augmentation. These include non-contact photoplethysmography (PPG), contact PPG, skin displacement algorithms, specular reflection imaging, and Eulerian video magnification, among others. (Kelly SA, et al. JAMA Cardiol 2020;5(10):1194-5.; Pellicori P, et al. Eur J Heart Fail 2017;19(7):883-92.; Amelard R, Hughson RL, Greaves DK, et al. Non-contact hemodynamic imaging reveals the jugular venous pulse waveform [Internet]. 2016.; Lam Po Tang EJ, et al. Sci Rep 2018;8(1):17236.; Suzuki S, et al. J Biomed Sci Eng 2021;14(3):94-102.; Amelard R, et al. J Biomed Opt. 2022;27(11):116005.; Abnousi F, et al. Npj Digit Med 2019;2(1):1-6.; García-López I, et al. Sci Rep 2020;10(1):3466.; Saiko G, et al. Front Bioeng Biotechnol 2022 [cited 2022 Nov 11];10.; Amelard R, et al. IEEE Trans Biomed Eng 2021;68(8):2582-91.; Sathish N, et al. Ann Card Anaesth 2016;19(3):405-9.) Only one of these studies collected invasive hemodynamic data in the cardiac catheterization laboratory. (Abnousi F, et al. Npj Digit Med 2019;2(1):1-6.) In this study, video recordings of the necks of 50 subjects were obtained, and an Eulerian magnification algorithm was applied to these videos. Physicians were then able to subjectively assess the enhanced videos, and their JVP estimates were compared against the results of a cardiac catheterization. The investigators showed that visual assessment of JVP correlated better with invasively obtained pressures when using the amplified videos than when using unamplified videos. However, the remainder of these studies were small, conducted in healthy normal subjects, and were not correlated with the gold standard of cardiac catheterization. Collectively, they showed that reproducible venous waveforms could be obtained from videos of the necks of the study subjects. In addition, near infrared spectroscopy has been utilized to perform continuous JVP monitoring, requiring the use of a sensor patch that is placed on the subject’s neck and connected via a cable to a display console. (Sathish N, et al. Ann Card Anaesth 2016;19(3):405-9.) However, that system is costly and is designed to be operated by a nurse in an acute care center, not by an untrained caregiver.
Thus, there is a need in the art for a remote, non-invasive, at-home system and method for assessing congestion.
SUMMARY
Some embodiments of the invention disclosed herein are set forth below, and any combination of these embodiments (or portions thereof) may be made to define another embodiment.
In one aspect, a method of calculating an estimated blood vessel pressure of a subject comprises illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel, acquiring a sequence of images of the surface of a body region of the subject while illuminating the surface, performing at least one image processing step on the sequence of images to produce processed image data, measuring a distance from the top of the blood vessel pulsation to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images, and calculating an estimated blood vessel pressure from the sequence of images and the measured distance. In some embodiments, the illuminating step comprises applying a structured illumination to the surface of the body region. In some embodiments, the structured illumination comprises illuminating the surface of the body region with a laser via a diffractive optical element. In some embodiments, the at least one blood vessel comprises a jugular vein. In some embodiments, the sequence of images is acquired at a framerate of at least 10 fps. In some embodiments, the at least one image processing step comprises image segmentation, thresholding, Fourier transformation, motion magnification, optical flow analysis, or combinations thereof. In some embodiments, the at least one image processing step comprises applying a machine learning algorithm to at least one image of the sequence of images. In some embodiments, the distance is measured with a LiDAR-based measurement. In some embodiments, the method further comprises the step of treating the subject with diuretic medication when the estimated blood vessel pressure is above a threshold.
In another aspect, a system for calculating an estimated blood vessel pressure of a subject comprises an imaging device, an illumination device configured to illuminate a surface of a body region of the subject, a display, a processor communicatively connected to the imaging device and the illumination device, and a non-transitory computer readable medium with instructions stored thereon, which when executed by a processor perform steps comprising illuminating the surface of a body region of a subject with the illumination device, the body region comprising at least one blood vessel, acquiring a sequence of images of the surface of a body region of the subject with the imaging device while illuminating the surface, performing at least one image processing step on the sequence of images to produce processed image data, measuring a distance from the top of the blood vessel to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images, calculating an estimated blood vessel pressure from the sequence of images and the measured distance, and displaying the calculated blood vessel pressure on the display.
In some embodiments, the illumination device comprises a structured illumination device. In some embodiments, the structured illumination device comprises a laser and a diffractive optical element. In some embodiments, the illumination device comprises a controllable driver communicatively connected to the processor, and the instructions further comprise the step of activating the illumination device during the image acquisition step. In some embodiments, the at least one image processing step is image segmentation, thresholding, Fourier transformation, motion magnification, optical flow analysis, or combinations thereof.
In another aspect, a method of training a machine learning model to calculate an estimated blood vessel pressure comprises illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel, acquiring a set of image sequences of the surface of the body region while illuminating the surface of the body region, performing at least one image processing step on at least one image in the sequence of images, acquiring at least one additional measurement of blood vessel pressure in the body region of the subject, and training a machine learning model with the processed image sequence and the corresponding set of additional measurements to infer an estimated blood vessel pressure from the processed image sequence.
In some embodiments, the illumination step comprises applying a structured illumination to the surface of the body region. In some embodiments, the structured illumination comprises illuminating the surface of the body region with a laser via a diffractive optical element. In some embodiments, the sequence of images is acquired at a framerate of at least 10 fps. In some embodiments, the additional measurement comprises a LiDAR or ultrasound image sequence of the body region of the subject. In some embodiments, the ultrasound image sequence is acquired at a framerate of at least 10 fps.
BRIEF DESCRIPTION OF THE DRAWINGS
The following detailed description of embodiments of the invention will be better understood when read in conjunction with the appended drawings. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities of the embodiments shown in the drawings.
Fig. 1 depicts a flow diagram showing an exemplary method of calculating an estimated blood vessel pressure of a subject.
Fig. 2 depicts a schematic showing an exemplary method of calculating an estimated blood vessel pressure of a subject.

Fig. 3 depicts an exemplary illumination device.
Fig. 4 depicts an exemplary structured illumination generated by the illumination device.
Fig. 5 depicts an exemplary computing environment in which aspects of the present invention may be practiced.
Fig. 6 depicts a flow diagram showing an exemplary method of training a machine learning model to calculate an estimated blood vessel pressure of a subject.
DETAILED DESCRIPTION
It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for the purpose of clarity many other elements found in related systems and methods. Those of ordinary skill in the art may recognize that other elements and/or steps are desirable and/or required in implementing the present invention. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements and steps is not provided herein. The disclosure herein is directed to all such variations and modifications to such elements and methods known to those skilled in the art.
It is noted that various embodiments are described in detail with reference to the drawings, in which like reference numerals represent like parts and assemblies throughout the several views. Reference to various embodiments does not limit the scope of the claims attached hereto. Additionally, any examples set forth in this specification are intended to be non-limiting and merely set forth some of the many possible embodiments for the appended claims. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations.
Unless otherwise specifically defined herein, all terms are to be given their broadest reasonable interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc. It is noted that as used in the specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless otherwise specified, and that the terms “includes” and/or “including”, when used in this specification, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
Relative terms such as “horizontal”, “vertical”, “up”, “down”, “top”, and “bottom” as well as derivatives thereof (e.g. “horizontally”, “downwardly”, “upwardly”, etc.) should be construed to refer to the orientation as then described or shown in the drawing figure under discussion. These relative terms are for convenience of description and normally are not intended to require a particular orientation. Terms including “inwardly” versus “outwardly”, “longitudinal” versus “lateral” and the like are to be interpreted relative to one another or relative to an axis of elongation, or an axis or center of rotation, as appropriate. Terms concerning attachments, coupling, and the like, such as “connected” and “interconnected”, refer to a relationship wherein structures are secured or attached to one another either directly or indirectly through intervening structures, as well as both movable or rigid attachments or relationships, unless expressly described otherwise. The term “operably connected” is such an attachment, coupling, or connection that allows the pertinent structure to operate as intended by virtue of that relationship.
Reference throughout the specification to “one embodiment”, “an embodiment” or “some embodiments” means that a particular feature, structure, or characteristic described in connection with an embodiment is included in at least one embodiment of the subject matter disclosed. Thus, the appearance of the phrases “in one embodiment”, “in an embodiment”, or “in some embodiments” in various places throughout the specification is not necessarily referring to the same embodiment. Further, the particular features, structures, or characteristics of “one embodiment”, “an embodiment”, or “some embodiments” may be combined in any suitable manner with each other to form additional embodiments of such combinations. It is intended that embodiments of the disclosed subject matter cover modifications and variations thereof. Terms such as “first”, “second”, “third”, etc., merely identify one of a number of portions, components, steps, operations, functions, and/or points of reference as disclosed herein, and likewise do not necessarily limit embodiments of the present disclosure to any particular configuration or orientation.
Moreover, throughout this disclosure, various aspects of the invention can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, 6, and any whole and partial increments therebetween. This applies regardless of the breadth of the range. As used herein, the term “about” in reference to a measurable value, such as an amount, a temporal duration, and the like, is meant to encompass the specified value variations of plus or minus 20%, plus or minus 10%, plus or minus 5%, plus or minus 1%, and plus or minus 0.1% of the specified value, as such variations are appropriate.
The terms “patient,” “subject,” “individual,” and the like are used interchangeably herein, and refer to any animal amenable to the systems, devices, and methods described herein. The patient, subject, or individual may be a mammal, and in some instances, a human.
In some aspects, the present invention provides a method for calculating an estimated blood vessel pressure for assessment of a heart condition. The method described herein is non-invasive and may be configured for at-home or clinical use.
Referring now to Fig. 1, an exemplary method of calculating an estimated blood vessel pressure 100 (hereinafter “method 100”) is shown. Method 100 generally comprises the steps of illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel (step 102), acquiring a sequence of images of the surface of a body region of the subject while illuminating the surface (step 104), performing at least one image processing step on the sequence of images to produce processed image data (step 106), measuring a distance from the top of the blood vessel pulsation to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images (step 108), and calculating an estimated blood vessel pressure from the sequence of images and the measured distance (step 110). Fig. 2 depicts an exemplary schematic of the method 100.
In some embodiments, the blood vessel may be the jugular vein. In such cases, the estimated blood vessel pressure is the jugular venous pressure (JVP), and the body region imaged is the right side of the neck.
In some embodiments, step 102 comprises applying a structured illumination to the surface of the body region. In some embodiments, the structured illumination is performed by illuminating the surface of the body region via a laser. In some embodiments, the laser passes through a diffractive optical element to create a structured illumination as depicted in Fig. 3. In some embodiments, the laser may emit light having wavelengths ranging between 700 nm and 1 mm, between 710 nm and 90 µm, between 720 nm and 75 µm, between 730 nm and 50 µm, between 740 nm and 3000 nm, or between 750 nm and 2500 nm. That is, the laser may emit light having wavelengths in the visible spectrum range, the near-infrared range, or the infrared range. In some embodiments, the diffractive optical element may be a diffractive grating, one or more lenses, beam splitters, diffusors, prisms and the like. In some embodiments, the laser light may further pass through any other suitable optical component known to one of skill in the art. For example, lenses, collimators, beam splitters, reflective gratings, mirrors, polarizers, and the like may be used.
In some embodiments, the structured illumination may have any suitable pattern. In some embodiments, the pattern may be a grid, horizontal lines, vertical lines, circles, concentric circles, or combinations thereof. In some embodiments, the structured illumination may be a square grid as depicted in Fig. 4. In some embodiments, the square grid may be a grid having between 25 and 100,000 squares. For example, the square grid may be a 10x10 square grid, a 20x20 square grid, a 30x30 square grid, a 40x40 square grid, a 50x50 square grid, a 60x60 square grid, a 70x70 square grid, an 80x80 square grid, a 90x90 square grid, or a 100x100 square grid. In some embodiments, the structured illumination may have any suitable brightness such that the structured illumination pattern is visible and detectable by an imaging system or device regardless of lighting conditions. In embodiments where the structured illumination comprises a square grid or two-dimensional repeating pattern of shapes, each individual square or shape in the repeating pattern may have a width of less than 1 cm, less than 90 mm, less than 80 mm, less than 70 mm, less than 60 mm, less than 50 mm, less than 40 mm, less than 30 mm, less than 20 mm, less than 15 mm, less than 10 mm, less than 5 mm, or less than 2 mm. Each individual square or shape in the repeating pattern may have a height of less than 1 cm, less than 90 mm, less than 80 mm, less than 70 mm, less than 60 mm, less than 50 mm, less than 40 mm, less than 30 mm, less than 20 mm, less than 15 mm, less than 10 mm, less than 5 mm, or less than 2 mm.
In some embodiments, the structured illumination may be semi-structured or may comprise pseudorandom patterns. For example, the pseudorandom patterns may include speckle patterns, blue noise patterns, Hadamard patterns, Bayer patterns, and the like. The pseudorandom patterns may facilitate image processing by making the at least one image processing step more robust to background noise.
In some embodiments, step 104 comprises utilizing an imaging device or system to capture one or more sequences of images. The imaging device may be any imaging device known to one of skill in the art. For example, a digital camera may be used, including cameras integrated into any consumer grade mobile device (such as a smartphone). In some embodiments, image sequences may be captured in 8K, 4K, 2K, or 1080p resolutions. In some embodiments, the sequence of images may be acquired at a framerate of at least 10 fps, at least 20 fps, at least 30 fps, at least 40 fps, at least 50 fps, at least 60 fps, at least 70 fps, at least 80 fps, at least 90 fps, at least 100 fps, between 10 fps and 100 fps, between 20 fps and 90 fps, between 30 fps and 80 fps, between 40 fps and 70 fps, between 45 fps and 65 fps, or between 50 fps and 60 fps. In some embodiments, the imaging device may have an integrated LiDAR probe or sensor. Alternatively, a separate LiDAR probe or sensor may be utilized in conjunction with the imaging device.
In some embodiments, one or more sequences of images may be acquired while the body region of the subject is being illuminated. In some embodiments, one or more sequences of images are acquired at different subject orientations. For example, the sequence of images may be acquired while the subject is supine, at an incline, while the subject is seated upright, or any combinations thereof. In some embodiments, the patient may be inclined at any angle between 0° and 90°, 0° being the supine position and 90° being the upright position. In some embodiments, step 104 further comprises acquiring LiDAR data via the LiDAR probe or sensor. In some embodiments, step 104 further comprises acquiring electrocardiogram (ECG) measurements using a diagnostic single lead ECG monitoring device. In some embodiments, step 104 further comprises acquiring ultrasound images from the body region. The ultrasound images may be utilized to locate one or more anatomical landmarks within the body region of the subject, which may then be marked on the corresponding location on the patient’s skin using a fluorescent marker. In some embodiments, additional sets of image sequences may be captured with the fluorescent marker in place in order to aid in image processing and manual labeling for training and evaluation of the imaging system. In some embodiments, ultrasound images may be captured using a conventional ultrasound probe, a hand-held portable ultrasound probe, or any ultrasound imaging device known to one of skill in the art.
In some embodiments, step 106 comprises at least one image processing step. In some embodiments, the at least one image processing step comprises video stabilization. In some embodiments, video stabilization is performed via feature point tracking and warping, or deep learning-based or optical flow-based image stabilization. In some embodiments, the at least one image processing step comprises image segmentation. Image segmentation may be performed using a Gaussian mixture model as described in Abnousi et al. (Abnousi F, et al. Npj Digit Med 2019;2(1):1-6.) In some embodiments, the at least one image processing step may comprise contouring. In some embodiments, contouring may be performed using standard computer vision techniques of adaptive thresholding, morphological operations such as opening and closing, and contour detection using topological structural analysis of the resulting binarized images. In some embodiments, the at least one image processing step may comprise measuring the change in distance between grid vertices between consecutive frames of the one or more sequences of images. In some embodiments, the at least one image processing step may comprise measuring the change in area of individual squares in the grid between consecutive frames of the one or more sequences of images. In some embodiments, the change in distance between grid vertices or change in area of the individual grid squares may be used to create a spatial motion map of depth changes across frames in the set of image sequences. In some embodiments, the spatial motion map may be transformed into the frequency domain using a Fourier transform. In some embodiments, the spatial motion map passes through a band-pass filter to ensure only movements within the desired frequency range are captured. In some embodiments, the desired frequency range may be between 0.5 and 2.5 times the heart rate measured by the ECG. In some embodiments, the frequency band for the band-pass filter is determined by the heart rate derived from the ECG measurements.
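By way of a non-limiting illustration, the contouring step described above can be sketched in Python using OpenCV. The threshold parameters, kernel size, and function names below are illustrative assumptions rather than values taken from this disclosure.

```python
# Illustrative sketch of grid contouring on a single video frame (OpenCV).
# Thresholds, kernel sizes, and names are assumed, not specified herein.
import cv2
import numpy as np

def contour_grid(frame_bgr: np.ndarray) -> list:
    """Binarize the projected grid and return its contours."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Adaptive thresholding isolates the bright grid lines from the skin
    # regardless of ambient lighting.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, blockSize=21, C=-5)

    # Morphological opening removes speckle noise; closing bridges small
    # gaps in the grid lines.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Topological structural analysis of the binarized image, as
    # implemented by cv2.findContours.
    contours, _ = cv2.findContours(
        binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```

The grid vertices may then be extracted from the returned contours and tracked frame to frame to build the spatial motion map described above.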
In some embodiments, the at least one image processing step comprises motion magnification. In some embodiments, the motion of deformations of the structured illumination pattern may be magnified. In some embodiments, motion magnification is performed via phase-based Eulerian motion magnification using Riesz pyramids. In some embodiments, the step of motion magnification may comprise machine learning algorithms. For example, deep learning models based on an encoder-decoder convolutional neural network architecture may be used. The models may be pretrained models.
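As a simplified illustration of the magnification concept, a per-pixel temporal band-pass around the ECG-derived heart rate can be amplified and added back onto the video. This linear, intensity-based sketch is only a stand-in for the phase-based Riesz-pyramid or deep learning approaches described above; the amplification factor and filter order are assumed values.

```python
# Simplified Eulerian-style magnification sketch; not the Riesz-pyramid
# phase-based method itself, only an illustration of temporal band-pass
# amplification around the heart rate.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def magnify_motion(frames: np.ndarray, fps: float, heart_rate_hz: float,
                   alpha: float = 10.0) -> np.ndarray:
    """frames: (T, H, W) grayscale stack; returns the magnified stack."""
    low, high = 0.5 * heart_rate_hz, 2.5 * heart_rate_hz
    sos = butter(4, [low, high], btype="bandpass", fs=fps, output="sos")

    # Band-pass every pixel's intensity time series, then add the
    # amplified band back onto the original video.
    band = sosfiltfilt(sos, frames.astype(np.float64), axis=0)
    return np.clip(frames + alpha * band, 0, 255)
```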
In some embodiments, the at least one image processing step comprises power spectral analysis of the contoured image data. In some embodiments, power spectral analysis is used to reconstruct the waveform pulsation associated with the blood vessel pressure for morphological characterization. When calculating the blood vessel pressure of the jugular vein, the characterization reveals the characteristic phases of the cardiac cycle reflected by the right atrium (a, c, and v waves, and x and y descent). A bandpass filter of approximately 0.5-2.5 times the measured heart rate or between 1 and 4 Hz may be used to extract the contours most likely to represent the jugular venous pressure (JVP). To differentiate the jugular venous signal from the pulsations of the carotid artery, optical sensing approaches such as those described in Amelard et al. (Amelard R, Hughson RL, Greaves DK, et al. Non-contact hemodynamic imaging reveals the jugular venous pulse waveform. 2016) may be utilized. These approaches observed an inverse relationship between the peak arterial signal and peak jugular venous signal. Patch-based spatiotemporal filtering may be used to remove all optical flow signals except those with a strong inverse relationship to the simultaneously recorded ECG signal.
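A minimal sketch of this spectral step, assuming a fourth-order Butterworth band-pass and a Welch estimate of the power spectrum (both are illustrative choices, not requirements of the disclosure):

```python
# Hedged sketch: band-pass a contour-displacement signal around the
# expected JVP band and inspect its power spectrum.
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

def extract_jvp_waveform(displacement: np.ndarray, fps: float,
                         heart_rate_hz: float):
    """displacement: 1-D skin-displacement signal for one contour/patch."""
    low = max(0.5 * heart_rate_hz, 0.5)           # Hz
    high = min(2.5 * heart_rate_hz, fps / 2 - 0.1)
    sos = butter(4, [low, high], btype="bandpass", fs=fps, output="sos")
    waveform = sosfiltfilt(sos, displacement)

    # Power spectral density of the filtered signal; energy near twice the
    # heart rate is consistent with the two positive deflections of the
    # jugular venous pulse per cardiac cycle.
    freqs, psd = welch(waveform, fs=fps, nperseg=min(len(waveform), 512))
    return waveform, freqs, psd
```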
In some embodiments, the at least one image processing step comprises optical flow analysis. Optical flow analysis may be used as a method to detect the magnified movements of the blood vessel and segment the area of these blood vessel pulsations. In some embodiments, non-learning methods, such as those described in Baker et al. (Baker S, et al. Int J Comput Vis 2004;56(3):221-55.) and Farnebäck et al. (Farnebäck G. In: Bigun J, Gustavsson T, editors. Image Analysis. Berlin, Heidelberg: Springer; 2003. p. 363-70.), or pre-trained deep learning optical flow models may be used for optical flow analysis. The jugular venous pulsations may be subject to the aperture effect since the lack of color contrast in the area of the jugular vein may affect the ability of optical flow to perform optimal tracking. In such cases, deep learning models, for example deep learning models that have been pre-trained to detect camouflaged animals via optical flow, may be used.
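For illustration, a dense optical-flow field between two consecutive (stabilized, magnified) frames can be computed with Farnebäck's algorithm as implemented in OpenCV; the parameter values below are typical defaults rather than values from this disclosure.

```python
# Illustrative dense optical-flow step (Farneback's algorithm via OpenCV).
import cv2
import numpy as np

def dense_flow_magnitude(prev_gray: np.ndarray,
                         curr_gray: np.ndarray) -> np.ndarray:
    """Returns the (H, W) per-pixel flow magnitude between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # The flow magnitude can then be thresholded (and combined with the
    # spectral filtering above) to segment the area of jugular pulsation.
    return np.linalg.norm(flow, axis=2)
```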
In some embodiments, step 108 comprises measuring a distance within at least one frame of the sequence of images via the LiDAR. In some embodiments, the distance measured is the distance from the top of the blood vessel pressure pulsation to an anatomical landmark. In some embodiments, the anatomical landmark is the sternal angle. In some embodiments, the anatomical landmark is the sternoclavicular junction. In some embodiments, a 3D reconstruction of the body region is obtained via the LiDAR and the optical flow-derived contour. In some embodiments, the 3D reconstruction is used to automate the distance measurement via 3D template matching.
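The distance measurement can be illustrated with a simple pinhole back-projection of two landmark pixels using the LiDAR depth map and camera intrinsics; the intrinsics and names below are assumptions made only for the sketch.

```python
# Hedged sketch: convert two landmark pixels plus LiDAR depth into a
# physical distance. fx, fy, cx, cy would come from camera calibration.
import numpy as np

def pixel_to_3d(u: float, v: float, depth_m: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a pixel with known depth into camera coordinates (meters)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def landmark_distance(p1_px, p2_px, depth_map, intrinsics) -> float:
    """Euclidean distance between two image landmarks, e.g. the peak of the
    jugular pulsation and the sternal angle / sternoclavicular junction."""
    fx, fy, cx, cy = intrinsics
    pts = []
    for (u, v) in (p1_px, p2_px):
        d = float(depth_map[int(v), int(u)])   # LiDAR depth at that pixel
        pts.append(pixel_to_3d(u, v, d, fx, fy, cx, cy))
    return float(np.linalg.norm(pts[0] - pts[1]))
```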
In some embodiments, step 110 comprises estimating the vertical component of the distance measured in step 108 via the LiDAR sensor integrated into the imaging device. Estimating the vertical component of the distance may comprise measuring the vertical distance from the midaxillary line to the anatomical landmark, assuming that the right atrium sits in the midaxillary line. Alternatively, other imaging modes (for example, CT or MRI) may be used to measure the vertical distance from the right atrium to the anatomical landmark. In some embodiments, the jugular venous pressure is calculated by adding a standard estimate of the distance between the sternal angle or sternoclavicular junction and the right atrium to the vertical component of the distance. In some embodiments, the standard estimate is 5 cm.
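Assuming the conventional bedside relationship (vertical height of the pulsation above the sternal angle plus the standard 5 cm offset to the right atrium, with 1 mmHg approximately equal to 1.36 cm H2O), the final arithmetic can be sketched as follows; the conversion to mmHg is included only for illustration.

```python
# Minimal arithmetic sketch of the final pressure estimate.
# JVP (cm H2O) = vertical height above the sternal angle + 5 cm (standard
# estimate of the distance to the right atrium); 1 mmHg ~= 1.36 cm H2O.
def estimate_jvp(vertical_height_cm: float, ra_offset_cm: float = 5.0):
    jvp_cm_h2o = vertical_height_cm + ra_offset_cm
    jvp_mmhg = jvp_cm_h2o / 1.36
    return jvp_cm_h2o, jvp_mmhg

# Example: a pulsation peak 6 cm above the sternal angle suggests a JVP of
# about 11 cm H2O (~8 mmHg), i.e. in the mildly elevated range noted earlier.
```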
In some embodiments, the step of estimating venous pressure may comprise providing one or more collected images of the subject to a machine learning algorithm trained with images collected from other subjects and inferring an estimated venous pressure from the one or more collected images. In some embodiments, additional data about the subject may be provided to the machine learning algorithm in order to infer the estimated venous pressure. Examples of additional data that may be provided to the machine learning algorithm include, but are not limited to, heart rate data, pulse data, RR interval data, age, height, weight, body mass index (BMI), sex, smoking status, pulse oxygenation, heart rhythm data (e.g. presence of arrhythmias), measures of left ventricular systolic function (e.g. ejection fraction), measures of right ventricular systolic function, presence of valvular disease, respiratory rate, chest circumference, or left/right ventricular diastolic function grade. In some embodiments, some of the data provided to the machine learning algorithm may be preprocessed or manipulated prior to being provided to the machine learning algorithm; for example, image processing steps may be applied to the one or more images, or signal filtering may be applied to any time-series data. Transforms, for example from time-domain to frequency-domain, may be applied to some or all of the data provided.
Suitable machine learning algorithms that may be used in systems and methods of the present invention include, but are not limited to, convolutional neural networks (CNN), recurrent neural networks including long short-term memory (LSTM) networks, generative adversarial networks, transformer (self-attention feedforward) neural networks, and combinations thereof (e.g. CNN-LSTM).
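A minimal CNN-LSTM sketch of the kind listed above is shown below; the framework (PyTorch), layer sizes, and the option to concatenate tabular covariates are illustrative assumptions only and do not limit the architectures that may be used.

```python
# Minimal CNN-LSTM regression sketch: a small CNN encodes each processed
# frame, an LSTM summarizes the sequence, and a linear head outputs the
# pressure estimate. All sizes are illustrative.
import torch
import torch.nn as nn

class JvpRegressor(nn.Module):
    def __init__(self, n_tabular: int = 0, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> (B*T, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden + n_tabular, 1)

    def forward(self, frames, tabular=None):
        # frames: (B, T, 1, H, W); tabular: optional (B, n_tabular) covariates
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        summary = h_n[-1]
        if tabular is not None:
            summary = torch.cat([summary, tabular], dim=1)
        return self.head(summary)                          # estimated pressure
```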
Aspects of the present invention relate to a system for calculating an estimated blood vessel pressure of a subject. Generally, the system comprises an imaging device, an illumination device configured to illuminate a surface of a body region of the subject, a display, a processor communicatively connected to the imaging device and the illumination device, and a non-transitory computer readable medium with instructions stored thereon, which when executed by a processor, perform steps comprising: illuminating the surface of a body region of a subject with the illumination device, the body region comprising at least one blood vessel, acquiring a sequence of images of the surface of a body region of the subject with the imaging device while illuminating the surface, performing at least one image processing step on the sequence of images to produce processed image data, measuring a distance within at least one of the sequence of images referenced to at least one anatomical landmark, calculating an estimated blood vessel pressure from the sequence of images and the measured distance, and displaying the calculated estimated blood vessel pressure on the display. In some embodiments, the calculated estimated blood vessel pressure may be displayed or recorded as a pressure value. In other embodiments, the calculated estimated blood vessel pressure may be calculated or displayed as a rough value, for example high, medium, or low, or above or below one or more thresholds.
In some embodiments, the system may further record other metrics simultaneously with the image acquisition, for example heart rate, heart rhythm data (e.g. arrhythmias), respiratory rate, or RR interval. Measurement of these other metrics may in some embodiments improve the quality of the collected data and the calculations resulting therefrom, for example by filtering out heart beats from measured data to determine venous pressure. Heart rate, heart rhythm data, respiratory interval, and/or RR interval may be measured for example using a fitness watch, chest strap, or any other suitable means.
In some embodiments, the imaging device comprises a camera. In some embodiments, the illumination device comprises a structured illumination device. In some embodiments, the structured illumination device comprises a light source and a diffractive optical element. In some embodiments, the light source may be any suitable light source, for example a class I laser or a class II laser. In some embodiments, the laser and the diffractive optical element may be positioned in a housing. In some embodiments, the illumination device and the imaging device may be positioned in a housing. In some embodiments, the illumination device comprises a controllable driver communicatively connected to the processor, and the instructions further comprise the step of activating the illumination device during the image acquisition step. In some embodiments, the controllable driver may be communicatively connected to an interface device, configured to receive an input from a user, wherein the input directs the processor to activate the illumination device.
In some embodiments, the step of acquiring the sequence of images is automated by the system. In some embodiments, the sequence of images is acquired at a framerate of at least 10 fps. In some embodiments, the sequence of images comprises one or more video streams of images. In some embodiments, the system is configured to acquire the one or more video streams of images sequentially. In some embodiments, the system is configured to only acquire the sequence of images when the structured illumination is projected onto the surface of the body region of the subject. In some embodiments, the system is configured to automatically turn off the structured illumination once the acquisition of the sequence of images is completed. In some embodiments, the system may be configured to transmit the calculated estimated blood vessel pressure to an external device. In some embodiments, the system may be configured to transmit the calculated estimated blood vessel pressure to a cloud-based storage. In some embodiments, the system may be configured to alert a user if the calculated estimated blood vessel pressure is above a threshold value.
Aspects of the present invention relate to a method of training a machine learning model to calculate an estimated blood vessel pressure. Referring now to Fig. 6, an exemplary method of training a machine learning model 200 (hereinafter “method 200”) is depicted. Method 200 generally comprises illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel (step 202), acquiring a set of image sequences of the surface of the body region while illuminating the surface of the body region (step 204), performing at least one image processing step on at least one image in the sequence of images (step 206), acquiring at least one additional measurement of blood vessel pressure in the body region of the subject (step 208); and training a machine learning model with the processed image sequence and the corresponding set of additional measurements to infer an estimated blood vessel pressure from the processed image sequence (step 210).
In some embodiments, the illumination comprises a structured illumination, formed via a laser and a diffractive optical element. In some embodiments, the set of image sequences are acquired at a framerate of at least 10 fps. In some embodiments, the additional measurement comprises a LiDAR or ultrasound image sequence of the body region of the subject. In some embodiments, the ultrasound image sequence has a framerate of at least 10 fps.
In some embodiments, the machine learning algorithm may comprise supervised learning, unsupervised learning, or reinforcement learning. In some embodiments, step 210 may further comprise establishing a ground truth for fine-tuning of the model. For example, each frame in the acquired set of image sequences may be labeled via segmentation of the jugular vein pulsation in a semi-automated fashion, using a combination of the optical flow-derived contours and the videos with fluorescent markings obtained from ultrasound assessment to aid a board-certified cardiologist in identifying the true jugular vein contours. These contours can then be propagated throughout the video clip and adjusted frame by frame to establish a ground truth for both evaluation of the rules-based components and pre-trained deep learning models, as well as further fine-tuning of the deep learning optical flow models.
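The Dice similarity coefficient and intersection-over-union metrics used to compare segmented contours against such ground-truth masks reduce to simple mask arithmetic; a minimal sketch is shown below.

```python
# Hedged sketch of segmentation metrics for evaluating JVP contours
# against ground-truth binary masks (per frame).
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """pred, truth: boolean masks of the segmented jugular pulsation area."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + eps)
    iou = intersection / (union + eps)
    return float(dice), float(iou)
```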
In some embodiments, data acquired from the one or more sequences of images using the structured illumination may be used to train a machine learning model to detect jugular venous pulsations and estimate the jugular venous pressure directly from one or more sequences of images acquired without structured illumination. In such cases, data acquired with structured illumination is used as labels.
Computing Device
In some aspects of the present invention, software executing the instructions provided herein may be stored on a non-transitory computer-readable medium, wherein the software performs some or all of the steps of the present invention when executed on a processor.
Aspects of the invention relate to algorithms executed in computer software. Though certain embodiments may be described as written in particular programming languages, or executed on particular operating systems or computing platforms, it is understood that the system and method of the present invention is not limited to any particular computing language, platform, or combination thereof. Software executing the algorithms described herein may be written in any programming language known in the art, compiled, or interpreted, including but not limited to C, C++, C#, Objective-C, Java, JavaScript, MATLAB, Python, PHP, Perl, Ruby, or Visual Basic. It is further understood that elements of the present invention may be executed on any acceptable computing platform, including but not limited to a server, a cloud instance, a workstation, a thin client, a mobile device, an embedded microcontroller, a television, or any other suitable computing device known in the art.
Parts of this invention are described as software running on a computing device. Though software described herein may be disclosed as operating on one particular computing device (e.g. a dedicated server or a workstation), it is understood in the art that software is intrinsically portable and that most software running on a dedicated server may also be run, for the purposes of the present invention, on any of a wide range of devices including desktop or mobile devices, laptops, tablets, smartphones, watches, wearable electronics or other wireless digital/cellular phones, televisions, cloud instances, embedded microcontrollers, thin client devices, or any other suitable computing device known in the art.
Similarly, parts of this invention are described as communicating over a variety of wireless or wired computer networks. For the purposes of this invention, the words “network”, “networked”, and “networking” are understood to encompass wired Ethernet, fiber optic connections, wireless connections including any of the various 802.11 standards, cellular WAN infrastructures such as 3G, 4G/LTE, or 5G networks, Bluetooth®, Bluetooth® Low Energy (BLE) or Zigbee® communication links, or any other method by which one electronic device is capable of communicating with another. In some embodiments, elements of the networked portion of the invention may be implemented over a Virtual Private Network (VPN).
Fig. 5 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. While the invention is described above in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a computer, those skilled in the art will recognize that the invention may also be implemented in combination with other program modules.
Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including handheld devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Fig. 5 depicts an illustrative computer architecture for a computer 300 for practicing the various embodiments of the invention. The computer architecture shown in Fig. 5 illustrates a conventional personal computer, including a central processing unit 350 (“CPU”), a system memory 305, including a random access memory 310 (“RAM”) and a read-only memory (“ROM”) 315, and a system bus 335 that couples the system memory 305 to the CPU 350. A basic input/output system containing the basic routines that help to transfer information between elements within the computer, such as during startup, is stored in the ROM 315. The computer 300 further includes a storage device 320 for storing an operating system 325, application/program 330, and data.
The storage device 320 is connected to the CPU 350 through a storage controller (not shown) connected to the bus 335. The storage device 320 and its associated computer-readable media provide non-volatile storage for the computer 300. Although the description of computer-readable media contained herein refers to a storage device, such as a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available media that can be accessed by the computer 300.
By way of example, and not to be limiting, computer-readable media may comprise computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer. According to various embodiments of the invention, the computer 300 may operate in a networked environment using logical connections to remote computers through a network 340, such as a TCP/IP network (e.g., the Internet or an intranet). The computer 300 may connect to the network 340 through a network interface unit 345 connected to the bus 335. It should be appreciated that the network interface unit 345 may also be utilized to connect to other types of networks and remote computer systems.
The computer 300 may also include an input/output controller 355 for receiving and processing input from a number of input/output devices 360, including a keyboard, a mouse, a touchscreen, a camera, a microphone, a controller, a joystick, or other type of input device. Similarly, the input/output controller 355 may provide output to a display screen, a printer, a speaker, or other type of output device. The computer 300 can connect to the input/output device 360 via a wired connection including, but not limited to, fiber optic, Ethernet, or copper wire or wireless means including, but not limited to, Wi-Fi, Bluetooth, Near-Field Communication (NFC), infrared, or other suitable wired or wireless connections.
As mentioned briefly above, a number of program modules and data files may be stored in the storage device 320 and/or RAM 310 of the computer 300, including an operating system 325 suitable for controlling the operation of a networked computer. The storage device 320 and RAM 310 may also store one or more applications/programs 330. In particular, the storage device 320 and RAM 310 may store an application/program 330 for providing a variety of functionalities to a user. For instance, the application/program 330 may comprise many types of programs such as a word processing application, a spreadsheet application, a desktop publishing application, a database application, a gaming application, internet browsing application, electronic mail application, messaging application, and the like. According to an embodiment of the present invention, the application/program 330 comprises a multiple functionality software application for providing word processing functionality, slide presentation functionality, spreadsheet functionality, database functionality and the like.
The computer 300 in some embodiments can include a variety of sensors 365 for monitoring the environment surrounding and the environment internal to the computer 300. These sensors 365 can include a Global Positioning System (GPS) sensor, a photosensitive sensor, a gyroscope, a magnetometer, thermometer, a proximity sensor, an accelerometer, a microphone, biometric sensor, barometer, humidity sensor, radiation sensor, or any other suitable sensor.
EXPERIMENTAL EXAMPLES
The invention is further described in detail by reference to the following experimental examples. These examples are provided for purposes of illustration only, and are not intended to be limiting unless otherwise specified. Thus, the invention should in no way be construed as being limited to the following examples, but rather, should be construed to encompass any and all variations which become evident as a result of the teaching provided herein.
Without further description, it is believed that one of ordinary skill in the art can, using the preceding description and the following illustrative examples, make and utilize the present invention and practice the claimed methods. The following working examples therefore are not to be construed as limiting in any way the remainder of the disclosure.
Example 1 : Introduction
JVPro aims to be the first non-invasive remote monitoring solution for managing congestion in patients with heart failure (HF). There are several key challenges in extracting a jugular venous pressure waveform from non-contact hemodynamic imaging that JVPro plans to address, including 1) recognizing and amplifying subtle skin displacements from the jugular venous pulse; 2) differentiating jugular venous pulsations from carotid artery pulsations and other inherent noise in video acquisitions; and 3) translating detection of the jugular venous pulsation/waveform into an estimation of jugular venous pressure.
JVPro’s hardware and software design are intended to overcome these challenges and provide a non-invasive measurement of JVP. First, the app guides a user to acquire video of the patient’s neck from the appropriate distance and angle. Next, video captured by the smartphone is processed to estimate JVP by extracting waveforms from jugular venous pulsations at the skin surface. The software includes a collection of non-learning algorithms and machine learning (ML) models to identify a patient’s clavicle, locate the peak of the jugular venous pulsation, and finally quantify the jugular venous pressure. Structured illumination is used to identify subtle pulsations of the neck veins that would not otherwise be visible across different lighting conditions. JVPro uses a series of non-learning and machine learning-based computer vision algorithms to contour, extract, and filter signals of the JVP. The system then extracts the IJ waveform using a power spectral analysis of the magnified and isolated jugular venous pulsations in the neck. Finally, the depth sensor, using LiDAR (light detection and ranging) technology, allows for an absolute measurement of distance from the clavicle to the peak of the jugular venous pulsation, the same measure as performed during a visual JVP exam.
The benchmark for remote HF monitoring is the CardioMEMS™ HF System. It is the only FDA-approved device for remote measurement of hemodynamics. Other competitors in remote HF monitoring do not measure hemodynamics and have never shown a reduction in HF hospitalizations. These include Sensinel (Analog Devices, Inc, Wilmington, MA), Bodyport (San Francisco, CA), and HearO (Cordio Medical, Tel Aviv, Israel). These products rely on amalgamations of biometric data (e.g. electrocardiographic signals, weight) or noncardiac data (voice changes) to diagnose worsening HF. This is problematic for several reasons, including the increased likelihood of detecting false positives and the non-specificity of several of the variables (e.g. weight, voice changes) for cardiac dysfunction.
Example 2: Developing an Al based Computer Vision System for Non- Invasive Estimation of JVP
The study population consisted of patients 18 and older scheduled to undergo right heart catheterization for any reason. A full list of the inclusion/exclusion criteria used is provided in Table 1. For this specific aim, a total of 40 patients were recruited. All patients provided informed consent prior to image acquisition. A 50x50 green square grid pattern was generated using a laser pointer and a diffractive optical element, together referred to as the Projection System. The 532 nm laser is rated as a Class 3R product in accordance with IEC 60825 (FDA Class IIIa). The diffractive optical element or DOE (DE-R 256, Holoeye Photonics AG) transformed the laser beam into a 50x50 square grid pattern (Fig. 4). The purpose of the Projection System was to generate a bright, visible grid on the patient's neck that enables a standard smartphone camera to detect subtle pulsations at the skin surface and is agnostic of ambient lighting conditions. Fig. 3 diagrams the entire Projection System.
Videos were acquired using a consumer grade mobile device with light detection and ranging (LiDAR) technology built in (iPhone 14 Pro, Apple, Inc.). Videos were captured at 4K resolution at 60 frames per second (FPS) using the mobile device’s forward-facing camera. Simultaneous LiDAR data was captured at 30 FPS via the mobile device’s built in LiDAR sensor. Separate 30 second clips of the right side of the neck at an oblique angle (the right internal jugular vein is most reliable for CVP estimation) were captured in 3 different patient orientations - supine, at a 45-degree incline, and seated upright. A diagnostic single lead electrocardiogram (ECG) was recorded simultaneously during image acquisition using a wearable monitoring device (Movesense, Suunto, Finland) to enable alignment to the cardiac cycle in subsequent analysis as well as heart rhythm identification.
Following initial video acquisitions, a hand-held, portable ultrasound probe (Butterfly IQ+, Butterfly Network, Inc.) was applied to the neck and the jugular vein identified in each of the 3 positions. The peak collapse point of the jugular vein in long-axis images was recorded according to previously validated methods (Sathish N, et al. Ann Card Anaesth 2016;19(3):405-9.) and was marked on the patient’s skin using a washable fluorescent marker (if apparent in a given position). An additional 30 second clip was captured with this fluorescent mark present in order to aid in post processing and manual labeling for training and evaluation of the computer vision system.
After acquisition of videos as above, the following image processing steps were used to derive an estimate of the JVP and extract waveforms from the jugular venous pulsations at the skin surface (Fig. 2):
Step 1. Structured illumination and video stabilization: As described above, videos contained frames acquired at 60 frames per second of the patient’s neck, obtained at an oblique angle and with a structured illumination grid pattern projected onto the skin. In order to eliminate translational motion and perspective transforms from camera movement that may affect subsequent processing steps, video stabilization was performed using standard methods of feature point tracking and warping.
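By way of illustration, a minimal sketch of such a stabilization step is shown below, assuming OpenCV sparse feature tracking and an affine warp back to a reference frame; the parameter values and function layout are illustrative and do not represent the exact implementation used in the study.

```python
# Minimal stabilization sketch (assumed implementation, not the study's code):
# track feature points between frames and warp each frame back to the reference.
import cv2
import numpy as np

def stabilize(frames):
    """Align each frame to the first frame using sparse feature tracking."""
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts_ref = cv2.goodFeaturesToTrack(ref, maxCorners=300, qualityLevel=0.01, minDistance=10)
    stabilized = [frames[0]]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _ = cv2.calcOpticalFlowPyrLK(ref, gray, pts_ref, None)
        good_ref = pts_ref[status.flatten() == 1]
        good_cur = pts[status.flatten() == 1]
        # A partial affine (rotation + translation + uniform scale) is usually
        # sufficient to remove hand-held camera motion.
        M, _ = cv2.estimateAffinePartial2D(good_cur, good_ref, method=cv2.RANSAC)
        h, w = gray.shape
        stabilized.append(cv2.warpAffine(frame, M, (w, h)))
    return stabilized
```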
Step 2. Grid projection system contouring and waveform extraction: Segmentation of the skin of the neck was performed using a Gaussian mixture model as described in Abnousi F, et al. Npj Digit Med 2019;2(1):1-6. Contouring of the grid was performed using standard computer vision techniques of adaptive thresholding, morphological operations such as opening and closing, and contour detection using topological structural analysis of the resulting binarized images. (Abnousi F, et al. Npj Digit Med 2019;2(1):1-6.) The change in distance between grid vertices from one frame to the next was then used to create a spatial motion map of depth changes across frames of the video clip. This spatial motion map was then transformed into the frequency domain using a Fourier transformation.
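The sketch below illustrates the contouring and frequency-domain transformation described above, assuming OpenCV for thresholding, morphology, and contour detection, and using a simple centroid-sorting heuristic to associate grid vertices across frames; the threshold values and the vertex-matching step are simplifications rather than the study's parameters.

```python
# Sketch of grid contouring and spectral transformation of vertex motion.
import cv2
import numpy as np

def grid_motion_spectrum(gray_frames, fps=60):
    """Binarize the projected grid, track vertex centroids, and transform the
    per-vertex displacement signal into the frequency domain."""
    centroids_per_frame = []
    for gray in gray_frames:
        # Adaptive thresholding handles uneven ambient lighting.
        binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, blockSize=21, C=-5)
        # Opening then closing removes speckle and bridges small gaps in the grid.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
        binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        pts = [c.reshape(-1, 2).mean(axis=0) for c in contours if cv2.contourArea(c) > 2]
        centroids_per_frame.append(np.array(sorted(pts, key=lambda p: (p[1], p[0]))))
    # Frame-to-frame displacement of each tracked point (assumes a stable point count).
    n = min(len(c) for c in centroids_per_frame)
    traj = np.stack([c[:n] for c in centroids_per_frame])   # (frames, points, 2)
    disp = np.linalg.norm(np.diff(traj, axis=0), axis=2)    # (frames-1, points)
    spectrum = np.abs(np.fft.rfft(disp - disp.mean(axis=0), axis=0))
    freqs = np.fft.rfftfreq(disp.shape[0], d=1.0 / fps)
    return freqs, spectrum
```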
To aid in identification of the subtle pulsations of the jugular vein, both non-learning and deep learning methods were tested for motion magnification of the deformations of our grid projection system. The non-learning approach was built on prior methods that have proven successful at magnifying jugular vein pulsation - a more efficient form of phase-based Eulerian motion magnification using Riesz pyramids was utilized. (Wadhwa N, et al. In: 2014 IEEE International Conference on Computational Photography (ICCP). 2014. p. 1-10.) Additionally, advanced motion magnification was tested using a deep learning model based on an encoder-decoder convolutional neural network architecture which has been pretrained on natural scenes. (Oh T-H, et al. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y, editors. Computer Vision - ECCV 2018. p. 663-79.) For both non-learning and deep learning methods, the frequency band for the band pass filter was determined by the heart rate derived from the wearable ECG sensor to ensure only movements within the desired frequency range were magnified.
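To illustrate how the ECG-derived heart rate constrains the pass band, a simplified, intensity-based band-pass magnification sketch is shown below; the study's phase-based Riesz-pyramid and deep learning approaches are more sophisticated, and the amplification factor and band width here are arbitrary assumptions.

```python
# Simplified intensity-based Eulerian magnification sketch (illustrative only;
# the pass band is centered on the heart rate from the wearable ECG sensor).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def magnify(frames, fps, heart_rate_bpm, alpha=20.0, band_halfwidth_hz=0.5):
    """Band-pass the per-pixel time series around the heart-rate frequency and
    amplify that component before adding it back to the original video."""
    f0 = heart_rate_bpm / 60.0                      # fundamental frequency in Hz
    lo, hi = max(f0 - band_halfwidth_hz, 0.1), f0 + band_halfwidth_hz
    sos = butter(4, [lo, hi], btype="bandpass", fs=fps, output="sos")
    video = np.asarray(frames, dtype=np.float32)    # (frames, H, W), grayscale
    pulsatile = sosfiltfilt(sos, video, axis=0)     # temporal filtering per pixel
    return np.clip(video + alpha * pulsatile, 0, 255).astype(np.uint8)
```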
Step 3. Detection and segmentation of JVP via spectral filtering and optical flow: Using power spectral analysis of the previously contoured representation of the jugular venous pulsations in the neck, the jugular venous waveform was reconstructed for morphological characterization (to reveal the characteristic phases of the cardiac cycle reflected by the right atrium - a, c, and v waves, x and y descent). A bandpass filter (Butterworth filter) of ~1-4 Hz (within the range of twice the heart rate and dictated by the simultaneous heart rate recordings, given two positive deflections of the jugular venous pulsation per cardiac cycle) was used to extract those contours most likely to represent the JVP. To differentiate the jugular venous signal from the pulsations of the carotid artery, prior approaches which observed an inverse relationship between the peak arterial signal by PPG and the peak jugular venous signal by optical sensing were utilized. (Amelard R, Hughson RL, Greaves DK, et al. Non-contact hemodynamic imaging reveals the jugular venous pulse waveform. 2016.) Patch-based spatiotemporal filtering was used to remove all optical flow signals except those with a strong inverse relationship to the simultaneously recorded ECG signal.
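The sketch below illustrates this spectral filtering concept, assuming a Butterworth band-pass at ~1-4 Hz and a simple negative-correlation test against an arterial reference signal; the power-fraction threshold and the signal layout are assumptions, not the study's parameters.

```python
# Illustrative spectral filtering sketch: keep candidate contour signals with
# most of their power in the pass band and an inverse relationship to the
# arterial reference.
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

def extract_jvp_candidates(contour_signals, arterial_ref, fps=60, band=(1.0, 4.0)):
    """contour_signals: (n_signals, n_frames); arterial_ref: (n_frames,)."""
    sos = butter(4, band, btype="bandpass", fs=fps, output="sos")
    filtered = sosfiltfilt(sos, contour_signals, axis=1)
    keep = []
    for sig in filtered:
        freqs, psd = welch(sig, fs=fps, nperseg=min(len(sig), 256))
        in_band = psd[(freqs >= band[0]) & (freqs <= band[1])].sum() / psd.sum()
        # Venous candidates: most power in the pass band and a negative
        # correlation with the arterial (carotid/PPG-like) reference.
        anti_phase = np.corrcoef(sig, arterial_ref)[0, 1] < 0
        keep.append(in_band > 0.5 and anti_phase)
    return filtered[np.array(keep)]
```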
Optical flow analysis was also tested as a method to detect the magnified movements of the jugular vein and segment the area of these jugular venous pulsations. Again, non-learning methods were tested (for example, the methods described in Baker S, et al. Int J Comput Vis 2004;56(3):221-55; Farneback G. In: Bigun J, Gustavsson T, editors. Image Analysis. Berlin, Heidelberg: Springer; 2003. p. 363-70) as well as existing, pre-trained deep learning optical flow models (e.g. FlowNet (Dosovitskiy A, et al. In: 2015 IEEE International Conference on Computer Vision (ICCV). 2015. p. 2758-66.), FastFlowNet (FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation. 2022.), PWC-Net (Sun D, et al. 2018), and RAFT (Teed Z, et al. Cham: Springer International Publishing; 2020. p. 402-19.)) for this step. Given that the jugular venous pulsations may be subject to the aperture effect (as the lack of color contrast in the area of the jugular vein may affect the ability of optical flow to perform optimal tracking), deep learning models that have been pre-trained to detect camouflaged animals via optical flow were also tested. (Lamdouar H, et al. In: Ishikawa H, Liu C-L, Pajdla T, Shi J, editors. Computer Vision - ACCV 2020. Cham: Springer International Publishing; 2021. p. 488-503.)
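As one concrete non-learning example, a Farneback dense optical flow sketch is given below; the flow parameters and the percentile threshold used to segment the most pulsatile pixels are illustrative only and are not the study's settings.

```python
# Minimal dense optical-flow sketch using the Farneback method (one of the
# non-learning options named above).
import cv2
import numpy as np

def flow_magnitude_map(gray_prev, gray_next):
    """Return per-pixel motion magnitude between two consecutive frames."""
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_next, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag

def segment_pulsation(mag, percentile=97):
    """Crude segmentation: keep the pixels with the strongest pulsatile motion."""
    return (mag >= np.percentile(mag, percentile)).astype(np.uint8)
```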
Step 4. LiDAR-based measurement of JVP height: Using the 3D reconstruction of the neck anatomy derived from the mobile device’s LiDAR sensor and the peak height of the optical flow-derived JVP contours described above, the measurement from the sternoclavicular junction (as an approximation of the sternal angle) to the top of the jugular vein pulsation was automated using 3D template matching. The vertical component of this distance was then estimated, using the mobile device’s built-in accelerometer to determine the vertical axis. This measurement (in cm) plus an additional 5 cm (as a standard estimate of the distance from the sternal angle to the right atrium) served as the system’s estimation of the JVP. Previous studies have revealed that the measurement accuracy and precision of small objects with the iPhone’s LiDAR is within 1 cm, which is adequate for this application. (Luetzenburg G, et al. Sci Rep 2021;11(1):22221.)
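A geometric sketch of the height computation is shown below, assuming 3D points (in cm) for the pulsation peak and the sternoclavicular junction from the LiDAR reconstruction and a gravity vector from the accelerometer; the variable layout is an assumption, and the unit conversion constant is the one stated in Example 3.

```python
# Geometry sketch: project the clavicle-to-pulsation-peak vector onto the
# vertical axis defined by gravity, then add the conventional 5 cm offset.
import numpy as np

CM_H2O_TO_MMHG = 0.736

def estimate_jvp_mmhg(peak_xyz_cm, clavicle_xyz_cm, gravity_vec):
    """peak_xyz_cm / clavicle_xyz_cm: 3D points (cm) from the LiDAR reconstruction;
    gravity_vec: accelerometer reading defining the vertical direction."""
    up = -np.asarray(gravity_vec, dtype=float)
    up /= np.linalg.norm(up)
    vertical_height_cm = np.dot(np.asarray(peak_xyz_cm) - np.asarray(clavicle_xyz_cm), up)
    jvp_cm_h2o = vertical_height_cm + 5.0   # sternal angle to right atrium convention
    return jvp_cm_h2o * CM_H2O_TO_MMHG

# Hypothetical usage: estimate_jvp_mmhg([2.0, 9.5, 40.0], [0.0, 1.0, 41.0], [0.0, -9.8, 0.0])
```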
Ground Truth/Labels: Each frame in the acquired clips was additionally labeled via segmentation of the jugular vein pulsation in a semi-automated fashion, using a combination of the optical flow-derived contours and the videos with fluorescent markings obtained from ultrasound assessment to aid a board-certified cardiologist in identifying the true jugular vein contours. These contours were then propagated throughout the video clip and adjusted frame by frame to establish a ground truth both for evaluation of the rules-based components and pre-trained deep learning models and for further fine-tuning of the deep learning optical flow models.
Model Validation and Fine-Tuning: Two independent clinicians qualitatively rated the results of the non-learning and deep learning-based motion magnification methods on clinical interpretability to determine the most robust approach to motion augmentation. Additionally, downstream performance of optical flow analysis was considered in determining the desired approach to motion magnification. Performance of JVP segmentation with optical flow analysis was assessed using the Dice similarity coefficient and the intersection over union metrics. Adjustment of parameters for non-learning components, any hyperparameter tuning for deep learning components during fine-tuning of pretrained models, and model selection were performed using 5-fold cross-validation. Final testing of the system in a hold-out test set is detailed in Example 3.
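For reference, a minimal sketch of the two overlap metrics on binary masks is shown below; the handling of the empty-mask edge case is an assumed convention rather than part of the study protocol.

```python
# Dice similarity coefficient and intersection-over-union for binary masks.
import numpy as np

def dice_and_iou(pred_mask, true_mask):
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    total = pred.sum() + true.sum()
    if total == 0:
        return 1.0, 1.0            # both masks empty: scored as perfect overlap
    dice = 2.0 * intersection / total
    iou = intersection / union if union else 1.0
    return dice, iou
```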
Example 3: Comparison of Non-Invasive Prediction of JVP with Gold Standard of Invasive Hemodynamic Data
The study population for this phase included the 40 patients from Example 2 as well as an additional 20 patients scheduled to undergo right heart catheterization for any indication who carried a diagnosis of heart failure. These 20 additional patients served as a hold-out prospective test set. Inclusion and exclusion criteria were the same as in Example 2. For all 60 patients from Examples 2 and 3, after video acquisition as described above, the clinician acquiring the data recorded a clinical assessment of JVP height in centimeters (cJVP) using the standard bedside technique (both by visual estimation and with the use of a ruler) in each of the 3 orientations, provided the top of the pulsation was visible. Using the portable ultrasound as described in Example 2, the vertical distance (in cm) from this collapse point to the sternal angle (+5 cm for the distance to the right atrium) was additionally measured using a ruler as an ultrasound-assisted assessment of JVP (uJVP).
Patients underwent cardiac catheterization as scheduled for their specific clinical indication (not as a part of the research protocol) and hemodynamic data, including waveforms, were recorded. Additionally, demographics, vital signs, relevant clinical diagnoses (e.g. restrictive cardiomyopathy), and relevant echocardiographic features (e.g. severe tricuspid regurgitation) were extracted from the research data warehouse for subsequent sensitivity analyses of the effect of baseline clinical variables on JVP estimation.
The right atrial pressure waveforms derived from invasive cardiac catheterization served as the ground truth estimate of central venous pressure (CVP). The right atrial pressure was determined from the reported values in the cardiac catheterization report and confirmed by a board-certified heart failure cardiologist blinded to the pre-catheterization assessment of JVP via review of the waveforms. In addition to the continuous measurement, invasive CVP estimates were categorized into clinically meaningful thresholds based on expert opinion of <5 mmHg (normal), 5-10 mmHg (mildly elevated), and >10 mmHg (significantly elevated).
Non-invasive JVP estimates were derived from the AI system (AIJVP), bedside clinical assessment (cJVP), and ultrasound assessment (uJVP). If JVP estimates were obtained in multiple orientations for a particular patient, the results were averaged to obtain an overall estimate of the JVP. Additionally, failure to detect the JVP in any orientation (either because the pulsation was not visible or was too high to measure) was recorded as a separate outcome. All non-invasive estimates of JVP were converted from cm H2O to mmHg (1 cm H2O = 0.736 mmHg) for comparison to the invasively derived CVP.
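A small helper sketch for combining per-orientation estimates and applying the unit conversion above is shown below; the handling of missing orientations is an assumed convention.

```python
# Combine whichever per-orientation estimates succeeded and convert units.
def combined_jvp_mmhg(estimates_cm_h2o):
    """estimates_cm_h2o: JVP estimates (cm H2O), one per orientation, None if not detected."""
    valid = [e for e in estimates_cm_h2o if e is not None]
    if not valid:
        return None                 # recorded separately as a detection failure
    return (sum(valid) / len(valid)) * 0.736
```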
Patient data was split into model training/development (40 patients from Example 2) and testing/validation (20 additional patients from Example 3) sets in order to simulate a prospective evaluation of the system. Data from these 20 patients were not used in the development of the computer vision system and were only used for a final test of the performance of the system. Any additional fine-tuning of the non-learning components and deep learning models for JVP detection and measurement was performed based on feedback from the invasively derived CVP using cross-validation on the training/development set only.
For comparison of JVP estimation by AIJVP, cJVP, and uJVP as continuous variables to the gold standard of invasively derived CVP in the test set, metrics of agreement were calculated, including mean absolute error, intraclass correlation coefficient, and Bland-Altman analysis with calculation of 95% limits of agreement. For the categorical outcome of clinically meaningful CVP thresholds, sensitivity (recall), specificity, positive predictive value (precision), negative predictive value, F1 score (harmonic mean of precision and recall), and area under the receiver operating characteristic curve (AUC) were calculated, and 95% confidence intervals were produced using bootstrap samples. Comparison of performance between AIJVP, cJVP, and uJVP for estimating CVP categories was performed with the McNemar test for sensitivity and specificity and the DeLong test for AUC. Fisher’s exact test was used to compare failure rates between the various non-invasive methods of JVP assessment.
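The sketch below illustrates the Bland-Altman limits of agreement and the bootstrap confidence interval approach; it is not the study's analysis code, and the resampling count and function names are illustrative assumptions.

```python
# Bland-Altman agreement and a generic bootstrap confidence interval sketch.
import numpy as np

def bland_altman(noninvasive_mmhg, invasive_mmhg):
    a, b = np.asarray(noninvasive_mmhg, float), np.asarray(invasive_mmhg, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)    # mean bias and 95% limits of agreement

def bootstrap_ci(metric_fn, y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    stats = [metric_fn(y_true[idx], y_pred[idx])
             for idx in (rng.integers(0, len(y_true), len(y_true)) for _ in range(n_boot))]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```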
Finally, the waveforms were averaged over 5 cardiac cycles for both the invasive hemodynamic data and the AI system to compare the morphological characteristics of these waveforms using cross-correlation. This morphologic comparison will serve as a basis for future studies aimed at deriving changes in jugular venous pressure from morphological changes in the AI-derived JVP waveform.
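The sketch below illustrates one way such beat averaging and cross-correlation could be performed, assuming R-peak indices from the simultaneously recorded ECG are available; the resampling length and normalization are assumptions, not the study's analysis code.

```python
# Average a waveform over 5 R-R segmented beats and compare two averaged beats
# by peak normalized cross-correlation.
import numpy as np

def beat_average(waveform, r_peaks, n_beats=5, n_samples=100):
    """Resample each R-R segment to a common length and average the first n_beats."""
    beats = []
    for start, stop in zip(r_peaks[:n_beats], r_peaks[1:n_beats + 1]):
        seg = np.asarray(waveform[start:stop], dtype=float)
        x_old = np.linspace(0, 1, len(seg))
        x_new = np.linspace(0, 1, n_samples)
        beats.append(np.interp(x_new, x_old, seg))
    return np.mean(beats, axis=0)

def waveform_similarity(avg_a, avg_b):
    a = (avg_a - avg_a.mean()) / avg_a.std()
    b = (avg_b - avg_b.mean()) / avg_b.std()
    return np.max(np.correlate(a, b, mode="full")) / len(a)
```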
The disclosures of each and every patent, patent application, and publication cited herein are hereby each incorporated herein by reference in their entirety. While this invention has been disclosed with reference to specific embodiments, it is apparent that other embodiments and variations of this invention may be devised by others skilled in the art without departing from the true spirit and scope of the invention. The appended claims are intended to be construed to include all such embodiments and equivalent variations.

Claims

What is claimed is:
1. A method of calculating an estimated blood vessel pressure of a subject, comprising:
illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel;
acquiring a sequence of images of the surface of a body region of the subject while illuminating the surface;
performing at least one image processing step on the sequence of images to produce processed image data;
measuring a distance from the top of the blood vessel pulsation to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images; and
calculating an estimated blood vessel pressure from the sequence of images and the measured distance.
2. The method of claim 1, the illuminating step comprising applying a structured illumination to the surface of the body region.
3. The method of claim 2, the structured illumination comprising illuminating the surface of the body region with a laser via a diffractive optical element.
4. The method of claim 1, wherein the at least one blood vessel comprises a jugular vein.
5. The method of claim 1, wherein the sequence of images is acquired at a framerate of at least 10 fps.
6. The method of claim 1, wherein the at least one image processing step comprises image segmentation, thresholding, Fourier transformation, motion magnification, optical flow analysis, or combinations thereof.
7. The method of claim 1, wherein the at least one image processing step comprises applying a machine learning algorithm to at least one image of the sequence of images.
8. The method of claim 1, wherein the distance is measured with a LiDAR-based measurement.
9. The method of claim 1, further comprising the step of treating the subject with diuretic medication when the estimated blood vessel pressure is above a threshold.
10. A system for calculating an estimated blood vessel pressure of a subject, comprising:
an imaging device;
an illumination device configured to illuminate a surface of a body region of the subject;
a display;
a processor communicatively connected to the imaging device and the illumination device; and
a non-transitory computer readable medium with instructions stored thereon, which when executed by a processor perform steps comprising:
illuminating the surface of a body region of a subject with the illumination device, the body region comprising at least one blood vessel;
acquiring a sequence of images of the surface of a body region of the subject with the imaging device while illuminating the surface;
performing at least one image processing step on the sequence of images to produce processed image data;
measuring a distance from the top of the blood vessel to a marked position referenced to at least one anatomical landmark within at least one of the sequence of images;
calculating an estimated blood vessel pressure from the sequence of images and the measured distance; and
displaying the calculated blood vessel pressure on the display.
11. The system of claim 10, wherein the illumination device comprises a structured illumination device.
12. The system of claim 11, wherein the structured illumination device comprises a laser and a diffractive optical element.
13. The system of claim 10, wherein the illumination device comprises a controllable driver communicatively connected to the processor, and the instructions further comprise the step of activating the illumination device during the image acquisition step.
14. The system of claim 10, wherein the at least one image processing step is image segmentation, thresholding, Fourier transformation, motion magnification, optical flow analysis, or combinations thereof.
15. A method of training a machine learning model to calculate an estimated blood vessel pressure, comprising:
illuminating a surface of a body region of a subject, the body region comprising at least one blood vessel;
acquiring a set of image sequences of the surface of the body region while illuminating the surface of the body region;
performing at least one image processing step on at least one image in the sequence of images;
acquiring at least one additional measurement of blood vessel pressure in the body region of the subject; and
training a machine learning model with the processed image sequence and the corresponding set of additional measurements to infer an estimated blood vessel pressure from the processed image sequence.
16. The method of claim 15, the illumination step comprising applying a structured illumination to the surface of the body region.
17. The method of claim 16, the structured illumination comprising illuminating the surface of the body region with a laser via a diffractive optical element.
18. The method of claim 15, wherein the sequence of images is acquired at a framerate of at least 10 fps.
19. The method of claim 15, wherein the additional measurement comprises a LiDAR or ultrasound image sequence of the body region of the subject.
20. The method of claim 19, wherein the ultrasound image sequence is acquired at a framerate of at least 10 fps.
PCT/US2024/044332 2023-08-29 2024-08-29 System and method for non-invasive heart pressure measurement WO2025049680A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363579332P 2023-08-29 2023-08-29
US63/579,332 2023-08-29

Publications (1)

Publication Number Publication Date
WO2025049680A1 true WO2025049680A1 (en) 2025-03-06

Family

ID=94820403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/044332 WO2025049680A1 (en) 2023-08-29 2024-08-29 System and method for non-invasive heart pressure measurement

Country Status (1)

Country Link
WO (1) WO2025049680A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160338599A1 (en) * 2015-05-22 2016-11-24 Google, Inc. Synchronizing Cardiovascular Sensors for Cardiovascular Monitoring
US20170323481A1 (en) * 2015-07-17 2017-11-09 Bao Tran Systems and methods for computer assisted operation
US20210000347A1 (en) * 2014-07-29 2021-01-07 Sempulse Corporation Enhanced physiological monitoring devices and computer-implemented systems and methods of remote physiological monitoring of subjects
US20210233656A1 (en) * 2019-12-15 2021-07-29 Bao Tran Health management
WO2023117560A1 (en) * 2021-12-20 2023-06-29 Bayer Aktiengesellschaft Tool for identifying measures against hypertension and for their monitoring



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24860995

Country of ref document: EP

Kind code of ref document: A1