
CN112567378B - Methods and systems utilizing quantitative imaging - Google Patents


Info

Publication number
CN112567378B
CN112567378B (application CN201980049912.9A)
Authority
CN
China
Prior art keywords
data
analyte
imaging
pathology
plaque
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201980049912.9A
Other languages
Chinese (zh)
Other versions
CN112567378A (en)
Inventor
Mark A. Buckler
David S. Paik
Vladimir Valtchinov
Andrew J. Buckler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elucid Bioimaging Inc.
Original Assignee
Elucid Bioimaging Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elucid Bioimaging Inc.
Publication of CN112567378A
Application granted
Publication of CN112567378B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/032Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Processing (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract


Systems and methods for analyzing pathology using quantitative imaging are presented herein. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analysis framework that identifies and quantifies biological properties/analytes from imaging data, and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine the underlying biology as an intermediate for assessing pathology provides many analytical and processing advantages relative to systems and methods configured to directly determine and characterize pathology from underlying imaging data.

Description

Method and system for utilizing quantitative imaging
Background
The present disclosure relates to quantitative imaging and analysis. More particularly, the present disclosure relates to systems and methods for analyzing pathology using quantitative imaging.
Imaging, particularly with safe and noninvasive methods, represents the most powerful approach for locating the origin of disease, capturing its detailed pathology, guiding therapy, and monitoring progression. Imaging is also an extremely valuable and low-cost means of mitigating the human and financial costs of disease by allowing suitable early interventions that are both less expensive and less damaging.
Enhanced imaging techniques have made medical imaging an essential component of patient care. Imaging is particularly valuable because it provides spatially and temporally localized anatomical and functional information using non-invasive or minimally invasive methods. However, techniques for efficiently using increased temporal and spatial resolution are needed, both to take advantage of patterns or features in the data that are not readily assessed by the human eye and to manage the resulting large-scale data so that it can be integrated efficiently into clinical workflow. Without assistance, clinicians have neither the time nor the ability to extract the available information content efficiently, and in any event typically interpret the information subjectively and qualitatively. Integrating quantitative imaging into individual patient management and into clinical trials for therapy development requires a new class of decision-support informatics tools that enable the medical community to take full advantage of the evolving and growing capabilities of imaging modalities within the reality of existing workflow and reimbursement constraints.
Quantitative results from imaging methods are likely to be used as biomarkers in routine clinical care and in clinical trials, for example in accordance with the widely accepted NIH Consensus Conference definition of a biomarker. In clinical practice, quantitative imaging aims to (a) detect and characterize disease before, during, or after a course of therapy, and (b) predict the course of disease with or without therapy. In clinical research, imaging biomarkers may be used to define the endpoints of a clinical trial.
Quantification builds on developments in imaging physics that have improved spatial, temporal, and contrast resolution and enabled tissue to be excited with multiple energies/sequences, generating different tissue-specific responses. These improvements allow tissue differentiation and functional assessment and are evident in, for example, computed tomography (CT), dual-energy computed tomography (DECT), spectral CT, computed tomography angiography (CTA), cardiac computed tomography angiography (CCTA), magnetic resonance imaging (MRI), multi-contrast MRI, ultrasound (US), and targeted or conventional contrast-agent approaches across imaging modalities. Quantitative imaging measures specific biological characteristics that indicate the effectiveness of one treatment relative to another, how effective the current treatment is, or what risk the patient faces if left untreated. As measuring devices, scanners combined with processing of the formed images can measure characteristics of tissue, and how different tissues respond, based on the physical principles of a given imaging method. Although the image formation process varies greatly across modalities, some generalizations help frame an overall assessment; exceptions, nuances, and subtleties exist and are noted where they drive the conclusions, while others are omitted even though some of them represent some of the greatest opportunities.
Imaging during the early stages of clinical testing of novel therapeutic agents helps in understanding the underlying biological pathways and pharmacological effects. It can also reduce the cost and time required to develop new drugs and therapeutics. In later stages of development, imaging biomarkers can serve as important endpoints of clinical benefit and/or as companion diagnostics to help guide prescribing and/or the follow-up of a particular patient's condition in personalized therapy. At all stages, imaging biomarkers may be used to select patients, or to stratify patients by disease status, in order to better demonstrate therapeutic effect.
A. Quantitative medical imaging:
Enhanced imaging techniques have made medical imaging an essential component of patient care. Imaging is particularly valuable because it provides spatially and temporally localized anatomical and functional information using non-invasive or minimally invasive methods. However, techniques to handle the increased resolution are needed, both to take advantage of patterns or features in the data that are not readily assessed by the human eye and to manage the resulting large-scale data so that it can be integrated efficiently into clinical workflow. With newer high-resolution imaging techniques, radiologists would be flooded with data if unassisted. Integrating quantitative imaging into individual patient management will require a new class of decision-support informatics tools to enable the medical community to fully utilize the capabilities of these new tools within the reality of existing workflow and reimbursement constraints.
Furthermore, quantitative imaging methods are of increasing importance for (i) preclinical studies, (ii) clinical studies, (iii) clinical trials and (iv) clinical practice. Imaging during the early stages of clinical testing of novel therapeutic agents helps understand the underlying biological pathways and pharmacological effects. It can also reduce the cost and time required to develop new drugs and therapeutics. Imaging biomarkers can serve as important endpoints of clinical benefit at a later stage of development. In all stages, imaging biomarkers may be used to select patients or stratify patients based on disease status in order to better demonstrate therapeutic effects.
Improving patient selection by using quantitative imaging reduces the sample size required for a given trial (by increasing the fraction of evaluable patients and/or reducing the impact of unwanted variables) and helps identify the subpopulations that can benefit most from the proposed treatment. This will reduce the development time and cost of new drugs, but may also lead to a corresponding reduction in the size of the "target" population.
Disease is not simple: although it usually manifests locally, it is often systemic. Multi-factor assessment of objectively validated tissue properties, representing panels or "spectra" of continuous indicators that ideally serve as surrogates for future or difficult-to-measure but accepted endpoints, has proven to be an effective approach across medicine. Computer-aided measurement of lesion and/or organ biology and quantification of tissue composition, in a first- or second-reader paradigm, is made possible by the interdisciplinary convergence of next-generation computational methods for personalized diagnosis based on quantitative imaging assessment of phenotype, implemented in architectures that actively optimize interoperability with modern clinical IT systems; this gives clinicians the ability to improve triage into interventional, medical, and surveillance pathways as they manage patients across successive levels of disease severity. By working at a level of granularity and complexity that more closely approximates the disease itself, rather than insisting on the assumption that it can be simplified to a level that hides the underlying biology, more timely and accurate assessment will yield improved outcomes and more efficient use of healthcare resources, with gains that far exceed the cost of the tools.
B. Phenotyping:
Radiological imaging is typically interpreted subjectively and qualitatively for medical conditions. The term phenotype is used in the medical literature to mean the set of observable properties of an individual that result from its genotype interacting with the environment. Phenotype implies objectivity; that is, a phenotype can be referred to a ground truth rather than being a matter of subjective judgment. Radiology is well known for its ability to visualize features and is increasingly being validated against objective ground-truth standards (U.S. application Ser. No. 14/959,732; U.S. application Ser. No. 15/237,249; U.S. application Ser. No. 15/874,474; and U.S. application Ser. No. 16/102,042). Thus, radiological images can be used to determine phenotype, but adequate methods for doing so have often been lacking.
Advantageously, phenotyping has a ground-truth basis and can therefore be assessed independently and objectively. Furthermore, phenotyping has become an accepted formalism in the healthcare industry for managing patient treatment decisions; phenotypes are therefore highly clinically relevant. Finally, phenotyping is meaningful to patients and consumers, which supports self-advocacy and acts as an incentive for lifestyle change.
Early identification of phenotypes based on integrated panels of continuous indicators, rather than detection of individual features alone, would enable rapid intervention to prevent irreversible injury and death. Solutions are critically needed that can completely pre-empt an event, or at least improve the accuracy of diagnosis when signs and/or symptoms are already being experienced. An efficient workflow solution that automatically measures structure and quantifies tissue composition and/or hemodynamics could be used to characterize patients at higher risk, who would be treated differently from patients not at higher risk. Linking plaque morphology characteristics to embolic potential would be of great clinical significance.
Imaging phenotypes may be associated with gene expression patterns in association studies. This may have a clinical impact, as imaging is commonly used in clinical practice, providing unprecedented opportunities for improving decision support in personalized therapies at low cost. Correlating specific imaging phenotypes with large-scale genomic, proteomic, transcriptomic, and/or metabonomic analysis is likely to influence treatment strategies by yielding a more definitive and patient-specific prognosis and measurement of response to drugs or other therapies. However, the methods used to date to extract imaging phenotypes are mostly empirical in nature and are based primarily on human observations, albeit expert observations. These methods have embedded human variability and are obviously not scalable to support high throughput analysis.
At the same time, there is a convergence of unmet needs: achieving more personalized medicine without increasing cost, and indeed, through technological advances, delivering such capabilities in a way that reduces cost. Trends in preventive medicine, comparative effectiveness, and reimbursement methods bring unprecedented pressure to better control costs and to proactively avoid adverse events rather than merely react to them.
In addition to the problem of phenotype classification (assigning unordered categories), there is the problem of outcome prediction/risk stratification (predicting an ordered level of risk). Both problems have clinical utility, but they lead to different technical device characteristics; in particular, one does not strictly depend on the other.
Without limiting generality, clinically relevant examples of phenotype classification are provided below.
An example manifestation of the "stable plaque" phenotype of atherosclerosis may be described as follows:
"healing" disease, low response to enhanced anti-statin regimens
Balloon/stent complications are less
Sometimes the stenosis is higher >50%
Sometimes Ca is higher and deeper
Lipid, minimal or no bleeding and/or ulceration bleeding and/or ulceration
Smooth appearance
Such plaques generally have a lower rate of adverse events than the "unstable plaque" phenotype.
Active disease, possibly high response to enhanced lipid lowering and/or anti-inflammatory regimens
Balloon/stent complications are more
Sometimes stenosis is lower <50%
Low or diffuse Ca, ca approaching lumen, napkin ring sign and/or microcalcification
Sometimes there is evidence of higher lipid content, thin cap
Sometimes with signs of bleeding or intra-plaque hemorrhage (IPH) and/or ulceration
Adverse event rates for such plaques have been reported to be 3-4 times higher than for the stable phenotype. These two examples may be evaluated at the time of single patient encounter, but other phenotypes such as "rapid progression variable (rapid progressor)" may be determined by comparing the rate of change of characteristics over time, i.e., phenotypes that exist statically at a certain point in time and are informed based on dynamics and/or how things change n times.
C. Machine learning and deep learning techniques:
Deep Learning (DL) methods have been applied very successfully to many difficult Machine Learning (ML) and classification tasks that stem from complex reality problems. Recent applications of note include computer vision (e.g., optical character recognition, facial recognition, satellite image interpretation, etc.), speech recognition, natural language processing, medical image analysis (image segmentation, feature extraction and classification), discovery and validation of clinical and molecular data biomarkers. An attractive feature of the method is that it can be applied to both unsupervised and supervised learning tasks.
Neural network (NN) and deep NN methods, which broadly include convolutional neural networks (CNNs), recurrent convolutional neural networks (RCNNs), and the like, have been shown to rest on a sound theoretical basis and are broadly modeled on principles thought to underlie higher cognitive functions of the human brain. For example, in the neocortex, the brain region associated with many cognitive abilities, sensory signals propagate through a hierarchy of modular local circuits that learn representations of observations made over time; this design principle has guided the general definition and construction of CNNs for tasks such as image classification and feature extraction. However, the more fundamental reason why DL networks and methods outperform frameworks with the same number of fitting parameters but without a deep hierarchical architecture remains somewhat of an open question.
Conventional ML methods for image feature extraction and image-based learning from raw data have many limitations. Notably, spatial context is often lost in ML methods that rely on feature extraction when the features are at a summary level rather than at the 3D level of the original image, or when they are mathematical operators not tied to a biological reality that can be objectively verified. Raw data sets that do contain spatial context often lack objective ground-truth labels for the extracted features, and the raw image data itself contains many variations that are "noise" with respect to the classification problem at hand. In applications outside medical imaging, this problem is typically alleviated by very large training sets, so that network training "learns" to ignore this variation and focus only on salient information; however, training sets of this scale are often not available in medical applications, particularly because of the cost, logistical, and privacy issues associated with ground-truth annotated data sets. The systems and methods of the present disclosure help overcome these limitations.
D. Example application: cardiovascular measures such as fractional flow reserve (FFR) and plaque stability/high-risk plaque (HRP):
Over the last 30 years, new treatments have revolutionized outcomes, yet cardiovascular disease still imposes a burden of $320 billion per year on the U.S. economy. A large patient population could benefit from better characterization of the risk of serious adverse coronary or cerebral events. The American Heart Association (AHA), applying the ASCVD (atherosclerotic cardiovascular disease) risk score across the population, projected that 9.4% of all adults (age > 20) have a greater than 20% risk of an adverse event in the next 10 years, and 26% have a risk between 7.5% and 20%. Applied to the population, this yields 23 million high-risk and 57 million moderate-risk patients. The 80 million at risk can be compared with the 30 million U.S. patients currently treated with statins in an attempt to avoid a new or recurrent event, 16.5 million of whom have been diagnosed with CVD. Among statin-treated patients, some will develop occlusive disease and acute coronary syndrome (ACS). Most patients are unaware of their disease progression before the onset of chest pain. Further outcome and cost improvements in coronary artery disease will come from improved noninvasive diagnosis to identify which patients have disease that progresses despite first-line therapy.
Heart health may be significantly affected by degeneration of the arteries around the heart. A variety of factors (e.g., tissue characteristics such as angiogenesis, neovascularization, inflammation, calcification, lipid deposition, necrosis, hemorrhage, stiffness, density, stenosis, dilation, remodeling ratio, ulceration, blood flow (e.g., blood flow in a channel), pressure (e.g., blood pressure in a channel or the pressure of one tissue against another), cell type (e.g., macrophages), cell alignment (e.g., of smooth muscle cells), shear stress (e.g., of blood in a channel), cap thickness, and/or tortuosity (e.g., entrance and exit angles)) may cause these arteries to become less effective at delivering oxygenated blood to the surrounding tissue (FIG. 35).
Functional testing of the coronary arteries, chiefly stress echocardiography and single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI), comprises the main non-invasive methods currently used to diagnose obstructive coronary artery disease. In the United States, more than ten million functional tests are performed annually, with positive results driving 2.6 million visits to the catheterization laboratory for invasive angiography to confirm findings of coronary artery disease.
Another method of assessing perfusion is to determine the ability of the vasculature to deliver oxygen. Specifically, reduced capacity may be quantified as fractional flow reserve, or FFR. FFR is not a direct measure of ischemia but a surrogate that measures the pressure drop across a lesion. Changes in vessel diameter at maximal hyperemia, caused by lesions that locally impair vasodilation relative to other segments of the same vessel, can produce significant hemodynamic effects, resulting in abnormal FFR measurements. During physical FFR measurement, infusion of adenosine reduces downstream resistance to allow increased blood flow in the hyperemic state. Physically measuring FFR requires an invasive procedure involving a physical pressure sensor within the artery. Because this level of invasiveness carries risk and inconvenience, there is a need for a method of estimating FFR with high accuracy without a physical measurement. The ability to perform this measurement noninvasively would also reduce an evident "treatment bias": stent placement is relatively easy once the patient is in the catheterization laboratory, and many have noted that over-treatment occurs; if flow reserve could be evaluated noninvasively, the decision whether or not to place a stent might be improved. Likewise, flow reserve also applies to the perfusion of brain tissue (e.g., in association with cerebral hyperemia).
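For orientation, and as a standard definition from the interventional cardiology literature rather than anything specific to this disclosure: FFR = Pd / Pa, the ratio of mean coronary pressure distal to the lesion (Pd) to mean aortic pressure (Pa), both measured at maximal hyperemia; values at or below roughly 0.80 are commonly treated as hemodynamically significant.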
Known problems of functional testing are its sensitivity and specificity. It is estimated that some 30-50% of patients with cardiovascular disease are misclassified and over- or under-treated, at significant monetary and quality-of-life cost. Functional testing is also expensive, time consuming, and of limited use for patients with pre-obstructive disease. False positives from non-invasive functional tests are an important driver of over-use of both invasive coronary angiography and percutaneous coronary intervention in stable patients. Studies of false negatives estimate that, of the 3.8 million MPI tests given annually to U.S. patients suspected of having coronary artery disease (CAD), nearly 600,000 will report false-negative results, leading to 13,700 acute coronary events, many of which would be preventable by the introduction of appropriate drug therapy. Another drawback of functional testing is temporal: ischemia is a lagging indicator that follows the anatomical changes brought about by disease progression. Patients at high risk of ACS may be better served if future culprit lesions can be detected and treated with intensive medical therapy before the onset of ischemia.
Coronary computed tomography angiography (CCTA), especially when used in tandem with quantitative analysis software, is evolving into an ideal test modality to fill the gap in understanding the extent and rate of progression of coronary artery disease. Over the past 10 years, the installed base of CT scanners in most countries has been upgraded to higher-speed, higher-detector-count machines that provide excellent spatial resolution without slowing the heart or requiring long breath-holds. Radiation dose has been greatly reduced, to levels at or below those of SPECT MPI and invasive angiography.
Recent analyses of data from landmark trials such as SCOT-HEART, PREDICT, and PROCISE, among others, have shown the value of using CCTA to detect non-obstructive disease, sometimes referred to as high-risk plaque (HRP) or vulnerable plaque, by identifying patients at increased risk of future adverse events. Study designs differ and include nested case-control cohorts comparing CCTA-enrolled patients who experienced cardiovascular (CV) events with controls having similar risk factors/demographics, comparisons with FFR, and multi-year follow-up in large "test and treat" studies. A recent favorable determination by NICE positioning CCTA as a front-line diagnostic was based on the significant reduction of CV events in the CCTA arm of the SCOT-HEART study, attributed to changes in the initiation of drug therapy following plaque findings on CCTA.
Based on PROMISE findings indicating where plaque assessment is most needed, important target patient groups are those with stable chest pain, no prior history of CAD, and typical or atypical angina (per SCOT-HEART data), as well as those with non-obstructive disease (<70% stenosis) and younger patients (e.g., the 50-65 year-old group). CCTA-based analysis of patients with non-obstructive CAD has found that patients with high-risk plaque patterns could be assigned the most appropriate high-intensity statin therapies (especially when considering decisions regarding very expensive new lipid-lowering therapies such as PCSK9 inhibitors or anti-inflammatory drugs such as canakinumab), the addition of new anti-platelet agents to reduce the risk of coronary thrombosis, and/or possibly intensified or de-escalated longitudinal follow-up. CCTA is an ideal diagnostic tool because it is non-invasive and requires less radiation than cardiac catheterization.
The pathology literature on culprit lesions involved in fatal heart attacks indicates that clinically non-obstructive CAD is more likely to harbor most high-risk plaques than more occlusive plaques, which tend to be more stable. Recent studies confirm these findings by identifying culprit lesions in ACS patients undergoing invasive angiography and comparing them with precursor plaques on baseline CCTA. In one cohort receiving clinically indicated CCTA, patients found to have non-obstructive CAD, 38% of those tested, were still at significant mid- to long-term risk of major adverse cardiovascular and/or cerebrovascular events (MACCE). The hazard ratio based on the number of diseased segments, independent of occlusion, was found to be a significant long-term predictor of MACCE in this group. One factor contributing to the predictive value of clinically non-obstructive CAD is that such lesions are more likely to harbor high-risk plaque than are the more occlusive lesions, which tend to be more stable.
Several recently published longitudinal studies of the effects of statin and anti-inflammatory drug therapy provide further evidence of the potential utility of CCTA in the detection and management of obstructive and pre-obstructive atherosclerotic lesions, with plaque regression and remodeling toward a more stable presentation observed in the treatment arms. This corroborates the body of earlier intravascular ultrasound (IVUS, sometimes with "virtual histology," VH), near-infrared spectroscopy (NIRS), and optical coherence tomography (OCT) studies exploring disease progression and therapeutic effect under various lipid-lowering regimens. Recent drug trials provide potential plaque biomarkers for demonstrating the efficacy of new medical therapies. The Integrated Biomarker and Imaging Study-4 (IBIS-4) found that calcification progresses with the potentially protective effect of statins. Other studies have found that the lipid-rich necrotic core (LRNC) decreases under statin treatment. In these studies, clinical variables used alone poorly differentiated the characteristics that identify high-risk plaque. These studies emphasize the importance of complete characterization and assessment of the entire coronary tree, rather than just culprit lesions, to allow more accurate risk stratification, something that properly analyzed CCTA makes possible. In meta-analyses, CCTA has good diagnostic accuracy for detecting coronary plaque compared with IVUS, with small differences in assessed plaque area and volume and percent area stenosis, and slight overestimation of lumen area. The rate of change of the lipid-rich necrotic core and its distance from the lumen have also been found to have high prognostic value. In addition, results of the ROMICAT II trial show that, for stable CAD patients with acute chest pain but initially negative ECG and troponin, identifying high-risk plaque on CCTA increases the likelihood of ACS independent of significant CAD and clinical risk assessment. CCTA examination is established for evaluating coronary atherosclerotic plaque. For patients in whom the need for an invasive procedure is uncertain, it would be beneficial and feasible to predict MACCE non-invasively using CCTA, which gives an overall estimate of disease burden and the risk of future events.
The prevalence of carotid artery disease is closely related to that of CAD. Carotid atherosclerosis has been shown to be an independent predictor of MACCE, even in patients without pre-existing CAD. Such findings indicate a shared underlying pathogenesis for the two conditions, which is further supported by the Multi-Ethnic Study of Atherosclerosis (MESA). Atherosclerosis progresses through the evolution of arterial wall lesions, resulting in accumulation of cholesterol-rich lipids and an inflammatory response. These processes are similar (if not identical) in the coronary, carotid, aortic, and peripheral arteries. Certain plaque properties, such as a large, lipid-rich atherosclerotic core, a thin cap, outward remodeling, infiltration of the plaque by macrophages and lymphocytes, and thinning of the media, predispose a plaque to vulnerability and rupture.
E. Non-invasive determination of HRP and/or FFR:
Non-invasive assessment of the functional significance of stenosis using CCTA is of clinical and economic interest. The combination of the geometry or anatomy of the lesion or vessel with the characteristics or composition of the tissue, including the wall and/or plaque within the wall (collectively, plaque morphology), may explain outcomes for lesions with higher- or lower-risk plaque (HRP) and, as an orthogonal consideration, normal or abnormal flow reserve (FFR). Lesions with a large necrotic core may develop dynamic stenosis when outward remodeling during plaque formation has stretched the tissue, stiffened it, or extended the smooth muscle layers to the limit of the Glagov phenomenon, after which the lesion progressively encroaches on the lumen itself. Likewise, inflammatory injury and/or oxidative stress may lead to local endothelial dysfunction, manifested as impaired vasodilatory capacity.
If the tissue that makes up the plaque is predominantly matrix or a "fatty streak" that has not organized into a necrotic core, the vessel will expand sufficiently to keep up with demand. However, if the plaque has a larger necrotic core, it will not expand, and the blood supply will not keep up with demand. Plaque morphology can improve accuracy by evaluating complex factors such as LRNC, calcification, hemorrhage, and ulceration against an objective ground truth that can verify the underlying information, in a way that is otherwise impossible when intermediate measurement targets cannot be validated.
But that is not all a plaque can do. Often the plaque actually ruptures, suddenly causing a clot, which may then cause infarction of heart or brain tissue. Plaque morphology can also identify and quantify these high-risk plaques. For example, plaque morphology can be used to determine how close the necrotic core is to the lumen, which is a key determinant of infarction risk. Knowing whether a lesion restricts blood flow under stress does not indicate the risk of rupture, and vice versa. Other methods, such as computational fluid dynamics (CFD), can simulate flow restriction but not infarction risk in the absence of objectively validated plaque morphology. The fundamental advantage of plaque morphology is that its accuracy rests on the determination of vascular structure and tissue characteristics, while also allowing determination of phenotype.
Clinical guidelines increasingly address the optimal management of patients with differing assessments of flow reserve. It is well known that obstructive lesions with high-risk characteristics (a large necrotic core and thin cap) predict the greatest likelihood of future events and, importantly, vice versa.
Methods for determining FFR using CFD, without accurately assessing plaque morphology, have been published. CFD-based flow reserve takes into account only the lumen, or at most changes in the lumen surface at different parts of the cardiac cycle. At best, the lumen surface is processed at both systole and diastole to obtain motion vectors (which most available methods do not even do), yet even this does not capture what happens under stress, because these analyses rely on computer simulation of what might happen under stress rather than measurement of actual stress, and they are based only on the blood channel rather than on the actual properties that give rise to vasodilatory capacity. Some methods attempt to simulate forces and apply biomechanical models, but they use assumptions rather than validated measurements of the wall tissue. Thus, these methods cannot anticipate what might happen if stress actually caused rupture beyond the underlying assumptions. Characterizing the tissue, by contrast, can address these problems. Accounting for wall characteristics, including the effect of wall distensibility on the vessel's vasodilatory capacity, is advantageous because the composition of a lesion determines its flexibility and energy absorption; without it, stable lesions remain over-treated and assessment of MACCE risk is incomplete. Advantages of assessing FFR using morphology include the fact that morphology is a leading indicator whereas FFR lags, and that the presence and extent of HRP better informs treatment for borderline cases. Studies increasingly show that FFR can be predicted from morphology, but not the reverse, reinforcing the importance of accurate assessment with morphology resolved. That is, effectively evaluated morphology not only enables determination of FFR, but can also capture the discontinuous changes in plaque that shift a patient from ischemia to infarction, i.e., HRP.
Disclosure of Invention
Systems and methods are provided herein that utilize a hierarchical analysis framework to identify and quantify biological properties/analytes from imaging data, and then identify and characterize one or more medical conditions based on the quantified biological properties/analytes. In some embodiments, the systems and methods combine computer image analysis and data-fusion algorithms with patient clinical chemistry and blood biomarker data to provide a multi-factor panel that can be used to distinguish between different subtypes of disease. The systems and methods of the present disclosure may thus advantageously embed biological and clinical insight in advanced computational models. These models can then interface with sophisticated image processing through rich ontologies that reflect a growing understanding of pathogenesis, taking the form of rigorous definitions of what is measured, how it is measured and evaluated, and how it relates to clinically relevant, validatable subtypes and stages of disease.
Human diseases exhibit strong phenotypic differences that can be understood by applying complex classifiers to the extracted features that capture spatial, temporal and spectral results that can be measured by imaging but are unintelligible without assistance. Traditional computer-aided diagnosis only extrapolates from image features in a single step. In contrast, the systems and methods of the present disclosure employ a hierarchical inference scheme that includes intermediate steps of determining spatially resolved image features and temporally resolved dynamics on multiple levels of biological target components of morphology, composition, and structure, which are then used to derive clinical inferences. Advantageously, the hierarchical inference scheme ensures that the clinical inferences can be understood, validated, and interpreted at each level of the hierarchy.
Thus, in example embodiments, the systems and methods of the present disclosure utilize a hierarchical analysis framework that includes a first layer of algorithms that measure biological properties that can be objectively verified against ground-truth criteria independent of imaging, and a subsequent second set of algorithms for determining medical or clinical conditions based on the measured biological properties. Such a framework is applicable, individually or in combination ("and/or"), to a variety of different biological properties, such as angiogenesis, neovascularization, inflammation, calcification, lipid deposition, necrosis, hemorrhage, rigidity, density, stenosis, dilation, remodeling ratio, ulceration, blood flow (e.g., blood flow in a channel), pressure (e.g., blood pressure in a channel or the pressure of one tissue against another), cell type (e.g., macrophages), cell arrangement (e.g., of smooth muscle cells), shear stress (e.g., of blood in a channel), cap thickness, and/or tortuosity (e.g., entrance and exit angles). Variables of each of these properties, such as the quantity and/or degree of the property and/or characteristic, may be measured. Example conditions include perfusion/ischemia (e.g., limited) (e.g., of brain or heart tissue), perfusion/infarction (e.g., cut off) (e.g., of brain or heart tissue), oxygenation, metabolism, flow reserve (perfusion capacity), malignancy, invasiveness, and/or risk stratification (whether as probability of an event or as time to event (TTE)), such as major adverse cardiovascular or cerebrovascular events (MACCE). The ground-truth basis may include, for example, biopsy, expert tissue annotation of excised tissue (e.g., endarterectomy or autopsy specimens), expert phenotype annotation of excised tissue (e.g., endarterectomy or autopsy specimens), physical pressure wires, other imaging modalities, physiological monitoring (e.g., ECG, SaO2, etc.), genomic and/or proteomic and/or metabolomic and/or transcriptomic assays, and/or clinical outcomes. These properties and/or conditions may be measured at a given point in time and/or as they change (longitudinally) over time.
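The two-layer structure described above can be illustrated with a minimal sketch. The names below (BiologicalAnalytes, quantify_analytes, classify_condition) and the threshold values are illustrative assumptions rather than anything specified by this disclosure; the point is only that analyte quantification is a separate, independently verifiable step whose outputs feed the condition classifier.

```python
from dataclasses import dataclass
from typing import Dict

import numpy as np


@dataclass
class BiologicalAnalytes:
    """Stage 1 output: properties that can be verified against ground truth
    (e.g., histology of excised tissue) independently of the imaging itself."""
    lrnc_volume_mm3: float          # lipid-rich necrotic core
    calcification_volume_mm3: float
    min_cap_thickness_mm: float


def quantify_analytes(tissue_masks: Dict[str, np.ndarray],
                      cap_thickness_map_mm: np.ndarray,
                      voxel_volume_mm3: float) -> BiologicalAnalytes:
    """Stage 1: turn segmented tissue masks into quantified analytes."""
    return BiologicalAnalytes(
        lrnc_volume_mm3=float(tissue_masks["lrnc"].sum()) * voxel_volume_mm3,
        calcification_volume_mm3=float(tissue_masks["calc"].sum()) * voxel_volume_mm3,
        min_cap_thickness_mm=float(cap_thickness_map_mm.min()),
    )


def classify_condition(a: BiologicalAnalytes) -> str:
    """Stage 2: infer a clinical condition from the quantified analytes.
    A trained classifier would normally sit here; a threshold rule (with
    purely illustrative cutoffs) keeps the sketch self-contained."""
    unstable = a.lrnc_volume_mm3 > 50.0 and a.min_cap_thickness_mm < 0.5
    return "unstable plaque phenotype" if unstable else "stable plaque phenotype"
```

Because the first stage emits named, physically meaningful quantities, each stage can be validated separately against its own ground truth, which is the property the hierarchical framework relies on.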
In an exemplary embodiment, the systems and methods of the present application advantageously relate to computer-aided phenotyping (CAP) of a disease. CAP is a new and exciting complement to the field of Computer Aided Diagnosis (CAD). As disclosed herein, CAP can apply hierarchical inference incorporating computer image analysis and data fusion algorithms to patient clinical chemistry and blood biomarker data to provide a measured multi-factor panel or "spectrum" that can be used to distinguish between different subtypes of a disease to be treated differently. Thus, CAP implements new methods for robust feature extraction, data integration, and scalable computation strategies to implement clinical decision support. For example, the spatiotemporal texture (SpTeT) method captures relevant statistical features for spatially and dynamically characterizing tissue. The spatial signature maps to a characteristic pattern of lipids mixed with, for example, extracellular matrix fibers, necrotic tissue, and/or inflammatory cells. The kinetic characteristics map to, for example, endothelial permeability, neovascularization, necrosis and/or collagen breakdown.
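The SpTeT method itself is not detailed here; purely as an illustration of the kind of spatial statistical texture feature referred to, the sketch below computes a simple gray-level co-occurrence matrix and its contrast with NumPy (the function name and the quantization into 16 levels are arbitrary choices, not part of SpTeT).

```python
import numpy as np


def cooccurrence_contrast(patch: np.ndarray, levels: int = 16) -> float:
    """Illustrative spatial texture statistic: quantize a 2D patch, build a
    horizontal-neighbor co-occurrence matrix, and return its contrast.
    Assumes the patch is not constant."""
    edges = np.linspace(patch.min(), patch.max(), levels + 1)[1:-1]
    q = np.digitize(patch, edges)                      # values in [0, levels)
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()], axis=1)
    glcm = np.zeros((levels, levels), dtype=float)
    np.add.at(glcm, (pairs[:, 0], pairs[:, 1]), 1.0)   # count neighbor pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())
```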
In contrast to current CAD methods, in which clinical inference is performed in a single step of machine classification from image features, the systems and methods of the present application can advantageously use a hierarchical inference scheme that proceeds from initial spatially resolved image features and intermediate temporally resolved dynamics, through multiple levels of biological target components of morphology, composition, and structure, to final clinical inference. The result is a system that can be understood, validated, and interpreted at each level of the hierarchy, from the low-level image features at the bottom to the biological and clinical features at the top.
The systems and methods of the present disclosure improve both phenotype classification and outcome prediction. Phenotype classification can occur at two levels: an individual anatomical location on the one hand, and a more generally described body site on the other. The input data for the former may be a 2D data set and for the latter a 3D data set. For phenotype classification, the ground truth may exist at either level; for outcome prediction/risk stratification, the ground truth generally exists at the patient level, although in some cases it may be more specific (e.g., on which side stroke symptoms presented). The implication is that the same input data can be used for both purposes, but the models will differ significantly according to the level of the input data and the basis of the ground-truth annotations.
While it is possible to perform modeling with a vector of readings as input data, performance is typically limited by which variables are measured. The present application advantageously uses unique measured variables (e.g., cap thickness, calcium depth, and ulceration) to improve performance. A readings-vector approach may therefore be applied in which the vector contains these measurands (e.g., in combination with conventional measurands). However, the systems and methods of the present disclosure may advantageously use a deep learning (DL) approach, which can exploit an even richer data set. The systems and methods of the present application may also advantageously use unsupervised learning, providing better scalability across data domains (a highly desirable feature given the speed at which new biological data are generated).
In the example embodiments presented herein, convolutional Neural Networks (CNNs) may be used to build classifiers in a method that may be characterized as a transfer learning with fine-tuning method. CNNs that train large summary imaging data on powerful computing platforms can be successfully used to classify images that have not yet been annotated in network training. This is intuitively understandable, as many common feature classifications will help identify images of very different objects (i.e., shapes, boundaries, orientations in space, etc.). It is then conceivable that a CNN trained to identify thousands of different objects using pre-annotated data sets of tens of millions of images would perform the basic image recognition task better than by chance, and would have comparable performance to a CNN trained from the beginning after a relatively small slight adjustment of the last classification layer, sometimes referred to as the' softmax layer. Because these models are very large and have been trained on a large number of pre-annotated images, they tend to "learn" very unique, distinctive imaging features. Thus, a convolutional layer may be used as a feature extractor, or a trained convolutional layer may be slightly adjusted to accommodate the problem at hand. The first method is called transfer learning, and the latter is called fine tuning.
CNNs are adept at many different computer vision tasks. However, they have some drawbacks. Two drawbacks of particular importance for medical systems are 1) the need for extensive training and validation data sets, and 2) the fact that intermediate CNN computations do not represent any measurable property (the oft-criticized "black box" whose workings cannot be explained). The methods disclosed herein may advantageously use a pipeline composed of one or more stages whose outputs are individually, biologically, and independently verifiable, followed by a convolutional neural network that starts from those outputs rather than from the raw image alone. Furthermore, transformations may be applied to reduce variation unrelated to the problem at hand, such as unrolling the doughnut-shaped vessel cross-section into a rectangular representation with a normalized coordinate system before feeding the network. These front-end pipeline stages mitigate both drawbacks of using CNNs for medical imaging.
Typically, the early convolutional layers act as feature extractors of increasing specificity, and the last one or two fully connected layers act as the classifier (e.g., the "softmax" layer). Schematic representations of the layer sequence and its functions in a typical CNN are available from a number of sources.
Advantageously, the systems and methods of the present disclosure use an enriched data set to enable non-invasive phenotyping of tissue from radiological data sets. One type of enrichment is to pre-process the data to perform tissue classification and provide a "pseudo-color" overlay, yielding a data set that can be objectively verified (which is not possible using the raw image alone). Another type of enrichment is to apply transformations to the coordinate system that emphasize a biologically plausible spatial context while removing noise variation, so as to improve classification accuracy, allow a smaller training set, or both.
In an example embodiment, the systems and methods of the present application may employ a multi-stage pipeline: (i) semantic segmentation to identify and classify regions of interest (e.g., which may represent quantified biological analytes), (ii) spatial unrolling to convert cross-sections of tubular structures (e.g., vein/artery cross-sections) into rectangles, and (iii) application of a trained CNN to read the annotated rectangles and identify the phenotype to which they belong (e.g., stable vs. unstable plaque and/or normal vs. abnormal peak FFR) and/or predict time to event (TTE). It should be noted that, by training and testing the CNN on both the doughnut-shaped (non-unrolled) data set and the unrolled data set, unrolling may be shown to improve validation accuracy for a particular embodiment. Thus, embodiments imaging tubular structures (e.g., plaque phenotyping) or other structures (e.g., lung cancer mass subtyping), or other applications, may similarly benefit from similar steps (e.g., semantic segmentation followed by a spatial transformation such as unrolling before applying the CNN). However, it is contemplated that in some alternative embodiments the phenotype may be determined using an untransformed data set (e.g., one that is not spatially unrolled), for example in conjunction with or independently of a transformed data set.
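A minimal sketch of step (ii), resampling a doughnut-shaped cross-section onto a (radius, angle) grid around the lumen centroid so that the annular wall becomes a rectangle (SciPy's map_coordinates is one possible resampler; the grid resolution here is arbitrary):

```python
import numpy as np
from scipy.ndimage import map_coordinates


def unroll_cross_section(img: np.ndarray, centroid,
                         n_radii: int = 64, n_angles: int = 256,
                         max_radius=None) -> np.ndarray:
    """Resample a 2D cross-section onto a (radius x angle) grid so that the
    annular wall becomes a rectangle with a consistent coordinate system."""
    cy, cx = centroid
    if max_radius is None:
        max_radius = min(img.shape) / 2.0
    radii = np.linspace(0.0, max_radius, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    r, t = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + r * np.sin(t)
    cols = cx + r * np.cos(t)
    # order=1 -> bilinear interpolation; samples outside the image become 0.
    return map_coordinates(img, [rows, cols], order=1, cval=0.0)
```

Applying the same transform to the label overlay keeps the image and its ground-truth annotation aligned, and a rotation of the original cross-section becomes a circular shift along the angle axis of the rectangle.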
In an example embodiment, semantic segmentation and spatial transformation may involve pre-processing the image volume, including target initialization, normalization, and any other desired pre-processing such as deblurring or restoration, to form a region of interest containing the physiological target to be phenotyped. Notably, the region of interest may be a volume composed of cross-sections through a body part, determined automatically or provided explicitly by the user. A target body part that is tubular in nature may be accompanied by a centerline. The centerline may have branches, when present; branches may be labeled automatically or by the user. Note that a generalization of the centerline concept can represent anatomy that is not tubular but that benefits from some structural directionality, such as the region of a tumor. In any case, a centroid may be determined for each cross-section in the volume. For tubular structures, the centroid may be the center of the channel, e.g., the lumen of a blood vessel; for lesions, the centroid may be the center of mass of the tumor. The (optionally deblurred or restored) image may be represented in a Cartesian data set in which x represents distance from the centroid, y represents the rotation angle θ, and z represents the cross-section. Each branch or region forms one such Cartesian set. When multiple sets are used, "null" values may be used for overlapping regions; that is, each physical voxel is represented only once across the sets, which fit together geometrically. Each data set may be paired with another data set whose sub-regions are labeled by objectively verifiable tissue composition. Example labels for vascular tissue may be lumen, calcification, LRNC, IPH, etc.; example labels for lesions may be necrotic, neovascularized, etc. These labels can be objectively verified, for example, by histology. The paired data sets may be used as input to a training step for building a convolutional neural network. In an example embodiment, two levels of analysis may be supported, one at the level of individual cross-sections and a second at the volume level. The output label indicates phenotype or risk stratification.
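One way the paired Cartesian sets described above might be assembled, reusing an unrolling helper such as the one sketched earlier (the axis order radius/angle/cross-section follows the text; the function and argument names are hypothetical):

```python
import numpy as np


def build_paired_volume(cross_sections, label_sections, centroids, unroll):
    """Stack unrolled cross-sections into paired (radius, angle, slice) volumes.

    cross_sections / label_sections: sequences of 2D arrays along one branch.
    centroids: per-slice (row, col) lumen centroids.
    unroll: a function such as unroll_cross_section(img, centroid).
    """
    imgs, labels = [], []
    for img, lab, c in zip(cross_sections, label_sections, centroids):
        imgs.append(unroll(img, c))
        # Nearest-neighbor resampling is assumed inside `unroll` for label
        # maps so that class indices are not blended.
        labels.append(unroll(lab, c))
    # Axes: 0 = radius (x), 1 = angle (theta, y), 2 = cross-section (z).
    image_volume = np.stack(imgs, axis=-1)
    label_volume = np.stack(labels, axis=-1)
    return image_volume, label_volume
```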
Exemplary image preprocessing may involve deblurring or restoration using, for example, a patient-specific point-spread determination algorithm to mitigate artifacts or image limitations caused by the image formation process. These artifacts and image limitations may otherwise reduce the ability to determine characteristics predictive of the phenotype. For example, deblurring or restoration may be achieved by iteratively fitting a physical model of the scanner point spread function, e.g., using regularizing assumptions about the true latent densities of the different regions of the image.
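As a generic stand-in for such iterative restoration (not the patient-specific algorithm described above), the following sketch applies Richardson-Lucy deconvolution from scikit-image with an assumed Gaussian point spread function; the PSF parameters are placeholders.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=1.5):
    """Simple isotropic Gaussian stand-in for a scanner point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def restore_slice(blurred_slice, psf=None, iterations=30):
    """Iterative deconvolution of one image slice (intensities scaled to [0, 1])."""
    if psf is None:
        psf = gaussian_psf()
    return richardson_lucy(blurred_slice, psf, iterations)
```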
In an example embodiment, the CNN may be AlexNet, Inception, CaffeNet, or another network. In some embodiments, the CNN may be reconfigured, for example, keeping the same number and type of layers but changing the input and output dimensions (e.g., changing the aspect ratio). Implementations of various example CNNs are available as open source, e.g., in TensorFlow and/or other frameworks, under open-source and/or other license configurations.
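For instance, a stock network can be repurposed for a two-class phenotype output and fed non-square "unfolded" rectangles. The sketch below uses PyTorch/torchvision (assuming torchvision ≥ 0.13 for the weights argument) purely as an illustration; the specific network and dimensions used in any given embodiment may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# Standard AlexNet with its final fully connected layer replaced for a
# two-class output (e.g., stable vs. unstable plaque). Untrained weights.
model = models.alexnet(weights=None)
model.classifier[6] = nn.Linear(4096, 2)

# A non-square "unfolded" rectangle (batch, channels, radius, angle) passes
# through because torchvision's AlexNet applies adaptive average pooling
# before the fully connected layers.
dummy = torch.randn(1, 3, 64, 360)
logits = model(dummy)
print(logits.shape)  # torch.Size([1, 2])
```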
In an example embodiment, the data set may be augmented. For example, in some embodiments, 2D or 3D rotations may be applied to the data set. Thus, in the case of an untransformed (e.g., doughnut-like) data set, the augmentation may involve, for example, a combination of random longitudinal flips and random rotations (e.g., rotation by a random angle between 0 and 360 degrees). Similarly, in the case of a transformed (e.g., unfolded) data set, the augmentation may involve, for example, a combination of random longitudinal flips and random "scrolls" of the image, e.g., by a random number of pixels ranging from 0 to the width of the image (where scrolling is analogous to rotation about θ).
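A minimal sketch of both augmentation variants, assuming NumPy/SciPy arrays (one 2D cross section or one unfolded rectangle at a time); the probabilities and flip axis are illustrative choices.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng()

def augment_doughnut(cross_section):
    """Untransformed (doughnut-like) data: random flip plus rotation by a
    random angle in [0, 360) degrees."""
    out = np.flipud(cross_section) if rng.random() < 0.5 else cross_section
    angle = rng.uniform(0.0, 360.0)
    return rotate(out, angle, reshape=False, order=0)  # order=0 preserves labels

def augment_unfolded(rect):
    """Unfolded (radius x angle) data: random flip plus a random 'scroll' of
    0 .. width-1 pixels, the analogue of rotation about theta."""
    out = np.flipud(rect) if rng.random() < 0.5 else rect
    shift = int(rng.integers(0, rect.shape[1]))
    return np.roll(out, shift, axis=1)  # wrap-around roll along the angle axis
```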
In an example embodiment, the data set may be enriched by using different colors to represent different tissue analyte types. These colors may be selected to contrast visually with respect to each other and with respect to non-analyte regions (e.g., normal wall). In some embodiments, the non-analyte regions may be depicted in gray. In an example embodiment, the data set enrichment may incorporate ground truth annotations of tissue characteristics (e.g., tissue characteristics indicative of plaque phenotype), and may provide spatial context for how such tissue characteristics appear in cross section (e.g., taken orthogonal to the axis of the blood vessel). Such spatial context may include a coordinate system (e.g., based on polar coordinates relative to the centroid of the cross section) that provides a common basis for analysis of the data set relative to histological cross sections. Thus, the enriched dataset may advantageously be superimposed on top of color-coded pathologist annotations (and vice versa). Advantageously, the histology-annotated data set may then be used for training (e.g., training of the CNN) in conjunction with or independent of image feature analysis of the radiological data set. Notably, the histology-annotated data set can increase the efficiency of the DL method because it uses relatively simple pseudo-color images rather than high-resolution complete images, without losing spatial context. In an example embodiment, the coordinate directions may be represented internally using unit phasors and phasor angles. In some embodiments, the coordinate system may be normalized, for example by normalizing the radial coordinate with respect to the wall thickness (e.g., to provide a common basis for comparing tubular structures/cross sections of different diameters/thicknesses). For example, the normalized radial distance has a value of 0 at the inner boundary (the lumen/inner wall boundary) and a value of 1 at the outer boundary (the outer wall boundary). Notably, this may be applied to blood vessels or to tubular structures associated with other pathophysiology (e.g., the gastrointestinal tract).
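The normalized radial coordinate described above reduces to a simple rescaling between the lumen boundary and the outer wall boundary, which may vary with angle. A minimal sketch (the function name and the numerical example are illustrative):

```python
import numpy as np

def normalized_radial_coordinate(r, r_lumen, r_outer):
    """Map a radial distance r (from the cross-section centroid) to the
    normalized wall coordinate: 0 at the lumen (inner wall) boundary and
    1 at the outer wall boundary. r_lumen and r_outer may vary with theta,
    so all three arguments can be arrays of matching shape."""
    r = np.asarray(r, dtype=float)
    thickness = np.maximum(np.asarray(r_outer, float) - np.asarray(r_lumen, float), 1e-9)
    return (r - np.asarray(r_lumen, float)) / thickness

# Example: a point halfway through a wall spanning radii 2.0 mm to 3.0 mm
print(normalized_radial_coordinate(2.5, 2.0, 3.0))  # 0.5
```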
Advantageously, the enriched dataset of the present application provides a non-invasive, image-based classification (e.g., wherein a tissue classification scheme may be used to non-invasively determine phenotype) that is grounded in known ground truth. In some embodiments, the known ground truth may be non-radiological (e.g., histological or another ex vivo tissue analysis). Thus, for example, a radiological dataset annotated with ex vivo ground truth data (e.g., histological information) may advantageously be used as input data for the classifier. In some embodiments, multiple different known ground truths may be used, in conjunction with each other or independently, in annotating the enriched dataset.
As described herein, in some embodiments, the enriched dataset may utilize a normalized coordinate system to avoid uncorrelated variation associated with, for example, wall thickness and radial representation. Furthermore, as described herein, in example embodiments, a "doughnut-shaped" dataset may be "unfolded" prior to training the classifier (e.g., the CNN) and/or prior to running the trained classifier on the dataset. Notably, in such embodiments, analyte annotation of the training data set may occur prior to the transformation, after the transformation (e.g., after unfolding), or as a combination of both. For example, in some embodiments, the untransformed dataset may be annotated (e.g., using ex vivo classification data such as histological information) and then transformed for classifier training. In such embodiments, the finer granularity of the ex vivo-based classification may be collapsed to match the lower granularity expected for in vivo radiological analysis, reducing computational complexity while addressing what might otherwise be criticized as a "black box."
In some embodiments, the colors and/or axes used to visualize the annotated radiological dataset may be selected to correspond to the colors/axes typically presented in the ex vivo ground truth classification (e.g., the same colors/axes used in histology). In an example embodiment, a transformed, enriched data set may be presented (e.g., normalized for wall thickness), where each analyte is visually represented in a contrasting color relative to all non-analyte regions and relative to background regions (e.g., black or gray). Notably, depending on the embodiment, the common background may or may not be annotated, and thus non-analyte regions inside and outside the vessel wall, or background features (e.g., lumen surface irregularities, varying wall thickness, etc.), may or may not be visually distinguished. Thus, in some embodiments, the annotated analyte regions may be visually depicted (e.g., color coded and normalized for wall thickness) against a uniform (e.g., completely black, completely gray, completely white, etc.) background. In other embodiments, the annotated analyte regions (e.g., color coded but not normalized for wall thickness) may be visually depicted relative to an annotated background (e.g., where different shades of gray, black, and/or white may be used to distinguish between (i) the central lumen region inside the inner lumen of a tubular structure, (ii) non-analyte regions inside the wall, and/or (iii) regions outside the outer wall). This may enable analysis of wall thickness variations (e.g., due to ulceration or thrombus). In further example embodiments, the annotated data set may contain identification (and visualization) of regions such as intra-plaque hemorrhage and/or other morphological aspects. For example, the region of intra-plaque hemorrhage may be viewed in red, LRNC in yellow, etc.
One embodiment of the systems and methods of the present application may be in guiding vascular therapy. Classifications may be established based on the likely dynamic behavior of plaque lesions (based on their physical properties or specific mechanisms, e.g., based on inflammation or cholesterol metabolism) and/or based on the progression of the disease (e.g., early versus late in its natural history). Such classifications may be used to guide patient treatment. In an example embodiment, the Stary plaque typing system employed by the AHA may be used as the basis for the types determined in vivo and shown in a color overlay. For example, the Stary types ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII'] may be mapped to a class-map whose values are drawn from [subclinical, unstable, stable]. However, the systems and methods of the present disclosure are not tied to Stary. As an example, in a further embodiment, the Virmani system ['calcified nodule', 'CTO', 'FA', 'FCP', 'healed plaque rupture', 'PIT', 'IPH', 'rupture', 'TCFA', 'ULC'] is used with class-map = [stable, unstable], and other typing systems may achieve similarly high performance. In example embodiments, the systems and methods of the present disclosure may incorporate different typing systems, altered class maps, or other variations. For the FFR phenotype, a categorical value such as normal or abnormal may be used and/or a quantity may be used to facilitate comparison with, for example, a physically measured FFR.
Thus, in example embodiments, the systems and methods of the present disclosure may provide phenotypic classification of plaque based on the enriched radiological dataset. In particular, the one or more phenotypic classifications may include distinguishing stable from unstable plaque, for example, where the ground truth basis for the classification rests on (i) lumen narrowing (possibly augmented by additional measures such as tortuosity and/or ulceration), (ii) calcium content (possibly augmented by depth, shape, and/or other complex representations), (iii) lipid content (possibly augmented by cap thickness and/or other complex representations), (iv) anatomical structure or geometry, and/or (v) IPH or other content. Notably, this classification has been shown to have high overall accuracy, sensitivity, and specificity, as well as high clinical relevance (potentially altering existing standards of care for patients undergoing catheterization and cardiovascular care).
Another exemplary embodiment is lung cancer, wherein subtypes of a mass can be determined in order to guide the patient to the treatment most likely to be beneficial based on the apparent phenotype. In particular, preprocessing and dataset enrichment can be used to isolate solid and semi-solid ("ground-glass") subregions that differ both in malignancy and in the optimal treatment methods they indicate.
In further example embodiments, the systems and methods of the present disclosure may provide image preprocessing, image de-noising, and a novel geometric representation (e.g., affine transformation) of CT angiography (CTA) diagnostic images to facilitate and maximize the performance of CNN-based deep learning algorithms for developing classifiers of flow limitation and markers of the risk of adverse cardiovascular events. Thus, as disclosed herein, image deblurring or restoration may be used to identify lesions of interest and to quantitatively extract plaque composition. Furthermore, transforming the segmented cross-sectional images (taken along the principal axis of the vessel) into an "unfolded" rectangular reference frame, following, for example, the established lumen along the X-axis, can be applied to provide a normalized frame that allows the DL method to best learn representative features.
Although the example embodiments herein utilize 2D annotated cross sections for analysis and phenotyping, it should be noted that the application is not limited to such embodiments. In particular, some embodiments may utilize enriched 3D datasets, for example, instead of or in addition to processing 2D cross sections alone. Thus, in an example embodiment, video-interpretation techniques from computer vision may be applied to the classifier input dataset. Note that processing multiple cross sections sequentially, e.g., along the centerline as a "movie" sequence (moving up and down the centerline), and/or using other 3D representations suited to the anatomy, may generalize these methods for tubular structures.
In further example embodiments, the pseudo-color representation in the enriched dataset may have continuous values across pixel or voxel locations. This can be used for "radiomics" features (with or without explicit validation) or for validated tissue types computed independently for each voxel. This set of values may exist in any number of pre-processed stacks and may be fed into a phenotype classifier. Notably, in some embodiments, each pixel/voxel may carry values for any number of different features (e.g., may be represented in any number of different stacks for different analytes, which is sometimes referred to as "multiple occupancy"). Alternatively, each pixel/voxel may be assigned to only one analyte (e.g., to only a single analyte stack). Further, in some embodiments, the pixel/voxel values for a given analyte may be based on an all-or-nothing classification scheme (e.g., whether or not a pixel is calcium). Alternatively, the pixel/voxel values for a given analyte may be relative values, e.g., probability scores. In some embodiments, the relative values of a pixel/voxel are normalized across the set of analytes (e.g., such that the total probability adds up to 100%).
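A minimal sketch of the last variant, assuming per-voxel analyte values are held in a NumPy stack of shape (n_analytes, ...) and should be read as a probability mass function at each voxel:

```python
import numpy as np

def normalize_analyte_stacks(stacks):
    """Rescale each voxel's analyte values so they sum to 1 across the
    analyte axis (axis 0); all-zero voxels are left as all-zero."""
    stacks = np.clip(np.asarray(stacks, dtype=float), 0, None)
    total = stacks.sum(axis=0, keepdims=True)
    total[total == 0] = 1.0
    return stacks / total

# Example: three analyte stacks (e.g., calcification, LRNC, IPH) over a 2x2 slice
raw = np.array([[[0.2, 0.0], [1.0, 0.3]],
                [[0.6, 0.0], [0.0, 0.3]],
                [[0.2, 0.0], [0.0, 0.4]]])
print(normalize_analyte_stacks(raw).sum(axis=0))  # 1.0 except the all-zero voxel
```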
In an example embodiment, the classification model may be trained by applying, in whole or in part, multi-scale modeling techniques such as partial differential equations to represent, for example, possible cell signaling pathways or other biologically plausible mechanisms.
Other alternative embodiments include using, for example, change data collected from multiple points in time, rather than (only) data from a single point in time. For example, if the amount or nature of an unfavorable cell type increases, the phenotype may be referred to as a "progressor," whereas a "regressor" phenotype refers to a decrease. Regression may occur, for example, in response to a drug. Alternatively, if the rate of change of LRNC is rapid, this may suggest a different phenotype, such as a "rapid progressor."
In some embodiments, non-spatial information, such as information derived from other assays (e.g., laboratory results), demographics/risk factors, or other measurements extracted from radiological images, may be fed into the final layer of the CNN to combine the spatial information with the non-spatial information.
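A minimal PyTorch sketch of this late-fusion idea, with a deliberately small stand-in CNN (the layer sizes and the four non-spatial inputs are arbitrary placeholders):

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate CNN image features with non-spatial inputs (e.g., lab
    results, demographics) just before the final classification layer."""
    def __init__(self, n_nonspatial, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 16 image features
        self.head = nn.Linear(16 + n_nonspatial, n_classes)

    def forward(self, image, nonspatial):
        feats = self.cnn(image)                       # (batch, 16)
        return self.head(torch.cat([feats, nonspatial], dim=1))

model = LateFusionClassifier(n_nonspatial=4)
logits = model(torch.randn(2, 3, 64, 360), torch.randn(2, 4))
print(logits.shape)  # torch.Size([2, 2])
```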
Notably, while the systems and methods described herein focus on phenotype classification, similar methods can be applied to outcome prediction. Such prediction may be based on ground truth historical outcomes assigned to the training dataset. For example, life expectancy, quality of life, efficacy of treatment (including comparison of different treatment methods), and other outcome predictions may be determined using the systems and methods of the present application.
Examples of the systems and methods of the present application are further illustrated in the following figures and detailed description.
In further example embodiments, the systems and methods of the present disclosure provide for determination of fractional flow reserve in cardiac muscle and/or brain tissue through measurement of plaque morphology. The systems and methods of the present disclosure may use sophisticated methods to characterize the vasodilatory capacity of a blood vessel through objectively validated determination of the tissue types and characteristics affecting its distensibility. In particular, plaque morphology can be used as an input for analyzing the dynamic behavior of the vasculature from the point of view of flow reserve (training a model with flow reserve ground truth data). Thus, it is possible to determine the dynamic behavior of the system rather than (only) a static description. Stenosis itself is well known to have low predictive power because it provides only a static description; accurate plaque morphology must be added for the highest-accuracy imaging-based assessment of dynamic function. The present disclosure provides systems and methods that determine accurate plaque morphology and then process it to determine dynamic function.
In an example embodiment, deep learning is utilized to maintain the spatial context of tissue characteristics and vascular anatomy (collectively, plaque morphology) at an optimal level of granularity, thereby avoiding excessive immaterial variability in the training set, in contrast to simpler uses of machine learning. Other alternative methods use measurements of vascular structure alone, rather than a more complete treatment of tissue properties. Such methods may capture lesion length, stenosis, and possibly entrance and exit angles, but they ignore determinants of vasodilatory capacity. To use these models, a priori assumptions must be made about the compliance of the arterial tree as a whole, but plaque and other tissue properties cause the distensibility of the coronary tree to vary heterogeneously; different parts of the tree are more or less distensible. Since distensibility is a key factor in determining FFR, these methods are inadequate. Other approaches attempt tissue characterization without objective verification of their accuracy and/or without the data enrichment methods (e.g., transformations such as unfolding and validated pseudo-color tissue-type overlays) needed to provide the spatial context necessary for deep learning on medical images to be effective. Some approaches attempt to increase the training set size by using synthetic data, but this is ultimately limited by the limited data on which the synthetic generation is based, and amounts more to a data augmentation scheme than to an actual expansion of the input training set. In addition, the systems and methods of the present disclosure are capable of producing a continuous assessment across the length of a blood vessel.
The systems and methods of the present disclosure effectively utilize objective tissue characterization with histological validation across multiple arterial beds. Regarding the relevance to the example application of atherosclerosis, plaque composition is similar in coronary and carotid arteries, independent of patient age, and largely determines relative stability, suggesting that the representation on coronary CTA (CCTA) is similar to that on carotid CTA. Minor differences in the range of plaque characteristics may include thicker caps and a higher prevalence of intra-plaque hemorrhage and calcified nodules in carotid arteries; however, there is no difference in the nature of the plaque components. In addition, carotid and coronary arteries share many similarities in the physiology of vascular tone regulation, which has an impact on plaque evolution. Myocardial perfusion is regulated by vasodilation of the epicardial coronary arteries in response to various stimuli such as NO, resulting in dynamic changes in coronary tone that may lead to multi-fold changes in blood flow. In a similar manner, the carotid artery is not simply a conduit supporting the cerebral circulation; it exhibits vasoreactive properties in response to stimuli, including changes in shear stress. Endothelial shear stress contributes to endothelial health and a favorable transcriptomic profile of the vessel wall. Clinical studies have demonstrated that areas of low endothelial shear stress are associated with atherosclerosis progression and high-risk plaque characteristics. Similarly, in carotid arteries, lower wall shear stress is associated with plaque development and localization. (Endothelial shear stress is itself a useful measure, but not a substitute for plaque morphology.) It is important to acknowledge that the technical challenges differ across beds (e.g., use of gating, vessel size, amount and nature of motion), but these effects can be mitigated by the scanning protocol, which can yield approximately similar in-plane voxel sizes in the range of 0.5-0.75 mm, with through-plane resolution in the coronary arteries (smaller vessels) actually superior to, rather than inferior to, that in the carotid arteries (voxels being isotropic in the coronary but not in the carotid and peripheral beds).
The present disclosure rests on solid mathematical principles respecting the Nyquist-Shannon sampling theorem, using conventionally acquired CTA to achieve effective resolution in the same working range as IVUS VH. IVUS imaging has excellent spatial resolution for gross structure (i.e., the lumen), but generally lacks the ability to characterize plaque components with high accuracy. The literature estimates that, for typical transducers in the 20-40 MHz range, IVUS resolution is 70-200 μm axially and 200-400 μm laterally. IVUS VH is a method of spectral backscatter analysis that enables plaque composition analysis (and hence measurement). IVUS VH methods use a relatively large (e.g., 480 μm) moving window in the axial direction, and the size of this moving window (and hence the accuracy of the composition analysis) is fundamentally limited by the bandwidth requirements of the spectral analysis. Even where IVUS VH images are displayed with a smaller moving window (e.g., 250 μm), the accuracy of the analysis is limited because each IVUS pixel is classified into a discrete class. 64-slice multi-detector CCTA scanning has been described as having resolution in the range of 300-400 μm. Although this brings CCTA resolution very close to that of IVUS VH, other factors specific to the analysis of the present invention should be considered. Rather than discretely classifying CCTA pixels, the systems and methods of the present disclosure perform iterative deblurring or restoration modeling steps with sub-voxel accuracy, e.g., using a triangulated tessellated surface to represent the true surface of the lipid core. In 393 patients, the lipid core area was in the range of 6.5-14.3 mm² (corresponding to radii of curvature of 1.4-2.1 mm). Using a chord-length formula in which the chord diagonally spans a single voxel, this represents an upper limit of 44 μm for the error of the tessellated surface representation of the lipid core. Additional factors associated with the deblurring or restoration analysis may cause errors of approximately half a pixel, for a total accuracy in the range of 194-244 μm, which is broadly equivalent to the accuracy of IVUS VH for measuring cap thickness.
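The chord-error bound quoted above can be checked with the sagitta of a circular arc, s = R − sqrt(R² − (c/2)²), where R is the radius of curvature and c the chord length. The sketch below assumes an in-plane voxel size of 0.5 mm (an assumption, since the exact voxel size behind the 44 μm figure is not restated here) and reproduces errors of the same order:

```python
import math

def sagitta(radius_of_curvature_mm, chord_mm):
    """Maximum gap between a circular arc and the straight chord that
    approximates it: s = R - sqrt(R^2 - (c/2)^2)."""
    return radius_of_curvature_mm - math.sqrt(
        radius_of_curvature_mm**2 - (chord_mm / 2.0)**2)

chord = 0.5 * math.sqrt(2)                 # chord spanning one 0.5 mm voxel diagonally
for R in (1.4, 2.1):                       # radii of curvature of the lipid core
    print(f"R = {R} mm -> error ~ {sagitta(R, chord) * 1000:.0f} um")
# R = 1.4 mm -> error ~ 45 um
# R = 2.1 mm -> error ~ 30 um
```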
The present disclosure is also innovative in dealing with the fundamental limitations of applying artificial intelligence and deep learning to the analysis of atherosclerosis imaging data. Conventional competing methods, which lack a validated and objective basis, are fundamentally limited in several ways. First, the use of arbitrary thresholds makes accuracy impossible to evaluate except through weak correlation with other markers that themselves lack objective verification. Second, this lack of objectivity increases the amount of data needed to establish correlations. This places an infeasible requirement on manual annotation of radiological images, which are themselves the subject of the analysis (that is, as opposed to validation from an independent modality). Third, due to the lack of interpretability of the models generated, these models must be presented to regulatory authorities such as the FDA as "black boxes" that lack a scientifically rigorous description of a mechanism of action that can be linked to traditional biological hypothesis testing. In particular, while CNNs have proven excellent at many different computer vision tasks, they have significant drawbacks when applied to radiological datasets: 1) the need for extensive training and validation datasets, and 2) intermediate CNN calculations often cannot represent any measurable property, which makes regulatory approval difficult.
To address these challenges, a pipelined approach is utilized whose stages can each be objectively verified at the biological level and whose outputs feed the CNN. The present invention overcomes these shortcomings by using a pipeline made up of one or more stages that are biologically validated (i.e., objectively verifiable) and a subsequent smaller-scale convolutional neural network that processes these validated biological properties to produce the desired outputs, which may be said not to be based on subjective or qualitative "image features." These architectural capabilities reduce the drawbacks by increasing the efficiency with which available training data are used and by enabling intermediate steps to be objectively verified. The systems and methods of the present disclosure use CNNs for medical imaging while reducing the drawbacks by 1) reducing the complexity of the visual task to within levels acceptable for training CNNs with medium-sized data sets, and 2) producing intermediate outputs that are objectively validated and easily interpreted by users or regulatory authorities. Intermediate CNN calculations are generally unable to represent any measurable property; spatial context is often difficult to retain in ML methods that use feature extraction; and raw datasets that do contain spatial context often lack objective ground truth labels for the extracted features, whether processed using traditional machine learning or deep learning methods. Likewise, raw datasets contain variation that is "noise" with respect to the classification problem at hand, which is overcome in computer vision applications outside of medicine by very large training sets at a scale not generally available in the medical field, particularly for datasets annotated with ground truth.
The quantitative capabilities of the systems and methods of the present disclosure make them ideal for analysis with more advanced imaging protocols (e.g., early/late phase alignment, dual-energy and multi-spectral techniques for tissue characterization studies, etc.).
While the system and method of the present disclosure has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure.
Drawings
The foregoing will be apparent from the following more particular description of example embodiments as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present disclosure.
Fig. 1 depicts a schematic diagram of an exemplary system for determining and characterizing medical conditions by implementing a hierarchical analysis framework in accordance with the present disclosure.
Fig. 2 outlines a resampling-based model building method that may be implemented by the systems and methods described herein, in accordance with the present disclosure.
Fig. 3 depicts a sample patient report that may be output by the systems and methods described herein, in accordance with the present disclosure.
Fig. 4 depicts an example segmentation stage of a multi-scale vessel wall analyte map in accordance with the present disclosure.
Fig. 5 depicts an exemplary pixel-level probability mass function as a set of analyte probability vectors in accordance with the present disclosure.
FIG. 6 illustrates a technique for calculating putative analyte blobs according to the present disclosure.
Fig. 7 depicts normalized vessel wall coordinates for an exemplary vessel wall composition model in accordance with the present disclosure.
Fig. 8 depicts an example edge between a removed plaque and an outer vessel wall of a histological sample according to the present disclosure.
Fig. 9 illustrates some complex vessel topologies that may be interpreted using the techniques described herein, in accordance with the present disclosure.
FIG. 10 depicts a representation of an exemplary analyte blob with a normalized vessel wall coordinate distribution in accordance with the present disclosure.
Fig. 11 depicts an exemplary distribution of blob descriptors according to the present disclosure.
Fig. 12 depicts an exemplary model of imaging data associating hidden ground truth states with observed states in accordance with the present disclosure.
Fig. 13 depicts a diagram of an example Markov model/Viterbi algorithm for correlating observed states with hidden states in an image model according to the present disclosure.
Fig. 14 depicts an example frequency distribution of the total number of blobs per histological slide, for a plurality of histological slides, according to the present disclosure.
Fig. 15 depicts an exemplary implementation of a 1D Markov chain according to the present disclosure.
Fig. 16 depicts an example first-order Markov chain for a text probability table according to the present disclosure.
Fig. 17 depicts the conditional dependency of a first pixel on its neighboring pixels according to the present disclosure.
Fig. 18 depicts a further exemplary hierarchical analysis framework in accordance with the present disclosure.
Fig. 19 depicts an example application of phenotyping purposes in guided vascular therapy according to the present disclosure. The depicted example uses the Stary plaque typing system employed by AHA as a basis, with the type determined in vivo shown as a color overlay.
Fig. 20 depicts an example application of phenotyping lung cancer according to the present disclosure.
FIG. 21 illustrates exemplary image preprocessing steps for deblurring or restoration according to the present disclosure. Deblurring or restoration uses patient-specific point spread determination algorithms to mitigate artifacts or image limitations caused by the image formation process, which may reduce the ability to determine the characteristics of the predicted phenotype. The de-blurred or restored (processed) image depicted (bottom) is derived from CT imaging of the plaque (top) and is the result of iteratively fitting a physical model of the scanner point spread function with regularization assumptions about the true latent density of the different regions of the image.
Fig. 22 depicts an exemplary application of an enriched dataset exhibiting an atherosclerotic plaque phenotype according to the present disclosure.
Fig. 23 illustrates tangential and radial direction variables using internal representations of unit phasors and phasor angles (shown in gray scale encoding) according to the present disclosure, illustrating the use of normalization axes for vessel-related tubular structures and other pathophysiology associated with such structures (e.g., the gastrointestinal tract).
Fig. 24 illustrates an exemplary superposition of annotations generated by a CTA radiological analysis application according to the present disclosure (contoured regions) on top of pathologist-generated annotations from histology (solid regions).
Fig. 25 illustrates an additional data enrichment step that uses a normalized coordinate system to avoid uncorrelated variation associated with wall thickness and radial representation, in accordance with the present disclosure. In particular, the "donut shape" is "unfolded" while retaining the pathologist annotations.
Fig. 26 shows additional data enrichment steps associated with plaque phenotyping in accordance with the present disclosure. Working from the unfolded representation, lumen irregularities (e.g., due to ulceration or thrombus) and locally varying wall thickening are indicated.
Fig. 27 shows data enrichment imaging (in both untransformed and unfolded forms) including intra-plaque hemorrhage and/or other morphology according to the present disclosure.
FIG. 28 shows the results of validation of an algorithm trained and validated with different variations of data enrichment imaging according to the present disclosure.
Fig. 29 provides an example application of phenotyping lung lesions according to the present disclosure.
Fig. 30 provides an example of biological properties (including, for example, tissue characteristics, morphology, etc.) for phenotyping lung lesions according to the present disclosure. Note that in example embodiments, the pseudo-color may be represented as a continuous value, rather than a discrete value, with respect to one or more of these biological properties.
FIG. 31 illustrates a high-level view of one example method for user interaction with a computer-aided phenotyping system in accordance with the present disclosure.
Fig. 32 illustrates an example system architecture according to this disclosure.
Fig. 33 illustrates components of an example system architecture according to this disclosure.
Fig. 34 is a block diagram of an example data analysis according to the present disclosure. Images of the patient are collected, raw slice data is used in a set of algorithms to measure biological properties that can be objectively validated, and these biological properties are then formed into an enriched dataset for feeding one of the CNNs, in this example where the results are propagated forward and backward using recursive CNNs to implement constraints or to create continuous conditions (such as fractional flow reserve that monotonically decreases from proximal to distal throughout the vascular tree, or constant HRP values in focal lesions, or other constraints).
Fig. 35 illustrates causal relationships and available diagnostics of coronary ischemia according to the present disclosure.
Fig. 36 depicts exemplary 3D segmentations of the lumen, vessel wall, and plaque components (LRNC and calcification) for two patients presenting with chest pain, similar risk factors, and similar stenosis, in accordance with the present disclosure. Left: a 68-year-old man with NSTEMI at follow-up. Right: a 65-year-old man without an event at follow-up. The systems and methods of the present disclosure correctly predicted their respective outcomes.
FIG. 37 depicts example histological processing steps for objectively verifying tissue composition in accordance with the present disclosure.
Fig. 38 is a schematic depiction of a plurality of cross-sections that may be processed to provide dynamic analysis of a vessel tree across an entire scan, showing the relationship between the cross-sections, and where the processing of one cross-section depends on its neighbors, in accordance with the present disclosure.
Detailed Description
Presented herein are systems and methods for analyzing pathology using quantitative imaging. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analysis framework that identifies and quantifies biological properties/analytes from imaging data and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine underlying biology as an intermediary for assessing pathology provides a number of analytical and processing advantages over systems and methods configured to determine and characterize pathology directly from raw imaging data, without a validation step and/or without the advantageous processing described herein.
For example, one advantage is the ability to utilize training sets from non-radiological sources, such as tissue sample sources, e.g., histological information, in combination with or independent of training sets from radiological sources, to correlate radiological imaging features with biological properties/analytes and with pathology. For example, in some embodiments, histological information may be used in training the algorithms for identifying and characterizing one or more pathologies based on the quantified biological properties/analytes. More specifically, a biological property/analyte that is identifiable/quantifiable in non-radiological data (as in an invasively obtained histological dataset or as obtainable by gene expression profiling) may also be identified and quantified in radiological data (which is advantageously non-invasive). Information from non-radiological sources may then be used, for example, to correlate these biological properties/analytes with clinical findings regarding pathology using histological information, gene expression profiles, or other clinically rich datasets. Such clinically relevant datasets may then serve as, or be part of, a training set for determining/tuning (e.g., using machine learning) algorithms that relate biological properties/analytes to pathologies having a known relationship to clinical outcome. These algorithms, which relate biological properties/analytes to pathology and are obtained using a training set from non-radiological sources, can then be applied to evaluate biological properties/analytes derived from radiological data. Thus, the systems and methods of the present disclosure may advantageously enable the use of radiological imaging (which may advantageously be cost-effective and non-invasive) to provide surrogate measures for predicting clinical outcome or guiding therapy.
Notably, in some cases, training data of non-radiological sources (e.g., histological information) may be more accurate/reliable than training data of radiological sources. Further, in some embodiments, training data from non-radiological sources may be used to enhance training data from radiological sources. Thus, the disclosed hierarchical analysis framework advantageously improves the trainability and resulting reliability of the algorithms disclosed herein, as better data input may result in better data output. As noted above, one major advantage is that once trained, the systems and methods of the present disclosure can be implemented to derive clinical information comparable to existing histological and other non-radiological diagnostic tests without undergoing invasive and/or expensive procedures.
Alternatively, in some embodiments, a training set from non-radiological sources (e.g., non-radiological imaging sources such as histological sources, and/or non-imaging sources) may be utilized in conjunction with or independent of a training set from radiological sources, e.g., in correlating image features with biological properties/analytes. For example, in some embodiments, one or more biological models may be extrapolated and fit to correlate radiological data and non-radiological data. For example, histological information may be correlated with radiological information based on an underlying biological model. This correlation may enable training the identification of biological properties/analytes in radiological data using non-radiological, e.g., histological, information.
In some embodiments, data extracted from complementary modalities may be used to correlate, for example, image features with biological properties/analytes from blood panels, physical FFR, and/or other data sources.
In an example embodiment, imaging data extracted from one imaging modality may be utilized to extrapolate and fit one or more biological models, with that imaging modality being correlated and/or fused with another imaging modality or a non-imaging source (e.g., bloodwork). Data may advantageously be correlated across and between imaging and non-imaging data sets on the basis of these biological models. Thus, the biological models may enable the hierarchical analysis framework to utilize data from one imaging modality together with another imaging modality, or with a non-imaging source, to identify/quantify or identify/characterize one or more medical conditions.
Another advantage of the hierarchical analysis framework disclosed herein is the ability to consolidate data from multiple data sources of the same or different types into a process that identifies and characterizes pathology based on imaging data. For example, in some embodiments, one or more non-imaging data sources may be used in combination with one or more imaging data sources to identify and quantify a set of biological properties/analytes. Thus, in particular, the set of biological properties/analytes may comprise one or more biological properties/analytes identified and/or quantified based on one or more imaging data sources, one or more biological properties/analytes identified and/or quantified based on one or more non-imaging data sources, and/or one or more biological properties/analytes identified and/or quantified based on a combination of imaging and non-imaging data sources (note that for purposes of the quantitative imaging systems and methods of this disclosure, the set of biological properties/analytes generally comprises at least one biological property/analyte identified and/or quantified based at least in part on imaging data). The ability to augment information from imaging data sources with information from other imaging and/or non-imaging data sources increases the robustness of the systems and methods presented herein and enables any and all relevant information to be used in identifying and quantifying the set of biological properties/analytes and in identifying and characterizing pathologies.
Yet another advantage of the hierarchical analysis framework relates to, for example, the ability to adjust/fine-tune the data at each level, before or after that data is used to evaluate subsequent levels (note that, in some embodiments, this may be an iterative process). For example, in some embodiments, information relating to the set of identified and quantified biological properties/analytes may be adjusted post hoc (e.g., after their initial identification and/or quantification). Similarly, in some embodiments, information relating to the set of identified and characterized pathologies may be adjusted post hoc (e.g., after their initial identification and/or characterization). These adjustments may be automatic or user-based and may be objective or subjective. The ability to adjust/fine-tune the data at each level may advantageously improve data accuracy and reliability.
In an example embodiment, the adjustment may be based on contextual information, which may be used to update one or more probabilities that affect the determination or quantification of a biological property/analyte. In example embodiments, the contextual information used to adjust, after the fact, the information relating to the set of identified and quantified biological properties/analytes may include correlations between patient demographics and biological properties/analytes, or correlations between identified/characterized pathologies and biological properties/analytes. For example, in some cases, biological properties/analytes may be related in the sense that the identification/quantification of a first biological property/analyte may affect the probability associated with the identification/quantification of a second biological property/analyte. In other cases, the identification/characterization of a first pathology based on an initial set of identified/quantified biological properties/analytes may affect the probabilities related to the identification/quantification of biological properties/analytes in the initial set, or even of biological properties/analytes not in the first set. In still other cases, pathologies may be related, e.g., where the identification/characterization of a first pathology may affect the probability related to the identification/characterization of a second pathology. As described above, information related to the identification and quantification of biological properties/analytes and/or information related to the identification and characterization of pathologies may be updated in an iterative manner, e.g., until the data converges, a threshold/baseline is reached, or a selected number of cycles has been performed.
A further advantage of the hierarchical analysis framework relates to the ability to provide information to a user, e.g., a physician, concerning both pathology and underlying biology. This increased context may facilitate clinical diagnosis/assessment and assessment/determination of the next step, e.g., therapy/treatment options or additional diagnosis. For example, the systems and methods may be configured to determine those biological parameters/analytes that are least certain/have the highest degree of uncertainty (e.g., due to lack of data or conflicting data) that are relevant to the identification/quantification of one or more pathologies. In such cases, specific additional diagnostics may be recommended. Providing the user with increased context of information related to both pathology and underlying biology may further assist the user in evaluating/error checking various clinical conclusions and recommendations drawn by the analysis.
A hierarchical analysis framework, as used herein, refers to an analysis framework in which one or more intermediate groups of data points are used as intermediate processing layers or intermediate transformations between an initial group of data points and an end group of data points. This is similar to the concept of deep learning or hierarchical learning, where algorithms are used to model higher-level abstractions using multiple processing layers or otherwise utilizing multiple transformations, such as multiple nonlinear transformations. In general, the hierarchical analysis framework of the systems and methods of the present disclosure includes data points relating to biological properties/analytes as an intermediate processing layer or intermediate transformation between imaging data points and pathology data points, and in example embodiments may include multiple processing layers or multiple transformations (e.g., embodied by multiple levels of data points) for determining each of the imaging information, the underlying biological information, and the pathology information. Although example hierarchical analysis framework structures (e.g., with specific processing layers, transforms, and data points) are introduced herein, the systems and methods of the present disclosure are not limited to such embodiments. Rather, any number of different types of analysis framework structures may be utilized without departing from the scope and spirit of the present disclosure.
In an example embodiment, the hierarchical analysis framework of the present application can be conceptualized as comprising a logical data layer that is intermediate between an empirical data layer (comprising the imaging data) and a results layer (comprising the pathology information). Whereas the empirical data layer represents directly sourced data, the logical data layer advantageously adds a degree of logic and reasoning that distills this raw data into a set of useful analytes for the results layer in question. Thus, for example, empirical information from diagnostics, such as raw imaging information, may advantageously be distilled down to logical information about a particular set of biological features relevant for assessing a selected pathology or group of pathologies (e.g., pathologies involving the imaged regions of the patient's body). In this way, the biological features/analytes of the application can also be regarded as pathology symptoms/indicators.
The biological features/analytes of the application may sometimes be referred to herein as biomarkers. Although the term "biological" or the prefix "bio-" is used to characterize a biological feature or biomarker, this is only intended to mean that the feature or marker has some degree of relevance to the patient's body. For example, the biological feature may be anatomical, morphological, compositional, functional, chemical, biochemical, physiological, histological, genetic, or any number of other types of features related to the patient's body. Example biological features utilized by particular embodiments of the systems and methods of the present disclosure are disclosed herein (e.g., relating to a particular anatomical region of a patient, such as the vascular system, the respiratory system, an organ such as the lung, heart, or kidney, or another anatomical region).
While the example systems and methods of the present disclosure may be adapted to detect, characterize, and treat pathologies/diseases, the application of the systems and methods of the present disclosure is not limited to pathologies/diseases, but rather may be more generally applicable with respect to any clinically relevant medical condition of a patient, including, for example, syndromes, disorders, wounds, allergic reactions, and the like.
In an exemplary embodiment, the systems and methods of the present disclosure relate to computer-aided phenotyping, for example, by analyzing medical images using knowledge about biology to measure differences between disease types that have been determined by research to indicate a phenotype that in turn predicts outcome. Thus, in some embodiments, characterizing a pathology may comprise determining a phenotype of the pathology, which in turn may determine a predicted outcome.
Referring first to FIG. 1, a schematic diagram of an exemplary system 100 is depicted. Three basic functions may be provided by the system 100, represented as a trainer module 110, an analyzer module 120, and a cohort tool module 130. As depicted, the analyzer module 120 advantageously implements a hierarchical analysis framework that first utilizes a combination of (i) imaging features 122 from one or more acquired images 121A of the patient 50 and (ii) non-imaging input data 121B of the patient 50 to identify and quantify biological properties/analytes 123, and then identifies and characterizes one or more pathologies 124 (e.g., prognostic phenotypes) based on the quantified biological properties/analytes 123. Advantageously, the analyzer module 120 may operate independent of ground truth or validation references by implementing one or more pre-trained, e.g., machine-learned, algorithms to derive its inferences.
In an example embodiment, the analyzer may contain an algorithm for computing the imaging features 122 from the acquired image 121A of the patient 50. Advantageously, some of the image features 122 may be calculated on a per voxel basis, while other image features may be calculated on a region of interest basis. An example non-imaging input 121B that may be used with the acquired image 121A may contain data from a laboratory system, patient reported symptoms, or patient history.
As described above, the image features 122 and non-imaging inputs may be used by the analyzer module 120 to calculate biological properties/analytes 123. Notably, biological properties/analytes are typically quantitative, objective properties (e.g., objectively verifiable rather than expressed as an impression or appearance) that may represent the presence and extent of, for example, a marker (e.g., a chemical substance) or other measure such as the structure, size, or anatomical characteristics of a region of interest. In an example embodiment, the quantified biological property/analyte 123 may be displayed or exported for direct consumption by a user, e.g., by a clinician, in addition to or independent of further processing of the analyzer module.
In an example embodiment, one or more of the quantified biological properties/analytes 123 may be used as input for determining a phenotype. Phenotypes are typically defined in a disease-specific manner, independent of imaging, and are typically extracted from isolated pathophysiological samples that have a documented relationship with the intended outcome. In an example embodiment, the analyzer module 120 may also provide a prediction 125 for the determined phenotype.
It should be appreciated that example implementations of the analyzer module 120 are further described herein with respect to specific examples following the general description of the system 100. In particular, specific imaging features, biological properties/analytes and pathology/phenotype are described with respect to specific medical applications, such as with respect to the vascular system or with respect to the respiratory system.
Still referring to fig. 1, the cohort tool module 130 enables definition of a cohort of patients for which a cohort analysis is to be performed, e.g., based on a selected set of criteria related to the cohort study in question. An example cohort analysis may be for a group of patients enrolled in a clinical trial, e.g., where the patients are further grouped based on one or more arms of the trial, e.g., a treatment arm and a control arm. Another type of cohort analysis may be directed to a set of subjects for which ground truth or references exist, and this type of cohort may be further broken down into a training or "development" set and a test or "holdout" set. The development set may support training 112 of the algorithms and models within the analyzer module 120, and the holdout set may support evaluating/validating 113 the performance of the algorithms or models within the analyzer module 120.
With continued reference to fig. 1, the trainer module 110 may be used to train 112 the algorithms and models within the analyzer module 120. Specifically, the trainer module 110 may rely on ground truth 111 and/or reference annotations 114 in order to derive weights or models, e.g., according to established machine learning paradigms or by informing algorithm developers. In an example embodiment, classification and regression models are employed that are highly adaptable, e.g., capable of uncovering complex relationships between predictors and responses. However, their ability to adapt to the underlying structure within the existing data may enable a model to find patterns that are not reproducible in another sample of subjects. Adapting to structure that is not reproducible within existing data is often referred to as model overfitting. To avoid building an overfit model, a systematic approach may be applied that prevents the model from finding spurious structure and enables the end user to be confident that the final model will predict new samples with a degree of accuracy similar to that achieved on the dataset used to evaluate the model.
The training set may be used to determine one or more optimal tuning parameters, and the test set may be used to estimate the predictive performance of the algorithm or model. The training set may be used to train each of the classifiers through randomized cross-validation. The dataset may be repeatedly split into training and test sets and used to determine classification performance and model parameters. Splitting of the dataset into training and test sets may occur using either a stratified approach or a maximum dissimilarity approach. In an example embodiment, a resampling method (e.g., the bootstrap) may be used within the training set in order to obtain confidence intervals for (i) the optimal parameter estimates and (ii) the predictive performance of the model.
FIG. 2 outlines a resampling-based model building method 200 that may be utilized by the systems and methods of the present disclosure. First, at step 210, a set of tuning parameters may be defined. Next, at step 220, for each resample and each tuning parameter set, the model is fitted and the held-out samples are predicted. At step 230, the resampled estimates are combined into a performance profile. Next, at step 240, the final tuning parameters may be determined. Finally, at step 250, the entire training set is re-fitted with the final tuning parameters. After each model has been tuned on the training set, each model may be evaluated for predictive performance on the test set. The test set evaluation occurs once for each model, to ensure that the model building process does not overfit the test set. For each model constructed, the optimal tuning parameter estimates, the resampled training set performance, and the test set performance may be reported. The values of the model parameters over the random splits may then be compared to evaluate model stability and robustness to the training data.
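A minimal sketch of this kind of workflow using scikit-learn is shown below. Repeated stratified cross-validation stands in for the bootstrap resampling described above, and the synthetic data, model family, and parameter grid are placeholders; in practice the features would be the quantified biological properties/analytes and the labels the ground truth.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     train_test_split)

X, y = make_classification(n_samples=300, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)   # stratified split

param_grid = {"n_estimators": [100, 300], "max_depth": [3, 5, None]}
resampling = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)

# Steps 210-250: define the grid, fit/predict over resamples, pick the final
# tuning parameters, and refit on the whole training set (refit=True default).
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=resampling, scoring="roc_auc")
search.fit(X_train, y_train)

print("best tuning parameters:", search.best_params_)
print("resampled training performance:", round(search.best_score_, 3))
print("single held-out test set evaluation:", round(search.score(X_test, y_test), 3))
```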
According to the systems and methods of the present disclosure, multiple models may be tuned for each of the biological properties/analytes (e.g., tissue types) represented in the ground truth maps. Modeling approaches may include, for example, covariance-based techniques, non-covariance-based techniques, and tree-based models. Depending on its construction, an endpoint may have a continuous or a categorical response; some of the techniques in the above categories can be used for both categorical and continuous responses, while others are specific to either a categorical or a continuous response. The optimal tuning parameter estimates, the resampled training set performance, and the test set performance may be reported for each model.
Table 1:
Table 1 above provides a summary of some example functions of the analyzer module 120 of system 100. That is, the analyzer module 120 may be configured to delineate fields, e.g., to register multiple data streams across a field, to segment organs, vessels, lesions, and other application-specific objects, and/or to reformat/reconfigure anatomy for a particular analysis. The analyzer module 120 may be further configured to delineate a target, such as a lesion, within the delineated fields. Delineating the target may include, for example, registering multiple data streams at the region level, performing fine-grained segmentation, measuring the size and/or other characteristics of the relevant anatomy, and/or extracting whole-target features (e.g., biological properties/analyte characteristics of the entire target region). In some embodiments, one or more sub-target regions may also be delineated, e.g., a target region may be split into sub-targets depending on the particular application, with sub-target-specific calculations (e.g., biological properties/analyte characteristics of the sub-target region). The analyzer module 120 may also delineate, for example, components or relevant features (e.g., composition) within a given field, target, or sub-target region. This may involve segmenting or re-segmenting the component/feature, calculating values of the segmented component/feature (e.g., biological properties/analyte characteristics of the component/feature), and assigning a probability map to the reading. Pathology may next be determined based on the quantified biological properties/analytes and characterized, for example, by determining the phenotype and/or predicted outcome of the pathology. In some embodiments, the analyzer module 120 may be configured to compare data across multiple points in time, e.g., one or more of the biological properties/analytes may involve time-based quantification. In further embodiments, a wide scan field may be utilized to assess multi-focal pathology, e.g., based on aggregation of the quantified biological properties/analytes across multiple targets in the delineated field. Finally, based on the foregoing analyses, the analyzer module 120 may be configured to generate a patient report.
A sample patient report 300 is depicted in fig. 3. As shown, the sample patient report 300 may contain quantifications of biological parameters/analytes relating to structure 310 and composition 320, as well as data from non-imaging sources such as hemodynamics 330. The sample patient report may further comprise visualizations 340, such as 2D and/or 3D visualizations of imaging data and combined visualizations of non-imaging data, e.g., hemodynamic data superimposed on the imaging data. Various analyses 350 may be displayed for assessing the biological parameters/analytes, including, for example, visualization of one or more models (e.g., decision tree models) for determining/characterizing pathology. Patient background and identifying information may further be included. Thus, the analyzer module 120 of system 100 may advantageously provide the user, e.g., a clinician, with integrated feedback for evaluating the patient.
Advantageously, the systems and methods of the present disclosure may be adapted to specific applications. Example vascular and pulmonary applications are described in more detail in the following sections (although it should be understood that the specific applications described have general applicability and interoperability with respect to many other applications). Table 2 provides an overview of vascular and lung related applications utilizing a hierarchical analysis framework as described herein.
Table 2:
The following sections provide specific examples of quantitative biological properties/analytes that may be used by the systems and methods of the present disclosure with respect to vascular applications:
Anatomical structure: Vascular structure measurements, in particular measurements leading to a determination of % stenosis, have long been and remain the single most commonly used measurements in patient care. These measurements were initially limited to inner lumen measurements, rather than wall measurements involving both the inner and outer surfaces of the vessel wall. However, in all of the main non-invasive modalities, unlike X-ray angiography, the vessel wall can be resolved, and with resolution of the vessel wall, extended measurements become achievable. The category is broad and the objects measured vary in size, so they should be summarized carefully. The main consideration is spatial sampling or resolution limitation. However, by taking advantage of subtle changes in intensity levels due to partial volume effects, the minimum detectable change in wall thickness can be lower than the spatial sampling. Furthermore, the stated resolution generally refers to the actual resolution of the reconstructed grid size and field of view after acquisition, rather than the minimum feature size resolvable as determined by the imaging plan. Also, the in-plane resolution may or may not be the same as the through-plane resolution, and the size of a given feature, as well as its proportion and shape, will drive the measurement accuracy. Last but not least, in some cases classification conclusions are drawn by applying a threshold to the measurement, which can then be interpreted according to signal detection theory, with the ability to optimize the trade-off between sensitivity and specificity (terms that do not otherwise refer to measurements in the usual sense).
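By way of illustration only (the disclosure does not prescribe a particular formula here), an area-based % stenosis can be derived from lumen cross-sectional areas; the reference-area choice below is a hypothetical placeholder.

    # Illustrative only: area-based percent stenosis from a minimal and a reference lumen area.
    def percent_stenosis(min_lumen_area_mm2: float, reference_lumen_area_mm2: float) -> float:
        """Area-based % stenosis: 100 * (1 - A_min / A_ref)."""
        return 100.0 * (1.0 - min_lumen_area_mm2 / reference_lumen_area_mm2)

    print(percent_stenosis(3.1, 12.4))  # ~75% area stenosis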
Quantitative assessment of the individual component parts of atherosclerotic plaque, comprising, for example, lipid-rich necrotic core (LRNC), fibrosis, intraplaque hemorrhage (IPH), permeability, and calcification, may provide critical information about the relative structural integrity of the plaque, which may assist physician decision-making during medical or surgical therapy. From an imaging technology perspective, the ability to do so depends not so much on spatial resolution as on contrast resolution and tissue differentiation, which is made possible by different tissues responding differently to the incident energy and thereby producing different received signals. Each imaging modality does so to some extent; terms such as echolucency in ultrasound, CT number in Hounsfield units, and differential MR enhancement vary with sequences such as (but not limited to) T1, T2, and T2*.
Dynamic tissue behavior (e.g., permeability): In addition to morphological features of the vessel wall/plaque, it is increasingly recognized that dynamic features are valuable quantitative indicators of vascular pathology. Here, a dynamic sequence acquired at multiple closely spaced times (referred to as phases) extends the acquisition beyond spatial resolution to include time-resolved values that can be used in compartment modeling or other techniques to determine the dynamic response of tissue to a stimulus such as, but not limited to, wash-in and wash-out of contrast agent. By using dynamic contrast-enhanced imaging with ultrasound or MR in the carotid arteries, or delayed contrast enhancement in the coronary arteries, a sensitive assessment of the relative permeability (e.g., the Ktrans and Vp parameters from kinetic analysis) of the neo-angiogenic microvascular network within the plaque of interest can be determined. In addition, these dynamic series may also assist in differentiating between increased vascular permeability and intraplaque hemorrhage.
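As one example of how Ktrans and Vp can be obtained from time-resolved data, a Patlak-style linear fit is sketched below. The Patlak formulation is only one common kinetic model; the disclosure does not specify which model is used, and the curves here are synthetic.

    # Hypothetical Patlak-style fit: C_t(t) = Ktrans * integral(C_p) + vp * C_p(t). Illustrative only.
    import numpy as np

    def patlak_fit(t, c_tissue, c_plasma):
        """Least-squares estimate of (Ktrans, vp) from tissue and plasma concentration curves."""
        integ = np.concatenate([[0.0], np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))])
        A = np.column_stack([integ, c_plasma])          # design matrix [∫C_p, C_p]
        (ktrans, vp), *_ = np.linalg.lstsq(A, c_tissue, rcond=None)
        return ktrans, vp

    t = np.linspace(0, 5, 60)                           # minutes
    c_p = 5.0 * np.exp(-t)                              # toy arterial input function
    integ = np.concatenate([[0.0], np.cumsum(0.5 * (c_p[1:] + c_p[:-1]) * np.diff(t))])
    c_t = 0.12 * integ + 0.03 * c_p                     # synthetic tissue curve
    print(patlak_fit(t, c_t, c_p))                      # recovers ~ (0.12, 0.03)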
Hemodynamics: The basic hemodynamic parameters of the circulation have a direct effect on vascular lesions. Blood pressure, blood flow velocity, and vessel wall shear stress can be measured by techniques ranging from very simple oscillometric methods to complex imaging analysis. Using the general principles of fluid dynamics, vessel wall shear stress can be determined for different regions of the wall. In a similar manner, MRI, with or without ultrasound, has been used to calculate wall shear stress (WSS) and correlate the results with structural changes in the vessel of interest. In addition, short-term and long-term studies have been conducted on the effects of antihypertensive drugs on hemodynamics.
Thus, in example embodiments, applying key aspects of the systems and methods of the present disclosure in a vascular setting may include evaluating plaque structure and plaque composition. Evaluating plaque structure may advantageously include, for example, lumen measurements (which improve on stenosis measurement by providing area rather than just diameter measurements) as well as wall measurements (e.g., wall thickness and vascular remodeling). Assessing plaque composition may advantageously involve quantification of tissue characteristics (e.g., lipid core, fibrosis, calcification, permeability, etc.) rather than just the "soft" or "hard" designations commonly found in the prior art. Tables 3 and 4 below describe example structural calculations and tissue characteristic calculations, respectively, that may be utilized by vascular applications of the systems and methods of the present disclosure.
TABLE 3 structural calculation of vessel anatomy supported by vessel applications of the systems and methods disclosed herein
Example systems related to evaluating the vasculature may advantageously incorporate/employ algorithms for evaluating vascular structure. Thus, the system may employ, for example, a target/vessel segmentation/cross-section model for segmenting the underlying structure of the imaged vessel. Advantageously, a fast-marching competition filter can be applied to individual vessel segments. The system may be further configured to handle vessel bifurcations. Image registration may be applied using Mattes mutual information (MR) or mean squared error (CT) metrics, a rigid versor transform, the LBFGSB optimizer, and the like. As described herein, vessel segmentation may advantageously comprise lumen segmentation. The initial lumen segmentation may utilize a confidence-connected filter (e.g., for carotid, vertebral, femoral, etc. vessels) to distinguish the lumen. Lumen segmentation may utilize MR imaging (e.g., a combination of normalized (e.g., inverted) dark contrast images) or CT imaging (e.g., registered pre-contrast and post-contrast CT with a 2D Gaussian distribution) to define a vesselness function. The various connected components may be analyzed and thresholding may be applied. Vessel segmentation may further require outer wall segmentation (e.g., with minimal curvature (k2) flow to account for lumen irregularities). In some embodiments, an edge potential map is calculated as an outward-downward gradient in both contrast and non-contrast images. In an example embodiment, the outer wall segmentation may utilize a cumulative distribution function in the speed function (e.g., merging prior distributions of wall thickness, e.g., over 1-2 adjacent levels) to allow a median thickness in the absence of any other edge information. In an example embodiment, the Feret diameter may be employed for vessel characterization. In further embodiments, the wall thickness may be calculated as the sum of the distance to the lumen plus the distance to the outer wall (see the sketch following this paragraph). In further embodiments, semantic segmentation may be used, with, for example, a CNN for lumen and/or wall segmentation.
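The sketch below illustrates, under simplifying assumptions, the two measurements just mentioned: per-pixel wall thickness as distance-to-lumen plus distance-to-outer-wall, and a brute-force maximum Feret diameter. It is not the disclosure's implementation; the masks and spacings are hypothetical inputs.

    # Illustrative sketch: wall thickness from binary masks, and a brute-force Feret diameter.
    import numpy as np
    from scipy import ndimage

    def wall_thickness(lumen_mask: np.ndarray, vessel_mask: np.ndarray, spacing=(1.0, 1.0)):
        """Thickness map on the wall (vessel minus lumen), in physical units."""
        d_to_lumen = ndimage.distance_transform_edt(~lumen_mask, sampling=spacing)
        d_to_outer = ndimage.distance_transform_edt(vessel_mask, sampling=spacing)
        wall = vessel_mask & ~lumen_mask
        return np.where(wall, d_to_lumen + d_to_outer, 0.0)

    def feret_diameter(mask: np.ndarray, spacing=(1.0, 1.0)) -> float:
        """Maximum pairwise distance between mask points (approximates the max Feret diameter)."""
        pts = np.argwhere(mask) * np.asarray(spacing)
        diffs = pts[:, None, :] - pts[None, :, :]
        return float(np.sqrt((diffs ** 2).sum(-1)).max())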
Example systems related to evaluating the vasculature may further advantageously analyze vessel composition. For example, in some embodiments, composition may be determined based on image intensity and other image features. In some embodiments, lumen shape may be utilized, for example, in connection with determining thrombosis. Advantageously, an analyte blob model may be employed to better analyze the composition of a particular sub-region of a vessel. An analyte blob is defined as a spatially contiguous region, in a 2D, 3D, or 4D image, of one class of biological analyte. The blob model may utilize an anatomically aligned coordinate system, using contours at, for example, normalized radial distances from the luminal surface to the adventitial surface of the vessel wall. The model may advantageously identify one or more blobs and analyze each blob's location, for example, relative to the overall vascular structure and relative to other blobs. In an example embodiment, a hybrid Bayesian/Markovian network can be used to model the relative locations of blobs. The model may advantageously interpret the image intensities observed at pixels or voxels as being influenced by a local neighborhood of hidden analyte class nodes, thereby accounting for partial volume effects and the scanner point spread function (PSF). The model may further allow analyte blob boundaries to be delineated dynamically during inference by the analyzer module, based on the analyte probability maps. This is a major difference from typical machine vision methods, such as super-pixel methods, which pre-compute the small regions to be analyzed but cannot adjust these regions dynamically. An iterative inference procedure using current estimates of both the analyte probabilities and the blob boundaries may be applied. In some embodiments, probability density estimation from the sparse data used to train the model may be implemented using parametric modeling assumptions or kernel density estimation methods.
Presented herein is a novel model for classifying the composition of vascular plaque components that eliminates the need for histology-to-radiology registration. The model still uses expert-annotated histology as the reference standard, but training of the model does not require registration with the radiological imaging. The multi-scale model computes statistics on each contiguous region of a given analyte type, which may be referred to as a "blob". In a cross-section through a blood vessel, the wall is defined by two boundaries, an inner boundary with the lumen and the outer boundary of the vessel wall, creating a doughnut-like shape in cross-section. Within the doughnut-shaped wall region there is a discrete number of blobs (unlike the default background class of normal wall tissue, which is not considered a blob). The number of blobs is modeled as a discrete random variable. Each blob is then assigned a label of analyte type, and various shape descriptors are calculated. In addition, blobs are considered pairwise. Finally, within each blob, each pixel produces a radiological imaging intensity value modeled as an independent and identically distributed (i.i.d.) sample from a continuously estimated distribution specific to each analyte type. Note that in this last step, the parameters of the imaging intensity distribution are not part of the training process.
One key feature of this model is that it takes into account the spatial relationships of the analyte blobs within the vessel and with respect to each other, recognizing that point-by-point image features (whether histologically and/or radiologically based) are not the only source of information an expert uses to determine plaque composition. While the model allows training without explicit histology-to-radiology registration, it can also be applied to situations where the registration is known. It is believed that statistically modeling the spatial layout of atherosclerotic plaque components in order to classify unseen plaques is a novel concept.
Example techniques for estimating vessel wall composition from CT or MR images are further described in the following sections. In particular, the method may employ a multi-scale Bayesian analysis model. The basic Bayesian formulation is P(A|I) ∝ P(I|A) · P(A). In the context of the present disclosure, the quantity to be inferred is a multi-scale vessel wall analyte map A, where the observations come from the CT or MR image intensity information I.
As depicted in fig. 4, the multi-scale vessel wall analyte map may advantageously include wall-level segmentation 410 (e.g., a cross-sectional slice of a vessel), blob-level segmentation, and pixel-level segmentation 430 (e.g., based on individual image pixels). For example, A = (B, C) may be defined as a map of vessel wall class labels (analogous to a graph with vertices B and edges C), where B is the set of blobs (contiguous cross-sectional regions of non-background wall sharing one label) and C is the set of blob pairs. B_b can be defined as a generic single blob, where b ∈ [1..n_B] indexes all blobs in A, and B_b^a is a blob with label a. For statistical purposes, the individual blob descriptor operator D_B{} maps into a low-dimensional space. C_c can be defined as a blob pair, where c ∈ [1..n_B(n_B − 1)/2] indexes all blob pairs in A, and C_c^{f,g} is a blob pair with labels f and g. For statistical purposes, the blob pair descriptor operator D_C{} maps into a low-dimensional space. A(x) = a can be defined as the class label of pixel x, where a ∈ {'CALC', 'LRNC', 'FIBR', 'IPH', 'background', ...(combination classes)}. In an exemplary embodiment, I(x) is the continuously valued pixel intensity at pixel x. Within each blob, I(x) is modeled independently. Note that because the model is used to classify wall composition in 3D radiological images, the word "pixel" is used generically to represent both 2D pixels and 3D voxels.
The character of blob regions of similar composition/structure may advantageously provide insight into the disease process. Each slice of the vessel (e.g., a cross-sectional slice) may advantageously contain multiple blobs. The relationships between blobs can be evaluated pairwise. The number of blobs within a cross-section is modeled as a discrete random variable and may also have quantifiable significance. At the slice level of the segmentation, the relevant characteristics (e.g., biological properties/analytes) may comprise quantified inter-blob relationships, e.g., spatial relationships such as one blob lying closer to the interior, and/or the total number of blobs of a particular structure/composition class. At the blob level of the segmentation, the characteristics of each blob, such as structural characteristics (e.g., size and shape) as well as compositional characteristics, can be evaluated to serve as biological properties/analytes. Finally, at the pixel level of the segmentation, individual pixel-level analyses may be performed, for example, based on the image intensity distribution.
A probability map of characteristics may be applied with respect to the multi-scale vessel wall analyte map depicted in fig. 4. The probability map may advantageously establish a probability vector for each pixel, where the components of the vector are the probabilities of each analyte class and one component is the probability of background tissue. In an example embodiment, a set of probability vectors may represent mutually exclusive characteristics. Thus, each set of probability vectors representing mutually exclusive characteristics will sum to 1. For example, in some embodiments, it may be known that a pixel should fall into one and only one compositional class (e.g., a single location in a vessel cannot be both fibrotic and calcified). It should be noted, in particular, that the probability mapping does not assume independence of analyte classes between pixels, because adjacent pixels, or pixels within the same blob, may generally have the same or similar characteristics. Thus, as described in more detail herein, the probability map advantageously accounts for dependencies between pixels.
f(A = α) can be defined as the probability density of the map A; f(A) is a probability distribution function over all vessel walls. f(D_B{B^a} = β) is the probability density of the descriptor vector β for a blob with label a, and f(D_B{B^a}) is the probability density function (pdf) of blob descriptors with label a; there is one such distribution for each value of a. f(D_C{C^{f,g}} = γ) is the probability density of the pairwise descriptor vector γ for blobs with labels f and g, and f(D_C{C^{f,g}}) is the pdf of the pairwise blob descriptors; there is one such distribution for each ordered pair (f, g). Thus:

f(C) = Π f(D_C{C_c})

f(A) = f(B) f(C) = Π f(D_B{B_b}) · Π f(D_C{C_c})

P(A(x) = a) is the probability of pixel x having label a. P(A(x)) is the probability mass function (pmf), i.e., the prevalence, of the analytes; it can be seen either as a vector of probabilities at a particular pixel x, or as a spatial probability map for a particular class label value.

Note that f(A) = P(N) · f(C) · f(B) = P(N) · Π f(C_c) · Π f(B_b),

where f(C_c = γ) is the probability density of the pairwise descriptor vector γ, f(C_c) is the pdf of pairwise blob descriptors, f(B_b = β) is the probability density of the descriptor vector β, and f(B_b) is the pdf of blob descriptors. P(A(x) = a | I(x) = i) is the probability of the analyte given the image intensity, which is the primary target of the computation. P(I(x) = i | A(x) = a) is the distribution of image intensities for a given analyte.
FIG. 5 depicts an exemplary pixel-level probability mass function as a set of analyte probability vectors. As described above, the probability mass function is informed by the assumption that, in an example embodiment, a sufficiently small pixel must fall into exactly one of the analyte classes (including the generic "background" class), and thus the probabilities sum to 1. Mutual exclusivity: it can be assumed that a sufficiently small pixel belongs to only one analyte class; if a combination occurs (e.g., needle-like calcification on an LRNC background), a new combination class can be created to maintain mutual exclusivity. Independence, however, cannot be assumed: each pixel is highly dependent on its neighbors and on the overall structure of A.
An alternative view of the analyte map is as a spatial map of the probability of a given analyte. At any given point during inference, analyte blobs may be defined using a full-width-half-maximum rule: for each local maximum of an analyte's probability, a region is grown out to a lower threshold of half the local maximum. Note that this 50% value is an adjustable parameter. Spatial regularization of the blobs can be performed by applying some curvature evolution to the probability map in order to keep the boundaries realistic (smooth, with few topological holes). Note that putative blobs of different analyte classes may in general overlap spatially, because they represent alternative hypotheses for the same pixels until the probabilities are collapsed; hence the modifier "putative".
When the iterative inference terminates, there are several options for representing the results. First, the continuously estimated probability maps may be presented directly to the user in one of several forms including, but not limited to, surface plots, contour plots, or image fusion similar to visualizing PET values as changes in hue and saturation on top of the CT. A second alternative is to collapse the probability map at each pixel by selecting a single analyte label for each pixel. This can be done most directly by independently selecting the maximum a posteriori value at each pixel, thereby creating a visual classification map by assigning a different color to each analyte label, shown with full or partial opacity on top of the radiological image. Under this second alternative, rather than assigning labels independently, label values may be assigned by resolving overlapping putative blobs based on the priority (probability) of each blob. Thus, a given pixel may take the label of the higher-probability blob to which it belongs, even if that analyte has a lower pixel-wise probability.
FIG. 6 illustrates a technique for computing putative analyte blobs. In an example embodiment, putative blobs may have overlapping regions, so it may be advantageous to apply an analysis technique to assign pixels to putative blobs. For the probability map of a given analyte, local maxima of the probability are determined, and a full-width-half-maximum rule may then be applied to determine discrete blobs: in any given iteration of the inference, each local maximum is found and a region is grown at a lower threshold of 0.5·max (the 50% value may be an adjustable parameter). In some embodiments, spatial regularization of the blobs may also be applied, for example by applying some curvature evolution to the probability map in order to keep the boundaries smooth and avoid holes. Note that at this stage, putative blobs of different analyte classes may in general overlap spatially, since they represent alternative hypotheses until the probabilities are collapsed. An image-level analyte map is then computed, e.g., by collapsing the probability map function. Note that this collapse may be based on the pixel-level analyte probability map, on the putative blobs, or on a combination of both. With respect to the pixel-level analyte probability map, the collapse may be determined by selecting, for each pixel, the label with the highest probability, A(x) := argmax_a P(A(x) = a). This is similar in spirit to the Viterbi algorithm: essentially, the highest probability in each mutually exclusive set of probability vectors is locked in (with analyte priority breaking possible ties), and all other probabilities in the set can then be set to zero. In some embodiments, the probabilities of neighboring pixels/regions may be considered when collapsing the data at the pixel level. With respect to the putative-blob-level collapse, overlapping putative blobs can be resolved. In some embodiments, prioritization may be based on the blob probability density f(D_B{B_b^a}). This may affect the analysis of blob-level characteristics, since a higher-probability blob may change the shape of an overlapping lower-probability blob. In an example embodiment, the entire range of probabilities may be maintained instead of collapsing the data. A minimal sketch of the two collapse strategies follows.
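The sketch below is illustrative only (not the disclosure's code): an argmax collapse per pixel, and a full-width-half-maximum region grow around each local maximum of a single analyte's probability map. The Gaussian smoothing stands in for the curvature regularization mentioned above, and overlapping grown regions are simply overwritten.

    # Illustrative sketch of the two collapse strategies: per-pixel argmax and FWHM "putative blobs".
    import numpy as np
    from scipy import ndimage

    def collapse_argmax(prob_maps: np.ndarray) -> np.ndarray:
        """prob_maps: (n_classes, H, W) per-pixel probabilities -> label map (H, W)."""
        return np.argmax(prob_maps, axis=0)

    def putative_blobs(prob_map: np.ndarray, frac: float = 0.5) -> np.ndarray:
        """Grow a region around each local maximum down to frac * local max (FWHM rule)."""
        smoothed = ndimage.gaussian_filter(prob_map, sigma=1.0)          # mild regularization stand-in
        maxima = (smoothed == ndimage.maximum_filter(smoothed, size=3)) & (smoothed > 0)
        labels = np.zeros(prob_map.shape, dtype=int)
        for i, (r, c) in enumerate(np.argwhere(maxima), start=1):
            above, _ = ndimage.label(smoothed >= frac * smoothed[r, c])  # threshold at half the peak
            labels[above == above[r, c]] = i                             # connected region holding the max
        return labels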
To model the relative spatial positioning of blobs within the vessel wall, an appropriate coordinate system may be selected to provide rotational, translational, and scale invariance between different images. These invariances are important for the model because they allow training on one type of vessel (e.g., carotid arteries, where endarterectomy samples are readily available) and applying the model to other vascular beds (e.g., coronary arteries, where plaque samples are generally not readily available) under the assumption that the atherosclerotic process is similar across different vascular beds. For tubular objects, the natural coordinate system follows the vessel centerline, with the distance along the centerline providing the longitudinal coordinate and each plane perpendicular to the centerline having polar coordinates of radial distance and angle. However, due to variability in vessel wall geometry, particularly in the diseased patients for whom the analysis is intended, an improved coordinate system may be utilized. The longitudinal distance is calculated along the centerline, or along interpolated perpendicular planes, in such a way that a value is assigned to each 3D radiological image pixel. For a given plaque, the proximal and distal planes perpendicular to the centerline are each used to create an unsigned distance map on the original image grid, denoted P(x) and D(x), respectively, where x represents the 3D coordinates. The distance map l(x) = P(x)/(P(x) + D(x)) represents the relative distance along the plaque, with a value of 0 at the proximal plane and a value of 1 at the distal plane. The direction of the l axis is given by ∇l(x).
Because the wall geometry may be significantly non-circular, the radial distance may be defined based on the shortest distance to the inner luminal surface and the shortest distance to the outer adventitial surface. Expert annotation of the histological images contains regions defining the lumen and the vessel (defined as the union of the lumen and vessel wall). A signed distance function may be created for each of these, L(x) and V(x), respectively. By convention, the interior of these regions is negative, so that within the wall, L is positive and V is negative. The relative radial distance is calculated as r(x) = L(x)/(L(x) − V(x)). It has a value of 0 at the luminal surface and a value of 1 at the adventitial surface. The direction of the r axis is given by ∇r(x).
Due to the non-circular wall geometry, the normalized tangential distance can be defined as lying along contours of r (and contours of l if processed in 3D). The direction of the t axis is given by ∇r × ∇l. By convention, histological sections are assumed to be viewed from the proximal toward the distal direction, so that positive l points into the image. Note that, unlike the other coordinates, t does not have a natural origin, since it wraps around the vessel onto itself. Thus, the origin of this coordinate may be defined separately for each blob, relative to the blob's centroid.
Another wall coordinate used is the normalized wall thickness. In a sense, this is a surrogate indicator of disease progression: thicker walls are assumed to be due to more severe disease, and the statistical relationships of the analytes are assumed to change with more severe disease. The absolute wall thickness is easily calculated as w_abs(x) = L(x) − V(x). To normalize it to the range [0, 1], the maximum possible wall thickness is determined for the case in which the lumen is of nearly zero size and is completely eccentric, lying near the outer surface. In this case, the maximum thickness is the maximum Feret diameter D_max of the vessel. Thus, the relative wall thickness is calculated as w(x) = w_abs(x)/D_max.
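A minimal sketch of these normalized coordinates, assuming the signed distance fields L (lumen) and V (vessel) are negative inside their regions and that the unsigned distance fields P and D and the Feret diameter are provided; illustrative only.

    # Sketch of the normalized wall coordinates l, r, and w from distance fields; illustrative only.
    import numpy as np

    def normalized_wall_coords(L, V, P, D, d_max, eps=1e-9):
        """Return (l, r, w): longitudinal, radial, and relative wall thickness maps."""
        l = P / (P + D + eps)              # 0 at the proximal plane, 1 at the distal plane
        r = L / (L - V + eps)              # 0 at the lumen surface, 1 at the adventitia
        w = (L - V) / d_max                # absolute thickness normalized by max Feret diameter
        return l, r, w

    L = np.array([[0.5, 1.0]]); V = np.array([[-1.5, -1.0]])   # toy 1x2 example
    P = np.array([[2.0, 2.0]]); D = np.array([[2.0, 2.0]])
    print(normalized_wall_coords(L, V, P, D, d_max=4.0))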
The extent to which the aforementioned coordinates may or may not be used in the model depends in part on the amount of training data available. When training data are limited, several options are available. The relative longitudinal distance may be ignored by treating different sections through each plaque as if they came from the same statistical distribution. Plaque composition has been observed to vary along the longitudinal axis, with a more severe plaque appearance in the middle; however, this dimension may be collapsed rather than parameterizing the distributions by l(x). Similarly, the relative wall thickness may also be collapsed. It has been observed that certain analytes occur in the "shoulder" regions of plaques, where w(x) has intermediate values; however, this dimension too may be collapsed until sufficient training data are available.
As described above, the vessel wall composition model may be utilized as an initial assumption (e.g., as the prior P(A)). Fig. 7 depicts the normalized vessel wall coordinates of an exemplary vessel wall composition model. In the depicted model, l is the relative longitudinal distance along the vascular target from proximal to distal, calculated, for example, on the normalized interval [0, 1]. The longitudinal distance can be calculated using two fast-marching propagations starting at the proximal and distal planes to compute unsigned distance fields P and D, where l = P/(P + D). The l axis direction is ∇l. As depicted, r is the normalized radial distance, also calculated on the normalized interval [0, 1] from the luminal surface to the adventitial surface. Thus, r = L/(L + (−V)), where L is the lumen signed distance field (SDF) and V is the vessel SDF. The r axis direction is ∇r. Finally, t is the normalized tangential distance, which may be calculated on, for example, the normalized interval [−0.5, 0.5]. Notably, in an example embodiment there may be no meaningful origin for the entire wall, only for individual analyte blobs (thus, the origin of t may be at the centroid of each blob). The tangential distance is calculated along contour curves of l and r. The direction of the t axis is ∇r × ∇l.
Fig. 9 illustrates some complex vessel topologies that may be addressed using the techniques described herein. In particular, when CT or MR is processed in 3D, different branches can advantageously be analyzed separately, so that relationships between analyte blobs in separate branches are appropriately ignored. Thus, if a segmentation view (cross-sectional slice) contains more than one lumen, this can be handled by performing a watershed transform on r to split the wall into domains belonging to each lumen, which can then be considered/analyzed separately.
As described above, many of the coordinates and probability measurements described herein may be represented using normalized dimensions, thereby maintaining scale invariance, for example, between vessels of different sizes. Thus, under the assumption that the disease process is similar and proportional across vessels of different caliber, the proposed model can advantageously be independent of absolute vessel size.
In some embodiments, the model may be configured to characterize concentric versus eccentric plaques. Notably, a normalized wall thickness close to 1 anywhere can indicate a highly eccentric plaque. In further embodiments, inward versus outward plaque (remodeling) characterization may be implemented. Notably, histological information about this property is hindered by specimen deformation. Thus, in some embodiments, CT data and training data may be utilized to establish an algorithm for determining inward versus outward plaque characterization.
As described above, in example embodiments, non-imaging data, such as histological data, may be utilized as a training set for establishing the algorithms linking image features to biological properties/analytes. However, there are some differences between the data types that need to be resolved to ensure proper correlation. For example, the following difference between histology and imaging may affect proper correlation: carotid endarterectomy (CEA) leaves the adventitia and some media behind in the patient, whereas CT or MR image analysis presumes that the outer adventitial surface is found. (See, e.g., fig. 8, which depicts the margin between the removed plaque of a histological specimen and the outer vessel wall.) Notably, the scientific literature is uncertain as to whether calcification occurs in the adventitia. The following techniques may be employed to account for this difference. The histology may be dilated outward, for example based on the assumption that little analyte remains in the residual wall. Alternatively, the image segmentation may be eroded inward, e.g., based on knowledge of the typical or specific margin remaining; for example, an average margin may be utilized. In some embodiments, the average margin may be normalized as a percentage of the total diameter of the vessel. In further embodiments, histology may be used to mask the imaging (e.g., based on superposition with an alignment standard); in such embodiments, it may be desirable to apply one or more transformations to the histological data to achieve proper alignment. Finally, in some embodiments, the difference may be ignored (which is equivalent to uniformly scaling the removed plaque to the entire wall). Although this may introduce some small error, it is presumed that the remaining wall is thin compared to the plaque of a CEA patient.
There may also be longitudinal differences between the histological data (e.g., the training set) and the imaging data as represented by the vessel wall composition model. In an example embodiment, the longitudinal distance may be explicitly modeled/correlated. Thus, for example, histological section numbers (e.g., A-G) may be used to roughly determine the location within the resected portion of the plaque. However, this approach limits the analysis with respect to other sections that do not have corresponding histological data. Alternatively, therefore, in some embodiments, all histological sections may be considered as arising from the same distribution. In an example embodiment, some limited regularization may still be employed in the longitudinal direction.
As mentioned above, normalized wall thickness is, in a sense, an imperfect surrogate indicator of disease progression. In particular, thicker walls are assumed to be due to more severe disease, based for example on the assumption that the statistical relationships of the analytes change with more severe disease. The normalized wall thickness may be calculated by determining the absolute wall thickness T_a (in mm), e.g., as T_a = L + (−V), where L is the lumen SDF, V is the vessel SDF, and D_max is the maximum Feret diameter (in mm) of the vessel. The relative wall thickness T may then be calculated as T = T_a/D_max, e.g., on the interval [0, 1], where 1 indicates the thickest possible wall with a vanishingly small lumen that is completely eccentric. In an example embodiment, the probabilities may be adjusted based on wall thickness, e.g., such that the distribution of analyte blobs depends on wall thickness. This can advantageously model differences in analyte composition during disease progression.
FIG. 10 depicts an exemplary representation of an analyte blob distributed in normalized vessel wall coordinates. Specifically, the origin of t is located at the centroid of the blob. The (r, t) coordinates form a random vector in which location/shape is represented fully by the joint distribution of points. This can be simplified by considering the marginal distributions (since the radial and tangential shape characteristics appear to be relatively independent). The marginal distributions can be calculated as projections along r and t (the l and T coordinates are also contemplated). Notably, the marginal distribution in the radial direction can advantageously represent/characterize plaque growth in concentric layers (e.g., the media, adventitia, and intima). Similarly, the tangential marginal distribution may advantageously represent growth factors that may be indicative of the stage of disease. In an example embodiment, analyte blob descriptors may be calculated based on the marginal distributions. For example, one may compute low-order statistics of the marginal distributions (or use histograms, or fit parametric probability distribution functions).
In an example embodiment, the following analyte blob descriptors may be used, for example, to capture the location, shape, or other structural characteristics of individual blobs:
- Location in normalized vessel coordinates
  - Primarily with respect to r
  - For example, to facilitate distinguishing shallow from deep calcification
  - Ignoring the t direction [optionally modeling the l direction]
- Extent in normalized vessel coordinates
  - The word "size" is intentionally avoided, since it implies an absolute measurement, whereas extent is a normalized value
- Asymmetry, used to represent the degree of asymmetry in the distribution
  - Its clinical significance is not yet clear, but it can help regularize shapes against implausibly irregular shapes
- Alignment, used to represent how well the blob follows parallel tissue layers
  - Analyte blobs appear to remain well within radial layers (contours of r), so this helps select similarly shaped regions during image processing
- Wall thickness where the blob is located
  - Thick (e.g., severe) plaques are assumed to have different statistics than thin plaques
In some embodiments, pairwise blob descriptors may also be utilized. For example:
- Relative location
  - For example, whether fibrosis lies on the luminal side of LRNC
- Relative extent
  - For example, how thick/wide the fibrosis is relative to the LRNC
- Wrap-around
  - How close one blob's marginal projection comes to the middle of the other's
  - For example, the napkin-ring sign, or fibrosis surrounding LRNC
- Relative wall thickness
  - Used for the degree of "shoulder" (the shoulder being relatively less thick than the central plaque body)
Note that higher-order interactions (e.g., among three blobs, or between two blobs and another feature) may also be implemented; however, diminishing returns and training limitations should be considered.
The following is an example quantification of blob descriptors:
individual blob descriptors
Paired blob descriptors
Notably, a set of descriptors (e.g., 8-12 descriptors) forms a finite-dimensional shape space in which the blobs reside. The distribution of a group of blobs can then be considered as a distribution in this space. Fig. 11 depicts an exemplary distribution of blob descriptors. In an example embodiment, the distribution of blob descriptors may be computed over the entire training set. In some embodiments, low-order statistics (assuming independence) may be used for individual blob descriptors, e.g., for location, E[α_r] and Var[α_r]. In other embodiments, a multidimensional Gaussian (mean vector + covariance matrix) may be used to model the descriptors (e.g., where independence is not assumed). In further embodiments, if a distribution is non-normal, it may be modeled with density estimation techniques.
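The sketch below is a hypothetical example of computing a small descriptor vector for one blob from its pixel coordinates in (r, t) wall coordinates, and of fitting a multivariate Gaussian across a training set of such vectors. The particular descriptor choices (marginal means/standard deviations, a third-moment asymmetry term, local wall thickness) are assumptions, not the disclosure's exact set.

    # Hypothetical blob descriptors and a multivariate Gaussian fit; illustrative only.
    import numpy as np

    def blob_descriptor(r_vals: np.ndarray, t_vals: np.ndarray, wall_thk: np.ndarray) -> np.ndarray:
        """Low-order statistics of the marginal distributions plus local wall thickness."""
        t_centered = t_vals - t_vals.mean()                 # t has no natural origin; use blob centroid
        return np.array([r_vals.mean(), r_vals.std(),       # radial location / extent
                         t_centered.std(),                  # tangential extent
                         ((r_vals - r_vals.mean()) ** 3).mean(),  # radial asymmetry (3rd moment)
                         wall_thk.mean()])                  # normalized wall thickness at the blob

    def fit_gaussian(descriptors: np.ndarray):
        """descriptors: (n_blobs, n_features) -> mean vector and covariance matrix."""
        return descriptors.mean(axis=0), np.cov(descriptors, rowvar=False)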
As described above, the number of blobs per cross-section (or per class) can also be modeled, e.g., η without regard to analyte class, and η_i counting the number within each analyte class. Fig. 14 depicts the frequency distribution of the total number of blobs per histological slide, with a Poisson distribution fit shown as an overlay. Note that the analysis of fig. 14 depicts the number N of blobs per cross-section irrespective of analyte class (the number of blobs per analyte class is denoted by B).
Summarizing the above, in an example embodiment, the entire vessel wall composition model may contain the following:
Per-pixel analyte prior pmf
– P(A(x) = a_i) = ρ_i
Individual blob descriptors
–B1=(αrrtrtrtt)
– B_1 ~ N(μ_1, Σ_1)
Paired blob descriptors
-C2=(αrrttrrttrrttTT)
– C_2 ~ N(μ_2, Σ_2)
Number of blobs
– η ~ Poisson(λ_η)
where:
P(A(x) = a_l) = ρ_l
f(A_b) = f(B_b^l)
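For illustration, a single draw from a prior of this form (blob count Poisson, blob descriptors multivariate normal) might look like the following sketch; all parameter values are made up and do not come from the disclosure.

    # Illustrative draw from the prior model summarized above; parameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    lambda_eta = 2.5                                      # hypothetical Poisson rate for blob count
    mu_1 = np.array([0.5, 0.15, 0.20, 0.0, 0.4])          # hypothetical descriptor mean
    sigma_1 = np.diag([0.05, 0.01, 0.02, 0.001, 0.02])    # hypothetical descriptor covariance

    n_blobs = rng.poisson(lambda_eta)                     # eta ~ Poisson(lambda_eta)
    blobs = rng.multivariate_normal(mu_1, sigma_1, size=n_blobs)  # B_1 ~ N(mu_1, Sigma_1)
    print(n_blobs, blobs.shape)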
As described above, the imaging model may be used as the likelihood in the Bayesian analysis model (e.g., P(I|A)). A maximum likelihood estimate may then be determined. In an example embodiment, this may be done considering each pixel individually (e.g., without taking into account the prior probability of structure in the model). The estimated analyte map is then generally only as smooth as the image is smooth (which is why pre-smoothing is typically not performed). Independent pixel-by-pixel analysis may be performed, for example, at least up to the point at which the scanner PSF is considered. The imaging model is used to account for imperfect imaging data. For example, imaging of small plaque components adds independent noise on top of the pixel values, and partial volume effects and the scanner PSF are well known to affect small objects. Thus, given a model (e.g., a level set representation of the analyte regions), simulating CT is simple and fast via Gaussian blurring with the PSF. The imaging model described herein may also be applied to determine (or estimate) the distribution of true (unblurred) densities of the different analytes. Notably, these cannot come from typical imaging studies, since those have blurred image intensities. In some embodiments, a wide variance may be used to represent this uncertainty. Alternatively, the distribution parameters may be optimized on the training set, but the objective function would have to be based on downstream readings (of the analyte regions), e.g., unless aligned histological data are available. Fig. 12 depicts an exemplary model for the imaging data (e.g., relating the hidden (categorical) states A(x) to the observed (continuous) states I(x)), incorporating both random (e.g., the analyte density distribution H(A(x))) and deterministic (e.g., scanner blur G(x)) factors. The parameters of H (the proportion of each analyte and the HU mean/variance) for N different analyte classes, assuming normal distributions, are θ = (τ_1, μ_1, σ_1, ..., τ_N, μ_N, σ_N); θ is patient-specific and is estimated in an expectation-maximization (EM) manner, e.g., with the analyte labels as latent variables and the image as the observed data.
E-step: determine the membership probabilities given the current parameters.
M-step: maximize the likelihood of the parameters given the membership probabilities.
A minimal EM sketch follows.
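The sketch below is a minimal intensity-only EM for θ = (τ_i, μ_i, σ_i), ignoring the scanner PSF and the spatial (blob) prior discussed below; it is illustrative, not the disclosure's implementation.

    # Minimal EM sketch for per-class intensity parameters; ignores PSF and spatial prior.
    import numpy as np

    def em_intensity(I, n_classes=3, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        I = I.ravel().astype(float)
        tau = np.full(n_classes, 1.0 / n_classes)
        mu = rng.choice(I, n_classes)
        sigma = np.full(n_classes, I.std() + 1e-6)
        for _ in range(n_iter):
            # E-step: membership probabilities given current parameters.
            lik = np.stack([tau[k] * np.exp(-0.5 * ((I - mu[k]) / sigma[k]) ** 2) / sigma[k]
                            for k in range(n_classes)])
            w = lik / (lik.sum(axis=0, keepdims=True) + 1e-12)
            # M-step: maximize the likelihood of the parameters given the memberships.
            tau = w.mean(axis=1)
            mu = (w * I).sum(axis=1) / w.sum(axis=1)
            sigma = np.sqrt((w * (I - mu[:, None]) ** 2).sum(axis=1) / w.sum(axis=1)) + 1e-6
        return tau, mu, sigma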
FIG. 13 depicts a diagram of an example Markov model/Viterbi algorithm for relating observed states to hidden states in the imaging model. In particular, the diagram depicts the observed states (gray) (observed image intensities, I(x)) and the hidden states (white) (pure analyte intensities, H(A(x))), which can be modeled with an empirical histogram or with a Gaussian or boxcar probability distribution function. The PSF of the imaging system is modeled as a Gaussian G(x). Thus,
I(x)=G(x)*H(A(x))
It should be noted that Viterbi-like algorithms can be applied here, but with the convolution instead modeled as a Gaussian or uniform emission probability H.
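A forward simulation of this imaging model, I(x) = G(x) * H(A(x)), is sketched below for illustration: a pure-analyte intensity is drawn per pixel from its class distribution and then blurred with a Gaussian PSF. The class means/variances and PSF width are hypothetical.

    # Sketch of the forward imaging model: random analyte density H(A(x)) then scanner blur G(x).
    import numpy as np
    from scipy import ndimage

    def simulate_image(label_map: np.ndarray, class_mu, class_sigma, psf_sigma_px=1.2, seed=0):
        rng = np.random.default_rng(seed)
        mu = np.asarray(class_mu)[label_map]            # per-pixel mean of H(A(x))
        sd = np.asarray(class_sigma)[label_map]
        pure = rng.normal(mu, sd)                       # random analyte-density component
        return ndimage.gaussian_filter(pure, sigma=psf_sigma_px)  # deterministic scanner blur G

    labels = np.zeros((64, 64), dtype=int); labels[20:40, 20:40] = 1   # toy analyte blob
    img = simulate_image(labels, class_mu=[30.0, 90.0], class_sigma=[5.0, 10.0])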
As described above, part of the inference procedure is based on expectation maximization (EM). In a typical application of EM, data points are modeled as belonging to one of several classes, which is unknown. Each data point has a feature vector, and for each class this feature vector can be modeled with a parametric distribution, such as a multidimensional Gaussian represented by a mean vector and covariance matrix. In the context of the model presented herein, a straightforward EM embodiment would work as follows:
wherein G is a Gaussian function,
wherein δ is the Kronecker delta,
(membership probability)
The main problem with this simple model is that it does not encode any higher-order structure of the pixels; there is no prior probability associated with more realistic pixel arrangements. Only τ determines the proportions of the analyte classes. Thus, the τ variables are the point at which the blob prior probability model can be interposed, particularly in the step that updates the membership probabilities.
Thus, a modified Bayesian inference procedure can be applied with a much richer Bayesian prior. In the basic EM embodiment, there is no true prior distribution: the variable τ represents the a priori relative proportion of each class, but even this is unspecified and is estimated within the inference procedure, so the basic EM model carries no prior belief about the class distribution. In the present model, the prior is represented by the multi-scale analyte model, and τ becomes a function of position (and other variables) rather than just a global proportion.
The membership probability function is defined as follows:
The inference algorithm is as follows. At each iteration, the membership probability map is initialized to zero, so that the probabilities of all classes are zero. Contributions to the membership probability map are then accumulated over all possible model configurations, as follows:
Finally, the probability vectors at each pixel of the membership probability map may be normalized to restore the sum-to-one property. Advantageously, the model configurations can be iterated over. This is done by considering values of N in turn from 0 up to a relatively low value, e.g., 9, beyond which extremely few sections are observed to have that many blobs. For each value of N, different configurations of putative blobs are examined: the putative blobs may be thresholded down to a small number (N) based on the individual blob probabilities, and then all assignments of labels to the N blobs are considered. Thus, all of the most likely blob configurations can be considered simultaneously, with each model weighted by its prior probability. This procedure is clearly an approximate inference scheme, since the full space of multi-scale model configurations may not be considered. However, it can be assumed that a good approximation is achieved by considering the most probable configurations (in terms of both N and the blobs). The procedure also assumes that a weighted average over the most likely configurations provides a good estimate at each individual pixel. Another alternative is to perform a constrained search over model configurations and select the highest-likelihood model as the MAP (maximum a posteriori) estimate. A schematic sketch of this loop follows.
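The sketch below is only a schematic rendering of that loop under simplifying assumptions: it enumerates label assignments for the N most probable putative blobs and accumulates membership probabilities weighted by a user-supplied configuration prior. The pixel likelihoods, blob masks, and config_prior callable are all stand-ins, not the disclosure's actual distributions.

    # Schematic approximate-inference loop over blob configurations; priors/likelihoods are stand-ins.
    import itertools
    import numpy as np

    def approximate_inference(pixel_lik, blob_masks, config_prior, max_n=3):
        """pixel_lik: (K, H, W) likelihoods; blob_masks: list of (H, W) boolean putative blobs."""
        K = pixel_lik.shape[0]
        membership = np.zeros_like(pixel_lik)
        for n in range(0, min(max_n, len(blob_masks)) + 1):
            for labels in itertools.product(range(K), repeat=n):       # label assignment per blob
                config_map = np.zeros(pixel_lik.shape[1:], dtype=int)  # background class 0
                for mask, lab in zip(blob_masks[:n], labels):
                    config_map[mask] = lab
                prior = config_prior(n, labels)                        # e.g., Poisson(n) * blob pdfs
                onehot = np.eye(K)[config_map].transpose(2, 0, 1)
                membership += prior * onehot * pixel_lik
        return membership / (membership.sum(axis=0, keepdims=True) + 1e-12)  # renormalize per pixel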
Additional exemplary statistical models (e.g., for the posterior P(A|I)) are also described herein. In CT angiography, the following information is available:
- Intensity
  - CT Hounsfield units or MR intensity
  - Possibly other imaging features
- Position relative to anatomy
  - Where within the plaque the pixel lies
- Neighboring pixels
  - For example, for smoothing contours by means of level sets
The posterior probability can be calculated as:
P(A|I)∝P(I|A)·P(A)
Thus, the following image information may influence the analyte probability A_i(x):
I(x), the observed image intensity (possibly a vector);
T(x), the observed relative wall thickness from the image segmentation;
F(x), CT image features;
S(x), features of the vessel wall shape (e.g., luminal bulge).
In some embodiments, a Metropolis-Hastings-like method may be utilized. In other embodiments, a maximum a posteriori method may be applied.
The following are example algorithmic possibilities for the statistical analysis model. In some embodiments, the model may utilize belief propagation (a.k.a. max-sum, max-product, or sum-product message passing). Thus, for example, a Viterbi (HMM)-type approach may be utilized, e.g., where the hidden states are the analyte assignments A and the observed states are the image intensities I. This approach may advantageously find the MAP estimate, which may serve as the selection mechanism for P(A|I). In some embodiments, a soft-output Viterbi algorithm (SOVA) may be utilized; note that the reliability of each decision may be indicated by the difference between the selected (survivor) path and the discarded path, and may thus indicate the reliability of each pixel's analyte classification. In further example embodiments, a forward/backward Baum-Welch (HMM) method may be utilized, e.g., computing the most likely state at any point in time, though not the most likely sequence (cf. Viterbi).
Another possible technique is the Metropolis-Hastings (Markov chain Monte Carlo, MCMC) method, for example, in which A is resampled and weighted by the likelihood and the prior. In some embodiments, a simple MRF version may be used for sampling. Note that it may be particularly advantageous to sample the posterior directly. In an example embodiment, a per-pixel histogram of analyte classes may be accumulated.
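A toy Metropolis-Hastings sketch of that idea follows: single-pixel label proposals are accepted or rejected against a user-supplied log posterior, and a per-pixel histogram of classes is accumulated. The log_post callable is a placeholder for likelihood × prior and is not specified by the disclosure.

    # Toy Metropolis-Hastings over a label map A, accumulating a per-pixel class histogram.
    import numpy as np

    def mh_sample_labels(log_post, shape, n_classes, n_sweeps=200, seed=0):
        """log_post(A) -> scalar log posterior (likelihood * prior) for a full label map A."""
        rng = np.random.default_rng(seed)
        A = rng.integers(0, n_classes, size=shape)
        hist = np.zeros((n_classes,) + shape)
        idx = tuple(np.indices(shape))
        lp = log_post(A)
        for _ in range(n_sweeps):
            x = tuple(rng.integers(0, s) for s in shape)        # propose changing one pixel
            proposal = A.copy()
            proposal[x] = rng.integers(0, n_classes)
            lp_new = log_post(proposal)
            if np.log(rng.random()) < lp_new - lp:              # Metropolis accept/reject
                A, lp = proposal, lp_new
            hist[(A,) + idx] += 1                               # accumulate per-pixel class counts
        return hist / n_sweeps                                  # empirical per-pixel posterior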
Other algorithmic possibilities include Gibbs samplers, variational Bayes (similar to EM), mean-field approximations, Kalman filters, or other techniques.
As described above, in some embodiments, an expectation-maximization (EM) posterior method may be utilized. Under this approach, the observed data X are the imaging values, the unknown parameters θ pertain to the analyte map (but do not contain the analyte probabilities), and the latent variables Z are the analyte probability vectors. A key feature of this approach is that it iterates between estimating class membership (Z) and model parameters (θ), since they depend on each other. However, since the analyte map factors out the analyte probabilities, the method can be modified so that the current class memberships do not have to influence the model parameters (as these are learned in the training step). Thus, EM essentially learns the model parameters as it iterates over the current data. Advantageously, an exemplary embodiment of the EM method iteratively computes the maximum likelihood, but assuming a flat prior.
Techniques for representing longitudinal covariance are also provided herein. Due to the wide spacing of histological sections (e.g., 4 mm), the sampling may not faithfully capture longitudinal variation of the analytes. However, 3D image analysis is typically performed and, presumably, there is some true longitudinal covariance; the problem is that the histology does not typically provide information about it. Nonetheless, the exemplary statistical models disclosed herein may reflect slow variation in the longitudinal direction.
In some embodiments, a Markov model/chain may be applied. Fig. 15 depicts an exemplary implementation of a 1D Markov chain for text/DNA. Conventionally, Markov chains applied to images in MRFs are kept as low-order as possible, relying on the conditional independence (Markov) property. However, higher-order chains may be advantageous; otherwise the model may be of too low an order to be of value. This is demonstrated by 1D sampling of an exemplary Markov chain as applied to text:
Uniform random sampling output:
–earryjnv anr jakroyvnbqkrxtgashqtzifzstqaqwgktlfgidmxxaxmmhzmgbya mjgxnlyattvc rwpsszwfhimovkvgknlgddou nmytnxpvdescbg k syfdhwqdrj jmcovoyodzkcofmlycehpcqpuflje xkcykcwbdaifculiluyqerxfwlmpvtlyqkv
0 th order markov chain output:
–ooyusdii eltgotoroo tih ohnnattti gyagditghreay nm roefnnasos r naa euuecocrrfca ayas el s yba anoropnn laeo piileo hssiod idlif beeghec ebnnioouhuehinely neiis cnitcwasohs ooglpyocp h trog 1
1 st order markov chain output:
–icke inginatenc blof ade and jalorghe y at helmin by hem owery fa st sin r d n cke s t w anks hinioro e orin en s ar whes ore jot j whede chrve blan ted sesourethegebe inaberens s ichath fle watt o
2 nd order markov chain output:
–he ton th a s my caroodif flows an the er ity thayertione wil ha m othenre re creara quichow mushing whe so mosing bloack abeenem used she sighembs inglis day p wer wharon the graiddid wor thad k
3 rd order markov chain output:
–es in angull o shoppinjust stees ther a kercourats allech is hote temal liked be weavy because in coy mrs hand room him rolio und ceran in that he mound a dishine when what to bitcho way forgot p
Fig. 16 depicts an example first-order Markov chain probability table for text. Note that the size of such a table grows exponentially with the order: if D is the order of the Markov chain and N is the number of letters, the table size is N^D.
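A small order-D character Markov chain of the kind used to generate the sample text above is sketched below; the training string is an arbitrary placeholder, and the learned table has at most N^D keys.

    # Order-D character Markov chain: train a transition table and sample from it.
    from collections import defaultdict
    import random

    def train_markov(text: str, order: int):
        table = defaultdict(lambda: defaultdict(int))
        for i in range(len(text) - order):
            table[text[i:i + order]][text[i + order]] += 1    # context -> next-character counts
        return table

    def sample_markov(table, order: int, length: int, seed=0) -> str:
        rng = random.Random(seed)
        out = rng.choice(list(table))                         # start from a random context
        for _ in range(length):
            counts = table.get(out[-order:]) if order else table[""]
            if not counts:
                break
            chars, weights = zip(*counts.items())
            out += rng.choices(chars, weights=weights)[0]
        return out

    text = "the quick brown fox jumps over the lazy dog " * 20
    print(sample_markov(train_markov(text, 2), 2, 120))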
Thus, higher orders lead to dimensionality problems. Advantageously, histological samples have very high resolution. However, since the histological samples are not statistically independent, this may lead to overfitting, as described in more detail below. In general, the more conditional dependence is modeled, the higher the predictive power of the model.
In an example embodiment, a 2D Markov random field (MRF) may be used over pixel values instead of a 1D sequence as for letters. Fig. 17 depicts the conditional dependence of a given pixel (black) on its neighboring pixels (gray). In an example embodiment, the cliques may take advantage of symmetry to reduce the number of dependencies by half. In some embodiments, the value of a pixel may be a simple image intensity or may be a probability value for a classification problem. The use of typical MRFs is problematic, however: conventional MRFs are almost always limited to nearest-neighbor pixels providing the conditional dependence, which greatly reduces the specificity of the represented probability space, and they are generally used only for very generic black/white blob segmentation/filtering of objects, i.e., for very short-range dependencies. Moreover, because pixels are highly discretized, a blob that misses one pixel and falls into the next can completely change the probability distribution; the true image structure is much more continuous than what is usually addressed with MRFs.
For this reason, the systems and methods of the present disclosure may advantageously utilize an inference procedure, e.g., a Bayesian-type rule of posterior ∝ likelihood × prior (P(A|I) ∝ P(I|A) · P(A)). By way of analogy, the inference procedure implemented by the systems and methods of the present disclosure is somewhat like attempting to OCR a crossword puzzle from a noisy scan: knowledge (even imperfect knowledge) of a few squares helps inform the unknown squares, and considering both the vertical and horizontal directions simultaneously is more effective still. In an example embodiment, the inference procedure may be heuristic: for example, initialization may be done with an uninformative prior, and the simpler problems may be resolved first, providing clues about the more difficult problems to be resolved later. Thus, relatively easy detection of a biological property such as concentrated calcium may inform the presence of analytes that are harder to detect, such as lipids. Each step of the inference procedure may narrow the probability distributions of the unresolved pixels.
As described above, a higher-order Markov chain is preferable in order to obtain usable data. A disadvantage of using higher-order Markov methods is that there may not be enough data to inform the inference procedure. In example embodiments, this problem may be addressed by using a density estimation method such as a Parzen window, or by using kriging techniques.
To form the inference procedure, an unconditional prior probability of each analyte can be used for initialization, and then the strongest evidence is used to begin narrowing the probabilities. For example, in some embodiments, an uncertainty width may be associated with each analyte probability estimate. In other embodiments, a value near 1/N may represent such uncertainty.
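As a purely illustrative sketch of this inference idea (not a description of the actual implementation), the following Python fragment initializes analyte probabilities near 1/N and narrows them with successive Bayesian updates; all likelihood values are invented for illustration.

```python
# Illustrative sketch only: posterior ∝ likelihood × prior, with analyte
# probabilities initialized near 1/N and narrowed as evidence arrives.
import numpy as np

analytes = ["CALC", "LRNC", "IPH", "FIBROUS"]
prior = np.full(len(analytes), 1.0 / len(analytes))   # uninformative prior

def update(prior, likelihood):
    """One Bayesian step: posterior_i ∝ likelihood_i * prior_i."""
    post = likelihood * prior
    return post / post.sum()

# Easy evidence first (e.g., a bright voxel strongly suggests calcium)...
post = update(prior, np.array([0.90, 0.03, 0.03, 0.04]))
# ...which in turn re-weights the prior used for harder analytes nearby.
post = update(post, np.array([0.80, 0.05, 0.05, 0.10]))
print(dict(zip(analytes, np.round(post, 3))))
```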
It should be noted that the term "markov" is used loosely herein, as the proposed markov implementation is not memoryless but rather explicitly attempts to model long-range (spatial) dependencies.
Because CT resolution is low compared to histology and plaque anatomy, it may be preferable in some embodiments to utilize continuous-space (time) Markov models rather than discrete-space (time) models. This pairs well with the level set representation of the probability map, since that representation naturally supports sub-pixel interpolation. Discrete analyte states would make the model a discrete spatial model; however, if the model represents a continuous probability rather than the presence/absence of an analyte, it becomes a continuous spatial model.
Turning to lung-based applications, Table 5 below depicts exemplary biological properties/analytes that may be utilized within the hierarchical analysis framework for such applications.
TABLE 5. Biological objective measurable variables supported by lung-based applications
In particular, the system may be configured to detect lung lesions. Thus, the exemplary system may be configured for whole-lung segmentation. In some embodiments, this may involve solving the juxtapleural lesion problem using minimum curvature evolution. In some embodiments, the system may perform lung component analysis (blood vessels, fissures, bronchi, lesions, etc.). Advantageously, a Hessian filter may be utilized to facilitate the lung component analysis. In some embodiments, the lung component analysis may further comprise assessing pleural involvement, e.g., depending on fissure geometry. In further embodiments, attachment to anatomical structures is also contemplated. In addition to lung component analysis, separate analysis of ground-glass and solid states may also be applied. This may include determining geometric features such as volume, diameter, and sphericity, image features such as density and mass, and fractal analysis.
Fractal analysis can be used to infer growth patterns across scales. To perform fractal analysis on very small regions of interest, the method adaptively modifies the support of the convolution kernels to limit them to the region of interest (i.e., the pulmonary nodule). Intersecting vessels/bronchi as well as non-lesion features may be masked out for the purposes of fractal analysis. This is done by applying an IIR Gaussian filter over the masked local neighborhood and normalizing with an IIR-blurred binary mask. In some embodiments, the fractal analysis may further include determining porosity (variance relative to the local mean). This may be applied to lesions or to sub-portions of lung lesions. In an example embodiment, an IIR Gaussian filter or a circular neighborhood may be applied. In some embodiments, the variance may be calculated using IIR filtering. An average of the local variance (AVL) may also be calculated, e.g., as applied to lung lesions. Likewise, the variance of the local variance may be calculated.
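By way of illustration only, the following Python sketch computes masked local mean and variance by normalized convolution; a standard Gaussian filter stands in for the IIR Gaussian described above, and the array sizes and region of interest are assumptions.

```python
# Sketch of masked local statistics via normalized convolution, assuming a
# Gaussian filter as a stand-in for the IIR Gaussian described above.
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_local_stats(image, mask, sigma=2.0):
    """Local mean/variance inside `mask`, ignoring masked-out vessels/bronchi."""
    m = mask.astype(float)
    blurred_mask = gaussian_filter(m, sigma)
    eps = 1e-8
    mean = gaussian_filter(image * m, sigma) / (blurred_mask + eps)
    mean_sq = gaussian_filter((image ** 2) * m, sigma) / (blurred_mask + eps)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)   # local variance ("porosity")
    return mean, var

img = np.random.rand(64, 64)
roi = np.zeros((64, 64), bool)
roi[16:48, 16:48] = True                            # nodule region of interest
mean, var = masked_local_stats(img, roi)
print("average of local variance (AVL):", var[roi].mean())
print("variance of local variance:", var[roi].var())
```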
In an example embodiment, both lesion structure and composition may be calculated. Advantageously, calculating the lesion structure may utilize the full volume of the thin section, thereby improving the calculation of the size measurement variation. Measurements such as sub-solid and Ground Glass Opacity (GGO) volumes can also be determined as part of assessing lesion structure. Turning to lesion composition, tissue characteristics such as consolidation, trauma, proximity, and perfusion may be calculated, thereby, for example, reducing the false positive rate relative to conventional analysis.
Referring now to fig. 18, a further exemplary hierarchical analysis framework 1800 of the system of the present disclosure is depicted. FIG. 18 may be understood as a more detailed illustration of FIG. 1, particularly with respect to the exemplary intermediate processing layers of the hierarchical inference system. Advantageously, the hierarchical inference still flows from the imaging data 1810 to the underlying biological information 1820 to the clinical disease level 1830. Notably, however, the framework 1800 contains multiple levels of data points for processing imaging data to determine biological properties/analytes. At the preprocessing stage 1812, physical parameters, registration transformations, and region segmentations may be determined. This preprocessed imaging information can then be used to extract imaging features 1814 at the next level of data points, such as intensity features, shape, texture, and temporal characteristics. The extracted image features may then be utilized at stage 1816 to fit one or more biological models to the imaged anatomy. Example models may include Bayesian/Markov network lesion substructures, fractal growth models, or other models as described herein. The biological model may advantageously serve as a bridge for correlating imaging features to the underlying biological properties/analytes at stage 1822. Exemplary biological properties/analytes include anatomical structure, tissue composition, biological function, correlates of gene expression, and the like. Finally, at stage 1832, the biological properties/analytes may be utilized to determine clinical findings associated with a pathology, including, for example, disease subtype, prognosis, decision support, and the like.
Fig. 19 is an example application for guiding vascular therapy by phenotyping, using the Stary plaque typing system adopted by the AHA as a basis, wherein the type determined in vivo is shown as a color overlay. The left panel shows an example of labeling according to the likely dynamic behavior of plaque lesions based on their physical characteristics, and the right panel shows an example of using the classification results to guide patient treatment. An example class map is ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII'] resulting in classes = [subclinical, labile, stable]. The method is not tied to Stary; for example, the Virmani system ["calcified nodule", "CTO", "FA", "FCP", "healed plaque rupture", "PIT", "IPH", "rupture", "TCFA", "ULC"] may be used with class map = [stable, unstable], and other typing systems may achieve similarly high performance. In example embodiments, the systems and methods of the present disclosure may incorporate disparate typing systems, may change the class map, or make other variations. For the FFR phenotype, a value such as normal or abnormal may be used, and/or a continuous quantity may be used to facilitate comparison with, for example, a physical FFR measurement.
Fig. 20 is an example for a different disease, in this case lung cancer. In this example, the subtype of the mass is determined so as to guide the treatment most likely to benefit the patient based on the apparent phenotype.
CNNs are expected to perform better than classification from read-out vectors of measurements because they contain filters that extract spatial context not included in (only) analyte area measurements. Although the amount of training data is reduced, it may still be practical to use CNNs because:
1) There are relatively few classes, corresponding to significantly different treatment alternatives (rather than the fully granular classes possible in research assays requiring ex vivo tissue), e.g., three phenotypes for the classification problem or three risk levels for the outcome prediction/risk stratification problem, and thus the problem is often easier.
2) The analyte regions are processed as pseudo-colored regions, for example by level sets or other classes of algorithms, so that a substantial portion of the image interpretation is performed by generating the segmentation and presenting a simplified but considerably enriched data set to the classifier. The measurement pipeline stages reduce the dimensionality of the data (reducing the complexity of the problem the CNN must solve) while also providing verifiable intermediate values that can increase confidence in the overall pipeline.
3) Reformatting the data using the normalized coordinate system removes noise variations due to variables that have no substantial effect on classification (e.g., blood vessel size in the plaque phenotyping example).
To examine this idea, a pipeline consisting of three stages was established:
1) Semantic segmentation to identify which regions of biomass fall into certain classes
2) Spatial expansion for converting vein/artery cross section into rectangular shape, and
3) A trained CNN that is used to read the annotated rectangle and identify to which class (stable or unstable) it belongs.
Without loss of generality, the example systems and methods described herein may apply spatial unfolding (e.g., training and testing CNNs with spatial unfolding (unfolded data sets) and without it (doughnut-shaped data sets)). Unfolding was observed to improve validation accuracy.
Semantic segmentation and spatial expansion:
First, the image volume is preprocessed. This may involve target initialization, normalization, and other pre-processing such as deblurring or restoration to form a region of interest containing the physiological target to be phenotyped. The region is a volume composed of cross-sections through the volume. The body part is determined automatically or provided explicitly by the user. Targets in body parts that are tubular in nature are accompanied by a centerline. The centerline may have branches, when present; branches may be marked automatically or by the user. A generalization of the centerline concept may represent anatomy that is not tubular but nevertheless benefits from some structural directionality, such as the region of a tumor. In any case, the centroid of each cross-section in the volume is determined. For tubular structures, the centroid will be the center of the channel, e.g., the lumen of a blood vessel. For lesions, the centroid will be the center of mass of the tumor.
FIG. 21 illustrates an exemplary image preprocessing step, in which case deblurring or restoration uses a patient-specific point spread determination algorithm to mitigate artifacts or image limitations caused by the image formation process that may reduce the ability to determine the characteristics of the predicted phenotype. The figure shows a portion of the analysis applied according to the radiological analysis of plaques by CT. Shown here is a deblurred or recovered image that is the result of iteratively fitting a physical model of the scanner point spread function with regularized assumptions about true latent densities of different regions of the image. This figure is included to demonstrate that various image processing operations may be performed to aid in being able to perform quantitative steps, and in no way indicates that this method is necessary for a particular invention in the present disclosure, but is instead an example of a step that may be taken to improve overall performance.
The (optionally deblurred or restored) image is represented in a Cartesian dataset where x represents the distance from the centroid, y represents the rotation θ, and z represents the cross-section. Each branch or region forms one such Cartesian set. When multiple sets are used, "null" values are used for the overlap regions; that is, each physical voxel is represented only once across the whole set, in such a way that the sets fit together geometrically. Each dataset is paired with another dataset having sub-regions labeled by objectively verifiable tissue composition (see, e.g., fig. 36). Example labels for vascular tissue may be lumen, calcification, LRNC, etc. Example labels for lesions may be necrotic, neovascularized, etc. These labels may be objectively validated, for example, by histology (see, e.g., fig. 37). The paired datasets are used as input to the training step to build the convolutional neural network. Two levels of analysis are supported: one on individual cross-sections, optionally with the output varying continuously across adjacent cross-sections, and a second at the volumetric level (where an individual cross-section can be considered a still frame and traversal of the vessel tree can be considered movie-like).
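By way of illustration only, the following Python sketch resamples a single cross-section onto the (rotation θ, normalized distance from centroid) rectangle described above; the centroid and per-angle lumen/outer-wall radii are assumed inputs that would normally come from the segmentation.

```python
# Sketch of "unfolding" one cross-section onto a rectangle whose x axis is
# rotation (theta) and whose y axis is normalized distance from the centroid.
import numpy as np
from scipy.ndimage import map_coordinates

def unfold_cross_section(img, centroid, r_lumen, r_outer, out_w=400, out_h=200):
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0.0, 1.0, out_h)              # 0 = lumen, 1 = outer wall
    ang = np.linspace(0.0, 2.0 * np.pi, len(r_lumen), endpoint=False)
    rl = np.interp(thetas, ang, r_lumen)              # lumen radius per angle
    ro = np.interp(thetas, ang, r_outer)              # outer-wall radius per angle
    rr = rl[None, :] + radii[:, None] * (ro - rl)[None, :]   # physical radius per (row, col)
    rows = centroid[0] + rr * np.sin(thetas)[None, :]
    cols = centroid[1] + rr * np.cos(thetas)[None, :]
    return map_coordinates(img, [rows, cols], order=1, mode="nearest")

donut = np.random.rand(280, 280)                       # stand-in for one cross-section
unfolded = unfold_cross_section(donut, centroid=(140, 140),
                                r_lumen=np.full(360, 30.0), r_outer=np.full(360, 100.0))
print(unfolded.shape)   # (200, 400): rows = normalized radius, cols = theta
```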
Exemplary CNN design:
AlexNet is a CNN that competed in the ImageNet Large Scale Visual Recognition Challenge in 2012, where it achieved a top-5 error of 15.3%. AlexNet was designed by Alex Krizhevsky with Geoffrey Hinton and Ilya Sutskever, then at the University of Toronto. AlexNet was trained from scratch and then used to classify an independent set of images (unused in the training and validation steps during network training). For the unfolded data, an AlexNet-type network with 400×200 pixel inputs was used, and the doughnut-shaped network was AlexNet-type with 280×280 pixel inputs (approximately the same resolution but a different aspect ratio). All of the convolutional filter values were initialized with weights extracted from AlexNet trained on the ImageNet dataset. Although ImageNet is a natural-image dataset, this is used only as an effective method of weight initialization; once training begins, all of the weights are adjusted to better fit the new task. Most of the training program was taken directly from the open-source AlexNet implementation, but some adjustments were needed. Specifically, for both the AlexNet doughnut-shaped network and the AlexNet unfolded network, the base learning rate was reduced to 0.001 (solver.prototxt) and the batch size was reduced to 32 (train_val.prototxt). All models were trained to 10,000 iterations and compared to snapshots taken after training for only 2,000 iterations. Although a more extensive study of overfitting could be performed, it was generally found that training and validation errors both decreased between 2k and 10k iterations.
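For illustration, the following sketch expresses the fine-tuning settings described above (ImageNet weight initialization, two output classes, base learning rate 0.001, 10,000 iterations) in PyTorch; the original work used Caffe prototxt files, and the patent's modified layer dimensions are not reproduced here.

```python
# Hedged PyTorch equivalent of the Caffe settings above (torchvision >= 0.13 assumed).
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)  # ImageNet init
model.classifier[6] = nn.Linear(4096, 2)          # two classes: stable / unstable
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

def train(loader, iterations=10_000):
    """Train for a fixed number of iterations; batch size 32 is set in the DataLoader."""
    model.train()
    it = 0
    while it < iterations:
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            it += 1
            if it >= iterations:
                return
```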
Alternative network backbones (feature extractors) may include:
·ResNet-https://arxiv.org/abs/1512.03385
·GoogLeNet-https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf
·ResNext-https://arxiv.org/abs/1611.05431
·ShuffleNet V2-https://arxiv.org/abs/1807.11164
·MobileNet V2-https://arxiv.org/abs/1801.04381
Runtime optimizations, such as exploiting frame-to-frame redundancy between cross-sections (sometimes referred to as "temporal" redundancy, here a form of cross-section-to-cross-section redundancy), may be utilized to save computation (e.g., http://arxiv.org/abs/1803.06312). Many optimizations for training or inference can be implemented.
In an example test embodiment, AlexNet was trained to classify two clinically significant categories on an independent image set, e.g., "unstable" plaque versus "stable" plaque, the former based on histologically referenced ground truth plaque types V and VI and the latter comprising plaque types VII and VIII, adhering to the industry practice standard plaque classification nomenclature accepted by the American Heart Association (AHA) and to the related but distinct Virmani typing system.
Without loss of generality, in the illustrated example both overall accuracy and the confusion matrix are used to evaluate performance. This formalism is based on computing four quantities in a binary classification system: true positives, true negatives, false positives, and false negatives. In example embodiments, other outcome variables may be used; for example, sensitivity and specificity may be utilized as outcome variables, or the F1 score (the harmonic mean of precision and sensitivity). Alternatively, the AUC may be calculated for a binary classifier. Furthermore, the classifier need not be binary; for example, in some embodiments, the classifier may classify based on more than two possible states.
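By way of illustration only, the following scikit-learn sketch computes the evaluation quantities named above (accuracy, confusion matrix, sensitivity/specificity, F1, and AUC) for a toy binary stable/unstable labeling; the labels and scores are invented.

```python
# Sketch of the evaluation formalism described above using scikit-learn.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # 1 = unstable, 0 = stable (toy labels)
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:", accuracy_score(y_true, y_pred))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("F1:", f1_score(y_true, y_pred))
print("AUC:", roc_auc_score(y_true, y_prob))        # binary classifier only
```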
Data set augmentation:
Physician-annotated data is expensive, and thus it is desirable to artificially augment the medical dataset (e.g., for training and/or validation). Two different augmentation techniques were used in the example embodiments described herein. The doughnut images are randomly flipped horizontally and rotated by a random angle between 0 and 360 degrees. The resulting rotated doughnut is then cropped to the extent of the doughnut and padded with black pixels to give the image a square aspect ratio. The result is then scaled to 280 × 280 and saved as PNG.
The unfolded dataset is augmented by random horizontal flipping and then "scrolled" by a random number of pixels in the range from 0 to the width of the image. The result is then scaled to 400 × 200 and saved as PNG.
Both data sets were augmented 15-fold, meaning that the total number of images after augmentation was 15 times the original number. Class normalization was performed, meaning that the final dataset has approximately the same number of images belonging to each class. This is important because the original number of images per class may differ, which would bias the classifier toward classes with a larger number of images in the training set.
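By way of illustration only, the following Python sketch mirrors the augmentation steps described above for the doughnut and unfolded images; the exact interpolation and file-handling details are assumptions.

```python
# Sketch of the augmentation described above: random horizontal flip, random
# rotation, crop to content, pad to square, resize (doughnut), and random
# flip plus "scroll" and resize (unfolded). Inputs are assumed to be 8-bit PNGs.
import random
import numpy as np
from PIL import Image, ImageOps

def augment_doughnut(img: Image.Image) -> Image.Image:
    if random.random() < 0.5:
        img = ImageOps.mirror(img)                          # random horizontal flip
    img = img.rotate(random.uniform(0, 360), expand=True)   # random rotation, black fill
    img = img.crop(img.getbbox())                           # trim to where the doughnut exists
    side = max(img.size)
    square = Image.new(img.mode, (side, side))              # pad with black to square aspect
    square.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return square.resize((280, 280))

def augment_unfolded(img: Image.Image) -> Image.Image:
    if random.random() < 0.5:
        img = ImageOps.mirror(img)                           # random horizontal flip
    arr = np.asarray(img)
    arr = np.roll(arr, random.randrange(arr.shape[1]), axis=1)  # random "scroll"
    return Image.fromarray(arr).resize((400, 200))
```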
Without loss of generality, any number of tissue types may be used by each radiologist performing the annotation.
Fig. 22 illustrates an exemplary application demonstrating aspects of the present invention, in this case classification for atherosclerotic plaque phenotype (e.g., using subject-specific data). Different colors represent different tissue analyte types, with dark grey showing otherwise normal wall. The figure shows the ground truth annotation of tissue properties indicating plaque phenotype and how they exist in the spatial context of a cross-section taken normal to the axis of the vessel. It also demonstrates a coordinate system that has been developed to provide a common basis for analysis of a large number of histological cross-sections. Grid lines are added to reveal the coordinate system (tangential vs. radial distance) and are superimposed on top of the color-coded pathologist annotation. An important aspect is that this kind of data set can be used effectively in a deep learning method because it simplifies the information using a relatively simple pseudo-color image instead of a higher-resolution complete image, but without losing the spatial context needed for formal representations of findings such as the "napkin-ring sign", calcium proximal to the lumen, and thin (or thick) caps (LRNC-lumen spacing).
Fig. 23 illustrates tangential and radial direction variables using unit phasors and an internal representation of the phasor angles shown here encoded in gray scale, illustrating the use of normalization axes for vessel-related tubular structures and other pathophysiology associated with such structures (e.g., the gastrointestinal tract). Note that the vertical bars from black to white are due to the purely arbitrary boundaries of the gray scale coding, and the normalized radial distance is 0 at the lumen boundary and 1 at the outer boundary.
Fig. 24 illustrates an exemplary superposition of annotations generated by a radiological analysis application from CTA (unfilled color outlines) on annotations generated by a pathologist from histology (solid color areas). An example aspect of the systems and methods presented herein is that contoured forms from in vivo non-invasive imaging can be used with classification schemes to non-invasively determine phenotype, where the classifier is trained on a known ground truth. In particular, filling the outlines shown unfilled in this figure (left unfilled here so as not to obscure the relationship with the ex vivo annotation of this particular section, which is provided to display the correspondence) creates the input data for the classifier.
Fig. 25 illustrates an additional step of data enrichment using a normalized coordinate system to avoid uncorrelated variations associated with wall thickness and radial representations, in particular. In particular, the "donut shape" is "unfolded" while retaining pathologist notes. The left panel shows the pathological region annotation of histological sections of plaque after deformation to convert the cut "C" shape back to the in vivo "O" shape of the intact vessel wall. The horizontal axis is tangential around the wall. The vertical axis is the normalized radial direction (bottom is the luminal surface and top is the outer surface). Note also that the finer granularity of pathologist notes has collapsed to match the granularity intended for extraction by in vivo radiological analysis applications (e.g., LRNC, CALC, IPH). The right panel shows a comparable expanded radiological analysis application annotation. The axis and color are the same as pathologist notes.
Figure 26 shows the following refinements associated with the plaque phenotyping example. Working from the unfolded formalism, lumen irregularities (e.g., caused by ulceration or thrombus) and locally varying wall thickening are represented. The light grey at the bottom represents the lumen (added in this step to represent lumen surface irregularity), and the black used in the previous step has now been replaced with dark grey to represent varying wall thickening. Black now solely represents the area outside the wall.
Fig. 27 shows additional examples to include, for example, intra-plaque hemorrhage and/or other morphological aspects (e.g., using subject-specific data) as desired. The left figure shows a doughnut-like representation and the right figure is unfolded, wherein lumen surface and localized wall thickening are represented.
Example CNNs for testing include CNNs based on AlexNet and Inception frameworks.
AlexNet results:
In the example embodiment tested, the convolutional filter values were initialized with weights extracted from AlexNet trained on the ImageNet dataset (referenced herein). Although ImageNet is a natural-image dataset, this is used only as an effective method of weight initialization. Once training begins, all of the weights are adjusted to better fit the new task.
Most of the training program was taken directly from the open-source AlexNet implementation, but some adjustments were needed. Specifically, for both the AlexNet doughnut-shaped network and the AlexNet unfolded network, the base learning rate was reduced to 0.001 (solver.prototxt), and the batch size was reduced to 32 (train_val.prototxt).
All models were trained to 10,000 iterations and compared to snapshots taken after training for only 2,000 iterations. Although a more extensive study of overfitting could be performed, it was generally found that training and validation errors both decreased between 2k and 10k iterations.
A new AlexNet network model was trained from scratch for 4 (four) different combinations of the ground truth results from the two main pathologists and the two different ways of processing the images (see above), i.e., the unfolded and doughnut images. The results are shown in FIG. 28. With class normalization enabled, each dataset variant had its training data augmented 15-fold. The network was trained with this augmented data and then tested with the corresponding unaugmented validation data for that variant. For the unfolded data, an AlexNet-type network with 400×200 pixel inputs was used, and the doughnut-shaped network was AlexNet-type with 280×280 pixel inputs (approximately the same resolution but a different aspect ratio). Note that in the test examples, the dimensions of the convolutional layers as well as the fully connected layers were changed. Thus, the network in the AlexNet test embodiments may be described as a five-convolutional-layer, three-fully-connected-layer network. Without loss of generality, here are some high-level conclusions drawn from these results:
1) Except for the WN_RV dataset, the unfolded data does seem easier for the network to analyze, as it achieves higher validation accuracy overall.
2) Non-normalized data, as expected, proved to be more representative.
3) With respect to the WN_RV dataset, the initial idea was to pool the WN and RV ground truth data to see the compatibility of the typing systems and the extent to which the sets could be merged. In doing so, a significant difference was observed between the WN and RV data. The initial goal of the WN_RV experiment was to pool training data from multiple pathologists to see whether the added information contributed to efficacy. Instead, degradation rather than improvement was observed. This was determined to be due to a difference in color scheme that impeded the pooling of data; therefore, normalizing the color scheme to enable pooling may be considered.
Exemplary alternative network Inception:
Transfer-learning retraining of the Inception v3 CNN began with the August 2016 version of the network uploaded for public use on the TensorFlow website. Training was run for 10,000 steps. The training and validation sets were normalized in the number of images by image augmentation, so that the two subsets have the same number of annotated images. All other network parameters were kept at their default values.
The pretrained CNN may be used to classify the imaged features using the output of the last convolutional layer, which in the case of the Google Inception CNN is a numeric tensor of dimension 2048 × 2. An SVM classifier is then trained on these features to identify the object. This process is typically performed on the Inception model after transfer-learning and fine-tuning steps, in which the model, initially trained on the ImageNet 2014 dataset, has its last softmax layer removed and retrained to identify the new classes of images.
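By way of illustration only, the following sketch pairs Keras InceptionV3 bottleneck features with a scikit-learn SVM, approximating the bottleneck-plus-SVM approach described above; the 299×299 input size and linear kernel are assumptions, and random arrays stand in for annotated images.

```python
# Hedged sketch: 2048-d InceptionV3 bottleneck features feeding an SVM classifier.
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from sklearn.svm import SVC

extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def bottleneck_features(images):
    """images: float array of shape (n, 299, 299, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)   # (n, 2048)

# Toy example: random images standing in for annotated cross-sections.
X = bottleneck_features(np.random.rand(8, 299, 299, 3) * 255.0)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])             # stable / unstable labels
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:2]))
```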
Alternative embodiments:
Fig. 29 provides an alternative embodiment for phenotyping potentially cancerous lung lesions. The leftmost panel indicates the contour of the segmented lesion, where preprocessing was performed to separate it into solid and semi-solid ("ground glass") sub-regions. The middle panel indicates its position in the lungs, and the rightmost panel shows it with a pseudo-color overlay. In this case, the 3-dimensional nature of the lesion may be considered significant, and thus techniques such as video interpretation from computer vision may be applied to the classifier input dataset, as opposed to processing 2D cross-sections alone. In fact, the methods described for tubular structures can be generalized as processing multiple cross-sections sequentially in a "movie" sequence along the centerline.
Another generalization is that the pseudo-colors need not be selected from a discrete palette but may have continuous values at pixel or voxel locations. Using the lung example, fig. 30 shows a set of features, sometimes described as "radiomics" features, that can be computed for each voxel. This set of values may exist in any number of preprocessed stacks and be fed into the phenotype classifier.
Other alternative embodiments include using, for example, variation data collected from multiple points in time, rather than (only) data from a single point in time. For example, if the amount or nature of a negative cell type increases, it may be referred to as a "progressive variable" phenotype, while a "regression variable" phenotype is directed to a decrease. The regression variable may be, for example, due to a response to a drug. Alternatively, if the rate of change of LRNC is rapid, for example, this may suggest a different phenotype. It will be apparent to those skilled in the art that examples extend to the use of delta values or rates of change.
As a further alternative, non-spatial information, such as derived from other assays (e.g., laboratory results) or demographics/risk factors, or other measurements extracted from radiological images, may be fed to the final layer of the CNN to combine spatial information with non-spatial information. Also, positioning information may be determined, such as by inferring the full 3D coordinates at the imaging, using pressure lines and readings along the vessel at one or more certain locations referenced from, e.g., bifurcation or ostium, etc.
Although the focus of these examples is on phenotypic classification, as a further embodiment of the invention, a similar approach may be applied to the problem of outcome prediction.
Example embodiments:
The systems and methods of the present disclosure may advantageously include a pipeline comprised of a plurality of stages. Fig. 34 provides a further example embodiment of a hierarchical analysis framework. Images of the patient are collected, and the raw slice data are used in a set of algorithms to measure biological properties that can be objectively verified. The biological properties (in example applications: lipid-rich necrotic core, cap thickness, stenosis, dilation, remodeling ratio, tortuosity (e.g., entrance and exit angles), calcification, IPH, and/or ulceration), singly or in combination, are identified and quantified by semantic segmentation and digitally represented in spatially unfolded, enriched data sets in which vein/artery cross-sections are converted to rectangles. These enriched data sets are then read by one or more trained CNNs to identify and characterize medical conditions (e.g., ischemia-causing fractional flow reserve (FFR), the high-risk plaque (HRP) phenotype, and/or risk stratification such as time to event (TTE)). In this example, results are propagated forward and backward using recurrent CNNs to enforce constraints or to create a continuous condition, such as a fractional flow reserve that decreases monotonically from proximal to distal throughout the vessel tree, a constant HRP value within a focal lesion, or other constraints. The ground truth data for HRP may exist as plaque types determined ex vivo by a pathologist at given cross-sections. The ground truth data for FFR may come from a physical pressure wire with one or more measurements; during network training, locations along the centerline proximal to a given measurement are constrained to be greater than or equal to that measurement, locations distal to it are constrained to be less than or equal to it, and when two measurements on the same centerline are known, values between them are limited to that interval.
These properties and/or conditions may be measured at a given point in time and/or may vary (longitudinally) over time. Other embodiments that perform similar steps in plaque phenotyping or other applications without loss of generality would be embodiments of the present invention.
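By way of illustration only (not part of the original disclosure), the following Python sketch shows one way the FFR training constraints described above could be applied to a per-position estimate along a centerline; the function name and the toy values are assumptions.

```python
# Illustrative sketch: FFR values along a centerline made monotonically
# non-increasing from proximal to distal, and clamped to the interval defined
# by known pressure-wire measurements.
import numpy as np

def constrain_ffr(pred, measurements):
    """pred: per-position FFR estimates, ordered proximal -> distal.
    measurements: {index: measured FFR from a physical pressure wire}, ordered proximal -> distal."""
    ffr = np.asarray(pred, dtype=float).copy()
    for idx, val in measurements.items():
        ffr[:idx + 1] = np.maximum(ffr[:idx + 1], val)   # proximal >= measurement
        ffr[idx + 1:] = np.minimum(ffr[idx + 1:], val)   # distal   <= measurement
        ffr[idx] = val
    return np.minimum.accumulate(ffr)                    # enforce monotone decrease

pred = [0.97, 0.95, 0.96, 0.88, 0.90, 0.82, 0.85]
print(constrain_ffr(pred, {2: 0.93, 5: 0.84}))
```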
In example embodiments, the biological property may comprise one or more of the following:
Angiogenesis
Neovascularization
Inflammation
Calcification
Lipid deposition
Necrosis
Hemorrhage
Ulceration
Rigidity
Density
Stenosis
Dilation
Remodeling ratio
Tortuosity
Blood flow (e.g., of blood in a vessel)
Pressure (e.g., of blood in a channel or of one tissue against another)
Cell type (e.g., macrophages)
Cell alignment (e.g., smooth muscle cells)
Shear stress (e.g., of blood in a channel)
Analysis may include determining one or more of the number, extent, and/or traits of each of the aforementioned biological properties.
Conditions that may be determined based on the biological properties may include one or more of the following:
Perfusion/ischemia (e.g., limited) (e.g., of brain or heart tissue)
Perfusion/infarction (e.g., excision) (e.g., of brain or heart tissue)
Oxygenation
Metabolism
Flow reserve (perfusion capacity), e.g., FFR (+) and (−) and/or a continuous quantity
Malignancy
Invasion
High-risk plaque, e.g., HRP (+) and (−) phenotypes and/or markers
Risk stratification, whether as probability of an event or time to an event (e.g., MACCE)
Forms of ground truth that may be used for validation include the following:
Biopsy
Expert tissue annotation of excised tissue (e.g., from endarterectomy or autopsy)
Expert phenotype annotation of excised tissue (e.g., from endarterectomy or autopsy)
Physical pressure wire
Other imaging modalities
Physiological monitoring (e.g., ECG, SaO2, etc.)
Genomics and/or proteomics and/or metabolomics and/or transcriptomics assays
Clinical results
The analysis may be performed at a given point in time and/or may be longitudinal (i.e., time-varying).
Exemplary system architecture:
FIG. 31 illustrates a high-level view of users and other systems interacting with the analysis platform in accordance with the systems and methods of the present disclosure. Stakeholders for this view include system administrators and support technicians, with concerns including interoperability, security, failover and disaster recovery, and regulatory issues.
The platform may be deployed in two main configurations: on-premises or remote server (fig. 32). The platform deployment may be a standalone configuration (top left), an on-premises server configuration (bottom left), or a remote server configuration (right). The on-premises deployment configuration may have two sub-configurations: desktop or rack-mounted. In the remote configuration, the platform may be deployed in a HIPAA-compliant data center. Clients access the API server over a secure HTTP connection. The client may be a desktop or tablet browser. No hardware is deployed at the customer site other than the computers running the web browser. The deployed servers may be on a public cloud or on an extension of the customer's private network using a VPN.
Exemplary embodiments include clients and servers. For example, fig. 33 shows the client as a C++ application and the server as a Python application. These components interact using HTML 5.0, CSS 5.0, and JavaScript. Open standards are used for interfaces where possible, including but not limited to HTTP(S), REST, DICOM, SPARQL, and JSON. As shown in this view, third-party libraries are also used, illustrating the main elements of the technology stack. Many variations and different approaches will be appreciated by those skilled in the art.
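As a purely hypothetical sketch (the endpoint name and payload fields are invented, not taken from the platform), a Python REST endpoint of the kind the server side could expose might look like the following, returning analysis results as JSON over HTTP(S).

```python
# Hypothetical REST endpoint sketch for the Python server side described above.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/analyses/<analysis_id>", methods=["GET"])
def get_analysis(analysis_id):
    # In a real deployment this would query the analysis pipeline / database.
    return jsonify({
        "id": analysis_id,
        "biological_properties": {"LRNC_mm3": 42.0, "CALC_mm3": 7.5},
        "phenotype": "stable",
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8443)  # TLS termination assumed upstream
```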
The various embodiments of the systems and methods described above may be implemented in digital electronic circuitry, in computer hardware, firmware, and/or software. Embodiments may be implemented as a computer program product (i.e. a computer program tangibly embodied in an information carrier). Embodiments may be in a machine-readable storage and/or in a propagated signal, for execution by, or to control the operation of, data processing apparatus. Embodiments may be, for example, a programmable processor, a computer, and/or multiple computers.
A computer program can be written in any form of programming language, including compiled and/or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a subroutine, element, and/or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site.
The method steps may be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps may also be performed by, and apparatus may be implemented as, special purpose logic circuitry. The circuitry may be, for example, an FPGA (field programmable gate array) and/or an ASIC (application specific integrated circuit). Modules, subroutines, and software agents may refer to portions of computer programs, processors, special purpose circuitry, software, and/or hardware that implement the described functionality.
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer can include, or be operably coupled to receive data from and/or transfer data to, one or more mass storage devices (e.g., magnetic, magneto-optical, or optical disks) for storing data.
Data transmission and instruction may also occur over a communication network. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices. The information carrier may be, for example, an EPROM, EEPROM, flash memory device, magnetic disk, internal hard disk, removable magnetic disk, magneto-optical disk, CD-ROM and/or DVD-ROM disk. The processor and the memory may be supplemented by, and/or incorporated in special purpose logic circuitry.
To provide for interaction with a user, the techniques described above may be implemented on a computer having a display device. The display device may be, for example, a cathode ray tube (CRT) and/or a liquid crystal display (LCD) monitor. Through the display device, the computer can present information to the user, and through a keyboard and a pointing device (e.g., a mouse or trackball) the user can provide input to the computer (e.g., interact with user interface elements). Other kinds of devices may be used to provide for interaction with a user. For example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, and/or tactile input.
The techniques described above may be implemented in a distributed computing system that includes a back-end component. The back-end component may be, for example, a data server, a middleware component, and/or an application server. The techniques described above may be implemented in a distributed computing system that includes a front-end component. The front-end component may be, for example, a client computer with a graphical user interface, a Web browser that a user may interact with using the example embodiments, and/or other graphical user interface for a transmission device. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, wired networks, and/or wireless networks.
The system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The packet-based network may include, for example, the internet, an operator Internet Protocol (IP) network (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), a Campus Area Network (CAN), a Metropolitan Area Network (MAN), a Home Area Network (HAN), a private IP network, an IP private branch exchange (IPBX), a wireless network (e.g., a Radio Access Network (RAN), an 802.11 network, an 802.16 network, a General Packet Radio Service (GPRS)) network, a HiperLAN), and/or other packet-based networks. The circuit-based network may include, for example, a Public Switched Telephone Network (PSTN), a private branch exchange (PBX), a wireless network (e.g., RAN, bluetooth, code Division Multiple Access (CDMA) network, time Division Multiple Access (TDMA) network, global system for mobile communications (GSM) network, and/or other circuit-based network.
The computing device may include, for example, a computer, a telephone, an IP telephone, a mobile device (e.g., a cellular telephone, a personal digital assistant (PDA) device, a laptop computer, an email device), and/or another communication device with a browser device. Browser devices include, for example, a computer (e.g., a desktop or laptop computer) with a web browser (e.g., Internet Explorer® available from Microsoft Corporation, or Firefox® available from Mozilla Corporation). The mobile computing device includes, for example, a smartphone or other similar device.
While many variations and modifications of the present disclosure will no doubt become apparent to those of ordinary skill in the art after having read the foregoing description, it is to be understood that the particular embodiments shown and described by way of illustration are in no way intended to be considered limiting. Further, the present subject matter has been described with reference to specific embodiments, but variations within the spirit and scope of the present disclosure will occur to those skilled in the art. Note that the above examples are provided for illustrative purposes only and are in no way to be construed as limiting the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, the present disclosure is not intended to be limited to the specific details disclosed herein, but rather extends to all variations and generalizations thereof that will be apparent to one of ordinary skill in the art, including those within the broadest scope of the appended claims.

Claims (15)

1. A method for computer-aided phenotyping or outcome prediction of a pathology using an enriched radiological dataset, the method comprising:
receiving a radiological dataset of a patient, wherein the radiological dataset is obtained without subjecting the patient to an invasive procedure;
Enriching said data set by performing an analyte measurement and/or classification of anatomical structure, shape or geometry and/or tissue characteristics, type or traits and an objective verification of a set of analytes associated with a pathology, and
Processing the enriched dataset using a machine learning classification method based on known reference truths and determining one or both of (i) a phenotype of the pathology, or (ii) a prediction associated with the pathology;
Wherein enriching the dataset further comprises spatial transformation of the dataset for highlighting a biologically significant spatial context;
wherein the transforming comprises spatially transforming the cross-section of the pathology-specific structure in the radiological data set to produce a pathology-appropriate transformed data set, and
Wherein the machine learning classification method uses a trained Convolutional Neural Network (CNN) that is applied to the pathologically appropriate transformed dataset.
2. The method of claim 1, wherein an image volume in the radiological dataset is pre-processed to form a region of interest containing a physiological target, lesion and/or group of lesions to be analyzed, and wherein the region of interest comprises one or more cross-sections, each cross-section being comprised of projections through the volume.
3. The method of claim 2, wherein if the region of interest comprises a volume that is oriented in nature, a centerline of the volume is indicated such that the cross-sections are taken along the centerline and the centroid of each cross-section is determined, and wherein the preprocessed radiological dataset is represented in a cartesian coordinate system in which axes are used to represent a perpendicular distance from the centerline and a rotation θ about the centerline, respectively.
4. A method according to claim 3, wherein the region of interest comprises a plurality of branches in a branch pathology-specific network having a corresponding directionality, wherein a different cartesian coordinate system is applied with respect to each branch.
5. The method of claim 4, wherein a single branch is assigned any initial overlapping sections of the branch.
6. The method of claim 2, wherein if the region of interest comprises a volume that is oriented in nature, a centerline of the volume is indicated such that the cross-sections are taken along the centerline and a centroid of each cross-section is determined, and wherein sub-regions within each cross-section are classified according to objectively verifiable characteristics based on tissue composition.
7. The method of claim 6, wherein tissue composition classification comprises classification of pathology specific tissue properties alone and/or in combination.
8. The method of claim 6, wherein the tissue composition classification further comprises classification of intra-pathological bleeding (IPH).
9. The method of claim 6, wherein the sub-regions are further classified based on abnormal morphology.
10. The method of claim 9, wherein the abnormal morphology classification comprises identification and/or classification of lesions.
11. The method of claim 1, wherein the machine learning classification method is using a trained Convolutional Neural Network (CNN), wherein the CNN is based on AlexNET, inception, caffeNet or other open source or reconstruction of a commercially available framework.
12. The method of claim 1, wherein data set enrichment includes benchmark live notes of analyte subregions and providing a spatial context of how such analyte subregions exist in cross-sections, and wherein the spatial context includes providing a coordinate system based on polar coordinates relative to a centroid of each cross-section.
13. The method of claim 12, wherein the coordinate system is normalized by normalizing radial coordinates with respect to a tubular structure.
14. The method of claim 1, wherein the dataset is enriched by visually representing different analyte subregions using different colors, and wherein color-coded analyte regions are visually depicted relative to an annotated background that distinguishes between (i) a visualized pathology-specific target, a central lumen region inside an inner lumen of a lesion and/or a lesion group, (ii) a non-analyte subregion of the pathology-specific target, a lesion and/or a lesion group, and (iii) an outer region outside an outer wall of the pathology-specific target, a lesion and/or a lesion group.
15. A method according to claim 1, wherein the machine learning classification method comprises constructing one or more machine learning models for computer-aided phenotyping or outcome prediction of pathology.
CN201980049912.9A 2018-05-27 2019-05-24 Methods and systems utilizing quantitative imaging Active CN112567378B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201862676975P 2018-05-27 2018-05-27
US62/676,975 2018-05-27
US201862771448P 2018-11-26 2018-11-26
US62/771,448 2018-11-26
PCT/US2019/033930 WO2019231844A1 (en) 2018-05-27 2019-05-24 Methods and systems for utilizing quantitative imaging

Publications (2)

Publication Number Publication Date
CN112567378A CN112567378A (en) 2021-03-26
CN112567378B true CN112567378B (en) 2024-12-13

Family

ID=68697312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980049912.9A Active CN112567378B (en) 2018-05-27 2019-05-24 Methods and systems utilizing quantitative imaging

Country Status (5)

Country Link
EP (1) EP3803687A4 (en)
JP (2) JP7113916B2 (en)
KR (1) KR102491988B1 (en)
CN (1) CN112567378B (en)
WO (1) WO2019231844A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11969280B2 (en) 2020-01-07 2024-04-30 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
EP4087486A4 (en) 2020-01-07 2024-02-14 Cleerly, Inc. SYSTEMS, METHODS AND DEVICES FOR MEDICAL IMAGE ANALYSIS, DIAGNOSIS, RISK STRATIFICATION, DECISION MAKING AND/OR DISEASE TRACKING
CN111504675B (en) * 2020-04-14 2021-04-09 河海大学 An online diagnosis method for mechanical faults of gas-insulated combined electrical appliances
CN111784704B (en) * 2020-06-24 2023-11-24 中国人民解放军空军军医大学 Automatic quantitative grading sequential method for MRI hip joint inflammation segmentation and classification
US20220036560A1 (en) * 2020-07-30 2022-02-03 Biosense Webster (Israel) Ltd. Automatic segmentation of anatomical structures of wide area circumferential ablation points
CN112669439B (en) * 2020-11-23 2024-03-19 西安电子科技大学 Method for establishing intracranial angiography enhanced three-dimensional model based on transfer learning
JP2023551869A (en) * 2020-12-04 2023-12-13 コーニンクレッカ フィリップス エヌ ヴェ Pressure and X-ray image prediction of balloon inflation events
CN112527374B (en) * 2020-12-11 2024-08-27 北京百度网讯科技有限公司 Marking tool generation method, marking device, marking equipment and storage medium
WO2022197045A1 (en) 2021-03-16 2022-09-22 주식회사 딥바이오 Prognosis prediction method using result of disease diagnosis through neural network and system therefor
CN113499098A (en) * 2021-07-14 2021-10-15 上海市奉贤区中心医院 Carotid plaque detector based on artificial intelligence and evaluation method
US20230115927A1 (en) * 2021-10-13 2023-04-13 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
CN113935258B (en) * 2021-10-15 2022-05-20 北京百度网讯科技有限公司 Computational fluid dynamics acceleration method, device, equipment and storage medium
CN114170143A (en) * 2021-11-11 2022-03-11 复旦大学 Method for aneurysm detection and rupture risk prediction in digital subtraction angiography
KR102597081B1 (en) * 2021-11-30 2023-11-02 (주)자비스 Method, apparatus and system for ensemble non-constructing inspection of object based on artificial intelligence
KR102602559B1 (en) * 2021-11-30 2023-11-16 (주)자비스 Method, apparatus and system for non-constructive inspection of object based on selective artificial intelligence engine
KR102567138B1 (en) * 2021-12-30 2023-08-17 가천대학교 산학협력단 Method and system for diagnosing hair health based on machine learning
CN114533201B (en) * 2022-01-05 2024-06-18 华中科技大学同济医学院附属协和医院 An extracorporeal wave clot-breaking auxiliary device
US12144669B2 (en) 2022-03-10 2024-11-19 Cleerly, Inc. Systems, devices, and methods for non-invasive image-based plaque analysis and risk determination
CN115115654B (en) * 2022-06-14 2023-09-08 北京空间飞行器总体设计部 An object image segmentation method based on saliency and neighbor shape query
CN115099682B (en) * 2022-07-18 2024-09-06 同济大学 A method for classifying the softness and hardness of shield tunnel faces and grading excavation risks
CN115294191B (en) * 2022-10-08 2022-12-27 武汉楚精灵医疗科技有限公司 Marker size measuring method, device, equipment and medium based on electronic endoscope
CN115546612A (en) * 2022-11-30 2022-12-30 中国科学技术大学 Image interpretation method and device combining graph data and graph neural network
CN116110589B (en) * 2022-12-09 2023-11-03 东北林业大学 A predictive method for diabetic retinopathy based on retrospective correction
CN115797333B (en) * 2023-01-29 2023-05-09 成都中医药大学 A personalized and customized intelligent vision training method
CN117132577B (en) * 2023-09-07 2024-02-23 湖北大学 Method for non-invasively detecting myocardial tissue tension and vibration

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7467119B2 (en) * 2003-07-21 2008-12-16 Aureon Laboratories, Inc. Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition
US8090164B2 (en) * 2003-08-25 2012-01-03 The University Of North Carolina At Chapel Hill Systems, methods, and computer program products for analysis of vessel attributes for diagnosis, disease staging, and surgical planning
US20050118632A1 (en) * 2003-11-06 2005-06-02 Jian Chen Polynucleotides and polypeptides encoding a novel metalloprotease, Protease-40b
JP2006181025A (en) * 2004-12-27 2006-07-13 Fuji Photo Film Co Ltd Abnormal shadow detecting method, device and program
CN1952981A (en) * 2005-07-13 2007-04-25 西门子共同研究公司 Method for knowledge based image segmentation using shape models
US8734823B2 (en) * 2005-12-14 2014-05-27 The Invention Science Fund I, Llc Device including altered microorganisms, and methods and systems of use
EP1976567B1 (en) * 2005-12-28 2020-05-13 The Scripps Research Institute Natural antisense and non-coding rna transcripts as drug targets
PT2145276T (en) * 2007-04-05 2020-07-30 Fund D Anna Sommer Champalimaud E Dr Carlos Montez Champalimaud Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition
US8774479B2 (en) * 2008-02-19 2014-07-08 The Trustees Of The University Of Pennsylvania System and method for automated segmentation, characterization, and classification of possibly malignant lesions and stratification of malignant tumors
US9878177B2 (en) * 2015-01-28 2018-01-30 Elekta Ab (Publ) Three dimensional localization and tracking for adaptive radiation therapy
US10176408B2 (en) * 2015-08-14 2019-01-08 Elucid Bioimaging Inc. Systems and methods for analyzing pathologies utilizing quantitative imaging
DE102016010909A1 (en) * 2015-11-11 2017-05-11 Adobe Systems Incorporated Structured modeling, extraction and localization of knowledge from images
JP7075882B2 (en) * 2016-03-02 2022-05-26 多恵 岩澤 Diagnostic support device for lung field lesions, control method and program of the device
US10163040B2 (en) * 2016-07-21 2018-12-25 Toshiba Medical Systems Corporation Classification method and apparatus
KR101980955B1 (en) * 2016-08-22 2019-05-21 한국과학기술원 Method and system for analyzing feature representation of lesions with depth directional long-term recurrent learning in 3d medical images
CN107730489A (en) * 2017-10-09 2018-02-23 杭州电子科技大学 Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method

Also Published As

Publication number Publication date
CN112567378A (en) 2021-03-26
JP2021525929A (en) 2021-09-27
KR102491988B1 (en) 2023-01-27
JP2022123103A (en) 2022-08-23
JP7542578B2 (en) 2024-08-30
JP7113916B2 (en) 2022-08-05
EP3803687A4 (en) 2022-03-23
KR20210042267A (en) 2021-04-19
EP3803687A1 (en) 2021-04-14
WO2019231844A1 (en) 2019-12-05

Similar Documents

Publication Publication Date Title
CN112567378B (en) Methods and systems utilizing quantitative imaging
US12045983B2 (en) Functional measures of stenosis significance
US11607179B2 (en) Non-invasive risk stratification for atherosclerosis
US11120312B2 (en) Quantitative imaging for cancer subtype
US20210312622A1 (en) Quantitative imaging for instantaneous wave-free ratio (ifr)
US12008751B2 (en) Quantitative imaging for detecting histopathologically defined plaque fissure non-invasively
US11113812B2 (en) Quantitative imaging for detecting vulnerable plaque
US11676359B2 (en) Non-invasive quantitative imaging biomarkers of atherosclerotic plaque biology
US20230368365A9 (en) Quantitative imaging for detecting histopathologically defined plaque erosion non-invasively

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant