
WO2023220150A1 - Artificial intelligence catheter optical connection or disconnection evaluation, including deep machine learning and using results thereof - Google Patents


Info

Publication number
WO2023220150A1
Authority
WO
WIPO (PCT)
Prior art keywords
catheter
model
connection
disconnection
imaging
Prior art date
Application number
PCT/US2023/021695
Other languages
French (fr)
Inventor
Aaron David Selya
Gabriel B. KLAVANS
Original Assignee
Canon U.S.A., Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon U.S.A., Inc. filed Critical Canon U.S.A., Inc.
Publication of WO2023220150A1 publication Critical patent/WO2023220150A1/en


Classifications

    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G16H 40/63: ICT specially adapted for the management or operation of medical equipment or devices for local operation
    • G16H 50/20: ICT specially adapted for medical diagnosis, e.g. for computer-aided diagnosis based on medical expert systems
    • G16H 50/70: ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G06N 3/0442: Recurrent networks, e.g. Hopfield networks, characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/048: Activation functions
    • G06N 3/09: Supervised learning
    • G06N 3/096: Transfer learning
    • G06V 2201/034: Recognition of patterns in medical or anatomical images of medical instruments
    • G16H 40/67: ICT specially adapted for the management or operation of medical equipment or devices for remote operation

Definitions

  • This present disclosure generally relates to computer imaging, computer vision, and/or the field of medical imaging, and particularly to devices/apparatuses, systems, methods, and storage mediums for artificial intelligence (“AI”) catheter optical connection or disconnection evaluation and/or for using one or more imaging modalities, including but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared fluorescence (NIRF), near-infrared auto-fluorescence (NIRAF), OCT-NIRAF, robot imaging, continuum robot imaging, etc.
  • OCT applications include imaging, evaluating, and diagnosing biological objects, including but not limited to, for gastro-intestinal, cardio, and/or ophthalmic applications, with data being obtained via one or more optical instruments, including but not limited to, one or more optical probes, one or more catheters, one or more endoscopes, one or more capsules, and one or more needles (e.g., a biopsy needle).
  • One or more devices, systems, methods and storage mediums for characterizing, examining and/or diagnosing, and/or measuring viscosity of, a sample or object in artificial intelligence application(s) using an apparatus or system that uses and/or controls one or more imaging modalities are discussed herein.
  • Fiber optic catheters and endoscopes have been developed to gain access to internal organs.
  • the catheter, which may include a sheath, a coil, and an optical probe, may be navigated to a coronary artery.
  • the aim of the OCT techniques is to measure the time delay of light by using an interference optical system or interferometry, such as via Fourier Transform or Michelson interferometers.
  • Light from a light source is delivered and split into a reference arm and a sample (or measurement) arm with a splitter (e.g., a beamsplitter).
  • a reference beam is reflected from a reference mirror (partially reflecting or other reflecting element) in the reference arm while a sample beam is reflected or scattered from a sample in the sample arm.
  • Both beams combine (or are recombined) at the splitter and generate interference patterns.
  • the output of the interferometer is detected with one or more detectors, such as, but not limited to, photodiodes or multi-array cameras, in one or more devices, such as, but not limited to, a spectrometer (e.g., a Fourier Transform infrared spectrometer).
  • the interference patterns are generated when the path length of the sample arm matches that of the reference arm to within the coherence length of the light source.
  • a spectrum of an input radiation may be derived as a function of frequency.
  • the frequency of the interference patterns corresponds to the distance between the sample arm and the reference arm; the higher the frequency, the greater the difference in path length.
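  • As a brief illustration of this relationship (a standard Fourier-domain OCT relation, stated here for context rather than recited from this disclosure), the detected spectral interference signal for a single reflector at path-length difference Δz may be written as follows, where a larger Δz yields faster fringes in the wavenumber k:

```latex
% Spectral interference fringe for a single reflector at path-length
% difference \Delta z; the fringe frequency in k grows with \Delta z.
I(k) = I_r + I_s + 2\sqrt{I_r I_s}\,\cos(2 k \,\Delta z)
```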
  • Single mode fibers may be used for OCT optical probes, and double clad fibers may be used for fluorescence and/or spectroscopy.
  • a multi-modality system, such as, but not limited to, an OCT, fluorescence, and/or spectroscopy system with an optical probe, has been developed to obtain multiple types of information at the same time.
  • Intravascular imaging (IVI) modalities provide cross-sectional imaging of coronary arteries with precise lesion information (e.g., lumen size, plaque morphology, implanted devices, etc.). That said, only about 20% of interventional cardiologists in the United States use IVI imaging in conjunction with coronary angiography during Percutaneous Coronary Intervention (PCI) procedures. Additionally, IVI imaging uses the mating of disposable single-use sterile catheters or probes to non-disposable imaging systems.
  • the mating process involves mechanically connecting the catheter/ probe to a system to get an adequate electrical, optical, and/or radio frequency (RF) connection (e.g., in addition to or alternatively to a mechanical connection) depending on the type of catheter/probe.
  • Where a signal is very small, it may be hard to measure. Additionally, where a signal is not location specific (e.g., a signal may be from all reflections making it back to a system), it may be unclear whether the signal is from a fully mated probe/catheter, a partially mated probe/catheter, or reflection(s) from an endface of a probe connector of the probe/catheter.
  • Performing a pullback when a probe is not properly mated may yield useless or less useful data, and may waste a significant amount of a physician's time.
  • the user, unaware, may attempt to remove the probe/catheter from the system just to be left, for example, with the core connected to a patient interface unit (PIU) of the system, potentially rendering the system unusable and/or causing damage to the PIU.
  • the catheter preferably is seated correctly within an imaging device. Extended use of a catheter that is not properly seated can result in incorrect images, no images, or even damage to the catheter and the imaging device.
  • RFID chips and other electronic methods/devices are vulnerable to signal loss or corruption that can occur with wireless signals as well as risks associated with adding an additional component. Since RFID chips are typically installed at a manufacturer, RFID chips are also subjected to all hazards associated with packaging. Moreover, issues regarding prior evaluation methods relate to the fact that the prior evaluation methods do not actually evaluate image(s) but merely evaluate the level of signal coming back. While a signal may be used to determine if a connection has occurred, such a signal lacks information to help determine the quality of the connection without the use of a trained operator or user.
  • detecting, monitoring, and guiding the mating step would be desirable to increase the likelihood of catheter/probe mating success (e.g., to reduce mating failure(s), to minimize mating failure(s), to avoid mating failure(s), etc.), to confirm mating status, and to reduce case delays and user frustration.
  • Such issues also apply to robot technologies (e.g., robots, continuum robots, robots using data of similar connection(s) or disconnection(s), etc.).
  • Accordingly, it would be desirable to provide at least one imaging or optical apparatus/device, system, method, and storage medium that applies machine learning, especially deep learning, to evaluate and achieve catheter optical connection(s) and/or disconnection(s) with a higher success rate when compared to traditional techniques, and to use the one or more results to achieve catheter optical connection more efficiently.
  • one or more probe/catheter/robot device detecting, monitoring, and/or connection/disconnection evaluation techniques and/or structure for use in at least one optical device, assembly, or system to achieve consistent, reliable detection, monitoring, and connection results (e.g., to reduce mating failure(s), to minimize mating failure(s), to avoid mating failure(s), to identify and fix mating or connection failure(s), etc.) at high efficiency and a reasonable cost of manufacture and maintenance.
  • apparatuses, systems, methods and storage mediums for using and/or controlling multiple imaging modalities, that apply machine learning, especially deep learning, to evaluate catheter optical connection(s) with greater or maximum success, and that use the results to achieve catheter optical connection(s) more efficiently or with maximum efficiency.
  • an interferometer (e.g., spectral-domain OCT (SD-OCT), swept-source OCT (SS-OCT), multimodal OCT (MM-OCT), Intravascular Ultrasound (IVUS), Near-Infrared Autofluorescence (NIRAF), Near-Infrared Spectroscopy (NIRS), Near-Infrared Fluorescence (NIRF), therapy modality using light, sound, or other source of
  • One or more embodiments of the present disclosure may apply machine learning, especially deep learning, to evaluate catheter optical connection(s) or disconnection(s) with greater or maximum success, and may use the results to achieve catheter optical connection(s) or disconnection(s) more efficiently or with maximum efficiency.
  • a catheter preferably is seated correctly within an imaging device to ensure that the catheter and/or the imaging device provides accurate imaging.
  • One or more embodiments of the present disclosure may provide one or more probe/catheter detecting, monitoring, and/or connection or disconnection evaluation techniques and/or structure for use in at least one optical device, assembly, or system to achieve consistent, reliable detection, monitoring, and connection results (e.g., to reduce mating failure(s), to minimize mating failure(s), to avoid mating failure(s), to identify and fix mating, connection, or disconnection failure(s), etc.) at high efficiency and a reasonable cost of manufacture and maintenance.
  • one or more techniques may build on the use of return signals.
  • one or more embodiments of the present disclosure may use a neural net to identify specific objects within the data collected to ensure the connection or disconnection status is valid. This additional information allows the one or more techniques of the present disclosure to combine the advantages of using a human evaluator and evaluation of return signals while minimizing the disadvantages of each technique(s).
  • neural nets may evaluate image(s) faster than any human operator, and neural nets may be deployed across an unlimited number of devices and/or systems. This avoids the issue related to training human operators to evaluate connection status, and the shorter time required for evaluation reduces the chances of harm to a patient or object, tissue, or specimen by shortening active collection time. Additionally, in one or more embodiments, neural net classifiers may be used to detect specific objects (such as, but not limited to, a catheter sheath, robot components, etc.) such that more useful information is obtained, evaluated, and used (in comparison with evaluating the return signal only, which provides limited information, as aforementioned).
  • one or more embodiments of the present disclosure may achieve a better or maximum success rate of connection or disconnection evaluation(s) without (or with less) user interactions, and may reduce processing and/or prediction time to display connection result(s).
  • a model may be defined as software that takes images as input and returns predictions for the given images as output.
  • a model may be a particular instance of a model architecture (set of parameter values) that has been obtained by model training and selection using a machine (and/or deep) learning and/or optimization algorithm/process.
  • a model generally consists of or comprises the following parts: an architecture defined by source code (e.g., a convolutional neural network comprised of layers of parameterized convolution kernels and activation functions, etc.) and configuration values (parameters, weights, or features) that are initially set to random values and are then, over the course of the training, iteratively optimized given data examples (e.g., image-label pairs), an objective function (loss function), and an optimization algorithm (optimizer).
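  • The following is a minimal sketch of these parts (assuming PyTorch; the class, layer sizes, and hyper-parameter values are illustrative, not taken from this disclosure): an architecture of parameterized convolution kernels and activation functions, parameters that start random, an objective (loss) function, and an optimizer that iteratively updates the parameters from image-label pairs.

```python
# Illustrative sketch only: a tiny architecture (convolution kernels +
# activation functions), a loss function, and an optimizer, as described
# above. Assumes PyTorch; all names and values are hypothetical.
import torch
import torch.nn as nn

class ConnectionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)  # connected / not connected

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ConnectionNet()                                    # parameters start random
loss_fn = nn.CrossEntropyLoss()                            # objective (loss) function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One iterative optimization step over a batch of image-label pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```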
  • Neural networks are computer systems that take inspiration from how neurons in a brain work.
  • a neural network may consist of or may comprise an input layer, some hidden layers of neurons or nodes, and an output layer (as further discussed below).
  • the input layer may be where the values are passed to the rest of the model.
  • the input layer may be the place where the transformed OCT data may be passed to a model for evaluation.
  • the hidden layer(s) may be a series of layers that contain or include neurons or nodes that establish connections between the neurons or nodes in the other hidden layers.
  • the values of each of the connections may be altered so that, due to the training, the system(s) will trigger when the expected pattern is detected.
  • the output layer provides the result(s) of the model. In the case of the MM-OCT application(s), this may be a Boolean (true/false) value for detecting connection or disconnection (e.g., partial connection or disconnection, complete connection or disconnection, improper connection or disconnection, etc.).
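  • By way of a hedged illustration of this input/hidden/output structure (a sketch assuming PyTorch; the layer sizes, input length, and 0.5 threshold are hypothetical), transformed OCT data may be passed through hidden layers to a single output that is thresholded into the Boolean connection result:

```python
# Sketch of the layer structure described above; all sizes are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1024, 64),  # input layer: flattened/transformed OCT data
    nn.ReLU(),
    nn.Linear(64, 16),    # hidden layer(s): connections between nodes
    nn.ReLU(),
    nn.Linear(16, 1),     # output layer: a single connection-status logit
)

def is_connected(frame: torch.Tensor) -> bool:
    """Map the output layer to a Boolean (true/false) connection result."""
    with torch.no_grad():
        return torch.sigmoid(net(frame.flatten())).item() > 0.5
```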
  • connection or disconnection evaluation techniques may be used with an OCT or other imaging modality device, system, storage medium, etc.
  • one or more connection or disconnection evaluation techniques may be used for any type of OCT, including, but not limited to, MM-OCT.
  • One or more embodiments of the present disclosure may: (i) calculate a connection or disconnection status of a catheter, and may perform the connection or disconnection status calculation without the use of another piece of equipment; (ii) make evaluations during a normal course of operations (instead of a separate operation) for a catheter; and (iii) work on small numbers of samples (e.g., as little as one image in an imaging, OCT, and/or MM-OCT application(s)), or may work on large numbers of samples, to evaluate connection/disconnection status.
  • One or more embodiments of the present disclosure may evaluate catheter connection(s) and/or disconnection(s) in one or more ways, and may use one or more features to determine a connection or disconnection status.
  • one or more embodiments may include one or more of the following: (i) one or more visual indicators (e.g., a light emitting diode (LED), a display for displaying the connection or disconnection status, etc.) that operate to send information to an outside of the device or system to indicate the connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.); (ii) one or more circuits or sensors that operate to detect a catheter connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.); and/or (
  • the circuit(s)/sensor(s) may operate to be useful for (i) identifying case(s) where service/maintenance would be useful (e.g., to detect a partial connection that may be returned to a complete connection, to detect a disconnected wire or wires (or other component(s) of a catheter, robot, or other imaging device), etc.), and/or (ii) determining a case or cases where an imaging apparatus or system using a catheter may operate or run in a mode that occurs in a case where a connection or disconnection is not ideal or is less than at full potential/capacity (e.g., in a case where a partial connection exists when a full connection may be ideal, in a case where an amount of wires less than all of the wires (e.g., 8 out of 9 wires) are connected, etc.).
  • an artificial intelligence structure, such as, but not limited to, a neural net or network (e.g., the same neural net or network that operates to determine whether a video or image from an endoscope or other imaging device is working; an additional neural net or network, a convolutional network, another network discussed herein, etc.), may be used to determine one or more of the following: (i) whether a video function or image capturing function from an endoscope or other imaging device is working; and/or (ii) a catheter connection or disconnection status.
  • One or more embodiments of the present disclosure may achieve at least the following advantages or may include at least the following feature(s): (i) one or more embodiments may achieve the efficient connection/disconnection evaluation and may obtain result(s) without the use of additional equipment; (ii) one or more embodiments may not use trained operators or users (while trained operators or users may be involved, such involvement is not required); (iii) one or more embodiments may perform during a normal course of operation of a catheter and/or an imaging device (as compared with and instead of a separate operation); (iv) one or more embodiments may perform connection/disconnection evaluation using a set of collected images (automatically or manually); and (v) one or more embodiments may provide usable measurement(s) with small and/or large samples.
  • a model (which, in one or more embodiments, may be software, a software/hardware combination, or a procedure that utilizes one or more machine or deep learning algorithms/procedures/processes that has/have been trained on data to make one or more predictions for future, unseen data) has enough resolution to predict and/or evaluate the connection/disconnection with sufficient accuracy depending on the application or procedure being performed.
  • the performance of the model may be further improved by subsequently adding more training data and retraining the model to create a new instance of the model with better or optimized performance.
  • additional training data may include data based on user input, where the user may identify or correct the location of a catheter (e.g., a portion of the catheter, a connection portion of the catheter, etc.) in an image.
  • a model and/or sets of training data or images may be obtained by collecting a series of images (e.g., thousands of OCT images, or images of another imaging modality, etc.) with and without catheters connected properly.
  • the testing data may be fed or inserted through the neural net/networks, and accuracy of the model may be evaluated based on the results of the test data.
  • a neural net/network may be able to determine whether a catheter has established a valid (e.g., complete, properly situated, optical, etc.) connection based on a single image (e.g., a single OCT image, a single image of another imaging modality, etc.). While one or more embodiments may use one image, it may be advantageous from a safety perspective to have the neural net/network evaluate more than one frame/image to establish the status of a connection/disconnection with more certainty/accuracy.
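  • A minimal sketch of such multi-frame evaluation (plain Python; the agreement threshold is an arbitrary example, and `is_connected` stands for any single-frame classifier, such as the one sketched earlier) is:

```python
# Hedged sketch: declare a connection only when enough frames agree,
# trading a little evaluation time for more certainty than one frame gives.
from typing import Callable, Sequence

def connection_status(frames: Sequence, is_connected: Callable,
                      min_agreement: float = 0.8) -> bool:
    """Return True only if at least `min_agreement` of the frames appear connected."""
    if not frames:
        return False
    votes = [bool(is_connected(f)) for f in frames]
    return sum(votes) / len(votes) >= min_agreement
```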
  • One or more methods, medical imaging devices, Intravascular Ultrasound (IVUS) or Optical Coherence Tomography (OCT) devices, imaging systems, and/or computer-readable storage mediums for evaluating catheter connections and/or disconnections using artificial intelligence may be employed in one or more embodiments of the present disclosure.
  • an artificial intelligence training apparatus may include: a memory; one or more processors in communication with the memory, the one or more processors operating to: acquire or receive image data for instances or cases where a catheter is connected and for instances or cases where a catheter is not connected; establish ground truth for all the acquired image data; split the acquired image data into training, validation, and test sets or groups; evaluate a connection status for a new catheter (or other type of imaging device); receive image data (e.g., angiography, OCT images, tomography images, etc.) for the new catheter (or other type of imaging device); evaluate the image data (e.g., OCT images, images of another imaging modality, etc.) by a neural network (e.g., a convolutional network, a recurrent network, etc.) using artificial intelligence; determine whether a connection is detected for the new catheter (e.g., determine whether the new catheter is connected to an imaging device or system, determine whether the new catheter is not connected or is improperly connected to an
  • One or more embodiments may repeat the training and evaluation procedure for a variety of parameter or hyper-parameter choices, and finally select one or more models with the optimal, highest, and/or improved performance as defined by one or more predefined evaluation metrics.
  • the one or more processors may further operate to split the ground truth data into sets or groups for training, validation, and testing.
  • the one or more processors may further operate to one or more of the following: (i) calculate or improve a connection or disconnection detection success rate using application of machine learning or deep learning; (ii) decide on the model to be trained based on a connection or disconnection detection success rate associated with the model (e.g., if an apparatus or system embodiment has multiple models to be saved, which have already been trained previously, a method of the apparatus/system may select a model for further training based on a previous success rate, based on a predetermined success factor, or based on which model is more optimal than another(s), etc.); (iii) determine whether a connection or disconnection determination is correct based on the trained model; and (iv) evaluate the connection or disconnection detection success rate.
  • the one or more processors may further operate to one or more of the following (see the sketch after this item): (i) split the acquired or received image data into data sets or groups having a certain ratio or percentages (while not limited to such percentages, one or more embodiments may use, for example, 60% training data, 20% validation data, and 20% test data; 60% training data, 30% validation data, and 10% test data; or any other predetermined or set split amongst the training data, validation data, and test data as desired for one or more applications); (ii) split the acquired or received image data randomly; (iii) split the acquired or received image data randomly either on a pullback-basis or a frame-basis; (iv) split the acquired or received image data based on or using a new set of a certain or predetermined kinds of data; and (v) split the acquired or received image data based on or using a new set of a certain or predetermined data type, the new set being one or more of the following: a new pullback-basis
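  • As a hedged example of item (i) and the pullback-basis option in item (iii) (plain Python; the 60/20/20 ratio is one of the example splits above), a random split that keeps each pullback's frames together might look like:

```python
# Illustrative 60/20/20 random split on a pullback basis, so that frames
# from one pullback never appear in more than one set.
import random

def split_by_pullback(pullbacks: list, seed: int = 0):
    """pullbacks: list of per-pullback frame lists -> (train, val, test)."""
    rng = random.Random(seed)
    shuffled = pullbacks[:]
    rng.shuffle(shuffled)
    n_train = int(0.6 * len(shuffled))
    n_val = int(0.2 * len(shuffled))
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```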
  • the one or more processors may further operate to one or more of the following: (i) employ data quality control; (ii) allow a user to manually select training samples or training data; and (iii) use any angio image that is captured during Optical Coherence Tomography (OCT) (or other imaging modality) pullback for testing.
  • One or more embodiments may include or have one or more of the following: (i) parameters including one or more hyper-parameters; (ii) the saved, trained model is used as a created detector for identifying or detecting a catheter connection or disconnection in image data; (iii) the model is one or a combination of the following: a segmentation model, a segmentation model with post-processing, a model with pre-processing, a model with post-processing, a segmentation model with pre-processing, a deep learning or machine learning model, a semantic segmentation model or classification model, an object detection or regression model, a combination of a semantic segmentation model and an object detection or regression model, a model using repeated segmentation model technique(s), a model using feature pyramid(s), and a model using repeated object detection or regression model technique(s); (iv) the one or more processors further operate to use one or more neural networks, convolutional neural networks, or recurrent neural networks to detect the catheter connection(s)
  • the one or more processors may further operate to: (i) acquire or receive the image data during a pullback operation of the intravascular imaging catheter.
  • the one or more processors may further operate to use one or more neural networks, convolutional neural networks, and/or recurrent neural networks to one or more of: load the trained model, select a set of angiography frames or other type of image frames, evaluate the catheter connection/disconnection, determine whether the catheter connection/disconnection determination is appropriate with respect to given prior knowledge, for example, vessel location and pullback direction, modify the detected results or the detected catheter location or catheter connection or disconnection for each frame, perform the coregistration, insert the intravascular image, and acquire or receive the image data during the pullback operation.
  • the object or sample may include one or more of the following: a vessel, a target specimen or object, and a patient.
  • the one or more processors may further operate to perform the coregistration by coregistering an acquired or received angiography image and an obtained one or more Optical Coherence Tomography (OCT) or Intravascular Ultrasound (IVUS) images or frames.
  • a loaded, trained model may be one or a combination of the following: a segmentation (classification) model, a segmentation model with pre-processing, a segmentation model with post-processing, an object detection (regression) model, an object detection model with pre-processing, an object detection model with post-processing, a combination of a segmentation (classification) model and an object detection (regression) model, a deep convolutional neural network model, a recurrent neural network model with long short-term memory that can take temporal relationships across images or frames into account, a model using feature pyramid(s) that can take different image resolutions into account, a genetic algorithm that operates to breed multiple models for improved performance (as compared with a case where the genetic algorithm is not used), and/or a model using residual learning technique(s).
  • the one or more processors may further operate to one or more of the following: (i) display angiography data along with an image for each of one or more imaging modalities on the display, wherein the one or more imaging modalities include one or more of the following: a tomography image; an Optical Coherence Tomography (OCT) image; a fluorescence image; a near-infrared auto-fluorescence (NIRAF) image; a near-infrared auto-fluorescence (NIRAF) image in a predetermined view, a carpet view, and/or an indicator view; a near-infrared fluorescence (NIRF) image, a near-infrared fluorescence (NIRF) image in a predetermined view, a carpet view, and/or an indicator view; a three-dimensional (3D) rendering; a 3D rendering of a vessel; a 3D rendering of a vessel in a half-pipe view or display; a 3D rendering of
  • One or more embodiments of a method for training a model using artificial intelligence may repeat the selection, training, and evaluation procedure, for a variety of model configurations (e.g., hyper-parameter values) and finally select one or more models with the highest performance defined by one or more predefined evaluation metrics.
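  • One possible sketch of this repeated selection/training/evaluation loop (plain Python; `train_model` and `evaluate` are hypothetical callables supplied by the surrounding system, and the configuration values are arbitrary examples) is:

```python
# Hedged sketch: repeat training/evaluation over hyper-parameter
# configurations and keep the model scoring best on a predefined metric.
from itertools import product
from typing import Callable

def select_best_model(train_model: Callable, evaluate: Callable,
                      train_set, val_set):
    best_score, best_model = float("-inf"), None
    for lr, batch_size in product([1e-3, 1e-4], [16, 32]):
        model = train_model(train_set, lr=lr, batch_size=batch_size)
        score = evaluate(model, val_set)  # predefined evaluation metric
        if score > best_score:
            best_score, best_model = score, model
    return best_model
```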
  • One or more embodiments of a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for training a model using artificial intelligence may be used with any method(s) discussed in the present disclosure, including but not limited to, one or more catheter connection or disconnection evaluation/determination method(s).
  • One or more embodiments of any method discussed herein may be used with any feature or features of the apparatuses, systems, other methods, storage mediums, or other structures discussed herein.
  • One or more embodiments of a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for detecting a catheter connection or disconnection using artificial intelligence and/or performing coregistration using artificial intelligence may be used with any method(s) discussed in the present disclosure, including but not limited to, a method including: acquiring or receiving image data; receiving a trained model or loading a trained model from a memory; applying the trained model to the acquired or received image data; selecting one image frame; detecting or evaluating a catheter connection or disconnection on the selected image frame with the trained model; and saving the trained model and/or the catheter connection or disconnection status in a memory.
  • One or more of the artificial intelligence features discussed herein include, but are not limited to, using one or more of deep learning, a computer vision task, keypoint detection, a unique architecture of a model or models, a unique training process or algorithm, a unique optimization process or algorithm, input data preparation techniques, input mapping to the model, pre-processing, post-processing, and/or interpretation of the output data as substantially described herein or as shown in any one of the accompanying drawings.
  • a catheter connection or disconnection may be evaluated and determined using an algorithm, such as, but not limited to, the Viterbi algorithm.
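  • As a hedged sketch of how the Viterbi algorithm could be applied here (plain Python; the two-state model and all probability values are illustrative assumptions, not values from this disclosure), noisy per-frame connection probabilities can be decoded into the most likely sequence of connected/disconnected states:

```python
# Viterbi decoding over two hidden states (connected, disconnected):
# states tend to persist, which smooths isolated per-frame misreads.
import math

STATES = ("disconnected", "connected")
LOG_TRANS = {(a, b): math.log(0.9 if a == b else 0.1)
             for a in STATES for b in STATES}

def viterbi(frame_probs: list[float]) -> list[str]:
    """frame_probs: per-frame P(connected). Returns smoothed state labels."""
    if not frame_probs:
        return []
    emit = [{"connected": math.log(max(p, 1e-9)),
             "disconnected": math.log(max(1.0 - p, 1e-9))}
            for p in frame_probs]
    score = {s: math.log(0.5) + emit[0][s] for s in STATES}  # uniform prior
    back = []
    for e in emit[1:]:
        prev, score, ptr = score, {}, {}
        for s in STATES:
            best = max(STATES, key=lambda q: prev[q] + LOG_TRANS[(q, s)])
            score[s] = prev[best] + LOG_TRANS[(best, s)] + e[s]
            ptr[s] = best
        back.append(ptr)
    state = max(STATES, key=score.get)  # best final state, then backtrack
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]
```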
  • One or more embodiments may automate catheter connection or disconnection detection in images using convolutional neural networks or any other type(s) of neural network(s), and may fully automate frame detection and/or catheter connection or disconnection detection on angiographies using training (e.g., offline training) and using applications (e.g., online application(s)) to extract and process frames via deep learning.
  • One or more embodiments of the present disclosure may track and/or calculate a catheter connection or disconnection detection success rate.
  • one or more additional devices, one or more systems, one or more methods, and one or more storage mediums using OCT and/or other imaging modality technique(s) to detect catheter connection(s) or disconnection(s) and to perform coregistration using artificial intelligence, including, but not limited to, deep or machine learning, using results of the catheter detection for performing coregistration, etc., are discussed herein. Further features of the present disclosure will in part be understandable and will in part be apparent from the following description and with reference to the attached drawings.

BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A is a schematic diagram showing at least one embodiment of a system that may be used for performing one or multiple imaging modality viewing and control and/or for detecting catheter connection(s) or disconnection(s) in accordance with one or more aspects of the present disclosure
  • FIG. 1B is a schematic diagram illustrating an imaging system for executing one or more steps to process image data and/or for detecting catheter connection(s) or disconnection(s) in accordance with one or more aspects of the present disclosure
  • FIG. 2 is a schematic diagram of at least one embodiment of a neural network using artificial intelligence that may be used in accordance with one or more aspects of the present disclosure
  • FIG. 3 is a flowchart of at least one embodiment of a method for using a single frame or image to detect catheter connection(s) or disconnection(s) that may be used in accordance with one or more aspects of the present disclosure
  • FIG. 4 is a flowchart of at least one embodiment of a method for using multiple frames or images to detect catheter connection(s) or disconnection(s) that may be used in accordance with one or more aspects of the present disclosure
  • FIG. 5 is a flowchart of at least one embodiment of a method for using a single frame or image to detect catheter disconnection(s) that may be used in accordance with one or more aspects of the present disclosure
  • FIG. 6 is a flowchart of at least one embodiment of a method for training at least one model that may be used to detect catheter connection(s) or disconnection(s) in accordance with one or more aspects of the present disclosure
  • FIG. 7 is a diagram of at least one embodiment of a catheter that may be used with one or more embodiments for detecting catheter connection(s) and/or disconnection(s) in accordance with one or more aspects of the present disclosure;
  • FIG. 8 illustrates data from an experiment conducted to detect catheter connection(s) and/or catheter disconnection(s) in accordance with one or more aspects of the present disclosure
  • FIG. 9A shows at least one embodiment of an OCT apparatus or system for utilizing one or more imaging modalities and artificial intelligence for detecting catheter connection(s) and/or catheter disconnection(s) and/or for performing coregistration in accordance with one or more aspects of the present disclosure
  • FIG. 9B shows at least another embodiment of an OCT apparatus or system for utilizing one or more imaging modalities and artificial intelligence for detecting catheter connection(s) and/or catheter disconnection(s) and/or for performing coregistration in accordance with one or more aspects of the present disclosure
  • FIG. 9C shows at least a further embodiment of an OCT and NIRAF apparatus or system for utilizing one or more imaging modalities and artificial intelligence for detecting catheter connection(s) and/or catheter disconnection(s) and/or for performing coregistration in accordance with one or more aspects of the present disclosure
  • FIG. 10 is a flow diagram showing a method of performing an imaging feature, function, or technique in accordance with one or more aspects of the present disclosure
  • FIG. 11 shows a schematic diagram of an embodiment of a computer that may be used with one or more embodiments of an apparatus or system or one or more methods discussed herein in accordance with one or more aspects of the present disclosure
  • FIG. 12 shows a schematic diagram of another embodiment of a computer that may be used with one or more embodiments of an imaging apparatus or system or methods discussed herein in accordance with one or more aspects of the present disclosure
  • FIG. 13 shows a schematic diagram of at least an embodiment of a system using a computer or processor, a memory, a database, and input and output devices in accordance with one or more aspects of the present disclosure
  • FIG. 14 shows a created architecture of or for a regression model(s) that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure
  • FIG. 15 shows a convolutional neural network architecture that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure
  • FIG. 16 shows a created architecture of or for a regression model(s) that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure.
  • FIG. 17 is a schematic diagram of or for a segmentation model(s) that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure.
  • One or more devices, systems, methods, and storage mediums for characterizing tissue, or an object, using one or more imaging techniques or modalities (such as, but not limited to, OCT, fluorescence, NIRF, NIRAF, etc.) and artificial intelligence for evaluating and detecting catheter connection(s) or disconnection(s) and/or performing coregistration are disclosed herein.
  • imaging modalities may be displayed in one or more ways as discussed herein.
  • One or more displays discussed herein may allow a user of the one or more displays to use, control and/or emphasize multiple imaging techniques or modalities, such as, but not limited to, OCT, NIRF, NIRAF, etc., and may allow the user to use, control, and/or emphasize the multiple imaging techniques or modalities synchronously.
  • one or more embodiments for visualizing, emphasizing and/or controlling one or more imaging modalities and artificial intelligence such as, but not limited to, machine and/or deep learning, residual learning, using results of catheter connection or disconnection detection to perform coregistration, etc.
  • one or more predetermined or desired procedures such as, but not limited to, medical procedure planning and performance (e.g., PCI as aforementioned).
  • the system 2 may communicate with the image scanner 5 (e.g., a CT scanner, an X-ray machine, etc.) to request information for use in the medical procedure (e.g., PCI) planning and/or performance, such as, but not limited to, bed positions, and the image scanner 5 may send the requested information along with the images to the system 2 once a clinician uses the image scanner 5 to obtain the information via scans of the patient.
  • the system 2 may further communicate with a workstation such as a Picture Archiving and Communication System (PACS) 4 to send and receive images of a patient to facilitate and aid in the medical procedure planning and/or performance.
  • a clinician may use the system 2 along with a medical procedure/imaging device 1 (e.g., an imaging device, an OCT device, an IVUS device, a PCI device, an ablation device, a 3D structure construction or reconstruction device, etc.) to consult a medical procedure chart or plan to understand the shape and/or size of the targeted biological object to undergo the imaging and/or medical procedure.
  • Each of the medical procedure/imaging device 1, the system 2, the locator device 3, the PACS 4 and the scanning device 5 may communicate in any way known to those skilled in the art, including, but not limited to, directly (via a communication network) or indirectly (via one or more of the other devices such as 1 or 5, or additional flush and/or contrast delivery devices; via one or more of the PACS 4 and the system 2; via clinician interaction; etc.).
  • physiological assessment is very useful for deciding treatment for cardiovascular disease patients.
  • physiological assessment may be used as a decision-making tool - e.g., whether a patient should undergo a PCI procedure, whether a PCI procedure is successful, etc. While the concept of using physiological assessment is theoretically sound, physiological assessment still awaits broader adoption and improvement for use in the clinical setting(s). This situation may be because physiological assessment may involve adding another device and medication to be prepared, and/or because a measurement result may vary between physicians due to technical difficulties. Such approaches add complexities and lack consistency.
  • one or more embodiments of the present disclosure may employ computational fluid dynamics based (CFD-based) physiological assessment that may be performed from imaging data to eliminate or minimize technical difficulties, complexities and inconsistencies during the measurement procedure.
  • an accurate 3D structure of the vessel may be reconstructed from the imaging data as disclosed in U.S. Provisional Pat. App. No. 62/901,472, filed on September 17, 2019, the disclosure of which is incorporated by reference herein in its entirety.
  • a method may be used to provide more accurate 3D structure(s) compared to using only one imaging modality.
  • a combination of multiple imaging modalities may be used, catheter connection(s) or disconnection(s) may be detected, and coregistration may be processed/performed using artificial intelligence.
  • One or more embodiments of the present disclosure may apply machine learning, especially deep learning, to detect a catheter connection(s) in an image frame without user input(s) that define an area where intravascular imaging pullback occurs.
  • one or more embodiments of the present disclosure may achieve a better or maximum success rate of catheter connection(s) or disconnection(s) detection from image data without (or with less) user interactions, and may reduce processing and/or prediction time to display coregistration result(s) based on the catheter connection(s) or disconnection(s) detection result(s) and/or based on the improved image quality obtained when detecting proper or complete catheter connection(s) or disconnection(s).
  • One or more embodiments of the present disclosure may evaluate catheter connection(s) and/or disconnection(s) in one or more ways, and may use one or more features to determine a connection or disconnection status.
  • one or more embodiments may include one or more of the following: (i) one or more visual indicators (e.g., a light emitting diode (LED), a display for displaying the connection or disconnection status, etc.) that operate to send information to an outside of the device or system to indicate the connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.); and/or (ii) one or more circuits or sensors that operate to detect a catheter connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.).
  • the circuit(s)/sensor(s) may operate to be useful for (i) identifying case(s) where service/maintenance would be useful (e.g., to detect a partial connection that may be returned to a complete connection, to detect a disconnected wire or wires (or other component(s) of a catheter or other imaging device), etc.), and/or (ii) determining a case or cases where an imaging apparatus or system using a catheter or other imaging device(s) may operate or run in a mode that occurs in a case where a connection or disconnection is not ideal or is less than at full potential/capacity (e.g., in a case where a partial connection exists when a full connection may be ideal, in a case where an amount of wires less than all of the wires (e.g., 8 out of 9 wires) are connected, etc.).
  • an artificial intelligence structure such as, but not limited to, a neural net or network (e.g., the same neural net or network that operates to determine whether a video or image from an endoscope or other imaging device is working; an additional neural net or network; another type of network discussed herein; etc.) may be used to determine one or more of the following: (i) whether a video function or image capturing function from an endoscope or other imaging device is working; and/or (ii) a catheter connection or disconnection status.
  • One or more embodiments of the present disclosure may achieve the efficient catheter (or other imaging device) detection and/or efficient coregistration result(s) from image(s).
  • the image data may be acquired during intravascular imaging pullback using a catheter (or other imaging device) that may be visualized in an image.
  • a ground truth identifies a location or locations of the catheter or a portion of the catheter (or of another imaging device or a portion of the another imaging device).
  • a model has enough resolution to predict the catheter location and/or connection in a given image with sufficient accuracy depending on the application or procedure being performed.
  • the performance of the model may be further improved by adding more training data.
  • additional training data may include image annotations, where a user labels or corrects the catheter location(s) and/or catheter detection(s) in each image.
  • a catheter connection or disconnection may be detected and/or monitored using an algorithm, such as, but not limited to, the Viterbi algorithm.
  • One or more embodiments may automate characterization of catheter connection(s) or disconnection(s) and/or of stenosis in images using convolutional neural networks, and may fully automate frame detection on angiographies using training (e.g., offline training) and using applications (e.g., online application(s)) to extract and process frames via deep learning.
  • One or more embodiments of the present disclosure may track and/or calculate a catheter connection(s) or disconnection(s) detection success rate.
  • a method of 3D reconstruction without adding any imaging requirements or conditions may be employed.
  • One or more methods of the present disclosure may use intravascular imaging, e.g., IVUS, OCT, etc., and one (1) view of angiography.
  • While intravascular imaging of the present disclosure is not limited to OCT, OCT is used as a representative intravascular imaging modality for describing one or more features herein.
  • FIG. 1B shows a schematic diagram of at least one embodiment of an imaging system 20 for generating an imaging catheter path based on a detected location of an imaging catheter, based on a catheter connection or disconnection detection, and/or based on a regression line representing the imaging catheter path, by using an image frame that is simultaneously acquired during intravascular imaging pullback.
  • the embodiment of FIG. 1B may be used with one or more of the artificial intelligence feature(s) discussed herein.
  • the imaging system 20 may include an angiography system 30, an intravascular imaging system 40, an image processor 50, a display or monitor 1209, and an electrocardiography (ECG) device 60.
  • the angiography system 30 may include an X-ray imaging device such as a C-arm 22 that is connected to an angiography system controller 24 and an angiography image processor 26 for acquiring angiography image frames of an object (e.g., any object that may be imaged using the size and shape of the imaging device, a sample, a vessel, a target specimen or object, etc.) or patient 106.
  • the intravascular imaging system 40 of the imaging system 20 may include a console 32, a catheter 120 and a patient interface unit or PIU 110 that connects between the catheter 120 and the console 32 for acquiring intravascular image frames.
  • the catheter 120 may be inserted into a blood vessel of the patient 106 (or inside a specimen or other target object).
• the catheter 120 may function as a light irradiator and a data collection probe that is disposed in a lumen of a particular blood vessel, such as, for example, a coronary artery.
  • the catheter 120 may include a probe tip, one or more markers or radiopaque markers, an optical fiber, and a torque wire.
  • the probe tip may include one or more data collection systems.
• the catheter 120 may be threaded in an artery of the patient 106 to obtain images of the coronary artery.
• the patient interface unit 110 may include a motor M inside to enable pullback of imaging optics during the acquisition of intravascular image frames.
  • the imaging pullback procedure may obtain images of the blood vessel.
• the imaging pullback path may represent the co-registration path, which may be a region of interest or a targeted region of the vessel.
  • the console 32 may include a light source(s) 101 and a computer 1200.
  • the computer 1200 may include features as discussed herein and below (see e.g., FIG. 11, FIG. 13, etc.), or alternatively may be a computer 1200’ (see e.g., FIG. 12, FIG. 13, etc.) or any other computer or processor discussed herein.
  • the computer 1200 may include an intravascular system controller 35 and an intravascular image processor 36.
  • the intravascular system controller 35 and/or the intravascular image processor 36 may operate to control the motor M in the patient interface unit 110.
  • the intravascular image processor 36 may also perform various steps for image processing and control the information to be displayed.
  • the intravascular imaging system 40 is merely one example of an intravascular imaging system that may be used within the imaging system 20.
  • Various types of intravascular imaging systems may be used, including, but not limited to, an OCT system, a multi-modality OCT system or an IVUS system, by way of example.
  • the imaging system 20 may also connect to an electrocardiography (ECG) device 60 for recording the electrical activity of the heart over a period of time using electrodes placed on the skin of the patient 106.
• the imaging system 20 may also include an image processor 50 for receiving angiography data, intravascular imaging data, and data from the ECG device 60 to execute various image-processing steps and to transmit results to a display 1209 for displaying an angiography image frame with a co-registration path.
• While the image processor 50 associated with the imaging system 20 appears external to both the angiography system 30 and the intravascular imaging system 40 in FIG. 1B, the image processor 50 may be included within the angiography system 30, the intravascular imaging system 40, the display 1209, or a stand-alone device.
• the image processor 50 may not be required if the various image processing steps are executed using one or more of the angiography image processor 26, the intravascular image processor 36 of the imaging system 20, or any other processor discussed herein (e.g., computer 1200, computer 1200’, computer or processor 2, etc.).
• FIG. 2 diagrammatically shows at least one embodiment of a neural net, employing artificial intelligence and/or deep learning, that may be used to perform one or more of the features herein, including, but not limited to, detecting catheter connection(s) or disconnection(s), in accordance with one or more aspects of the present disclosure.
  • Neural networks may include a computer system or systems.
  • a neural network may include or may comprise an input layer 200, one or more hidden layers of neurons or nodes (e.g., hidden layer #1 201, hidden layer #2 202, a third hidden layer, additional hidden layers, etc.), and an output layer 203.
  • the input layer 200 may be where the values are passed to the rest of the model. While not limited thereto, in one or more MM-OCT application(s), the input layer may be the place where the transformed OCT data may be passed to a model for evaluation.
  • the hidden layer(s) may be a series of layers that contain or include neurons or nodes that establish connections between the neurons or nodes in the other hidden layers (e.g., hidden layer #1 201, hidden layer #2 202, a third hidden layer, additional hidden layers, etc.) as shown, for example via the numerous arrows between neurons or nodes of the hidden layers 201, 202, in FIG. 2.
  • the output layer provides the result(s) of the model.
  • Collecting a series of OCT images with and without catheters connected properly may result in a plurality (e.g., several thousand) of training images.
• the data may be labeled based on whether a catheter was connected, was partially connected, was disconnected, etc. (as confirmed by a trained operator or user of the device or system).
• after at least 30,000 OCT images are captured and labeled, the data may be split into a training population and a test population.
  • data collection may be performed in the same environment or in different environments.
  • a flashlight (or any light source) may be used to shine the light down a barrel of an imaging device with no catheter imaging core to confirm that a false positive would not occur in a case where a physician pointed the imaging device at external lights (e.g., operating room lights, a computer screen, etc.).
  • the testing data may be fed through the neural net or neural networks, and the accuracy of the model(s) may be evaluated based on the result(s) of the test data.
  • Embodiments of a method or methods for detecting one or more catheter connections or disconnections may be used independently or in combination. While not limited to the discussed combination or arrangement, one or more steps may be involved in both of the workflows or processes in one or more embodiments of the present disclosure, for example, as shown in FIG. 3, FIG. 4 and/or FIG. 5 and as discussed below.
  • a neural network or networks may use a single image (e.g., a single OCT image, an image of a different imaging modality, etc.) to determine whether a catheter has established a valid or complete connection or disconnection.
  • FIG. 3 shows at least one embodiment of a method for a single image frame detection method that may be used in accordance with one or more aspects of the present disclosure.
• a single frame detection method or methods may include one or more of the following: (i) connecting a catheter (e.g., a new catheter) (see e.g., step S300 in FIG. 3); (ii) receiving or obtaining an image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S301 in FIG. 3); (iii) using a neural net (or other AI compatible network or AI-ready network) to evaluate the image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S302 in FIG. 3); (iv) determining whether the catheter is connected/disconnected or whether a connection for the catheter is detected (see e.g., step S303 in FIG. 3); and (v) setting the connection or disconnection status (e.g., “Yes” in a case where a connection is detected in the determining step, “No” in a case where a connection (e.g., a partial connection, a complete connection, any other kind of connection discussed herein, etc.) is not detected in the determining step, etc.) (see e.g., step S304 in FIG. 3).
• the connection or disconnection status may be saved in one or more memories or may be sent to one or more processors for use in the AI evaluations/determinations or for use with any other technique(s) or process(es) discussed herein.
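• As a non-limiting illustration of the single-frame workflow of FIG. 3 described above, the following Python sketch assumes a trained tf.keras binary classifier and a frame that has already been acquired and resized; the helper name and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

def evaluate_single_frame(model: tf.keras.Model, frame: np.ndarray,
                          threshold: float = 0.5) -> str:
    """Steps S301-S304: obtain one image, evaluate it with the trained
    neural net, and set the connection/disconnection status."""
    x = frame.reshape(1, -1).astype('float32')          # flatten to model input
    p_connected = float(model.predict(x, verbose=0)[0, 0])
    # The returned status may be saved to memory or sent to other processors.
    return 'connected' if p_connected > threshold else 'disconnected'
```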
  • one or more embodiments may have a neural net or network evaluate more than one frame to establish the connection or disconnection status with greater or more certainty.
• employing a plurality of images or frames may improve the accuracy of the connection or disconnection status and may provide a plurality of benefits, including, but not limited to, improving safety, improving imaging accuracy, improving connection or disconnection status accuracy or success, etc.
  • At least one way of involving or using a plurality of images or frames may be performed by reusing the neural network from the single frame evaluation (or using a neural network that may be different from the originally employed neural network or another type of network discussed herein) for the plurality of images or frames and by comparing the results across a preset or predetermined number of images or frames.
• Another embodiment for performing evaluation using multiple frames may be to train a separate neural network that takes multiple frames of image data (e.g., OCT data) as input and that outputs a connection or disconnection status through evaluating the images or frames. In both approaches, the workflow may be the same (e.g., as shown in FIG. 4).
• the method(s) may include one or more of the following: (i) connecting a catheter (e.g., a new catheter) (see e.g., step S300 in FIG. 4); (ii) receiving or obtaining an image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S301 in FIG. 4); (iii) evaluating or determining whether a set, predetermined, or minimum amount of images or frames have been collected to perform the evaluation of the plurality of images or frames (see e.g., step S402a in FIG. 4); (iv) in a case where the evaluation or determination of step S402a is “No”, returning to step S301 to obtain or receive one or more additional images or frames, or, in a case where the evaluation or determination of step S402a is “Yes”, using a neural net (or other AI compatible network or AI-ready network) to evaluate the images or frames (e.g., OCT images or frames, images or frames of another imaging modality, etc.) (see e.g., step S402b in FIG. 4); (v) determining whether the catheter is connected/disconnected or whether a connection for the catheter is detected (see e.g., step S303 in FIG. 4); and (vi) setting the connection or disconnection status (e.g., “Yes” in a case where a connection is detected in the determining step, “No” in a case where a connection (e.g., a partial connection, a complete connection, any other kind of connection discussed herein, etc.) is not detected in the determining of the catheter connection or disconnection step, etc.).
• the connection or disconnection status may be saved in one or more memories or may be sent to one or more processors for use in the AI evaluations/determinations or for use with any other technique(s) or process(es) discussed herein.
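• One assumed way to compare results across a preset number of frames, per the multi-frame workflow of FIG. 4 above, is a simple majority vote over per-frame predictions; the sketch below reuses a single-frame classifier, and min_frames and the voting rule are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

def evaluate_multi_frame(model: tf.keras.Model, frames: list,
                         min_frames: int = 5, threshold: float = 0.5):
    """Wait until enough frames are collected (step S402a), evaluate them
    (step S402b), and combine per-frame results into one status."""
    if len(frames) < min_frames:
        return None                       # not enough frames; keep acquiring
    batch = np.stack([f.reshape(-1) for f in frames[-min_frames:]])
    probs = model.predict(batch.astype('float32'), verbose=0)[:, 0]
    votes = int((probs > threshold).sum())
    return 'connected' if votes > min_frames / 2 else 'disconnected'
```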
• optics of catheter devices/apparatuses or systems may be sensitive, and, when damaged, may not provide reliable images or frames.
• the same or similar process(es) or method(s) may be used, for example, in a case where the device/apparatus or system may receive a notification that a catheter has been connected or has been disconnected, in a case where the device/apparatus or system may be in a start up phase where a catheter may not be attached yet, etc.
• the method(s) may include one or more of the following: (i) disconnecting a catheter or having a catheter be disconnected during start up or any other phase of the device/apparatus or system (see e.g., step S500 in FIG. 5); (ii) receiving or obtaining an image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S301 in FIG. 5); (iii) using a neural net (or other AI compatible network or AI-ready network) to evaluate the image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S302 in FIG. 5); (iv) determining whether the catheter is connected/disconnected or whether a connection or disconnection for the catheter is detected (see e.g., step S303 in FIG. 5); and (v) setting the connection or disconnection status (e.g., “Yes” in a case where a connection is detected in the determining step, “No” in a case where a connection (e.g., a partial connection, a complete connection, any other kind of connection discussed herein, etc.) is not detected in the determining step, etc.) (see e.g., step S304 in FIG. 5).
• the connection or disconnection status may be saved in one or more memories or may be sent to one or more processors for use in the AI evaluations/determinations or for use with any other technique(s) or process(es) discussed herein.
  • connection or disconnection evaluation techniques may be used with an OCT or other imaging modality device, system, storage medium, etc.
  • one or more connection or disconnection evaluation techniques may be used for any type of OCT, including, but not limited to, MM-OCT.
• One or more embodiments of the present disclosure may: (i) calculate a connection or disconnection status of a catheter (or other imaging device), and may perform the connection or disconnection status calculation without the use of another piece of equipment; (ii) make evaluations during a normal course of operations (instead of a separate operation) for a catheter (or other imaging device); and (iii) work on small numbers of samples (e.g., as little as one image in an imaging, OCT, and/or MM-OCT application(s)), or may work on large numbers of samples (e.g., a plurality of images or frames, a plurality of samples, a plurality of samples in a plurality of images or frames, etc.), to evaluate connection/disconnection status.
• one or more embodiments of the present disclosure may achieve at least the following advantages or may include at least the following feature(s): (i) one or more embodiments may achieve the efficient connection/disconnection evaluation and may obtain result(s) without the use of additional equipment; (ii) one or more embodiments may not use trained operators or users (while trained operators or users may be involved, such involvement is not required); (iii) one or more embodiments may perform during a normal course of operation of a catheter and/or an imaging device (as compared with and instead of a separate operation); (iv) one or more embodiments may perform connection/disconnection evaluation using a set of collected images (manually and/or automatically); and (v) one or more embodiments may provide usable measurement(s) with small and/or large samples.
  • a model (which, in one or more embodiments, may be software, software/hardware combination, or a procedure that utilizes one or more machine or deep learning algorithms/procedures/processes that has/have been trained on data to make one or more predictions for future, unseen data) has enough resolution to predict and/or evaluate the connection/disconnection with sufficient accuracy depending on the application or procedure being performed.
  • the performance of the model may be further improved by subsequently adding more training data and retraining the model to create a new instance of the model with better or optimized performance.
  • additional training data may include data based on user input, where the user may identify or correct the location of a catheter (e.g., a portion of the catheter, a connection portion of the catheter, etc.) in an image (or another imaging device in an image).
• a model and/or sets of training data or images may be obtained by collecting a series of images (e.g., OCT images) with and without catheters connected properly. For example, thousands of images (e.g., OCT images) may be captured and labeled (e.g., to establish ground truth data), and the data may be split into a training population of data and a test population of data. After training is complete, the testing data may be fed or inserted through the neural net/networks, and accuracy of the model may be evaluated based on the results of the test data.
  • a neural net/network may be able to determine whether a catheter (or other imaging device) has established a valid (e.g., complete, properly situated, optical, etc.) connection based on a single image (e.g., a single OCT image). While one or more embodiments may use one image, it may be advantageous from a safety perspective to have the neural net/network evaluate more than one frame/image to establish the status of a connection/disconnection with more certainty/ accuracy.
  • One or more methods, medical imaging devices, Intravascular Ultrasound (IVUS) or Optical Coherence Tomography (OCT) devices, imaging systems, and/or computer-readable storage mediums for evaluating catheter connections and/or disconnections using artificial intelligence may be employed in one or more embodiments of the present disclosure.
• an artificial intelligence training apparatus may include: a memory; one or more processors in communication with the memory, the one or more processors operating to: acquire or receive image data for instances or cases where a catheter is connected and for instances or cases where a catheter is not connected; establish ground truth for all the acquired image data; split the acquired image data into training, validation, and test sets or groups; evaluate a connection status for a new catheter; receive image data (e.g., angiography images, OCT images, images of another modality, etc.) for the new catheter; evaluate the image data (e.g., OCT images) by a neural network (e.g., convolutional network, neural network, recurrent network, other AI-ready or AI compatible network, etc.) using artificial intelligence; determine whether a connection is detected for the new catheter (e.g., determine whether the new catheter is connected to an imaging device or system, determine whether the new catheter is not connected or is improperly connected to an imaging device or system, etc.); and set a connection or disconnection status based on the determination.
  • the one or more processors may further operate to split the ground truth data into sets or groups for training, validation, and testing.
• the one or more processors may further operate to one or more of the following: (i) calculate or improve a connection or disconnection detection success rate using application of machine learning or deep learning; (ii) decide on the model to be trained based on a connection or disconnection detection success rate associated with the model (e.g., if an apparatus or system embodiment has multiple models to be saved, which have already been trained previously, a method of the apparatus/system may select a model for further training based on a previous success rate, based on a predetermined success factor, or based on which model is more optimal than another(s), etc.); (iii) determine whether a connection or disconnection determination is correct based on the trained model; and (iv) evaluate the connection or disconnection detection success rate.
• the one or more processors may further operate to one or more of the following: (i) split the acquired or received image data into data sets or groups having a certain ratio or percentages, for example, 60% training data, 20% validation data, and 20% test data; (ii) split the acquired or received image data randomly; (iii) split the acquired or received image data randomly either on a pullback-basis or a frame-basis; (iv) split the acquired or received image data based on or using a new set of a certain or predetermined kinds of data; and (v) split the acquired or received image data based on or using a new set of a certain or predetermined data type, the new set being one or more of the following: a new pullback-basis data set, a new frame-basis data set, new clinical data, new animal data, new potential additional training data, new data for a first type of catheter where the new data has a marker that is similar to a marker of a catheter used for the acquired or received image data, etc.
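• The following sketch shows one non-limiting way to realize the 60%/20%/20% split on a pullback-basis (all frames from one pullback land in the same group); a frame-basis split would instead shuffle frame indices directly. Function and variable names are illustrative assumptions.

```python
import numpy as np

def split_by_pullback(pullback_ids, ratios=(0.6, 0.2, 0.2), seed=0):
    """Assign each frame to 'train', 'val', or 'test' so that frames sharing
    a pullback identifier always stay in the same group."""
    rng = np.random.default_rng(seed)
    unique = np.array(sorted(set(pullback_ids)))
    rng.shuffle(unique)
    n_train = int(ratios[0] * len(unique))
    n_val = int(ratios[1] * len(unique))
    group = {pb: 'train' for pb in unique[:n_train]}
    group.update({pb: 'val' for pb in unique[n_train:n_train + n_val]})
    group.update({pb: 'test' for pb in unique[n_train + n_val:]})
    return [group[pb] for pb in pullback_ids]

# e.g., split_by_pullback([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])
```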
  • the one or more processors may further operate to one or more of the following: (i) employ data quality control; (ii) allow a user to manually select training samples or training data; and (iii) use any angio image that is captured during Optical Coherence Tomography (OCT) pullback for testing.
• One or more embodiments may include or have one or more of the following: (i) parameters including one or more hyper-parameters; (ii) the saved, trained model is used as a created detector for identifying or detecting a catheter connection or disconnection in image data; (iii) the model is one or a combination of the following: a segmentation model, a segmentation model with post-processing, a model with pre-processing, a model with post-processing, a segmentation model with pre-processing, a deep learning or machine learning model, a semantic segmentation model or classification model, an object detection or regression model, a combination of a semantic segmentation model and an object detection or regression model, a model using repeated segmentation model technique(s), a model using feature pyramid(s), and a model using repeated object detection or regression model technique(s); and (iv) the one or more processors further operate to use one or more neural networks, convolutional neural networks, or recurrent neural networks (or other AI-ready or AI compatible network(s)).
• the one or more processors may further operate to acquire or receive the image data during a pullback operation of the intravascular imaging catheter.
• the one or more processors may further operate to use one or more neural networks, convolutional neural networks, and/or recurrent neural networks (or other AI-ready or AI compatible network(s)) to one or more of: load the trained model, select a set of image frames, evaluate the catheter connection/disconnection, determine whether the catheter connection/disconnection determination is appropriate with respect to given prior knowledge (for example, vessel location and pullback direction), modify the detected results or the detected catheter location or catheter connection or disconnection status for each frame, perform the coregistration, insert the intravascular image, and acquire or receive the image data during the pullback operation.
  • the object or sample may include one or more of the following: a vessel, a target specimen or object, and a patient.
  • the one or more processors may further operate to perform the coregistration by coregistering an acquired or received angiography image and an obtained one or more Optical Coherence Tomography (OCT) or Intravascular Ultrasound (IVUS) images or frames.
• a loaded, trained model may be one or a combination of the following: a segmentation (classification) model, a segmentation model with pre-processing, a segmentation model with post-processing, an object detection (regression) model, an object detection model with pre-processing, an object detection model with post-processing, a combination of a segmentation (classification) model and an object detection (regression) model, a deep convolutional neural network model, a recurrent neural network model with long short-term memory that can take temporal relationships across images or frames into account, a model using feature pyramid(s) that can take different image resolutions into account, a genetic algorithm that operates to breed multiple models for improved performance, and/or a model using residual learning technique(s).
• the one or more processors may further operate to one or more of the following: (i) display angiography data along with an image for each of one or more imaging modalities on the display, wherein the one or more imaging modalities include one or more of the following: a tomography image; an Optical Coherence Tomography (OCT) image; a fluorescence image; a near-infrared auto-fluorescence (NIRAF) image; a near-infrared auto-fluorescence (NIRAF) image in a predetermined view, a carpet view, and/or an indicator view; a near-infrared fluorescence (NIRF) image; a near-infrared fluorescence (NIRF) image in a predetermined view, a carpet view, and/or an indicator view; a three-dimensional (3D) rendering; a 3D rendering of a vessel; a 3D rendering of a vessel in a half-pipe view or display; etc.
• One or more embodiments of a method for training a model using artificial intelligence may repeat the selection, training, and evaluation procedure for a variety of model configurations (e.g., hyper-parameter values) and finally select one or more models with the highest performance defined by one or more predefined evaluation metrics.
• One or more embodiments of a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for training a model using artificial intelligence may be used with any method(s) discussed in the present disclosure, including but not limited to, one or more catheter connection or disconnection evaluation/determination method(s).
• One or more embodiments of the present disclosure improve or maximize a catheter connection or disconnection detection success rate by, for example, improving or using alternative approaches to evaluating a catheter connection or disconnection, improving the detection method/algorithm that may utilize features that are difficult to capture via other image processing techniques (e.g., via the use of artificial intelligence, via the application of machine or deep learning, via the use of artificial intelligence results to perform coregistration, etc.), etc.
  • At least one artificial intelligence, computer-implemented task may be co-registration of images between images acquired by one or more imaging modalities, where one image is an angiography image that is acquired during intravascular imaging of a sample or object, such as, but not limited to, the coronary arteries, using an OCT probe (pullback of OCT probe upon contrast agent application, for example), and where the other intravascular imaging may be, but is not limited to, IVUS, OCT, etc.
• At least another artificial intelligence, computer-implemented task may be a specific machine learning task: keypoint detection, where the keypoint is a radiopaque marker that has been “introduced” into one or more images (e.g., angiography images, OCT images, etc.) to facilitate detection for a catheter (or other imaging device).
• One or more embodiments of the present disclosure may use other artificial intelligence technique(s) or method(s) for performing training, for splitting data into different groups (e.g., training group, validation group, test group, etc.), or other artificial intelligence technique(s) or method(s), such as, but not limited to, embodiment(s) as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties.
• For example, turning to the details of FIG. 6 of the present disclosure, one or more methods or processes of the present disclosure may include one or more of the following steps (starting at step S101 in FIG. 6): (i) acquiring angiography image data (see step S102 in FIG. 6); (ii) establishing a ground truth for all of the acquired angiography data/images (see step S103 in FIG. 6); (iii) splitting the acquired angiography data/image set (examples of images and/or corresponding ground truths) into training, validation, and test groups or sets (see step S104 in FIG. 6); (iv) choosing the hyper-parameters for model training, including, but not limited to, the model architecture, the learning rate, and the initialization of parameter values (see step S105 in FIG. 6); (v) training a model with data in the training group or training set and evaluating it with data in the validation group or validation set (see step S106 in FIG. 6); (vi) determining whether the performance of the trained model is good or sufficient (see step S107 in FIG. 6); (vii) in the event that step S107 results in a “No”, returning to before step S105 and repeating steps S105-S106, or, in the event that step S107 results in a “Yes”, proceeding to step S108; (viii) estimating a generalization error of the trained model with data in the test group or test set (see step S108 in FIG. 6); and (ix) saving the trained model to a memory (see step S109 in FIG. 6) (and then ending the process at step S110 in FIG. 6).
  • the steps shown in FIG. 6 may be performed in any logical sequence and may be omitted in parts in one or more embodiments.
  • step S109 may involve saving the trained model to the memory or a disk, and may automatically save the trained model or may prompt a user (one or more times) to save the trained model.
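• A minimal Python sketch of the FIG. 6 procedure (steps S105-S109) follows; it assumes a hypothetical build_model factory that returns a compiled tf.keras model for a given hyper-parameter setting, and the grid, epochs default, and file name are illustrative assumptions.

```python
import tensorflow as tf

def train_select_and_save(build_model, hyper_param_grid, train, val, test,
                          path='trained_model.keras'):
    """Steps S105-S109: choose hyper-parameters, train on the training set,
    evaluate on the validation set, repeat as needed, estimate generalization
    error on the test set, and save the trained model."""
    best_model, best_val_loss = None, float('inf')
    for hp in hyper_param_grid:                          # step S105, repeated
        model = build_model(hp)
        model.fit(*train, epochs=hp.get('epochs', 10), verbose=0)   # S106
        val_loss = model.evaluate(*val, verbose=0)                   # S106
        val_loss = val_loss[0] if isinstance(val_loss, list) else val_loss
        if val_loss < best_val_loss:                     # S107 sufficiency check
            best_model, best_val_loss = model, val_loss
    test_loss = best_model.evaluate(*test, verbose=0)    # S108 generalization
    best_model.save(path)                                # S109 save the model
    return best_model, test_loss
```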
• a model may be selected based on its performance on the validation set, and the generalization error may be estimated on the test set using the selected model.
  • an apparatus, system, method, or storage medium may have multiple models to be saved, which have already been trained previously, and the apparatus, system, method, or storage medium may select a model for further training based on a previous or prior success rate.
• a trained model may work for any angio apparatus or system with a same or similar success rate; however, in a situation where more data exists from different angio apparatuses or systems, one model may work better for a certain angio apparatus or system whereas another model may work better for a different angio apparatus or system.
  • one or more embodiments may create test or validation data set(s) for specific angio apparatus(es) or system(s), and may identify which model works best for a specific angio apparatus(es) or system(s) with the test set(s) and/or validation set(s).
• FIG. 7 shows at least one embodiment of a catheter 120 that may be used in one or more embodiments of the present disclosure for obtaining images; for using and/or controlling multiple imaging modalities that apply machine learning, especially deep learning, to identify a catheter connection or disconnection in an angiography image frame with greater or maximum success; and for using the results to perform coregistration more efficiently or with maximum efficiency.
• FIG. 7 shows an embodiment of the catheter 120 including a sheath 121, a coil 122, a protector 123, and an optical probe 124. As shown schematically, the catheter 120 may be connected to a patient interface unit (PIU) 110 to spin the coil 122 with pullback (e.g., at least one embodiment of the PIU 110 operates to spin the coil 122 with pullback).
  • the coil 122 delivers torque from a proximal end to a distal end thereof (e.g., via or by a rotational motor in the PIU 110).
• the coil 122 is fixed with/to the optical probe 124 so that a distal tip of the optical probe 124 also spins to see an omnidirectional view of the object (e.g., a biological organ, sample or material being evaluated, such as, but not limited to, hollow organs such as vessels, a heart, a coronary artery, etc.).
• fiber optic catheters and endoscopes may reside in the sample arm (such as the sample arm 103 as shown in one or more of FIGS. 9A-9C discussed below) of an OCT interferometer in order to provide access to internal organs or areas that are difficult to access, such as for intravascular images, the gastrointestinal tract, or any other narrow area.
• As the beam of light through the optical probe 124 inside of the catheter 120 or endoscope is rotated across the surface of interest, cross-sectional images of one or more objects are obtained.
• the optical probe 124 is simultaneously translated longitudinally during the rotational spin, resulting in a helical scanning pattern. This translation is most commonly performed by pulling the tip of the probe 124 back towards the proximal end and is therefore referred to as a pullback.
  • the catheter 120 which, in one or more embodiments, comprises the sheath 121, the coil 122, the protector 123 and the optical probe 124 as aforementioned (and as shown in FIG. 7), may be connected to the PIU 110.
  • the optical probe 124 may comprise an optical fiber connector, an optical fiber and a distal lens.
  • the optical fiber connector may be used to engage with the PIU 110.
  • the optical fiber may operate to deliver light to the distal lens.
  • the distal lens may operate to shape the optical beam and to illuminate light to the object (e.g., the object 106 (e.g., a vessel) discussed herein), and to collect light from the sample (e.g., the object 106 (e.g., a vessel) discussed herein) efficiently.
• While the target, sample, or object 106 may be a vessel in one or more embodiments, the target, sample, or object 106 may be different from a vessel (and is not limited thereto) depending on the particular use(s) or application(s) being employed with the catheter 120.
• the coil 122 delivers torque from a proximal end to a distal end thereof (e.g., via or by a rotational motor in the PIU 110). There may be a mirror at the distal end so that the light beam is deflected outward.
  • the coil 122 is fixed with/to the optical probe 124 so that a distal tip of the optical probe 124 also spins to see an omnidirectional view of an object (e.g., a biological organ, sample or material being evaluated, such as, but not limited to, hollow organs such as vessels, a heart, a coronary artery, etc.).
• the optical probe 124 may include a fiber connector at a proximal end, a double clad fiber, and a lens at a distal end.
  • the fiber connector operates to be connected with the PIU 110.
• the double clad fiber may operate to transmit & collect OCT light through the core and, in one or more embodiments, to collect Raman and/or fluorescence from an object (e.g., the object 106 (e.g., a vessel) discussed herein, an object and/or a patient (e.g., a vessel in the patient), etc.) through the clad.
  • the lens may be used for focusing and collecting light to and/or from the object (e.g., the object 106 (e.g., a vessel) discussed herein).
  • the scattered light through the clad is relatively higher than that through the core because the size of the core is much smaller than the size of the clad.
  • disconnected/broken catheter imaging may appear as random static whereas connected catheter imaging may appear with distinct characteristics (e.g., such as, but not limited to, lumen size and shape, lumen dimensions, a size and shape of an object being imaged, geometric aspects or characteristics of an object being imaged, other characteristics of an object being imaged, etc.).
  • images where one or more distinct characteristics may not be shown and/or where static (e.g., random static) may occur may indicate that a catheter is not validly connected or validly disconnected or that a catheter or a portion thereof may be broken.
  • the neural network may be used to troubleshoot and identify the broken portion of the catheter so that: (i) the catheter may be fixed and may achieve a proper or valid connection or disconnection state(s) or mode(s); and (ii) an image or images being obtained may show a proper or valid connection/disconnection of the catheter and/or may have little or no static in the image or images.
  • a binary classifier was used since a catheter may be either connected (fully connected, partially connected in cases where a partial connection is intended by a user of the imaging device or catheter, etc.), disconnected (fully disconnected, partially disconnected in a case where a complete connection is intended by a user of the imaging device or catheter, etc.), or broken.
• While not limited thereto, a number of pullbacks that may be used is about 40 (as done for one or more of the experiments). That said, using human expertise, imaging applications, desired features, etc., hundreds of pullbacks, thousands of pullbacks, or more pullbacks may be used.
  • the imaging device or system used was an MM-OCT apparatus. That said, the various technique(s), structure(s), and method(s) are not limited thereto, and one or more other imaging modalities, or other type(s) of imaging devices or systems (e.g., a robot, a continuum robot, or other robot apparatus or system discussed further below) may be used with one or more features of the present disclosure.
  • the experiments involving the MM-OCT application saved data in the scan(s) of case files automatically when a pullback was completed. This feature was used to collect a series of pullbacks in different lighting, calibration, and environmental conditions with an invalid catheter attachment (dummy handle) and a valid catheter attachment.
  • the resulting case files were then read through the use of the resizeLabelAndSaveCase method(s) contained in dataProcessing.py, such methods operating to read the MM-OCT case files and extract the selected data (in this case, Processed Polar OCT Data) formatted as a two-dimensional (2-D) array.
• the extracted data was then resized (in the case of the subject experiments, the data was resized to 25 x 25 pixels) to allow the neural net to be trained more efficiently.
  • the appropriate label indicating known catheter connection/disconnection status was then appended to the resized data, and the new data was flattened into a single array.
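• The resizeLabelAndSaveCase method(s) themselves are not reproduced in the present disclosure; the Python sketch below is an illustrative analogue of the processing just described (resize the 2-D processed polar OCT array to 25 x 25 pixels, append the known connection label, and flatten into a single array).

```python
import numpy as np
from PIL import Image

def resize_label_and_flatten(polar_oct: np.ndarray, connected: bool,
                             size=(25, 25)) -> np.ndarray:
    """Resize one 2-D processed polar OCT frame, append its known
    connection/disconnection label, and flatten into a single array."""
    img = Image.fromarray(polar_oct.astype(np.float32))   # mode 'F' image
    resized = np.asarray(img.resize(size))                # e.g., 25 x 25 pixels
    flat = resized.flatten()
    return np.append(flat, 1.0 if connected else 0.0)     # label appended last
```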
• Each training Epoch represents passing the data (or all of the data) in the training set through the neural network to train the model.
• the loss value is a measure of how efficient the model is as it is getting trained (as shown in the data, the loss value gets lower over the course of the training, which confirms the accuracy and efficiency of the one or more features of the instant application).
  • a goal may be set to get the loss value as close to zero as possible. While not limited to this embodiment, one way to achieve a loss value as close to zero as possible is to have as many Epochs as necessary to get a slope of the loss values to flatten or substantially flatten.
• a number of Epochs that may be used may be one or more of the following: 10 Epochs, more than 10 Epochs, 20 Epochs, 30 Epochs, 40 Epochs, 50 Epochs, 60 Epochs, 70 Epochs, 80 Epochs, 90 Epochs, 100 Epochs, more than 100 Epochs, etc.
  • Accuracy, precision, and recall are all measures of an effectiveness of a model at predictions based on the test data that was given.
  • a preferred goal is to have a 1.00 value, which represents 100%.
• the data for the 100 Epoch experiment showed an initial accuracy of 0.43 (or 43%), and, as the training went on in the first Epoch, that accuracy increased to 0.94 (94%).
  • initial accuracy increased to 0.9518 (95.18%) as the training went on in the first Epoch, and an accuracy of 1.0000 (100%) was obtained by the second Epoch.
• because a goal of catheter connection determination may be viewed or defined as discrete, 100% accuracy, 100% precision, and 100% recall were obtained quickly.
  • the file containing the data gathered and labeled in the data collection was read.
• the data was then split into a training population and a test population, each comprising image data and a known catheter connection/disconnection status.
  • the training and test data were taken from the same population, and were selected at random from the original population to ensure that there is no bias between test and training populations.
  • a neural network was then compiled comprising or consisting of multiple layers.
  • the input layer to the neural network had the same number of nodes as the resized image had pixels, to ensure that each pixel was taken into consideration (although the method(s) discussed herein are not limited thereto, and the neural network may have a different number of nodes as compared with a number of pixels of a resized image).
  • One or more additional hidden layers were added to allow for the weights of the neural network to be significant enough for accurate prediction.
• the first layer preferably may be the size of the inputs (in the case of the experiment, 625 nodes, one for each pixel of the resized 25 x 25 image).
  • hidden layer(s) may be up to the discretion of the engineer, neural network user or expert, or imaging device expert or user.
  • additional hidden layers may increase accuracy but come with an additional cost in performance.
  • an engineer, neural network expert/ user, or imaging device expert/ user may start with one or two hidden layers, and may add additional layers if desired or useful. The final layer only had a single neuron activated by a sigmoid function.
  • a sigmoid function was used with binary classification because, for values less than or equal to 0.5, the sigmoid function returns a value of zero (0) and for values larger than 0.5, the sigmoid function returns a value of one (1).
• the sigmoid function used during the experiment may be found online at https://www.tensorflow.org/api_docs/python/tf/keras/activations/sigmoid. Other activation functions, like a linear or binary step function, a Rectified Linear Unit (ReLU), etc., may be used as alternatives. That said, the sigmoid function was used in the binary situation because the results were binary.
• the results of this neural net are binary, which requires only a single node for indication, and a sigmoid activation function allowed for clearer indication of a binary result than other potential activation functions.
  • the results may be non-binary depending on how a user of the image device defines the catheter connection(s) or disconnection(s) (e.g., there may be multiple types of connections, such as, but not limited to, fully connected status, partially connected status in a case where a partial connection is acceptable, etc.; and there may be multiple types of disconnections, such as, but not limited to, fully disconnected status, partially disconnected status in a case where a partial disconnection is not acceptable, etc.).
  • the training image data and image labels were then passed to the neural network for training.
  • the resulting training adjusted the weights of the different nodes to activate when or in a case where a catheter connection / disconnection was detected.
  • the model was evaluated using the testing image data and image labels to determine how accurately the neural net predicts the correct label.
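• A minimal tf.keras sketch of the classifier just described follows: one input node per pixel of the resized image, hidden layers, and a single sigmoid-activated output neuron; the hidden-layer widths and the optimizer are assumptions, not values reported for the experiments.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(625,)),            # one node per pixel (25 x 25)
    tf.keras.layers.Dense(64, activation='relu'),   # hidden layer(s): up to the
    tf.keras.layers.Dense(32, activation='relu'),   # engineer's discretion
    tf.keras.layers.Dense(1, activation='sigmoid'), # single binary output node
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy',
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# model.fit(train_images, train_labels, epochs=10)   # training Epochs
# model.evaluate(test_images, test_labels)           # accuracy/precision/recall
```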
  • data obtained in live mode may not be as accurate as the pullback data.
  • one or more ways to improve the accuracy of such live mode data may include one or more of the following: filtering or smoothing the data to improve the resolution or image quality of the data, collecting data in a pullback state and in a live mode state to compare the accuracy of the live mode state with the accuracy of the pullback state, etc.
  • all of the images were transformed to 25 x 25 pixels.
  • images may be larger than 25 x 25 pixels.
• FIG. 8 shows the efficiency of the technique(s), feature(s), and/or method(s) discussed herein because values of 1.0 were achieved for accuracy, precision, and recall by Epoch 2, and the loss value continued to decrease and improve from Epoch 1 through Epoch 10.
• Just as MM-OCT devices or systems may benefit from establishing catheter connections, robot or continuum robot devices or systems (or other types of robot devices or systems) that may use the same or similar connections may benefit from accurately detecting component connections and/or disconnections.
• the majority of the connection or disconnection detection method(s) may be reused, including the neural network features. Changes may be made, for example, in how the initial data is collected and transformed in one or more embodiments.
  • the MM-OCT application(s) may have mechanisms for saving data, and the continuum robot or robot application(s) may also use saving data mechanisms in one or more embodiments.
• the robot, continuum robot (e.g., a chip-on-tip or other camera may be passed through a tool channel of a robot or continuum robot, and such a chip-on-tip or other camera may be connected to return imaging data), or other robot camera(s) may be or may include a color camera, and the images collected by the imaging application(s) may be greyscale or may include greyscale images, so a shift from color to grayscale may also be employed for imaging application(s) and for considering imaging quality and related data in one or more embodiments.
  • the neural network may be trained to detect the end of the tool channel.
  • This detection may occur by training the neural network to recognize the difference between the signals present in the tool channel environment and the signals that are present outside of the tool channel or channel environment.
  • the neural net may be trained to identify a case or cases where the field of perception or the field of view expands.
  • One or more other embodiments may use a neural net to recognize a unique marker placed at the end of the tool channel during manufacturing. This marker may span the circumference of the inside of the tool chamber or tool channel or may include one or more icons placed along the circumference of the tool chamber or tool channel. In some embodiments, multiple markers may be used to signify distance to the end of the tool chamber or the tool channel.
  • One or more embodiments of the present disclosure may use one or more different types of models, such as those discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties.
  • selecting a model may depend on a success rate of coregistration, which may be affected by a catheter connection or disconnection detection success rate, in the setting of a final application on validation and/or test data set(s).
  • Such consideration(s) may be balanced with time (e.g., a predetermined time period, a desired time period, an available time period, a target time period, etc.) for processing/predicting and user interaction.
  • success rate(s) may be calculated in many different ways.
  • one example of a catheter connection or disconnection detection success rate is to calculate the number of frames for which the predicted and the true catheter connections or disconnections are considered the same (e.g., when the distance between predicted and true catheter connections or disconnections is within a certain tolerance or below a pre-defined distance threshold, which is defined by a user or pre-defined in the system; etc.) divided by the total number of frames obtained, received, or imaged during the OCT pullback.
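• In code form, the example success-rate definition above reduces to a short calculation (the tolerance is user-defined or pre-defined in the system):

```python
def detection_success_rate(predicted, truth, tolerance):
    """Frames where predicted and true detections agree within a tolerance,
    divided by the total number of frames obtained during the pullback."""
    hits = sum(1 for p, t in zip(predicted, truth) if abs(p - t) <= tolerance)
    return hits / len(truth)
```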
  • catheter connection or disconnection detection success rates and coregistration success rates may be improved or maximized.
  • the success rate of catheter connection or disconnection detection (and consequently the success rate of coregistration) may depend on how good the prediction of a catheter location, connection, or disconnection is across all frames. As such, by improving estimation of the catheter location, the success rate of the catheter connection or disconnection detection may be improved and likewise the success rate of coregistration may be improved.
• in a segmentation model (also referred to as a classification model or a semantic segmentation model) architecture, one or more certain area(s) of an image are predicted to belong to one or more classes in one or more embodiments.
• One or more embodiments may use various segmentation model architectures, or ways to formulate or frame the image segmentation task or issue, such as, but not limited to, embodiment(s) as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties.
  • one or more embodiments of a semantic segmentation model may be performed using the One-Hundred Layers Tiramisu method discussed in “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jegou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017 (https://arxiv.org/pdf/1611.09326.pdf), which is incorporated by reference herein in its entirety.
  • CNNs may be used for one or more features of the present invention, including, but not limited to, artificial intelligence feature(s), detecting one or more catheter connections or disconnections, using the catheter connection or disconnection detection results to perform coregistration, image classification, semantic image segmentation, etc.
  • one or more embodiments may combine U-net, ResNet, and DenseNet architectural components to perform segmentation.
  • U-net is a popular convolutional neural network architecture for image segmentation
  • ResNet improves training deep convolutional neural network models due to its skip connections
  • DenseNet has reliable and good feature extractors because of its compact internal representations and reduced feature redundancy.
  • a network may be trained by slicing the training data set, and not downsampling the data (in other words, image resolution may be preserved or maintained).
  • one or more features such as, but not limited to, convolution, concatenation, transition up, transition down, dense block, etc., may be employed by slicing the training data set.
• a slicing size may be one or more of the following: 100 x 100, 224 x 224, and 512 x 512.
  • 16 images/batch may be used.
• the optimization of all of these hyper-parameters depends on the size of the available data set as well as the available computer/computing resources; thus, once more data is available, different hyper-parameter values may be chosen.
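• The sketch below shows one assumed way to slice training images into fixed-size tiles (100 x 100, 224 x 224, or 512 x 512 being the slicing sizes mentioned above) rather than downsampling them, so that image resolution is preserved; edge remainders are simply dropped in this simplification.

```python
import numpy as np

def slice_image(image: np.ndarray, size: int = 224):
    """Cut a full-resolution image into non-overlapping size x size tiles
    instead of downsampling, preserving the original resolution."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]
```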
• a convolutional autoencoder (CAE) may be used.
• a segmentation model may be used to demarcate regions of interest in an image representing a blood vessel. Since the catheter may be located inside a vessel (e.g., an intravascular OCT imaging probe) in one or more embodiments, demarcation of vessels may be used to improve the accuracy and precision of catheter detection.
  • the segmentation model with post-processing may be used with one or more features from “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jegou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017 (https://arxiv.org/pdf/1611.09326.pdf), which is incorporated by reference herein in its entirety.
  • one or more embodiments may use an angio image or images as an input and may predict the catheter location or catheter connection or disconnection in a form of a spatial coordinate.
  • This approach/architecture has advantages over semantic segmentation because the object detection model predicts the catheter location or catheter connection or disconnection directly, and may avoid post-processing in one or more embodiments.
• the object detection model architecture may be created or built by using or combining convolutional layers, maxpooling layers, fully-connected dense layers, and/or multi-scale image or feature pyramids. Different combinations may be used to determine the best performance test result. The performance test result(s) may be compared with other model architecture test results to determine which architecture to use for a given application or applications.
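• As a non-limiting illustration of such an architecture, the sketch below combines convolutional, max-pooling, and fully-connected dense layers into a regression model that outputs a spatial (x, y) coordinate directly; all filter counts, layer depths, and the input size are assumptions for illustration.

```python
import tensorflow as tf

coord_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 1)),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(2),     # (x, y) predicted directly, no post-processing
])
coord_model.compile(optimizer='adam', loss='mse')
```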
  • One or more embodiments of architecture model(s) discussed herein may be used with one or more of: a neural network(s), a convolutional neural network(s), and a random forest.
  • One or more embodiments may use convolutional neural network architectures with residual connections as discussed in “Deep Residual Learning for Image Recognition” by Kaiming He, et al., Microsoft Research, December 10, 2015 (https://arxiv.org/pdf/1512.03385.pdf), which is incorporated by reference herein in its entirety.
  • a different neural network architecture may be used.
  • a neural network architecture may use feature pyramids as described in “Feature Pyramid Networks for Object Detection” by Tsung-Yi Lin, et al., Facebook Al Research (FAIR), April 19, 2017 (https://arxiv.org/abs/1612.03144).
  • the machine learning algorithm or model architecture is not limited to the structures or details discussed herein.
• One or more embodiments may use a recurrent convolutional neural network object detection model with long short-term memory (see e.g., “long short-term memory” as discussed in “Long Short-Term Memory” by Hochreiter, et al., Neural Computation, Volume 9, Issue 8, November 1997 (https://dl.acm.org/doi/10.1162/neco.1997.9.8.1735); as discussed in “Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network” by Alex Sherstinsky, Elsevier Journal “Physica D: Nonlinear Phenomena”, Volume 404, March 2020 (https://arxiv.org/abs/1808.03314); as discussed in “Sequence to Sequence Learning with Neural Networks” by Sutskever, et al., December 2014 (https://papers.nips.cc/paper/5346-sequence-to
  • model input is a sequence of multiple frames
  • model output is a sequence of spatial coordinates for marker and/or catheter locations in each of the given images.
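• A minimal sketch of such a recurrent model, assuming per-frame CNN features fed through an LSTM (all layer sizes are illustrative assumptions):

    import torch
    import torch.nn as nn

    class RecurrentDetector(nn.Module):
        """Per-frame CNN features are passed through an LSTM so each
        frame's coordinate prediction can use temporal context from
        the sequence of frames."""
        def __init__(self, feat=64, hidden=128):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, feat, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.lstm = nn.LSTM(feat, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)          # (x, y) per frame

        def forward(self, frames):                    # (B, T, 1, H, W)
            b, t = frames.shape[:2]
            f = self.cnn(frames.flatten(0, 1))        # (B*T, feat)
            seq, _ = self.lstm(f.view(b, t, -1))      # (B, T, hidden)
            return self.head(seq)                     # (B, T, 2)

    coords = RecurrentDetector()(torch.zeros(2, 5, 1, 256, 256))  # (2, 5, 2)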
  • One or more embodiments may use a neural network model that is created by transfer learning.
• Transfer learning is a method of using a model with pre-trained (instead of randomly initialized) parameters that have been optimized for the same or a different objective (e.g., to solve a different image recognition or computer vision issue) on a different data set with a potentially different underlying data distribution.
  • the model architecture may be adapted or used to solve new objective(s) or issue(s), for example, by adding, removing, or replacing one or more layers of the neural network, and the potentially modified model is then further trained (fine-tuned) on the new data set.
  • this learning approach may help improve the performance of the model, especially when the size of the available data set is small.
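• A minimal transfer-learning sketch under stated assumptions (a torchvision ResNet backbone pre-trained on ImageNet, a replaced regression head, and partial freezing; the exact torchvision weights API varies by version):

    import torch.nn as nn
    from torchvision import models

    # Load parameters pre-trained on a different data set/objective
    # (API note: older torchvision versions use pretrained=True instead).
    model = models.resnet18(weights="IMAGENET1K_V1")

    # Adapt the architecture to the new objective: replace the
    # classification head with a 2-D coordinate-regression head.
    # (ResNet expects 3-channel input; grayscale frames would need a
    # channel adaptation, omitted here.)
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Optionally freeze early layers and fine-tune only the later ones,
    # which may help when the available data set is small.
    for name, p in model.named_parameters():
        if not name.startswith(("layer4", "fc")):
            p.requires_grad = False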
• in one or more embodiments, the success rate improves by about 30%.
  • evaluation metric(s) may be used for model evaluation such as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties.
  • an object detection model may not have enough resolution for accurate prediction of the marker location. That said, in one or more embodiments, a sufficiently optimized object detection model may achieve better or maximized performance.
• While a segmentation model may provide better resolution than at least one embodiment of an object detection model, as aforementioned, at least one embodiment of a segmentation model may use post-processing to obtain a coordinate of a predicted marker location (which may lead to a lower marker detection success rate in one or more embodiments).
• a combination model may be employed which, for example, involves: (i) running a semantic segmentation model and then applying an object detection model to an area with higher probability from the segmentation model (one or more features of such combined approaches may be used in one or more embodiments of the present disclosure, including, but not limited to, those as discussed in “Mask R-CNN” to Kaiming He, et al., Facebook AI Research (FAIR), January 24, 2018 (https://arxiv.org/pdf/1703.06870.pdf), which is incorporated by reference herein in its entirety); and/or (ii) running an object detection model with a bigger normalized range, applying the object detection model, and then applying the object detection model again to a higher probability area from the first object detection model.
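• A hypothetical sketch of option (i) above (seg_model and det_model are assumed callables, not an API of the present disclosure): run the segmentation model first, then apply the detection model only inside the high-probability region and map the result back to the full frame:

    import numpy as np

    def combined_detection(image, seg_model, det_model, thresh=0.5, pad=32):
        """seg_model returns a per-pixel probability map; det_model
        returns (x, y) offsets within the crop it is given."""
        prob = seg_model(image)                     # (H, W) probabilities
        ys, xs = np.where(prob > thresh)
        if ys.size == 0:
            return None                             # nothing above threshold
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, image.shape[0])
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, image.shape[1])
        dx, dy = det_model(image[y0:y1, x0:x1])     # detect inside the crop
        return x0 + dx, y0 + dy                     # full-frame coordinates

Restricting the detector to the segmented region may reduce false positives and computation relative to running the detector over the whole frame.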
  • Visualization, PCI procedure planning, and physiological assessment may be combined to perform complete PCI planning beforehand, and to perform complete assessment after the procedure.
• an interventional device (e.g., a stent) may be selected, and virtual PCI may be performed in a computer simulation (e.g., by one or more of the computers discussed herein, such as, but not limited to, the computer 2, the processor or computer 1200, the processor or computer 1200’, any other processor discussed herein, etc.).
  • another physiological assessment may be performed based on the result of the virtual PCI. This approach allows a user to find the best device (e.g., interventional device, implant, stent, etc.) for each patient before or during the procedure.
• While a few examples of GUIs have been discussed herein and shown in one or more of the figures of the present disclosure, other GUI features, imaging modality features, or other imaging features may be used in one or more embodiments of the present disclosure, such as the GUI feature(s), imaging feature(s), and/or imaging modality feature(s) disclosed in U.S. Pat. App. No. 16/401,390, filed May 2, 2019, and disclosed in U.S. Pat. Pub. No. 2019/0029624 and WO 2019/023375, which application(s) and publication(s) are incorporated by reference herein in their entireties.
  • One or more methods or algorithms for calculating stent expansion/underexpansion or apposition/malapposition may be used in one or more embodiments of the present disclosure, including, but not limited to, the expansion/underexpansion and apposition/malapposition methods or algorithms discussed in U.S. Pat. Pub. Nos. 2019/0102906 and 2019/0099080, which publications are incorporated by reference herein in their entireties.
• One or more methods or algorithms for calculating or evaluating cardiac motion using an angiography image and/or for displaying anatomical imaging may be used in one or more embodiments of the present disclosure, including, but not limited to, the methods or algorithms discussed in U.S. Pat. Pub. No. 2019/0029623 and U.S. Pat. Pub. No. 2018/0271614 and WO 2019/023382, which publications are incorporated by reference herein in their entireties.
  • One or more methods or algorithms for performing co-registration and/or imaging may be used in one or more embodiments of the present disclosure, including, but not limited to, the methods or algorithms discussed in U.S. Pat. App. No. 62/798,885, filed on January 30, 2019, and discussed in U.S. Pat. Pub. No. 2019/0029624, which application(s) and publication(s) are incorporated by reference herein in their entireties.
  • control bars may be contoured, curved, or have any other configuration desired or set by a user.
  • a user may define or create the size and shape of a control bar based on a user moving a pointer, a finger, a stylus, another tool, etc. on the touch screen (or alternatively by moving a mouse or other input tool or device regardless of whether a touch screen is used or not).
• One or more embodiments of the present disclosure may include taking multiple views (e.g., OCT image, ring view, tomo view, anatomical view, etc.), and one or more embodiments may highlight or emphasize NIRAF.
• two handles may operate as endpoints that may bound the color extremes of the NIRAF data in one or more embodiments.
  • the user may select to display multiple longitudinal views.
  • the Graphical User Interface may also display angiography images.
  • the aforementioned features are not limited to being displayed or controlled using any particular GUI.
  • the aforementioned imaging modalities may be used in various ways, including with or without one or more features of aforementioned embodiments of a GUI or GUIs.
• a GUI may show an OCT image with a tool or marker to change the image view, as aforementioned; such an image may even be presented without a full GUI (or with one or more other components of a GUI; in one or more embodiments, the display may be simplified for a user to display set or desired information).
• in one or more embodiments, the procedure to select the region of interest and the position of a marker, an angle, a plane, etc. (for example, using a touch screen, a GUI (or one or more components of a GUI; in one or more embodiments, the display may be simplified for a user to display the set or desired information), and/or a processor (e.g., the processor or computer 2, 1200, 1200’, or any other processor discussed herein)) may involve a single press with a finger and dragging on the area to make the selection or modification.
• the new orientation and updates to the view may be calculated upon release of a finger or a pointer.
  • two simultaneous touch points may be used to make a selection or modification, and may update the view based on calculations upon release.
  • One or more functions may be controlled with one of the imaging modalities, such as the angiography image view or the OCT image view, to centralize user attention, maintain focus, and allow the user to see all relevant information in a single moment in time.
  • one of the imaging modalities such as the angiography image view or the OCT image view
  • one imaging modality may be displayed or multiple imaging modalities may be displayed.
• One or more procedures may be used in one or more embodiments to select a region of choice or a region of interest for a view. For example, after a single touch is made on a selected area (e.g., by using a touch screen, by using a mouse or other input device to make a selection, etc.), the semi-circle (or other geometric shape used for the designated area) may automatically adjust to the selected region of choice or interest. Two (2) single touch points may operate to connect/draw the region of choice or interest.
• FIG. 9A shows an OCT system 100 (as referred to herein as “system 100” or “the system 100”) which may be used for one or more imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared autofluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) co-registration, catheter connection/disconnection detection, etc.) in accordance with one or more aspects of the present disclosure.
• the system 100 comprises a light source 101, a reference arm 102, a sample arm 103, a deflected or deflecting section 108, a reference mirror (also referred to as a “reference reflection”, “reference reflector”, “partially reflecting mirror” and a “partial reflector”) 105, and one or more detectors 107 (which may be connected to a computer 1200).
• the system 100 may include a patient interface device or unit (“PIU”) 110 and a catheter 120 (see e.g., embodiment examples of a PIU and a catheter as shown in FIGS. 1A-1B, FIG. 7 and/or FIGS.
• the system 100 may interact with an object 106, a patient (e.g., a blood vessel of a patient) 106, a sample, etc. (e.g., via the catheter 120 and/or the PIU 110).
• the system 100 includes an interferometer, or an interferometer is defined by one or more components of the system 100, such as, but not limited to, at least the light source 101, the reference arm 102, the sample arm 103, the deflecting section 108 and the reference mirror 105.
• bench top systems may be utilized for one or more imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared autofluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, OCT-NIRF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) co-registration, catheter connection/disconnection detection, marker detection, etc.) in accordance with one or more aspects of the present disclosure.
• FIG. 9B shows an example of a system that can utilize the one or more imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared auto-fluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, OCT-NIRF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) co-registration, catheter connection/disconnection detection, marker detection, etc.) in accordance with one or more aspects of the present disclosure discussed herein for a bench-top system, such as for ophthalmic applications.
• light from a light source 101 is delivered and split into a reference arm 102 and a sample arm 103 with a deflecting section 108.
  • a reference beam goes through a length adjustment section 904 and is reflected from a reference mirror (such as or similar to the reference mirror or reference reflection 105 shown in FIG. 9A) in the reference arm 102 while a sample beam is reflected or scattered from an object, a patient (e.g., blood vessel of a patient), etc. 106 in the sample arm 103 (e.g., via the PIU 110 and the catheter 120).
  • both beams combine at the deflecting section 108 and generate interference patterns.
  • the beams go to the combiner 903, and the combiner 903 combines both beams via the circulator 901 and the deflecting section 108, and the combined beams are delivered to one or more detectors (such as the one or more detectors 107).
• the output of the interferometer is continuously acquired with one or more detectors, such as the one or more detectors 107.
• the electrical analog signals are converted to digital signals to analyze them with a computer, such as, but not limited to, the computer 1200 (see FIGS. 9A-9C; also shown in FIGS. 11 and 13 discussed further below), the computer 1200’ (see e.g., FIGS. 12 and 13 discussed further below), the computer 2 (see FIG. 1A), the processors 26, 36, 50, any other computer or processor discussed herein, etc. Additionally or alternatively, one or more of the computers, CPUs, processors, etc. discussed herein may be used to process, control, update, emphasize, and/or change one or more of the imaging modalities, and/or process the related techniques, functions or methods, or may process the electrical signals as discussed above.
• the electrical analog signals may be converted to the digital signals to analyze them with a computer, such as, but not limited to, the computer 1200 (see FIGS. 1B and 9A-9C; also shown in FIGS. 11 and 13 discussed further below), the computer 1200’ (see e.g., FIGS. 12-13 discussed further below), the computer 2 (see FIG. 1A), any other processor or computer discussed herein, etc. Additionally or alternatively, one or more of the computers, CPUs, processors, etc. discussed herein may be used to process, control, update, emphasize, and/or change one or more imaging modalities, and/or process the related techniques, functions or methods, or may process the electrical signals as discussed above.
• in one or more embodiments (see e.g., FIG. 9B), the sample arm 103 includes the PIU 110 and the catheter 120 so that the sample beam is reflected or scattered from the object, patient (e.g., a blood vessel of a patient), etc. 106 as discussed herein.
  • the PIU 110 may include one or more motors to control the pullback operation of the catheter 120 (or one or more components thereof) and/or to control the rotation or spin of the catheter 120 (or one or more components thereof) (see e.g., the motor M of FIG. 1B).
  • the PIU 110 may include a pullback motor (PM) and a spin motor (SM), and/or may include a motion control unit 112 that operates to perform the pullback and/or rotation features using the pullback motor PM and/or the spin motor SM.
• the PIU 110 may include a rotary junction (e.g., rotary junction RJ as shown in FIGS. 9B and 9C).
• the rotary junction RJ may be connected to the spin motor SM so that the catheter 120 may obtain one or more views or images of the object, patient (e.g., a blood vessel of a patient), etc. 106.
  • the computer 1200 (or the computer 1200’, computer 2, any other computer or processor discussed herein, etc.) may be used to control one or more of the pullback motor PM, the spin motor SM and/or the motion control unit 112.
• An OCT system may include one or more of a computer (e.g., the computer 1200, the computer 1200’, the computer 2, any other computer or processor discussed herein, etc.), the PIU 110, the catheter 120, a monitor (such as the display 1209), etc.
  • One or more embodiments of an OCT system may interact with one or more external systems, such as, but not limited to, an angio system, external displays, one or more hospital networks, external storage media, a power supply, a bedside controller (e.g., which may be connected to the OCT system using Bluetooth technology or other methods known for wireless communication), etc.
• the deflected section 108 may operate to deflect the light from the light source 101 to the reference arm 102 and/or the sample arm 103, and then send light received from the reference arm 102 and/or the sample arm 103 towards the at least one detector 107 (e.g., a spectrometer, one or more components of the spectrometer, another type of detector, etc.).
  • the deflected section may include or may comprise one or more interferometers or optical interference systems that operate as described herein, including, but not limited to, a circulator, a beam splitter, an isolator, a coupler (e.g., fusion fiber coupler), a partially severed mirror with holes therein, a partially severed mirror with a tap, etc.
• the interferometer or the optical interference system may include one or more components of the system 100 (or any other system discussed herein) such as, but not limited to, one or more of the light source 101, the deflected section 108, the rotary junction RJ, a PIU 110, a catheter 120, etc.
• One or more features of the aforementioned configurations of at least FIGS. 1-9B (and/or any other configurations discussed below) may be incorporated into one or more of the systems, including, but not limited to, the system 100, 100’, 100”, etc. discussed herein.
• FIG. 9C shows an example of a system 100” that may utilize the one or more multiple imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared auto-fluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, OCT-NIRF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) co-registration, catheter connection/disconnection detection, marker detection, etc.) and/or related technique(s).
• FIG. 9C shows an exemplary schematic of an OCT-fluorescence imaging system 100”, according to one or more embodiments of the present disclosure.
• light from an OCT light source 101 (e.g., with a 1.3 µm wavelength) may be split, via a deflector or deflected section (e.g., a splitter) 108, into a reference beam and a sample beam.
  • the reference beam from the OCT light source 101 is reflected by a reference mirror 105 while a sample beam is reflected or scattered from an object (e.g., an object to be examined, an object, a target, a patient, etc.) 106 through a circulator 901, a rotary junction 90 (“RJ”) and a catheter 120.
  • the fiber between the circulator 901 and the reference mirror or reference reflection 105 may be coiled to adjust the length of the reference arm 102 (best seen in FIG. 9C).
  • Optical fibers in the sample arm 103 may be made of double clad fiber (“DCF”).
  • Excitation light for the fluorescence may be directed to the RJ 90 and the catheter 120, and illuminate the object (e.g., an object to be examined, an object, a patient, etc.) 106.
  • the light from the OCT light source 101 may be delivered through the core of DCF while the fluorescence light emitted from the object (e.g., an object to be examined, an object, a target, a patient, etc.) 106 may be collected through the cladding of the DCF.
  • the RJ 90 may be moved with a linear stage to achieve helical scanning of the object (e.g., an object to be examined, an object, a target, a patient, etc.) 106.
  • the RJ 90 may include any one or more features of an RJ as discussed herein.
• Dichroic filters DF1, DF2 may be used to separate the excitation light from the rest of the fluorescence and OCT light.
• DF1 may be a long pass dichroic filter with a cutoff wavelength of ~1000 nm; the OCT light, which is at a longer wavelength than the cutoff wavelength of DF1, may go through DF1, while fluorescence excitation and emission, which are at shorter wavelengths than the cutoff, reflect at DF1.
• DF2 may be a short pass dichroic filter; the excitation wavelength may be shorter than the fluorescence emission light, such that the excitation light, which has a wavelength shorter than a cutoff wavelength of DF2, may pass through DF2, and the fluorescence emission light reflects at DF2.
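• As a toy worked example of the routing just described (the ~1000 nm DF1 cutoff comes from the text above; the DF2 cutoff value here is an assumption for illustration only):

    def df1_route(wavelength_nm, cutoff_nm=1000):
        """Long-pass DF1: OCT light above the cutoff is transmitted;
        shorter-wavelength fluorescence excitation/emission is reflected."""
        return "transmit (OCT)" if wavelength_nm > cutoff_nm else "reflect (fluorescence)"

    def df2_route(wavelength_nm, cutoff_nm=600):
        """Short-pass DF2 (assumed cutoff): excitation below the cutoff
        passes through; fluorescence emission reflects."""
        return "transmit (excitation)" if wavelength_nm < cutoff_nm else "reflect (emission)"

    print(df1_route(1310))                 # OCT light passes DF1
    print(df2_route(488), df2_route(700))  # excitation passes; emission reflects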
  • both beams combine at the deflecting section 108 and generate interference patterns.
  • the beams go to the coupler or combiner 903, and the coupler or combiner 903 combines both beams via the circulator 901 and the deflecting section 108, and the combined beams are delivered to one or more detectors (such as the one or more detectors 107; see e.g., the first detector 107 connected to the coupler or combiner 903 in FIG. 9C).
  • the optical fiber in the catheter 120 operates to rotate inside the catheter 120, and the OCT light and excitation light may be emitted from a side angle of a tip of the catheter 120.
  • the OCT light may be delivered back to an OCT interferometer (e.g., via the circulator 901 of the sample arm 103), which may include the coupler or combiner 903, and combined with the reference beam (e.g., via the coupler or combiner 903) to generate interference patterns.
  • the output of the interferometer is detected with a first detector 107, wherein the first detector 107 may be photodiodes or multi-array cameras, and then may be recorded to a computer (e.g., to the computer 2, the computer 1200 as shown in FIG. 9C, the computer 1200’, or any other computer discussed herein) through a first data-acquisition unit or board (“DAQ1”).
  • the fluorescence intensity may be recorded through a second detector 107 (e.g., a photomultiplier) through a second data-acquisition unit or board (“DAQ2”).
  • the OCT signal and fluorescence signal may be then processed by the computer (e.g., to the computer 2, the computer 1200 as shown in FIG. 9C, the computer 1200’, or any other computer discussed herein) to generate an OCT-fluorescence data set 140, which includes or is made of multiple frames of helically scanned data. Each set of frames includes or is made of multiple data elements of co-registered OCT and fluorescence data, which correspond to the rotational angle and pullback position.
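• A hypothetical sketch of such a co-registered data layout (all dimensions are illustrative assumptions): each frame holds OCT and fluorescence data elements aligned by rotational angle and pullback position:

    import numpy as np

    n_frames, n_angles, n_depth = 500, 512, 1024    # assumed dimensions

    # OCT A-lines per rotational angle and depth; one fluorescence
    # value per angle, so the two modalities stay co-registered.
    oct_data = np.zeros((n_frames, n_angles, n_depth), dtype=np.float32)
    fluor_data = np.zeros((n_frames, n_angles), dtype=np.float32)

    # The element at (frame i, angle j) corresponds to one rotational
    # angle at one pullback position in both modalities.
    i, j = 250, 100
    element = (oct_data[i, j], fluor_data[i, j])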
  • Detected fluorescence or auto-fluorescence signals may be processed or further processed as discussed in U.S. Pat. App. No. 62/861,888, filed on June 14, 2019, the disclosure of which is incorporated herein by reference in its entirety, and/or as discussed in U.S. Pat. App. No. 16/368,510, filed March 28, 2019, the disclosure of which is incorporated herein by reference herein in its entirety.
  • one or more embodiments of the devices, apparatuses, systems, methods, storage mediums, GUI’s, etc. discussed herein may be used with an apparatus or system as aforementioned, such as, but not limited to, for example, the system 100, the system 100’, the system 100”, the devices, apparatuses, or systems of FIGS. 1A-1B and 9A-17, any other device, apparatus or system discussed herein, etc.
  • one user may perform the method(s) discussed herein.
  • one or more users may perform the method(s) discussed herein.
  • one or more of the computers, CPUs, processors, etc. discussed herein may be used to process, control, update, emphasize, and/or change one or more of the imaging modalities, and/or process the related techniques, functions or methods, or may process the electrical signals as discussed above.
  • the light source 101 may include a plurality of light sources or may be a single light source.
• the light source 101 may be a broadband light source, and may include one or more of a laser, an organic light emitting diode (OLED), a light emitting diode (LED), a halogen lamp, an incandescent lamp, a supercontinuum light source pumped by a laser, and/or a fluorescent lamp.
  • the light source 101 may be any light source that provides light which may then be dispersed to provide light which is then used for imaging, performing control, viewing, changing, emphasizing methods for imaging modalities, constructing or reconstructing 3D structure(s), and/or any other method discussed herein.
• the light source 101 may be fiber coupled or may be free space coupled to the other components of the apparatus and/or system 100, 100’, 100”, the devices, apparatuses or systems of FIGS. 1A-1B and 12A-17, or any other embodiment discussed herein.
  • the light source 101 may be a swept-source (SS) light source.
  • the one or more detectors 107 may be a linear array, a charge- coupled device (CCD), a plurality of photodiodes or some other method of converting the light into an electrical signal.
  • the detector(s) 107 may include an analog to digital converter (ADC).
  • the one or more detectors may be detectors having structure as shown in one or more of FIGS. 1A-1B and 12A- 17 and as discussed herein.
  • FIG. 10 illustrates a flow chart of at least one embodiment of a method for performing imaging.
• the method(s) may include one or more of the following: (i) splitting or dividing light into a first light and a second reference light (see step S4000 in FIG. 10); (ii) receiving reflected or scattered light of the first light after the first light travels along a sample arm and irradiates an object (see step S4001 in FIG. 10); (iii) receiving the second reference light after the second reference light travels along a reference arm and reflects off of a reference reflection (see step S4002 in FIG. 10); etc.
  • One or more methods may further include using low frequency monitors to update or control high frequency content to improve image quality.
  • one or more embodiments may use multiple imaging modalities, related methods or techniques for same, etc. to achieve improved image quality.
• an imaging probe may be connected to one or more systems (e.g., the system 100, the system 100’, the system 100”, the devices, apparatuses or systems of the figures discussed herein, etc.) with a connection member or interface module; for example, the connection member or interface module may be a rotary junction for the imaging probe.
• the rotary junction may be at least one of: a contact rotary junction, a lenseless rotary junction, a lens-based rotary junction, or other rotary junction known to those skilled in the art.
  • the rotary junction may be a one channel rotary junction or a two channel rotary junction.
  • the illumination portion of the imaging probe may be separate from the detection portion of the imaging probe.
  • a probe may refer to the illumination assembly, which includes an illumination fiber (e.g., single mode fiber, a GRIN lens, a spacer and the grating on the polished surface of the spacer, etc.).
  • a scope may refer to the illumination portion which, for example, may be enclosed and protected by a drive cable, a sheath, and detection fibers (e.g., multimode fibers (MMFs)) around the sheath. Grating coverage is optional on the detection fibers (e.g., MMFs) for one or more applications.
  • the illumination portion may be connected to a rotary joint and may be rotating continuously at video rate.
  • the detection portion may include one or more of: a detection fiber, a detector (e.g., the one or more detectors 107, a spectrometer, etc.), the computer 1200, the computer 1200’, the computer 2, any other computer or processor discussed herein, etc.
  • the detection fibers may surround the illumination fiber, and the detection fibers may or may not be covered by a grating, a spacer, a lens, an end of a probe or catheter, etc.
  • the one or more detectors 107 may transmit the digital or analog signals to a processor or a computer such as, but not limited to, an image processor, a processor or computer 1200, 1200’ (see e.g., FIGS. 9A-9C and 11-13), a computer 2 (see e.g., FIG. 1A), any other processor or computer discussed herein, a combination thereof, etc.
  • the image processor may be a dedicated image processor or a general purpose processor that is configured to process images.
  • the computer 1200, 1200’, 2 or any other processor or computer discussed herein may be used in place of, or in addition to, the image processor.
  • the image processor may include an ADC and receive analog signals from the one or more detectors 107.
  • the image processor may include one or more of a CPU, DSP, FPGA, ASIC, or some other processing circuitry.
  • the image processor may include memory for storing image, data, and instructions.
  • the image processor may generate one or more images based on the information provided by the one or more detectors 107.
  • a computer or processor discussed herein, such as, but not limited to, a processor of the devices, apparatuses or systems of FIGS. 1-9C, the computer 1200, the computer 1200’, the computer 2, the image processor, may also include one or more components further discussed herein below (see e.g., FIGS. 11-13).
  • a console or computer 1200, 1200’, a computer 2, any other computer or processor discussed herein, etc. operates to control motions of the RJ via the motion control unit (MCU) 112 or a motor M, acquires intensity data from the detector(s) in the one or more detectors 107, and displays the scanned image (e.g., on a monitor or screen such as a display, screen or monitor 1209 as shown in the console or computer 1200 of any of FIGS. 9A-9C and FIGS. 11 and 13 and/or the console 1200’ of FIGS. 12-13 as further discussed below; the computer 2 of FIG. 1A; any other computer or processor discussed herein; etc.).
  • the MCU 112 or the motor M operates to change a speed of a motor of the RJ and/or of the RJ.
  • the motor may be a stepping or a DC servo motor to control the speed and increase position accuracy (e.g., compared to when not using a motor, compared to when not using an automated or controlled speed and/or position change device, compared to a manual control, etc.).
• the output of the one or more components of any of the systems discussed herein may be acquired with the at least one detector 107, such as, but not limited to, photodiodes, photomultiplier tube(s) (PMTs), line scan camera(s), or multi-array camera(s). Electrical analog signals obtained from the output of the system 100, 100’, 100”, and/or the detector(s) 107 thereof, and/or from the devices, apparatuses, or systems of FIGS. 1-9C and/or 11-17, are converted to digital signals to be analyzed with a computer, such as, but not limited to, the computer 1200, 1200’.
  • the light source 101 may be a radiation source or a broadband light source that radiates in a broad band of wavelengths.
  • a Fourier analyzer including software and electronics may be used to convert the electrical analog signals into an optical spectrum.
• the light source 101, the motor or MCU 112, the RJ, the at least one detector 107, and/or one or more other elements of the system 100 may operate in the same or similar fashion to those like-numbered elements of one or more other systems, such as, but not limited to, the devices, apparatuses or systems of FIGS. 1-9C and/or 11-17, the system 100’, the system 100”, or any other system discussed herein.
• While a console or computer 1200 may be used in one or more systems (e.g., the system 100, the system 100’, the system 100”, the devices, apparatuses or systems of any of FIGS. 1-17, or any other system discussed herein, etc.), one or more other consoles or computers, such as the console or computer 1200’, any other computer or processor discussed herein, etc., may be used additionally or alternatively.
  • a computer such as the console or computer 1200, 1200’, may be dedicated to control and monitor the imaging (e.g., OCT, single mode OCT, multimodal OCT, multiple imaging modalities, etc.) devices, systems, methods and/or storage mediums described herein.
  • the electric signals used for imaging may be sent to one or more processors, such as, but not limited to, a computer or processor 2 (see e.g., FIG. 1A), a computer 1200 (see e.g., FIGS. 9A-9B, 11, and 13), a computer 1200’ (see e.g., FIGS. 12 and 13), etc. as discussed further below, via cable(s) or wire(s), such as, but not limited to, the cable(s) or wire(s) 113 (see FIG. 11). Additionally or alternatively, the electric signals, as aforementioned, may be processed in one or more embodiments as discussed above by any other computer or processor or components thereof.
• the computer or processor 2 (as shown in FIG. 1A) may be used in addition to, or in place of, any other computer or processor discussed herein (e.g., the computers or processors 1200, 1200’, etc.).
  • the computer or processor 1200, 1200’ may be used instead of any other computer or processor discussed herein (e.g., computer or processor 2).
  • the computers or processors discussed herein are interchangeable, and may operate to perform any of the multiple imaging modalities feature(s) and method(s) discussed herein, including using, controlling, and changing a GUI or multiple GUI’s.
  • a computer system 1200 may include a central processing unit (“CPU”) 1201, a ROM 1202, a RAM 1203, a communication interface 1205, a hard disk (and/or other storage device) 1204, a screen (or monitor interface) 1209, a keyboard (or input interface; may also include a mouse or other input device in addition to the keyboard) 1210 and a BUS (or “Bus”) or other connection lines (e.g., connection line 1213) between one or more of the aforementioned components (e.g., including but not limited to, being connected to the console, the probe, the imaging apparatus or system, any motor discussed herein, a light source, etc.).
  • a computer system 1200 may comprise one or more of the aforementioned components.
  • a computer system 1200 may include a CPU 1201, a RAM 1203, an input/output (I/O) interface (such as the communication interface 1205) and a bus (which may include one or more lines 1213 as a communication system between components of the computer system 1200; in one or more embodiments, the computer system 1200 and at least the CPU 1201 thereof may communicate with the one or more aforementioned components of a device or system, such as, but not limited to, an apparatus or system using one or more imaging modalities and related method(s) as discussed herein), and one or more other computer systems 1200 may include one or more combinations of the other aforementioned components (e.g., the one or more lines 1213 of the computer 1200 may connect to other components via line 113).
  • the CPU 1201 is configured to read and perform computer-executable instructions stored in a storage medium.
  • the computer-executable instructions may include those for the performance of the methods and/or calculations described herein.
  • the system 1200 may include one or more additional processors in addition to CPU 1201, and such processors, including the CPU 1201, may be used for tissue or object characterization, diagnosis, evaluation, imaging and/or construction or reconstruction.
  • the system 1200 may further include one or more processors connected via a network connection (e.g., via network 1206).
• the CPU 1201 and any additional processor being used by the system 1200 may be located in the same telecom network or in different telecom networks (e.g., performing feature(s), function(s), technique(s), method(s), etc. discussed herein may be controlled remotely).
  • the I/O or communication interface 1205 provides communication interfaces to input and output devices, which may include a light source, a spectrometer, a microphone, a communication cable and a network (either wired or wireless), a keyboard 1210, a mouse (see e.g., the mouse 1211 as shown in FIG. 12), a touch screen or screen 1209, a light pen and so on.
  • the communication interface of the computer 1200 may connect to other components discussed herein via line 113 (as diagrammatically shown in FIG. 11).
  • the Monitor interface or screen 1209 provides communication interfaces thereto.
• Any methods and/or data of the present disclosure, such as the methods for performing tissue or object characterization, diagnosis, examination, imaging (including, but not limited to, increasing image resolution, performing imaging using one or more imaging modalities, viewing or changing one or more imaging modalities and related methods (and/or option(s) or feature(s)), etc.), and/or catheter connection and/or disconnection detection, for example, as discussed herein, may be stored on a computer-readable storage medium.
• a computer-readable and/or writable storage medium used commonly, such as, but not limited to, one or more of a hard disk (e.g., the hard disk 1204, a magnetic disk, etc.), a flash memory, a CD, an optical disc (e.g., a compact disc (“CD”), a digital versatile disc (“DVD”), a Blu-rayTM disc, etc.), a magneto-optical disk, a random-access memory (“RAM”) (such as the RAM 1203), a DRAM, a read only memory (“ROM”), a storage of distributed computing systems, a memory card, or the like (e.g., other semiconductor memory, such as, but not limited to, a non-volatile memory card, a solid state drive (SSD) (see SSD 1207 in FIG. 12), etc.), may be used.
  • the computer-readable storage medium may be a non-transitory computer-readable medium, and/or the computer-readable medium may comprise all computer- readable media, with the sole exception being a transitory, propagating signal in one or more embodiments.
  • the computer-readable storage medium may include media that store information for predetermined, limited, or short period(s) of time and/or only in the presence of power, such as, but not limited to Random Access Memory (RAM), register memory, processor cache(s), etc.
  • Embodiment(s) of the present disclosure may also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a “non-transitory computer- readable storage medium”) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • the methods, systems, and computer-readable storage mediums related to the processors may be achieved utilizing suitable hardware, such as that illustrated in the figures.
  • Functionality of one or more aspects of the present disclosure may be achieved utilizing suitable hardware, such as that illustrated in FIG. 11.
  • Such hardware may be implemented utilizing any of the known technologies, such as standard digital circuitry, any of the known processors that are operable to execute software and/or firmware programs, one or more programmable digital devices or systems, such as programmable read only memories (PROMs), programmable array logic devices (PALs), etc.
• the CPU 1201 (as shown in FIG. 11 or FIG. 12), the processor or computer 2 (as shown in FIG. 1A), and/or the computer or processor 1200’ (as shown in FIG. 12) may also include and/or be made of one or more microprocessors, nanoprocessors, one or more graphics processing units (“GPUs”; also called a visual processing unit (“VPU”)), one or more Field Programmable Gate Arrays (“FPGAs”), or other types of processing components (e.g., application specific integrated circuit(s) (ASIC)).
  • the various aspects of the present disclosure may be implemented by way of software and/or firmware program(s) that may be stored on suitable storage medium (e.g., computer-readable storage medium, hard drive, etc.) or media (such as floppy disk(s), memory chip(s), etc.) for transportability and/or distribution.
  • the computer may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
• The hardware structure of an alternative embodiment of a computer or console 1200’ is shown in FIG. 12 (see also FIG. 13).
  • the computer 1200’ includes a central processing unit (CPU) 1201, a graphical processing unit (GPU) 1215, a random access memory (RAM) 1203, a network interface device 1212, an operation interface 1214 such as a universal serial bus (USB) and a memory such as a hard disk drive or a solid state drive (SSD) 1207.
  • the computer or console 1200’ may include a display 1209.
  • the computer 1200’ may connect with a motor, a console, or any other component of the device(s) or system(s) discussed herein via the operation interface 1214 or the network interface 1212 (e.g., via a cable or fiber, such as the cable or fiber 113 as similarly shown in FIG. 11).
  • a computer such as the computer 1200’, may include a motor or motion control unit (MCU) in one or more embodiments.
  • the operation interface 1214 is connected with an operation unit such as a mouse device 1211, a keyboard 1210 or a touch panel device.
  • the computer 1200’ may include two or more of each component.
• At least one computer program is stored in the SSD 1207, and the CPU 1201 loads the at least one program onto the RAM 1203 and executes the instructions in the at least one program to perform one or more processes described herein, as well as the basic input, output, calculation, memory writing and memory reading processes.
  • the computer such as the computer 2, the computer 1200, 1200’, (or other component(s) such as, but not limited to, the PCU, etc.), etc. may communicate with an MCU, an interferometer, a spectrometer, a detector, etc. to perform imaging, and may reconstruct an image from the acquired intensity data.
  • the monitor or display 1209 displays the reconstructed image, and may display other information about the imaging condition or about an object to be imaged.
  • the monitor 1209 also provides a graphical user interface for a user to operate any system discussed herein.
  • An operation signal is input from the operation unit (e.g., such as, but not limited to, a mouse device 1211, a keyboard 1210, a touch panel device, etc.) into the operation interface 1214 in the computer 1200’, and corresponding to the operation signal the computer 1200’ instructs any system discussed herein to set or change the imaging condition (e.g., improving resolution of an image or images), and to start or end the imaging.
  • a light or laser source and a spectrometer and/or detector may have interfaces to communicate with the computers 1200, 1200’ to send and receive the status information and the control signals.
• one or more processors or computers 1200, 1200’ may be part of a system in which the one or more processors or computers 1200, 1200’ (or any other processor discussed herein) communicate with other devices (e.g., a database 1603, a memory 1602 (which may be used with or replaced by any other type of memory discussed herein or known to those skilled in the art), an input device 1600, an output device 1601, etc.).
• one or more models may have been trained previously and stored in one or more locations, such as, but not limited to, the memory 1602, the database 1603, etc.
  • one or more models and/or data discussed herein may be input or loaded via a device, such as the input device 1600.
  • a user may employ an input device 1600 (which may be a separate computer or processor, a keyboard such as the keyboard 1210, a mouse such as the mouse 1211, a microphone, a screen or display 1209 (e.g., a touch screen or display), or any other input device known to those skilled in the art).
  • an input device 1600 may not be used (e.g., where user interaction is eliminated by one or more artificial intelligence features discussed herein).
  • the output device 1601 may receive one or more outputs discussed herein to perform the marker detection, the coregistration, and/or any other process discussed herein.
• the database 1603 and/or the memory 1602 may have outputted information (e.g., trained model(s), detected marker information, image data, test data, validation data, training data, coregistration result(s), segmentation model information, object detection/regression model information, combination model information, etc.) stored therein. That said, one or more embodiments may include several types of data stores, memory, storage media, etc. as discussed above, and such storage media, memory, data stores, etc. may be stored locally or remotely. Additionally, unless otherwise specified, the term “subset” of a corresponding set does not necessarily represent a proper subset and may be equal to the corresponding set.
  • any other model architecture, machine learning algorithm, or optimization approach may be employed.
• One or more embodiments may utilize hyper-parameter combination(s).
• One or more embodiments may employ data capture, selection, and annotation, as well as model evaluation (e.g., computation of loss and validation metrics), since data may be domain and application specific.
• the model architecture may be modified and optimized to address a variety of computer vision issues (discussed below).
  • One or more embodiments of the present disclosure may automatically detect (predict a spatial location of) a radiodense OCT marker in a time series of X-ray images to co-register the X-ray images with the corresponding OCT images (at least one example of a reference point of two different coordinate systems).
  • One or more embodiments may use deep (recurrent) convolutional neural network(s), which may improve marker detection, catheter connection and/or disconnection detection, and image co-registration significantly.
  • One or more embodiments may employ segmentation and/or object/keypoint detection architectures to solve one or more computer vision issues in other domain areas in one or more applications.
  • One or more embodiments employ several novel materials and methods to solve one or more computer vision or other issues (e.g., radiodense OCT marker detection in time series of X-ray images, for instance; catheter connection or disconnection detection; etc.).
  • images may include a radiodense marker that is specifically used in one or more procedures (e.g., added to the OCT capsule, used in catheters/probes with a similar marker to that of an OCT marker, used in catheters/ probes with a similar or same marker even in a case where the catheters/probes use an imaging modality different from OCT, etc.) to facilitate computational detection of a marker and/or catheter connection or disconnection in one or more images (e.g., X-ray images).
  • One or more embodiments couple a software device or features (model) to hardware (e.g., an OCT probe, a probe/catheter using an imaging modality different from OCT while using a marker that is the same as or similar to the marker of an OCT probe/catheter, etc.).
• One or more embodiments may utilize animal data in addition to patient data. Training deep learning may use a large amount of data, which may be difficult to obtain from clinical studies. Inclusion of image data from pre-clinical studies in animals into a training set may improve model performance. Training and evaluation of a model may be highly data dependent (e.g., the way in which frames are selected).
• one or more embodiments may use a collection or collections of user annotations after introduction of a device/apparatus, system, and/or method(s) into a market, and may use post market surveillance, retraining of a model or models with new data collected (e.g., in clinical use), and/or a continuously adaptive algorithm/method(s).
  • One or more embodiments employ data annotation. For example, one or more embodiments may label pixel(s) representing a marker or a catheter connection or disconnection as well as pixels representing a blood vessel(s) at different phase(s) of a procedure/method (e.g., different levels of contrast due to intravascular contrast agent) of frame(s) acquired during pullback.
  • a marker location may be known inside a vessel and/or inside a catheter or probe.
  • simultaneous localization of the vessel and marker may be used to improve marker detection and/or catheter connection or disconnection detection.
  • a marker may move during a pullback inside a vessel, and such prior knowledge may be incorporated into the machine learning algorithm or the loss function.
  • One or more embodiments employ loss (cost) and evaluation function(s)/metric(s). For example, use of temporal information for model training and evaluation may be used in one or more embodiments.
  • One or more embodiments may evaluate a distance between prediction and ground truth per frame as well as consider a trajectory of predictions across multiple frames of a time series.
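• A minimal sketch of such an evaluation metric (function and variable names are assumptions): a per-frame distance term plus a simple trajectory term comparing frame-to-frame motion of predictions against ground truth:

    import numpy as np

    def evaluate_track(pred, gt):
        """pred/gt: (T, 2) arrays of per-frame (x, y) locations. Returns
        the mean per-frame distance to ground truth and a trajectory
        term on frame-to-frame motion across the time series."""
        per_frame = np.linalg.norm(pred - gt, axis=1)
        step_err = np.linalg.norm(np.diff(pred, axis=0) - np.diff(gt, axis=0),
                                  axis=1)
        return per_frame.mean(), step_err.mean()

    dist_err, traj_err = evaluate_track(np.random.rand(10, 2),
                                        np.random.rand(10, 2))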
• At least one embodiment of an overall process of machine learning is shown below (a minimal training-loop sketch is provided after this list):
    i. Create a dataset that contains both images and corresponding ground truth labels;
    ii. Split the dataset into a training set and a testing set;
    iii. Select a model architecture and other hyper-parameters;
    iv. Train the model with the training set;
    v. Evaluate the trained model with the validation set; and
    vi. Repeat iv and v with new dataset(s).
  • steps i and iii may be revisited in one or more embodiments.
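• A minimal end-to-end sketch of steps i-vi above, using placeholder data and a deliberately tiny stand-in model (all sizes and names are illustrative assumptions):

    import torch
    from torch.utils.data import DataLoader, TensorDataset, random_split

    # Steps i-ii: dataset of images with ground-truth (x, y) labels,
    # split into a training set and a held-out set (placeholder data).
    images, labels = torch.zeros(100, 1, 64, 64), torch.zeros(100, 2)
    train_set, val_set = random_split(TensorDataset(images, labels), [80, 20])

    # Step iii: select a model architecture and other hyper-parameters.
    model = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(64 * 64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    # Step iv: train the model with the training set.
    for x, y in DataLoader(train_set, batch_size=16, shuffle=True):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

    # Step v: evaluate the trained model on the held-out set;
    # step vi: repeat iv and v as new dataset(s) become available.
    with torch.no_grad():
        val_loss = sum(loss_fn(model(x), y).item()
                       for x, y in DataLoader(val_set, batch_size=16))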
  • One or more models may be used in one or more embodiment(s) to detect a catheter connection or disconnection, such as, but not limited to, the one or more models as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties.
  • one or more embodiments may use a segmentation model, a regression model, a combination thereof, etc.
  • the input may be the entire image frame or frames
  • the output may be the centroid coordinates of radiopaque markers (target marker and stationary marker, if necessary/ desired) and/or coordinates of a portion of a catheter or probe to be used in determining the connection or disconnection status.
• In FIGS. 14-16, an example of an input image (on the left side of each of FIGS. 14-16) and a corresponding output image (on the right side of each of FIGS. 14-16) are illustrated for regression model(s).
• At least one architecture of a regression model is shown in FIG. 14. In at least the embodiment of FIG. 14, the regression model may use a combination of one or more convolution layers 900, one or more max-pooling layers 901, and one or more fully connected dense layers 902, while not being limited to the kernel size, width/number of filters (output size), and stride sizes shown for each layer (e.g., in the left convolution layer of FIG. 14, the kernel size is “3x3”, the width/# of filters (output size) is “64”, and the stride size is “2”). In one or more embodiments, another hyper-parameter search with a fixed optimizer and with a different width may be performed, and at least one embodiment example of a model architecture for a convolutional neural network for this scenario is shown in FIG. 15.
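• A sketch in the style of the FIG. 14 description above: the first convolution mirrors the quoted example values (3x3 kernel, 64 filters, stride 2), while the remaining widths, strides, and the 4-value output (e.g., two (x, y) marker centroids, per the input/output description earlier) are illustrative assumptions:

    import torch.nn as nn

    regressor = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                               # max-pooling layer
        nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, 256), nn.ReLU(),                # dense layers
        nn.Linear(256, 4))                             # e.g., two (x, y) centroids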
  • One or more embodiments may use one or more features for a regression model as discussed in “Deep Residual Learning for Image Recognition” to Kaiming He, et al., Microsoft Research, December 10, 2015 (https://arxiv.org/pdf/1512.03385.pdf), which is incorporated by reference herein in its entirety.
  • FIG. 16 shows at least a further embodiment example of a created architecture of or for a regression model(s).
• the output from a segmentation model is a “probability,” for each pixel, that the pixel may be categorized as a catheter connection or disconnection
• post-processing after prediction via the trained segmentation model may be developed to better define, determine, or locate the final coordinate of the catheter location (or a marker location where the marker is a part of the catheter) and/or to determine the connection or disconnection status of the catheter (one hypothetical sketch follows below).
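• One hypothetical post-processing sketch (threshold, keep the largest connected component, take its center of mass; the threshold value and helper names are assumptions):

    import numpy as np
    from scipy import ndimage

    def postprocess(prob_map, thresh=0.5):
        """Convert a per-pixel 'probability' map into one (x, y)
        coordinate for the catheter/marker location."""
        mask = prob_map > thresh
        labeled, n = ndimage.label(mask)
        if n == 0:
            return None                       # nothing detected this frame
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        biggest = int(np.argmax(sizes)) + 1   # largest connected component
        cy, cx = ndimage.center_of_mass(mask, labeled, biggest)
        return cx, cy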
  • One or more embodiments of a semantic segmentation model may be performed using the One- Hundred Layers Tiramisu method discussed in “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jegou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017 (https://arxiv.org/pdf/1611.09326.pdf), which is incorporated by reference herein in its entirety.
  • a segmentation model may be used in one or more embodiments, for example, as shown in FIG. 17. At least one embodiment may utilize an input 600 as shown to obtain an output 605 of at least one embodiment of a segmentation model method.
  • a slicing size may be one or more of the following: 100 x 100, 224 x 224, or 512 x 512; in one or more of the experiments performed, a slicing size of 224 x 224 performed the best (see the slicing sketch below).
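The slicing itself can be as simple as tiling each frame into fixed-size patches, as in the sketch below; the non-overlapping grid (with edge remainders dropped) is an assumption for illustration.

```python
import numpy as np

def slice_patches(frame, size=224):
    h, w = frame.shape
    patches = []
    # Walk a non-overlapping grid; edge remainders smaller than `size`
    # are simply dropped in this sketch.
    for top in range(0, h - size + 1, size):
        for left in range(0, w - size + 1, size):
            patches.append(frame[top:top + size, left:left + size])
    return patches

frame = np.zeros((1024, 1024))
print(len(slice_patches(frame)))  # 16 patches of 224 x 224 fit a 1024 x 1024 frame
```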
  • a batch size (of images in a batch) may be one or more of the following: 2, 4, 8, or 16; from the one or more experiments performed, a bigger batch size typically performs better (e.g., with greater accuracy).
  • 16 images/batch may be used. The optimization of all of these hyper-parameters depends on the size of the available data set as well as the available computer/computing resources; thus, once more data is available, different hyper-parameter values may be chosen. Additionally, in one or more embodiments, steps/epoch may be 100, and the epochs may be greater than (>) 1000. In one or more embodiments, a convolutional autoencoder (CAE) may be used.
  • hyper-parameters may include, but are not limited to, one or more of the following: Depth (i.e., # of layers), Width (i.e., # of filters), Batch size (i.e., # of training images/step): may be >4 in one or more embodiments, Learning rate (i.e., a hyper-parameter that controls how fast the weights of a neural network (the coefficients of a regression model) are adjusted with respect to the loss gradient), Dropout (i.e., % of neurons (filters) that are dropped at each layer), and/or Optimizer: for example, Adam optimizer or Stochastic gradient descent (SGD) optimizer.
  • other hyper-parameters may be fixed or constant values, such as, but not limited to, for example, one or more of the following: Input size (e.g., 1024 pixel x 1024 pixel, 512 pixel x 512 pixel, another preset or predetermined number or value set, etc.), Epochs: 100, 200, 300, 400, 500, another preset or predetermined number, etc. (for additional training, iteration may be set as 3000 or higher), and/or Number of models trained with different hyper-parameter configurations (e.g., 10, 20, another preset or predetermined number, etc.).
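A hyper-parameter search over the tunable values above, with the input size and epochs held fixed, might be sketched as a simple random search; the value ranges below are illustrative assumptions, not ranges from the disclosure.

```python
import random

random.seed(0)
SEARCH_SPACE = {
    "depth": [4, 6, 8],                    # number of layers
    "width": [32, 64, 128],                # number of filters
    "batch_size": [4, 8, 16],              # >4 training images/step
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.2, 0.5],            # fraction of filters dropped per layer
    "optimizer": ["adam", "sgd"],
}

def sample_config():
    # Draw one random configuration from the search space.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

# e.g., train 10 models with different configurations, all with a fixed
# input size (say 512 x 512) and a fixed number of epochs.
configs = [sample_config() for _ in range(10)]
for c in configs[:2]:
    print(c)
```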
  • One or more features discussed herein may be determined using a convolutional autoencoder, Gaussian filters, Haralick features, and/or thickness or shape of the sample or object.
  • One or more embodiments of the present disclosure may use machine learning to determine marker location; to determine, detect, or evaluate catheter connection or disconnection; to perform coregistration; and/or to perform any other feature discussed herein.
  • Machine learning is a field of computer science that gives processors the ability to learn via artificial intelligence.
  • Machine learning may involve one or more algorithms that allow processors or computers to learn from examples and to make predictions for new unseen data points.
  • such one or more algorithms may be stored as software or one or more programs in at least one memory or storage medium, and the software or one or more programs allow a processor or computer to carry out operation(s) of the processes described in the present disclosure.
  • the present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with optical coherence tomography probes.
  • optical coherence tomography probes include, but are not limited to, the OCT imaging systems disclosed in U.S. Pat. Nos. 6,763,261; 7,366,376; 7,843,572; 7,872,759; 8,289,522; 8,676,013; 8,928,889; 9,087,368; 9,557,154; 10,912,462; 9,795,301; and 9,332,942 to Tearney et al., and U.S. Pat. Pub. Nos.
  • present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with OCT imaging systems and/or catheters and catheter systems, such as, but not limited to, those disclosed in U.S. Pat. Nos. 9,869,828; 10,323,926; 10,558,001; 10,601,173; 10,606,064; 10,743,749; 10,884,199; 10,895,692; and 11,175,126 as well as U.S. Patent Publication Nos.
  • present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with continuum robotic systems and catheters, such as, but not limited to, those described in U.S. Patent Publication Nos. 2019/0105468; 2021/0369085; 2020/0375682; 2021/0121162; 2021/0121051; and 2022-0040450, each of which patents and/or patent publications are incorporated by reference herein in their entireties.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Endoscopes (AREA)

Abstract

One or more devices, systems, methods, and storage mediums using artificial intelligence application(s) using an apparatus or system that uses and/or controls one or more imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT, near-infrared auto-fluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, etc., are provided herein. Examples of AI applications discussed herein include, but are not limited to, using one or more of: AI catheter connection or disconnection evaluation and/or detection; AI coregistration; deep or machine learning; computer vision or image recognition task(s); keypoint detection; feature extraction; model training; input data preparation techniques; input mapping to the model; post-processing and/or interpretation of output data; one or more types of machine learning models (including, but not limited to, segmentation, regression, and combining or repeating regression and/or segmentation); and/or catheter connection or disconnection detection and/or coregistration success rates used to improve or optimize catheter connection or disconnection detection and/or coregistration.

Description

TITLE
ARTIFICIAL INTELLIGENCE CATHETER OPTICAL CONNECTION OR DISCONNECTION EVALUATION, INCLUDING DEEP MACHINE LEARNING AND USING RESULTS THEREOF
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application relates, and claims priority, to U.S. Patent Application Serial No. 63/341,233, filed May 12, 2022, the entire disclosure of which is incorporated by reference herein in its entirety.
FIELD OF THE INVENTION
[0002] The present disclosure generally relates to computer imaging, computer vision, and/or to the field of medical imaging, particularly to devices/apparatuses, systems, methods, and storage mediums for artificial intelligence (“AI”) catheter optical connection or disconnection evaluation and/or for using one or more imaging modalities, including but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared fluorescence (NIRF), near-infrared auto-fluorescence (NIRAF), OCT-NIRAF, robot imaging, continuum robot imaging, etc. Examples of OCT applications include imaging, evaluating, and diagnosing biological objects, including, but not limited to, gastro-intestinal, cardio, and/or ophthalmic applications, with images being obtained via one or more optical instruments, including, but not limited to, one or more optical probes, one or more catheters, one or more endoscopes, one or more capsules, and one or more needles (e.g., a biopsy needle). One or more devices, systems, methods, and storage mediums for characterizing, examining and/or diagnosing, and/or measuring viscosity of, a sample or object in artificial intelligence application(s) using an apparatus or system that uses and/or controls one or more imaging modalities are discussed herein.
BACKGROUND OF THE INVENTION
[0003] Fiber optic catheters and endoscopes have been developed to gain access to internal organs. For example, in cardiology, OCT (optical coherence tomography) has been developed to capture and visualize depth-resolved images of vessels with a catheter. The catheter, which may include a sheath, a coil, and an optical probe, may be navigated to a coronary artery.
[0004] Optical coherence tomography (OCT) is a technique for obtaining high-resolution cross-sectional images of tissues or materials, and enables real-time visualization. The aim of OCT techniques is to measure the time delay of light by using an interference optical system or interferometry, such as via Fourier Transform or Michelson interferometers. Light from a light source is delivered to, and split into, a reference arm and a sample (or measurement) arm with a splitter (e.g., a beamsplitter). A reference beam is reflected from a reference mirror (partially reflecting or other reflecting element) in the reference arm while a sample beam is reflected or scattered from a sample in the sample arm. Both beams combine (or are recombined) at the splitter and generate interference patterns. The output of the interferometer is detected with one or more detectors, such as, but not limited to, photodiodes or multi-array cameras, in one or more devices, such as, but not limited to, a spectrometer (e.g., a Fourier Transform infrared spectrometer). The interference patterns are generated when the path length of the sample arm matches that of the reference arm to within the coherence length of the light source. By evaluating the output beam, a spectrum of the input radiation may be derived as a function of frequency. The frequency of the interference patterns corresponds to the distance between the sample arm and the reference arm; the higher the frequency, the greater the difference in path length. Single-mode fibers may be used for OCT optical probes, and double-clad fibers may be used for fluorescence and/or spectroscopy.
[0005] A multi-modality system, such as, but not limited to, an OCT, fluorescence, and/or spectroscopy system with an optical probe, has been developed to obtain multiple types of information at the same time. During vascular diagnosis and intervention procedures, such as Percutaneous Coronary Intervention (PCI), users of optical coherence tomography (OCT) sometimes have difficulty understanding the tomography image in correlation with other modalities because of an overload of information, which causes confusion in image interpretation.
[0006] Percutaneous coronary intervention (PCI), and other vascular diagnosis and intervention procedures, have improved with the introduction of intravascular imaging (IVI) modalities, such as, but not limited to, intravascular ultrasound (IVUS) and optical coherence tomography (OCT). IVI modalities provide cross-sectional imaging of coronary arteries with precise lesion information (e.g., lumen size, plaque morphology, implanted devices, etc.). That said, only about 20% of interventional cardiologists in the United States use IVI imaging in conjunction with coronary angiography during PCI procedures. Additionally, IVI imaging uses the mating of disposable, single-use sterile catheters or probes to non-disposable imaging systems. The mating process involves mechanically connecting the catheter/probe to a system to get an adequate electrical, optical, and/or radio frequency (RF) connection (e.g., in addition to or alternatively to a mechanical connection) depending on the type of catheter/probe. However, the mating step/process is not always robust and may fail in one or more situations. Failure of this mating step may lead to procedure delay and user frustration among other issues.
[0007] Where a signal is very small, it may be hard to measure. Additionally, where a signal is not location specific (e.g., a signal may be from all reflections making it back to a system), it may be unclear if the signal is from a fully mated probe/catheter, a partially mated probe/catheter, or reflection(s) from an endface of a probe connector of the probe/catheter.
[0008] Performing a pullback when a probe is not properly mated may yield useless or less useful data, and may waste a significant amount of a physician's time. Likewise, if the automatic disconnection of the probe/catheter fails, the user, unaware, may attempt to remove the probe/catheter from the system just to be left, for example, with the core connected to a patient interface unit (PIU) of the system, potentially rendering the system unusable and/or causing damage to the PIU.
[0009] For a catheter to function properly, the catheter preferably is seated correctly within an imaging device. Extended use of a catheter that is not properly seated can result in incorrect images, no images, or even damage to the catheter and the imaging device.
[0010] In the past, devices have relied on human observation (evaluation of images or sounds that may be caused by a bad connection) to determine if a catheter is properly connected. Other approaches have included the use of electronics, such as radio frequency identification (RFID) chips, to establish co-registration, but use of such electronics introduces its own set of risks inherent with wireless signals. Still others have tried to remove the human error and wireless signal factor by evaluating the return loss on the signals captured from the catheter. The use of human operators or users requires that those operators or users be trained to recognize the image(s) that indicate a good catheter connection. Such human evaluation takes longer and is less reliable than the use of alternative means. Additionally, RFID chips and other electronic methods/devices are vulnerable to the signal loss or corruption that can occur with wireless signals, as well as to the risks associated with adding an additional component. Since RFID chips are typically installed at the manufacturer, they are also subject to all hazards associated with packaging. Moreover, prior evaluation methods do not actually evaluate image(s) but merely evaluate the level of signal coming back. While a signal may be used to determine if a connection has occurred, such a signal lacks information to help determine the quality of the connection without the use of a trained operator or user.
[0011] Accordingly, detecting, monitoring, and guiding the mating step (e.g., engagement, disengagement, etc.) would be desirable to increase the likelihood of catheter/probe mating success (e.g., to reduce mating failure(s), to minimize mating failure(s), to avoid mating failure(s), etc.), to confirm mating status, and to reduce case delays and user frustration. Such issues also apply to robot technologies (e.g., robots, continuum robots, robots using data of similar connection(s) or disconnection(s), etc.).
[0012] Accordingly, it would be desirable to provide at least one imaging or optical apparatus/device, system, method, and storage medium that applies machine learning, especially deep learning, to evaluate and achieve catheter optical connection(s) and/or disconnection(s) with a higher success rate when compared to traditional techniques, and to use the one or more results to achieve catheter optical connection more efficiently. It also would be desirable to provide one or more probe/catheter/robot device detecting, monitoring, and/or connection/disconnection evaluation techniques and/or structure for use in at least one optical device, assembly, or system to achieve consistent, reliable detection, monitoring, and connection results (e.g., to reduce mating failure(s), to minimize mating failure(s), to avoid mating failure(s), to identify and fix mating or connection failure(s), etc.) at high efficiency and a reasonable cost of manufacture and maintenance.
SUMMARY OF THE INVENTION
[0013] Accordingly, it is a broad object of the present disclosure to provide imaging (e.g., OCT, NIRF, NIRAF, robots, continuum robots, etc.) apparatuses, systems, methods and storage mediums for using and/or controlling multiple imaging modalities, that apply machine learning, especially deep learning, to evaluate catheter optical connection(s) with greater or maximum success, and that use the results to achieve catheter optical connection(s) more efficiently or with maximum efficiency. It is also a broad object of the present disclosure to provide OCT devices, systems, methods and storage mediums using an interference optical system, such as an interferometer (e.g., spectral-domain OCT (SD-OCT), swept-source OCT (SS-OCT), multimodal OCT (MM-OCT), Intravascular Ultrasound (IVUS), Near-Infrared Autofluorescence (NIRAF), Near-Infrared Spectroscopy (NIRS), Near-Infrared Fluorescence (NIRF), therapy modality using light, sound, or other source of radiation, etc.).
[0014] One or more embodiments of the present disclosure may apply machine learning, especially deep learning, to evaluate catheter optical connection(s) or disconnection(s) with greater or maximum success, and may use the results to achieve catheter optical connection(s) or disconnection(s) more efficiently or with maximum efficiency.
[0015] In one or more embodiments of the present disclosure, a catheter preferably is seated correctly within an imaging device to ensure that the catheter and/or the imaging device provides accurate imaging. One or more embodiments of the present disclosure may provide one or more probe/catheter detecting, monitoring, and/or connection or disconnection evaluation techniques and/or structure for use in at least one optical device, assembly, or system to achieve consistent, reliable detection, monitoring, and connection results (e.g., to reduce mating failure(s), to minimize mating failure(s), to avoid mating failure(s), to identify and fix mating, connection, or disconnection failure(s), etc.) at high efficiency and a reasonable cost of manufacture and maintenance.
[0016] In one or more embodiments of the present disclosure, one or more techniques build off of the use of return signals. However, instead of simply detecting whether expected signals have been altered in an expected way, one or more embodiments of the present disclosure may use a neural net to identify specific objects within the data collected to ensure the connection or disconnection status is valid. This additional information allows the one or more techniques of the present disclosure to combine the advantages of using a human evaluator and evaluation of return signals while minimizing the disadvantages of each technique.
[0017] In one or more embodiments of the present disclosure, neural nets may evaluate image(s) faster than any human operator, and neural nets may be deployed across an unlimited number of devices and/or systems. This avoids the issue related to training human operators to evaluate connection status, and the shorter time required for evaluation reduces the chances of harm to a patient or object, tissue, or specimen by shortening active collection time. Additionally, in one or more embodiments, neural net classifiers may be used to detect specific objects (such as, but not limited to, a catheter sheath, robot components, etc.) such that more useful information is obtained, evaluated, and used (in comparison with evaluating the return signal only, which provides limited information, as aforementioned).
[0018] Using artificial intelligence, for example (but not limited to), deep/machine learning, residual learning, a computer vision task (keypoint or object detection and/or image segmentation), using a unique architecture structure of a model or models, using a unique training process, using input data preparation techniques, using input mapping to the model, using post-processing and interpretation of the output data, etc., one or more embodiments of the present disclosure may achieve a better or maximum success rate of connection or disconnection evaluation(s) without (or with less) user interactions, and may reduce processing and/or prediction time to display connection result(s). In the present disclosure, a model may be defined as software that takes images as input and returns predictions for the given images as output. In one or more embodiments, a model may be a particular instance of a model architecture (set of parameter values) that has been obtained by model training and selection using a machine (and/or deep) learning and/or optimization algorithm/process. A model generally consists of the following parts: an architecture defined by a source code (e.g., a convolutional neural network comprised of layers of parameterized convolution kernels and activation functions, etc.) and configuration values (parameters, weights, or features) that are initially set to random values and are then, over the course of the training, iteratively optimized given data examples (e.g., image-label pairs), an objective function (loss function), and an optimization algorithm (optimizer).
[0019] Neural networks are computer systems that take inspiration from how neurons in a brain work. In one or more embodiments, a neural network may consist of or may comprise an input layer, some hidden layers of neurons or nodes, and an output layer (as further discussed below). The input layer may be where the values are passed to the rest of the model. In MM-OCT application(s), the input layer may be the place where the transformed OCT data may be passed to a model for evaluation. In one or more embodiments, the hidden layer(s) may be a series of layers that contain or include neurons or nodes that establish connections between the neurons or nodes in the other hidden layers. Through training, the values of each of the connections may be altered so that the system(s) will trigger when the expected pattern is detected. The output layer provides the result(s) of the model. In the case of the MM-OCT application(s), this may be a Boolean (true/false) value for detecting connection or disconnection (e.g., partial connection or disconnection, complete connection or disconnection, improper connection or disconnection, etc.).
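A minimal sketch of that layer structure follows: an input layer receiving (hypothetically sized) transformed OCT data, hidden layers of nodes, and an output layer reduced to a Boolean connection/disconnection value; the layer sizes and the 0.5 decision threshold are assumptions for illustration.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1024, 128),   # input layer: transformed OCT data (hypothetical size)
    nn.ReLU(),
    nn.Linear(128, 32),     # hidden layer of nodes
    nn.ReLU(),
    nn.Linear(32, 1),       # output layer: one value
    nn.Sigmoid(),
)

oct_frame = torch.zeros(1, 1024)
connected = bool(net(oct_frame).item() > 0.5)   # Boolean (true/false) result
print(connected)
```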
[0020] In one or more embodiments of the present disclosure, one or more connection or disconnection evaluation techniques may be used with an OCT or other imaging modality device, system, storage medium, etc. In one or more embodiments, one or more connection or disconnection evaluation techniques may be used for any type of OCT, including, but not limited to, MM-OCT. One or more embodiments of the present disclosure may: (i) calculate a connection or disconnection status of a catheter, and may perform the connection or disconnection status calculation without the use of another piece of equipment; (ii) make evaluations during a normal course of operations (instead of a separate operation) for a catheter; and (iii) work on small numbers of samples (e.g., as little as one image in an imaging, OCT, and/or MM-OCT application(s)), or may work on large numbers of samples, to evaluate connection/disconnection status.
[0021] One or more embodiments of the present disclosure may evaluate catheter connection(s) and/or disconnection(s) in one or more ways, and may use one or more features to determine a connection or disconnection status. For example, while not limited to these examples, one or more embodiments may include one or more of the following: (i) one or more visual indicators (e.g., a light emitting diode (LED), a display for displaying the connection or disconnection status, etc.) that operates to send information to an outside of the device or system to indicate the connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.); (ii) one or more circuits or sensors that operate to detect a catheter connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.); and/or (iii) an artificial intelligence structure, such as, but not limited to, a neural net or network (e.g., the same neural net or network that operates to determine whether a video or image from an endoscope or other imaging device is working; an additional neural net or network, a convolutional network, another network discussed herein, etc.). In one or more embodiments, the circuit(s)/sensor(s) may operate to be useful for (i) identifying case(s) where service/maintenance would be useful (e.g., to detect a partial connection that may be returned to a complete connection, to detect a disconnected wire or wires (or other component(s) of a catheter, robot, or other imaging device), etc.), and/or (ii) determining a case or cases where an imaging apparatus or system using a catheter may operate or run in a mode that occurs in a case where a connection or disconnection is not ideal or is less than at full potential/capacity (e.g., in a case where a partial connection exists when a full connection may be ideal, in a case where an amount of wires less than all of the wires (e.g., 8 out of 9 wires) are connected, etc.). In one or more embodiments involving establishing an endoscope visual connection, an artificial intelligence structure, such as, but not limited to, a neural net or network (e.g., the same neural net or network that operates to determine whether a video or image from an endoscope or other imaging device is working; an additional neural net or network, a convolutional network, another network discussed herein, etc.) may be used to determine one or more of the following: (i) whether a video function or image capturing function from an endoscope or other imaging device is working; and/or (ii) a catheter connection or disconnection status.
[0022] One or more embodiments of the present disclosure may achieve at least the following advantages or may include at least the following feature(s): (i) one or more embodiments may achieve the efficient connection/disconnection evaluation and may obtain result(s) without the use of additional equipment; (ii) one or more embodiments may not use trained operators or users (while trained operators or users may be involved, such involvement is not required); (iii) one or more embodiments may perform during a normal course of operation of a catheter and/or an imaging device (as compared with and instead of a separate operation); (iv) one or more embodiments may perform connection/disconnection evaluation using a set of collected images (automatically or manually); and (v) one or more embodiments may provide usable measurement(s) with small and/or large samples.
[0023] In one or more embodiments, a model (which, in one or more embodiments, may be software, a software/hardware combination, or a procedure that utilizes one or more machine or deep learning algorithms/procedures/processes that has/have been trained on data to make one or more predictions for future, unseen data) has enough resolution to predict and/or evaluate the connection/disconnection with sufficient accuracy depending on the application or procedure being performed. The performance of the model may be further improved by subsequently adding more training data and retraining the model to create a new instance of the model with better or optimized performance. For example, additional training data may include data based on user input, where the user may identify or correct the location of a catheter (e.g., a portion of the catheter, a connection portion of the catheter, etc.) in an image.
[0024] In one or more embodiments of the present disclosure, a model and/or sets of training data or images may be obtained by collecting a series of images (e.g., OCT images, images of another imaging modality, etc.) with and without catheters connected properly. For example, thousands of images (e.g., OCT images, images of another imaging modality, etc.) may be captured and labeled (e.g., to establish ground truth data), and the data may be split into a training population of data and a test population of data. After training is complete, the testing data may be fed or inserted through the neural net/networks, and accuracy of the model may be evaluated based on the results of the test data. Once trained, a neural net/network may be able to determine whether a catheter has established a valid (e.g., complete, properly situated, optical, etc.) connection based on a single image (e.g., a single OCT image, a single image of another imaging modality, etc.). While one or more embodiments may use one image, it may be advantageous from a safety perspective to have the neural net/network evaluate more than one frame/image to establish the status of a connection/disconnection with more certainty/accuracy.
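For example, a simple way to use more than one frame is to run the per-frame classifier on several frames and take a majority vote, as in the sketch below; the five-frame window and the stand-in predictor are illustrative assumptions.

```python
def connection_status(frames, predict_frame, num_frames=5):
    # Run the (trained) per-frame predictor on several frames and let the
    # majority of True/False votes decide the connection status.
    votes = [predict_frame(f) for f in frames[:num_frames]]
    return sum(votes) > len(votes) / 2

frames = ["f1", "f2", "f3", "f4", "f5"]
fake_predictor = lambda f: f != "f3"   # hypothetical: one noisy frame
print(connection_status(frames, fake_predictor))  # True
```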
[0025] One or more methods, medical imaging devices, Intravascular Ultrasound (IVUS) or Optical Coherence Tomography (OCT) devices, imaging systems, and/or computer-readable storage mediums for evaluating catheter connections and/or disconnections using artificial intelligence may be employed in one or more embodiments of the present disclosure.
[0026] In one or more embodiments, an artificial intelligence training apparatus may include: a memory; one or more processors in communication with the memory, the one or more processors operating to: acquire or receive image data for instances or cases where a catheter is connected and for instances or cases where a catheter is not connected; establish ground truth for all the acquired image data; split the acquired image data into training, validation, and test sets or groups; evaluate a connection status for a new catheter (or other type of imaging device); receive image data (e.g., angiography, OCT images, tomography images, etc.) for the new catheter (or other type of imaging device); evaluate the image data (e.g., OCT images, images of another imaging modality, etc.) by a neural network (e.g., convolution network, neural network, recurrent network, etc.) using artificial intelligence; determine whether a connection is detected for the new catheter (e.g., determine whether the new catheter is connected to an imaging device or system, determine whether the new catheter is not connected or is improperly connected to an imaging device or system, etc.); set a connection or disconnection status for the new catheter (e.g., “Yes” a connection exists; “No” a connection does not exist; “No” a proper connection does not exist (e.g., in a case where the catheter is improperly or partially connected to an imaging device or system), etc.); and save the trained model and/or the connection or disconnection status to memory; etc. One or more embodiments may repeat the training and evaluation procedure for a variety of parameter or hyper-parameter choices and finally select one or more models with the optimal, highest, and/or improved performance defined by one or more predefined evaluation metrics.
[0027] In one or more embodiments, the one or more processors may further operate to split the ground truth data into sets or groups for training, validation, and testing. The one or more processors may further operate to one or more of the following: (i) calculate or improve a connection or disconnection detection success rate using application of machine learning or deep learning; (ii) decide on the model to be trained based on a connection or disconnection detection success rate associated with the model (e.g., if an apparatus or system embodiment has multiple models to be saved, which have already been trained previously, a method of the apparatus/system may select a model for further training based on a previous success rate, based on a predetermined success factor, or based on which model is more optimal than another(s), etc.); (iii) determine whether a connection or disconnection determination is correct based on the trained model; and (iv) evaluate the connection or disconnection detection success rate.
In one or more embodiments, the one or more processors may further operate to one or more of the following: (i) split the acquired or received image data into data sets or groups having a certain ratio or percentages (while not limited to such percentages, one or more embodiments may be, for example, 60% training data, 20% validation data, and 20% test data; 60% training data, 30% validation data, and 10% test data; any other predetermined or set split amongst the training data, validation data, and test data as desired for one or more applications may be used; etc.); (ii) split the acquired or received image data randomly; (iii) split the acquired or received image data randomly, either on a pullback basis or a frame basis; (iv) split the acquired or received image data based on or using a new set of certain or predetermined kinds of data; and (v) split the acquired or received image data based on or using a new set of a certain or predetermined data type, the new set being one or more of the following: a new pullback-basis data set, a new frame-basis data set, new clinical data, new animal data, new potential additional training data, new data for a first type of catheter or imaging device where the new data has a marker that is similar to a marker of a catheter or imaging device used for the acquired or received image data, or new data having a marker that is similar to a marker of an Optical Coherence Tomography (OCT) catheter. The one or more processors may further operate to one or more of the following: (i) employ data quality control; (ii) allow a user to manually select training samples or training data; and (iii) use any angio image that is captured during Optical Coherence Tomography (OCT) (or other imaging modality) pullback for testing.
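As an illustration of a pullback-basis split at the 60%/20%/20% ratio mentioned above, the following sketch shuffles pullback identifiers so that all frames from one pullback land in the same subset; the identifiers and seed are illustrative assumptions.

```python
import random

def split_pullbacks(pullback_ids, ratios=(0.6, 0.2, 0.2), seed=0):
    # Shuffle whole pullbacks (not individual frames) so that frames from
    # one pullback never leak across training/validation/test subsets.
    ids = list(pullback_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_pullbacks(range(50))
print(len(train), len(val), len(test))  # 30 10 10
```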
[0028] One or more embodiments may include or have one or more of the following: (i) parameters including one or more hyper-parameters; (ii) the saved, trained model is used as a created detector for identifying or detecting a catheter connection or disconnection in image data; (iii) the model is one or a combination of the following: a segmentation model, a segmentation model with post-processing, a model with pre-processing, a model with post-processing, a segmentation model with pre-processing, a deep learning or machine learning model, a semantic segmentation model or classification model, an object detection or regression model, a combination of a semantic segmentation model and an object detection or regression model, a model using repeated segmentation model technique(s), a model using feature pyramid(s), and a model using repeated object detection or regression model technique(s); (iv) the one or more processors further operate to use one or more neural networks, convolutional neural networks, or recurrent neural networks to detect the catheter connection(s) or disconnection(s); (v) the one or more processors further operate to estimate a generalization error of the trained model with data in the test set or group; and (vi) the one or more processors further operate to estimate a generalization error of multiple trained models (ensemble) with data in the test set or group, and to select one model based on its performance on the validation set or group.
[0029] In one or more embodiments of a detection apparatus, the one or more processors may further operate to: (i) acquire or receive the image data during a pullback operation of the intravascular imaging catheter.
[0030] The one or more processors may further operate to use one or more neural networks, convolutional neural networks, and/or recurrent neural networks to one or more of: load the trained model, select a set of angiography frames or other type of image frames, evaluate the catheter connection/disconnection, determine whether the catheter connection/disconnection determination is appropriate with respect to given prior knowledge, for example, vessel location and pullback direction, modify the detected results or the detected catheter location or catheter connection or disconnection for each frame, perform the coregistration, insert the intravascular image, and acquire or receive the image data during the pullback operation.
[0031] In one or more embodiments, the object or sample may include one or more of the following: a vessel, a target specimen or object, and a patient.
[0032] The one or more processors may further operate to perform the coregistration by coregistering an acquired or received angiography image and an obtained one or more Optical Coherence Tomography (OCT) or Intravascular Ultrasound (IVUS) images or frames.
[0033] In one or more embodiments, a loaded, trained model may be one or a combination of the following: a segmentation (classification) model, a segmentation model with pre-processing, a segmentation model with post-processing, an object detection (regression) model, an object detection model with pre-processing, an object detection model with post-processing, a combination of a segmentation (classification) model and an object detection (regression) model, a deep convolutional neural network model, a recurrent neural network model with long short-term memory that can take temporal relationships across images or frames into account, a model using feature pyramid(s) that can take different image resolutions into account, a genetic algorithm that operates to breed multiple models for improved performance (as compared with a case where the genetic algorithm is not used), and/or a model using residual learning technique(s).
[0034] In one or more embodiments, the one or more processors may further operate to one or more of the following: (i) display angiography data along with an image for each of one or more imaging modalities on the display, wherein the one or more imaging modalities include one or more of the following: a tomography image; an Optical Coherence Tomography (OCT) image; a fluorescence image; a near-infrared auto-fluorescence (NIRAF) image; a near-infrared auto-fluorescence (NIRAF) image in a predetermined view, a carpet view, and/or an indicator view; a near-infrared fluorescence (NIRF) image, a near-infrared fluorescence (NIRF) image in a predetermined view, a carpet view, and/or an indicator view; a three-dimensional (3D) rendering; a 3D rendering of a vessel; a 3D rendering of a vessel in a half-pipe view or display; a 3D rendering of the object; a lumen profile; a lumen diameter display; a longitudinal view; computed tomography (CT); Magnetic Resonance Imaging (MRI); Intravascular Ultrasound (IVUS); an X-ray image or view; and an angiography view; and (ii) change or update the displays for the angiography data along with each of the one or more imaging modalities based on the catheter connection or disconnection evaluation results and/or an updated location of the catheter.
[0035] One or more embodiments of a method for training a model using artificial intelligence may repeat the selection, training, and evaluation procedure, for a variety of model configurations (e.g., hyper-parameter values) and finally select one or more models with the highest performance defined by one or more predefined evaluation metrics.
[0036] One or more embodiments of a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for training a model using artificial intelligence may be used with any method(s) discussed in the present disclosure, including but not limited to, one or more catheter connection or disconnection evaluation/determination method(s).
[0037] One or more embodiments of any method discussed herein (e.g., training method(s), detecting method(s), imaging or visualization method(s), artificial intelligence method(s), etc.) may be used with any feature or features of the apparatuses, systems, other methods, storage mediums, or other structures discussed herein.
[0038] One or more embodiments of a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for detecting a catheter connection or disconnection using artificial intelligence and/or performing coregistration using artificial intelligence may be used with any method(s) discussed in the present disclosure, including but not limited to, a method including: acquiring or receiving image data; receiving a trained model or loading a trained model from a memory; applying the trained model to the acquired or received image data; selecting one image frame; detecting or evaluating a catheter connection or disconnection on the selected image frame with the trained model; and saving the trained model and/or the catheter connection or disconnection status in a memory.
[0039] One or more of the artificial intelligence features discussed herein that may be used in one or more embodiments of the present disclosure include, but are not limited to, using one or more of deep learning, a computer vision task, keypoint detection, a unique architecture of a model or models, a unique training process or algorithm, a unique optimization process or algorithm, input data preparation techniques, input mapping to the model, pre-processing, post-processing, and/or interpretation of the output data as substantially described herein or as shown in any one of the accompanying drawings.
[0040] In one or more embodiments, a catheter connection or disconnection may be evaluated and determined using an algorithm, such as, but not limited to, the Viterbi algorithm.
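By way of illustration, a two-state Viterbi decode can smooth noisy per-frame connection probabilities into a single most likely connected/disconnected sequence, as sketched below; the transition probability and uniform prior are assumptions for illustration, not parameters from the disclosure.

```python
import math

def viterbi_status(frame_probs, p_stay=0.9):
    # States: 0 = disconnected, 1 = connected.
    # frame_probs[t] is the model's probability that frame t is "connected".
    log = lambda p: math.log(max(p, 1e-12))
    trans = [[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]
    score = [log(0.5) + log(1 - frame_probs[0]),
             log(0.5) + log(frame_probs[0])]
    back = []
    for p in frame_probs[1:]:
        emit = [log(1 - p), log(p)]
        pointers, new_score = [], []
        for s in (0, 1):
            cand = [score[prev] + log(trans[prev][s]) for prev in (0, 1)]
            best = max((0, 1), key=lambda prev: cand[prev])
            pointers.append(best)
            new_score.append(cand[best] + emit[s])
        back.append(pointers)
        score = new_score
    state = 0 if score[0] > score[1] else 1
    path = [state]
    for pointers in reversed(back):   # backtrack the most likely path
        state = pointers[state]
        path.append(state)
    return list(reversed(path))

# One noisy frame (0.3) is overridden by its temporal context.
print(viterbi_status([0.9, 0.8, 0.3, 0.85, 0.9]))  # -> [1, 1, 1, 1, 1]
```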
[0041] One or more embodiments may automate catheter connection or disconnection detection in images using convolutional neural networks and/or any other types of neural network(s), and may fully automate frame detection and/or catheter connection or disconnection detection on angiographies using training (e.g., offline training) and using applications (e.g., online application(s)) to extract and process frames via deep learning.
[0042] One or more embodiments of the present disclosure may track and/or calculate a catheter connection or disconnection detection success rate.
[0043] The following paragraphs describe certain explanatory embodiments. Other embodiments may include alternatives, equivalents, and modifications. Additionally, the explanatory embodiments may include several novel features, and a particular feature may not be essential to some embodiments of the devices, systems, and methods that are described herein.
[0044] According to other aspects of the present disclosure, one or more additional devices, one or more systems, one or more methods and one or more storage mediums using OCT and/or other imaging modality technique(s) to detect catheter connection(s) or disconnection(s) and to perform coregistration using artificial intelligence, including, but not limited to, deep or machine learning, using results of the catheter detection for performing coregistration, etc., are discussed herein. Further features of the present disclosure will in part be understandable and will in part be apparent from the following description and with reference to the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0045] For the purposes of illustrating various aspects of the disclosure, wherein like numerals indicate like elements, there are shown in the drawings simplified forms that may be employed, it being understood, however, that the disclosure is not limited by or to the precise arrangements and instrumentalities shown. To assist those of ordinary skill in the relevant art in making and using the subject matter hereof, reference is made to the appended drawings and figures, wherein:
[0046] FIG. 1A is a schematic diagram showing at least one embodiment of a system that may be used for performing one or multiple imaging modality viewing and control and/or for detecting catheter connection(s) or disconnection(s) in accordance with one or more aspects of the present disclosure;
[0047] FIG. 1B is a schematic diagram illustrating an imaging system for executing one or more steps to process image data and/or for detecting catheter connection(s) or disconnection(s) in accordance with one or more aspects of the present disclosure;
[0048] FIG. 2 is a schematic diagram of at least one embodiment of a neural network using artificial intelligence that may be used in accordance with one or more aspects of the present disclosure;
[0049] FIG. 3 is a flowchart of at least one embodiment of a method for using a single frame or image to detect catheter connection(s) or disconnection(s) that may be used in accordance with one or more aspects of the present disclosure;
[0050] FIG. 4 is a flowchart of at least one embodiment of a method for using multiple frames or images to detect catheter connection(s) or disconnection(s) that may be used in accordance with one or more aspects of the present disclosure;
[0051] FIG. 5 is a flowchart of at least one embodiment of a method for using a single frame or image to detect catheter disconnection(s) that may be used in accordance with one or more aspects of the present disclosure;
[0052] FIG. 6 is a flowchart of at least one embodiment of a method for training at least one model that may be used to detect catheter connection(s) or disconnection(s) in accordance with one or more aspects of the present disclosure;
[0053] FIG. 7 is a diagram of at least one embodiment of a catheter that may be used with one or more embodiments for detecting catheter connection(s) and/or disconnection(s) in accordance with one or more aspects of the present disclosure;
[0054] FIG. 8 illustrates data from an experiment conducted to detect catheter connection(s) and/or catheter disconnection(s) in accordance with one or more aspects of the present disclosure;
[0055] FIG. 9A shows at least one embodiment of an OCT apparatus or system for utilizing one or more imaging modalities and artificial intelligence for detecting catheter connection(s) and/or catheter disconnection(s) and/or for performing coregistration in accordance with one or more aspects of the present disclosure;
[0056] FIG. 9B shows at least another embodiment of an OCT apparatus or system for utilizing one or more imaging modalities and artificial intelligence for detecting catheter connection(s) and/or catheter disconnection(s) and/or for performing coregistration in accordance with one or more aspects of the present disclosure;
[0057] FIG. 9C shows at least a further embodiment of an OCT and NIRAF apparatus or system for utilizing one or more imaging modalities and artificial intelligence for detecting catheter connection(s) and/or catheter disconnection(s) and/or for performing coregistration in accordance with one or more aspects of the present disclosure;
[0058] FIG. 10 is a flow diagram showing a method of performing an imaging feature, function, or technique in accordance with one or more aspects of the present disclosure;
[0059] FIG. 11 shows a schematic diagram of an embodiment of a computer that may be used with one or more embodiments of an apparatus or system or one or more methods discussed herein in accordance with one or more aspects of the present disclosure;
[0060] FIG. 12 shows a schematic diagram of another embodiment of a computer that may be used with one or more embodiments of an imaging apparatus or system or methods discussed herein in accordance with one or more aspects of the present disclosure;
[0061] FIG. 13 shows a schematic diagram of at least an embodiment of a system using a computer or processor, a memory, a database, and input and output devices in accordance with one or more aspects of the present disclosure;
[0062] FIG. 14 shows a created architecture of or for a regression model(s) that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure;
[0063] FIG. 15 shows a convolutional neural network architecture that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure;
[0064] FIG. 16 shows a created architecture of or for a regression model(s) that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure; and
[0065] FIG. 17 is a schematic diagram of or for a segmentation model(s) that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[0066] One or more devices, systems, methods and storage mediums for characterizing tissue, or an object, using one or more imaging techniques or modalities (such as, but not limited to, OCT, fluorescence, NIRF, NIRAF, etc.), and using artificial intelligence for evaluating and detecting catheter connection(s) or disconnection(s) and/or performing coregistration are disclosed herein. Several embodiments of the present disclosure, which may be carried out by the one or more embodiments of an apparatus, system, method and/or computer-readable storage medium of the present disclosure, are described diagrammatically and visually in at least FIGS. 1A through 17 and other tables and figures included herein below.
[0067] Turning now to the details of the figures, imaging modalities may be displayed in one or more ways as discussed herein. One or more displays discussed herein may allow a user of the one or more displays to use, control and/or emphasize multiple imaging techniques or modalities, such as, but not limited to, OCT, NIRF, NIRAF, etc., and may allow the user to use, control, and/or emphasize the multiple imaging techniques or modalities synchronously.
[0068] As shown diagrammatically in FIG. 1A, one or more embodiments for visualizing, emphasizing and/or controlling one or more imaging modalities and artificial intelligence (such as, but not limited to, machine and/or deep learning, residual learning, using results of catheter connection or disconnection detection to perform coregistration, etc.) for evaluating and detecting catheter connection(s) or disconnection(s) and/or performing coregistration of the present disclosure may be involved with one or more predetermined or desired procedures, such as, but not limited to, medical procedure planning and performance (e.g., PCI as aforementioned). For example, the system 2 may communicate with the image scanner 5 (e.g., a CT scanner, an X-ray machine, etc.) to request information for use in the medical procedure (e.g., PCI) planning and/or performance, such as, but not limited to, bed positions, and the image scanner 5 may send the requested information along with the images to the system 2 once a clinician uses the image scanner 5 to obtain the information via scans of the patient. In some embodiments, one or more angiograms 3 taken concurrently or from an earlier session are provided for further planning and visualization. The system 2 may further communicate with a workstation such as a Picture Archiving and Communication System (PACS) 4 to send and receive images of a patient to facilitate and aid in the medical procedure planning and/or performance. Once the plan is formed, a clinician may use the system 2 along with a medical procedure/imaging device 1 (e.g., an imaging device, an OCT device, an IVUS device, a PCI device, an ablation device, a 3D structure construction or reconstruction device, etc.) to consult a medical procedure chart or plan to understand the shape and/or size of the targeted biological object to undergo the imaging and/or medical procedure. Each of the medical procedure/imaging device 1, the system 2, the locator device 3, the PACS 4 and the scanning device 5 may communicate in any way known to those skilled in the art, including, but not limited to, directly (via a communication network) or indirectly (via one or more of the other devices such as 1 or 5, or additional flush and/or contrast delivery devices; via one or more of the PACS 4 and the system 2; via clinician interaction; etc.).
[0069] In medical procedures, improvement or optimization of physiological assessment is preferable to decide a course of treatment for a particular patient. By way of at least one example, physiological assessment is very useful for deciding treatment for cardiovascular disease patients. In a catheterization lab, for example, physiological assessment may be used as a decision-making tool - e.g., whether a patient should undergo a PCI procedure, whether a PCI procedure is successful, etc. While the concept of using physiological assessment is theoretically sound, physiological assessment still awaits further adoption and improvement for use in the clinical setting(s). This situation may be because physiological assessment may involve adding another device and medication to be prepared, and/or because a measurement result may vary between physicians due to technical difficulties. Such approaches add complexities and lack consistency. Therefore, one or more embodiments of the present disclosure may employ computational fluid dynamics based (CFD-based) physiological assessment that may be performed from imaging data to eliminate or minimize technical difficulties, complexities and inconsistencies during the measurement procedure. To obtain accurate physiological assessment, an accurate 3D structure of the vessel may be reconstructed from the imaging data as disclosed in U.S. Provisional Pat. App. No. 62/901,472, filed on September 17, 2019, the disclosure of which is incorporated by reference herein in its entirety.
[0070] In at least one embodiment of the present disclosure, a method may be used to provide more accurate 3D structure(s) compared to using only one imaging modality. In one or more embodiments, a combination of multiple imaging modalities may be used, catheter connection(s) or disconnection(s) may be detected, and coregistration may be processed/performed using artificial intelligence.
[0071] One or more embodiments of the present disclosure may apply machine learning, especially deep learning, to detect a catheter connection(s) in an image frame without user input(s) that define an area where intravascular imaging pullback occurs. Using artificial intelligence, for example, deep learning, one or more embodiments of the present disclosure may achieve a better or maximum success rate of catheter connection(s) or disconnection(s) detection from image data without (or with less) user interactions, and may reduce processing and/or prediction time to display coregistration result(s) based on the catheter connection(s) or disconnection(s) detection result(s) and/or based on the improved image quality obtained when detecting proper or complete catheter connection(s) or disconnection(s).
[0072] One or more embodiments of the present disclosure may evaluate catheter connection(s) and/or disconnection(s) in one or more ways, and may use one or more features to determine a connection or disconnection status. For example, while not limited to these examples, one or more embodiments may include one or more of the following: (i) one or more visual indicators (e.g., a light emitting diode (LED), a display for displaying the connection or disconnection status, etc.) that operate to send information to an outside of the device or system to indicate the connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.); (ii) one or more circuits or sensors that operate to detect a catheter connection or disconnection status (e.g., a successful connection, a successful optical connection, a successful connection of wires/catheter, a partial connection, no connection, a disconnection, etc.); and/or (iii) an artificial intelligence structure, such as, but not limited to, a neural net or network (e.g., the same neural net or network that operates to determine whether a video or image from an endoscope or other imaging device is working; an additional neural net or network, other network discussed herein, etc.). In one or more embodiments, the circuit(s)/sensor(s) may operate to be useful for (i) identifying case(s) where service/maintenance would be useful (e.g., to detect a partial connection that may be returned to a complete connection, to detect a disconnected wire or wires (or other component(s) of a catheter or other imaging device), etc.), and/or (ii) determining a case or cases where an imaging apparatus or system using a catheter or other imaging device(s) may operate or run in a mode that occurs in a case where a connection or disconnection is not ideal or is less than at full potential/capacity (e.g., in a case where a partial connection exists when a full connection may be ideal, in a case where an amount of wires less than all of the wires (e.g., 8 out of 9 wires) are connected, etc.). In one or more embodiments involving establishing an endoscope visual connection, an artificial intelligence structure, such as, but not limited to, a neural net or network (e.g., the same neural net or network that operates to determine whether a video or image from an endoscope or other imaging device is working; an additional neural net or network; another type of network discussed herein; etc.) may be used to determine one or more of the following: (i) whether a video function or image capturing function from an endoscope or other imaging device is working; and/or (ii) a catheter connection or disconnection status.
[0073] One or more embodiments of the present disclosure may achieve efficient catheter (or other imaging device) detection and/or efficient coregistration result(s) from image(s). In one or more embodiments, the image data may be acquired during intravascular imaging pullback using a catheter (or other imaging device) that may be visualized in an image. In one or more embodiments, a ground truth identifies a location or locations of the catheter or a portion of the catheter (or of another imaging device or a portion thereof). In one or more embodiments, a model has enough resolution to predict the catheter location and/or connection in a given image with sufficient accuracy depending on the application or procedure being performed. The performance of the model may be further improved by adding more training data. For example, additional training data may include image annotations, where a user labels or corrects the catheter location(s) and/or catheter detection(s) in each image.
[0074] In one or more embodiments, a catheter connection or disconnection may be detected and/or monitored using an algorithm, such as, but not limited to, the Viterbi algorithm.
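While the disclosure does not reproduce the algorithm itself, the following is a minimal sketch of how a two-state Viterbi decoder might monitor a connection status across successive frames; the state set, transition probability, and per-frame probabilities are illustrative assumptions rather than parameters from the disclosure.

```python
import numpy as np

def viterbi_connection_path(frame_probs, p_stay=0.95):
    """Illustrative two-state Viterbi decoder (states: 0=disconnected, 1=connected).

    frame_probs: per-frame P(connected) values, e.g., from a neural network.
    p_stay: assumed probability that the connection state persists between frames.
    Returns the most likely state sequence across all frames.
    """
    n = len(frame_probs)
    emit = np.column_stack([1.0 - np.asarray(frame_probs), np.asarray(frame_probs)])
    trans = np.log(np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]))
    log_emit = np.log(np.clip(emit, 1e-9, 1.0))

    score = np.zeros((n, 2))
    back = np.zeros((n, 2), dtype=int)
    score[0] = np.log(0.5) + log_emit[0]           # uniform prior over both states
    for t in range(1, n):
        for s in (0, 1):
            cand = score[t - 1] + trans[:, s]      # best previous state for state s
            back[t, s] = int(np.argmax(cand))
            score[t, s] = cand[back[t, s]] + log_emit[t, s]

    path = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):                  # backtrack through the best path
        path.append(back[t, path[-1]])
    return path[::-1]

# Example: a single noisy per-frame estimate is smoothed into a stable status.
print(viterbi_connection_path([0.9, 0.8, 0.4, 0.85, 0.9]))  # -> [1, 1, 1, 1, 1]
```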
[0075] One or more embodiments may automate characterization of catheter connection(s) or disconnection(s) and/or of stenosis in images using convolutional neural networks, and may fully automate frame detection on angiographies using training (e.g., offline training) and using applications (e.g., online application(s)) to extract and process frames via deep learning.
[0076] One or more embodiments of the present disclosure may track and/or calculate a catheter connection(s) or disconnection(s) detection success rate.
[0077] In at least one further embodiment example, a method of 3D reconstruction without adding any imaging requirements or conditions may be employed. One or more methods of the present disclosure may use intravascular imaging, e.g., IVUS, OCT, etc., and one (1) view of angiography. In the description below, while intravascular imaging of the present disclosure is not limited to OCT, OCT is used as a representative of intravascular imaging for describing one or more features herein.
[0078] Referring now to FIG. 1B, shown is a schematic diagram of at least one embodiment of an imaging system 20 for generating an imaging catheter path based on a detected location of an imaging catheter, based on a catheter connection or disconnection detection, and/or a regression line representing the imaging catheter path by using an image frame that is simultaneously acquired during intravascular imaging pullback. The embodiment of FIG. 1B may be used with one or more of the artificial intelligence feature(s) discussed herein. The imaging system 20 may include an angiography system 30, an intravascular imaging system 40, an image processor 50, a display or monitor 1209, and an electrocardiography (ECG) device 60. The angiography system 30 may include an X-ray imaging device such as a C-arm 22 that is connected to an angiography system controller 24 and an angiography image processor 26 for acquiring angiography image frames of an object (e.g., any object that may be imaged using the size and shape of the imaging device, a sample, a vessel, a target specimen or object, etc.) or patient 106.
[0079] The intravascular imaging system 40 of the imaging system 20 may include a console 32, a catheter 120 and a patient interface unit or PIU 110 that connects between the catheter 120 and the console 32 for acquiring intravascular image frames. The catheter 120 may be inserted into a blood vessel of the patient 106 (or inside a specimen or other target object). The catheter 120 may function as a light irradiator and a data collection probe that is disposed in a lumen of a particular blood vessel, such as, for example, a coronary artery. The catheter 120 may include a probe tip, one or more markers or radiopaque markers, an optical fiber, and a torque wire. The probe tip may include one or more data collection systems. The catheter 120 may be threaded in an artery of the patient 106 to obtain images of the coronary artery. The patient interface unit 110 may include a motor M inside to enable pullback of imaging optics during the acquisition of intravascular image frames. The imaging pullback procedure may obtain images of the blood vessel. The imaging pullback path may represent the co-registration path, which may be a region of interest or a targeted region of the vessel.
[0080] The console 32 may include a light source(s) 101 and a computer 1200. The computer 1200 may include features as discussed herein and below (see e.g., FIG. 11, FIG. 13, etc.), or alternatively may be a computer 1200’ (see e.g., FIG. 12, FIG. 13, etc.) or any other computer or processor discussed herein. In one or more embodiments, the computer 1200 may include an intravascular system controller 35 and an intravascular image processor 36. The intravascular system controller 35 and/or the intravascular image processor 36 may operate to control the motor M in the patient interface unit 110. The intravascular image processor 36 may also perform various steps for image processing and control the information to be displayed.
[0081] Various types of intravascular imaging systems may be used within the imaging system 20. The intravascular imaging system 40 is merely one example of an intravascular imaging system that may be used within the imaging system 20. Various types of intravascular imaging systems may be used, including, but not limited to, an OCT system, a multi-modality OCT system, or an IVUS system, by way of example. [0082] The imaging system 20 may also connect to an electrocardiography (ECG) device 60 for recording the electrical activity of the heart over a period of time using electrodes placed on the skin of the patient 106. The imaging system 20 may also include an image processor 50 for receiving angiography data, intravascular imaging data, and data from the ECG device 60 to execute various image-processing steps to transmit to a display 1209 for displaying an angiography image frame with a co-registration path. Although the image processor 50 associated with the imaging system 20 appears external to both the angiography system 30 and the intravascular imaging system 40 in FIG. 1B, the image processor 50 may be included within the angiography system 30, the intravascular imaging system 40, the display 1209, or a stand-alone device. Alternatively, the image processor 50 may not be required if the various image-processing steps are executed using one or more of the angiography image processor 26, the intravascular image processor 36 of the imaging system 20, or any other processor discussed herein (e.g., computer 1200, computer 1200’, computer or processor 2, etc.).
[0083] FIG. 2 diagrammatically shows at least one embodiment of a neural net that may be employed using artificial intelligence and/or deep learning that may be used to perform one or more of the features herein, including, but not limited to, detecting catheter connection(s) or disconnection(s), in accordance with one or more aspects of the present disclosure.
[0084] Neural networks may include a computer system or systems. In one or more embodiments, a neural network may include or may comprise an input layer 200, one or more hidden layers of neurons or nodes (e.g., hidden layer #1 201, hidden layer #2 202, a third hidden layer, additional hidden layers, etc.), and an output layer 203. The input layer 200 may be where the values are passed to the rest of the model. While not limited thereto, in one or more MM-OCT application(s), the input layer may be the place where the transformed OCT data may be passed to a model for evaluation. In one or more embodiments, the hidden layer(s) (e.g., hidden layer #1 201, hidden layer #2 202, a third hidden layer, additional hidden layers, etc.) may be a series of layers that contain or include neurons or nodes that establish connections between the neurons or nodes in the other hidden layers (e.g., hidden layer #1 201, hidden layer #2 202, a third hidden layer, additional hidden layers, etc.) as shown, for example via the numerous arrows between neurons or nodes of the hidden layers 201, 202, in FIG. 2. Through training, the values of each of the connections may be altered so that, due to the training, the system/systems will trigger when the expected pattern is detected. The output layer provides the result(s) of the model. In the case of the MM-OCT application(s), this may be a Boolean (true/false) value for detecting connection or disconnection (e.g., partial connection or disconnection, complete connection or disconnection, improper connection or disconnection, etc.). [0085] To collect data that may be used to train one or more neural nets, one or more features of an OCT device or system (e.g., an MM-OCT device or system, a SS-OCT device or system, etc.) may be used. Collecting a series of OCT images with and without catheters connected properly may result in a plurality (e.g., several thousand) of training images. In one or more embodiments, the data may be labeled based on whether a catheter was connected, was partially connected, was disconnected, etc. (as confirmed by a trained operator or user of the device or system). In one or more embodiments, after at least 30,000 OCT images are captured and labeled, the data may be split into a training population and a test population. In one or more embodiments, data collection may be performed in the same environment or in different environments. For example, during data collection, a flashlight (or any light source) may be used to shine the light down a barrel of an imaging device with no catheter imaging core to confirm that a false positive would not occur in a case where a physician pointed the imaging device at external lights (e.g., operating room lights, a computer screen, etc.). After training is complete, the testing data may be fed through the neural net or neural networks, and the accuracy of the model(s) may be evaluated based on the result(s) of the test data.
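While the disclosure does not include source code for the network of FIG. 2, the following is a minimal sketch of the layered structure described above (an input layer, hidden layers, and a single sigmoid output yielding a Boolean-like connection status), assuming TensorFlow/Keras (the experiments below reference the TensorFlow sigmoid activation); the layer sizes are illustrative assumptions, not disclosed values.

```python
import tensorflow as tf

# Minimal sketch of the input/hidden/output layering described above, assuming
# flattened OCT frames (here, 25 x 25 = 625 pixels) as the transformed input.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(625,)),                    # input layer 200: one node per pixel
    tf.keras.layers.Dense(625, activation="relu"),   # hidden layer #1 201 (size assumed)
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer #2 202 (size assumed)
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer 203: Boolean-like status
])
model.summary()
```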
[0086] Embodiments of a method or methods for detecting one or more catheter connections or disconnections (e.g., complete connection, partial connection, disconnection, any other connection or disconnection evaluation(s) discussed herein, etc.) may be used independently or in combination. While not limited to the discussed combination or arrangement, one or more steps may be involved in both of the workflows or processes in one or more embodiments of the present disclosure, for example, as shown in FIG. 3, FIG. 4 and/or FIG. 5 and as discussed below.
[0087] Once trained, a neural network or networks may use a single image (e.g., a single OCT image, an image of a different imaging modality, etc.) to determine whether a catheter has established a valid or complete connection or disconnection. FIG. 3 shows at least one embodiment of a single image frame detection method that may be used in accordance with one or more aspects of the present disclosure. In one or more embodiments, a single frame detection method or methods may include one or more of the following: (i) connecting a catheter (e.g., a new catheter) (see e.g., step S300 in FIG. 3); (ii) receiving or obtaining an image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S301 in FIG. 3); (iii) using a neural net (or other AI-compatible network or AI-ready network) to evaluate the image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S302 in FIG. 3); (iv) determining whether the catheter is connected/disconnected or whether a connection for the catheter is detected (see e.g., step S303 in FIG. 3); and/or (v) setting a connection or disconnection status (e.g., “Yes” in a case where a connection is detected in the determining step, “No” in a case where a connection (e.g., a partial connection, a complete connection, any other kind of connection discussed herein, etc.) is not detected in the determining step, etc.) (see e.g., step S304 in FIG. 3). In one or more embodiments, the connection or disconnection status may be saved in one or more memories or may be sent to one or more processors for use in the AI evaluations/determinations or for use with any other technique(s) or process(es) discussed herein.
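As a concrete illustration of the FIG. 3 workflow, the following is a minimal sketch of steps S301 through S304 for a single frame; the `acquire_frame` callable and the 0.5 decision threshold are hypothetical stand-ins for the device's image acquisition and decision logic, not disclosed implementations.

```python
import numpy as np

def evaluate_single_frame(model, acquire_frame, threshold=0.5):
    """Sketch of the FIG. 3 single-frame workflow; `acquire_frame` is a
    hypothetical callable returning one preprocessed image as a flat array."""
    frame = np.asarray(acquire_frame())                            # step S301: receive image
    prob = float(model.predict(frame[None, :], verbose=0)[0, 0])   # step S302: neural net
    status = "Yes" if prob > threshold else "No"                   # steps S303/S304: status
    return status    # may be saved to memory or sent to other processors
```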
[0088] Despite the sufficiency of using a single frame to establish a connection or disconnection status as aforementioned, one or more embodiments may have a neural net or network evaluate more than one frame to establish the connection or disconnection status with greater or more certainty. For example, employing a plurality of images or frames may improve the accuracy of the connection or disconnection status for a plurality of benefits, including, but not limited to, improving safety, improving imaging accuracy, improving connection or disconnection status accuracy or success, etc. At least one way of involving or using a plurality of images or frames may be performed by reusing the neural network from the single frame evaluation (or using a neural network that may be different from the originally employed neural network or another type of network discussed herein) for the plurality of images or frames and by comparing the results across a preset or predetermined number of images or frames. Another embodiment for performing evaluation using multiple frames may be to train a separate neural network that takes multiple frames of image data (e.g., OCT data) as input and that outputs a connection or disconnection status through evaluating the images or frames. In both approaches, the workflow may be the same (e.g., as shown in FIG. 4; a multi-frame sketch also appears after the disconnection workflow below). For example, in one or more embodiments of method(s) for performing catheter connection or disconnection evaluation using multi-frames or multi-images, the method(s) may include one or more of the following: (i) connecting a catheter (e.g., a new catheter) (see e.g., step S300 in FIG. 4); (ii) receiving or obtaining an image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S301 in FIG. 4); (iii) evaluating or determining whether a set, predetermined, or minimum amount of images or frames have been collected to perform the evaluation of the plurality of images or frames (see e.g., step S402a in FIG. 4); (iv) in a case where the evaluation or determination of step S402a is “NO”, return to step S301 to obtain or receive one or more additional images or frames, or, in a case where the evaluation or determination of step S402a is “Yes”, using a neural net (or other AI-compatible network or AI-ready network) to evaluate the images or frames (e.g., OCT images or frames, images or frames of another imaging modality, etc.) (see e.g., step S402b in FIG. 4); (v) determining whether the catheter is connected/disconnected or whether a connection for the catheter is detected (see e.g., step S303 in FIG. 4); and/or (vi) setting a connection or disconnection status (e.g., “Yes” in a case where a connection is detected in the determining step, “No” in a case where a connection (e.g., a partial connection, a complete connection, any other kind of connection discussed herein, etc.) is not detected in the determining of the catheter connection or disconnection step, etc.). In one or more embodiments, the connection or disconnection status may be saved in one or more memories or may be sent to one or more processors for use in the AI evaluations/determinations or for use with any other technique(s) or process(es) discussed herein. [0089] In one or more embodiments, optics of catheter devices/apparatuses or systems may be sensitive, and, when damaged, may not provide reliable images or frames.
To protect an apparatus/device or system from a failed disconnection or a failed connection, the same or similar process(es) or method(s) may be used, for example, in a case where the device/apparatus or system may receive a notification that a catheter has been connected or has been disconnected, in a case where the device/apparatus or system may be in a start-up phase where a catheter may not be attached yet, etc. In one or more embodiments of method(s) for performing catheter disconnection evaluation using a single frame, the method(s) may include one or more of the following: (i) disconnecting a catheter or having a catheter be disconnected during start-up or any other phase of the device/apparatus or system (see e.g., step S500 in FIG. 5); (ii) receiving or obtaining an image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S301 in FIG. 5); (iii) using a neural net (or other AI-compatible network or AI-ready network) to evaluate the image (e.g., an OCT image, an image of another imaging modality, etc.) (see e.g., step S302 in FIG. 5); (iv) determining whether the catheter is connected/disconnected or whether a connection or disconnection for the catheter is detected (see e.g., step S303 in FIG. 5); and/or (v) setting a connection or disconnection status (e.g., “Yes” in a case where a connection is detected in the determining step, “No” in a case where a connection (e.g., a partial connection, a complete connection, any other kind of connection discussed herein, etc.) is not detected in the determining step, etc.) (see e.g., step S304 in FIG. 5). In one or more embodiments, the connection or disconnection status may be saved in one or more memories or may be sent to one or more processors for use in the AI evaluations/determinations or for use with any other technique(s) or process(es) discussed herein.
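The following is a minimal sketch of the FIG. 4 multi-frame workflow referenced above, reusing a single-frame model across a preset number of frames; comparing results by majority vote is one illustrative aggregation rule (not the only one the disclosure allows), and the `acquire_frame` callable and frame count are assumptions. The same loop applies whether a connection or a disconnection status is being confirmed.

```python
import numpy as np

def evaluate_multi_frame(model, acquire_frame, n_frames=5, threshold=0.5):
    """Sketch of the FIG. 4 workflow: collect a preset number of frames
    (steps S301/S402a), evaluate them with the single-frame model (S402b),
    and compare results across frames (S303/S304) by majority vote."""
    frames = np.stack([np.asarray(acquire_frame()) for _ in range(n_frames)])
    probs = model.predict(frames, verbose=0)[:, 0]   # per-frame connection probability
    votes = int((probs > threshold).sum())           # frames voting "connected"
    return "Yes" if votes > n_frames // 2 else "No"
```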
[0090] In one or more embodiments of the present disclosure, one or more connection or disconnection evaluation techniques may be used with an OCT or other imaging modality device, system, storage medium, etc. In one or more embodiments, one or more connection or disconnection evaluation techniques may be used for any type of OCT, including, but not limited to, MM-OCT. One or more embodiments of the present disclosure may: (i) calculate a connection or disconnection status of a catheter (or other imaging device), and may perform the connection or disconnection status calculation without the use of another piece of equipment; (ii) make evaluations during a normal course of operations (instead of a separate operation) for a catheter (or other imaging device); and (iii) work on small numbers of samples (e.g., as few as one image in an imaging, OCT, and/or MM-OCT application(s)), or may work on large numbers of samples (e.g., a plurality of images or frames, a plurality of samples, a plurality of samples in a plurality of images or frames, etc.), to evaluate connection/disconnection status.
[0091] Indeed, one or more embodiments of the present disclosure may achieve at least the following advantages or may include at least the following feature(s): (i) one or more embodiments may achieve efficient connection/disconnection evaluation and may obtain result(s) without the use of additional equipment; (ii) one or more embodiments may not use trained operators or users (while trained operators or users may be involved, such involvement is not required); (iii) one or more embodiments may perform during a normal course of operation of a catheter and/or an imaging device (as compared with and instead of a separate operation); (iv) one or more embodiments may perform connection/disconnection evaluation using a set of collected images (manually and/or automatically); and (v) one or more embodiments may provide usable measurement(s) with small and/or large samples.
[0092] In one or more embodiments, a model (which, in one or more embodiments, may be software, software/hardware combination, or a procedure that utilizes one or more machine or deep learning algorithms/procedures/processes that has/have been trained on data to make one or more predictions for future, unseen data) has enough resolution to predict and/or evaluate the connection/disconnection with sufficient accuracy depending on the application or procedure being performed. The performance of the model may be further improved by subsequently adding more training data and retraining the model to create a new instance of the model with better or optimized performance. For example, additional training data may include data based on user input, where the user may identify or correct the location of a catheter (e.g., a portion of the catheter, a connection portion of the catheter, etc.) in an image (or another imaging device in an image).
[0093] In one or more embodiments of the present disclosure, a model and/or sets of training data or images may be obtained by collecting a series of images (e.g., OCT images) with and without catheters connected properly. For example, thousands of images (e.g., OCT images) may be captured and labeled (e.g., to establish ground truth data), and the data may be split into a training population of data and a test population of data. After training is complete, the testing data may be fed or inserted through the neural net/networks, and accuracy of the model may be evaluated based on the results of the test data. Once trained, a neural net/network may be able to determine whether a catheter (or other imaging device) has established a valid (e.g., complete, properly situated, optical, etc.) connection based on a single image (e.g., a single OCT image). While one or more embodiments may use one image, it may be advantageous from a safety perspective to have the neural net/network evaluate more than one frame/image to establish the status of a connection/disconnection with more certainty/accuracy.
[0094] One or more methods, medical imaging devices, Intravascular Ultrasound (IVUS) or Optical Coherence Tomography (OCT) devices, imaging systems, and/or computer-readable storage mediums for evaluating catheter connections and/or disconnections using artificial intelligence may be employed in one or more embodiments of the present disclosure. [0095] In one or more embodiments, an artificial intelligence training apparatus may include: a memory; one or more processors in communication with the memory, the one or more processors operating to: acquire or receive image data for instances or cases where a catheter is connected and for instances or cases where a catheter is not connected; establish ground truth for all the acquired image data; split the acquired image data into training, validation, and test sets or groups; evaluate a connection status for a new catheter; receive image data (e.g., angiography images, OCT images, images of another modality, etc.) for the new catheter; evaluate the image data (e.g., OCT images) by a neural network (e.g., convolutional network, neural network, recurrent network, other AI-ready or AI-compatible network, etc.) using artificial intelligence; determine whether a connection is detected for the new catheter (e.g., determine whether the new catheter is connected to an imaging device or system, determine whether the new catheter is not connected or is improperly connected to an imaging device or system, etc.); set a connection or disconnection status for the new catheter (e.g., “Yes” a connection exists; “No” a connection does not exist; “No” a proper connection does not exist (e.g., in a case where the catheter is improperly or partially connected to an imaging device or system); etc.); and save the trained model and/or the connection or disconnection status to memory; etc. One or more embodiments may repeat the training and evaluation procedure for a variety of parameter or hyper-parameter choices and finally select one or more models with the optimal, highest, and/or improved performance defined by one or more predefined evaluation metrics.
[0096] In one or more embodiments, the one or more processors may further operate to split the ground truth data into sets or groups for training, validation, and testing. The one or more processors may further operate to one or more of the following: (i) calculate or improve a connection or disconnection detection success rate using application of machine learning or deep learning; (ii) decide on the model to be trained based on a connection or disconnection detection success rate associated with the model (e.g., if an apparatus or system embodiment has multiple models to be saved, which have already been trained previously, a method of the apparatus/system may select a model for further training based on a previous success rate, based on a predetermined success factor, or based on which model is more optimal than another(s), etc.); (iii) determine whether a connection or disconnection determination is correct based on the trained model; and (iv) evaluate the connection or disconnection detection success rate. In one or more embodiments, the one or more processors may further operate to one or more of the following: (i) split the acquired or received image data into data sets or groups having a certain ratio or percentages, for example, 60% training data, 20% validation data, and 20% test data; (ii) split the acquired or received image data randomly; (iii) split the acquired or received image data randomly, either on a pullback-basis or a frame-basis; (iv) split the acquired or received image data based on or using a new set of certain or predetermined kinds of data; and (v) split the acquired or received image data based on or using a new set of a certain or predetermined data type, the new set being one or more of the following: a new pullback-basis data set, a new frame-basis data set, new clinical data, new animal data, new potential additional training data, new data for a first type of catheter where the new data has a marker that is similar to a marker of a catheter used for the acquired or received image data, new data having a marker that is similar to a marker of an Optical Coherence Tomography (OCT) catheter. The one or more processors may further operate to one or more of the following: (i) employ data quality control; (ii) allow a user to manually select training samples or training data; and (iii) use any angio image that is captured during Optical Coherence Tomography (OCT) pullback for testing.
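As one illustrative implementation of the splitting options above, the following sketch performs a 60%/20%/20% split on a pullback basis, so that all frames from a given pullback land in the same set; the record layout and function name are hypothetical, not from the disclosure.

```python
import random
from collections import defaultdict

def split_by_pullback(frames, ratios=(0.6, 0.2, 0.2), seed=0):
    """Split (pullback_id, image, label) records 60/20/20 on a pullback basis,
    so frames from a single pullback never straddle two sets or groups."""
    by_pullback = defaultdict(list)
    for record in frames:
        by_pullback[record[0]].append(record)      # group frames by pullback id
    ids = sorted(by_pullback)
    random.Random(seed).shuffle(ids)               # random assignment of pullbacks
    n = len(ids)
    cut1, cut2 = int(ratios[0] * n), int((ratios[0] + ratios[1]) * n)
    train = [r for i in ids[:cut1] for r in by_pullback[i]]
    val = [r for i in ids[cut1:cut2] for r in by_pullback[i]]
    test = [r for i in ids[cut2:] for r in by_pullback[i]]
    return train, val, test
```

A frame-basis split would instead shuffle the individual records directly; the pullback-basis version above avoids frames from one pullback appearing in both training and test data.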
[0098] One or more embodiments may include or have one or more of the following: (i) parameters including one or more hyper-parameters; (ii) the saved, trained model is used as a created detector for identifying or detecting a catheter connection or disconnection in image data; (iii) the model is one or a combination of the following: a segmentation model, a segmentation model with post-processing, a model with pre-processing, a model with post-processing, a segmentation model with pre-processing, a deep learning or machine learning model, a semantic segmentation model or classification model, an object detection or regression model, a combination of a semantic segmentation model and an object detection or regression model, a model using repeated segmentation model technique(s), a model using feature pyramid(s), and a model using repeated object detection or regression model technique(s); (iv) the one or more processors further operate to use one or more neural networks, convolutional neural networks, or recurrent neural networks (or other AI-ready or AI-compatible network(s)) to detect the catheter connection(s); (v) the one or more processors further operate to estimate a generalization error of the trained model with data in the test set or group; and (vi) the one or more processors further operate to estimate a generalization error of multiple trained models (ensemble) with data in the test set or group, and to select one model based on its performance on the validation set or group.
[0098] In one or more embodiments of a detection apparatus, the one or more processors may further operate to: (i) acquire or receive the image data during a pullback operation of the intravascular imaging catheter.
[0099] The one or more processors may further operate to use one or more neural networks, convolutional neural networks, and/or recurrent neural networks (or other AI-ready or AI-compatible network(s)) to one or more of: load the trained model, select a set of image frames, evaluate the catheter connection/disconnection, determine whether the catheter connection/disconnection determination is appropriate with respect to given prior knowledge, for example, vessel location and pullback direction, modify the detected results or the detected catheter location or catheter connection or disconnection status for each frame, perform the coregistration, insert the intravascular image, and acquire or receive the image data during the pullback operation. [0100] In one or more embodiments, the object or sample may include one or more of the following: a vessel, a target specimen or object, and a patient.
[0101] The one or more processors may further operate to perform the coregistration by coregistering an acquired or received angiography image and an obtained one or more Optical Coherence Tomography (OCT) or Intravascular Ultrasound (IVUS) images or frames.
[0102] In one or more embodiments, a loaded, trained model may be one or a combination of the following: a segmentation (classification) model, a segmentation model with pre-processing, a segmentation model with post-processing, an object detection (regression) model, an object detection model with pre-processing, an object detection model with post-processing, a combination of a segmentation (classification) model and an object detection (regression) model, a deep convolutional neural network model, a recurrent neural network model with long short-term memory that can take temporal relationships across images or frames into account, a model using feature pyramid(s) that can take different image resolutions into account, a genetic algorithm that operates to breed multiple models for improved performance, and/or a model using residual learning technique(s).
[0103] In one or more embodiments, the one or more processors may further operate to one or more of the following: (i) display angiography data along with an image for each of one or more imaging modalities on the display, wherein the one or more imaging modalities include one or more of the following: a tomography image; an Optical Coherence Tomography (OCT) image; a fluorescence image; a near-infrared auto-fluorescence (NIRAF) image; a near-infrared auto-fluorescence (NIRAF) image in a predetermined view, a carpet view, and/or an indicator view; a near-infrared fluorescence (NIRF) image; a near-infrared fluorescence (NIRF) image in a predetermined view, a carpet view, and/or an indicator view; a three-dimensional (3D) rendering; a 3D rendering of a vessel; a 3D rendering of a vessel in a half-pipe view or display; a 3D rendering of the object; a lumen profile; a lumen diameter display; a longitudinal view; computed tomography (CT); Magnetic Resonance Imaging (MRI); Intravascular Ultrasound (IVUS); an X-ray image or view; and an angiography view; and (ii) change or update the displays for the angiography data along with each of the one or more imaging modalities based on the catheter connection or disconnection evaluation results and/or an updated location of the catheter (or other imaging device).
[0104] One or more embodiments of a method for training a model using artificial intelligence may repeat the selection, training, and evaluation procedure for a variety of model configurations (e.g., hyper-parameter values) and finally select one or more models with the highest performance defined by one or more predefined evaluation metrics. [0105] One or more embodiments of a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for training a model using artificial intelligence may be used with any method(s) discussed in the present disclosure, including but not limited to, one or more catheter connection or disconnection evaluation/determination method(s).
[0106] One or more embodiments of the present disclosure improve or maximize a catheter connection or disconnection detection success rate by, for example, improving or using alternative approaches to evaluating a catheter connection or disconnection, improving the detection method/algorithm that may utilize features that are difficult to capture via other image processing techniques (e.g., via the use of artificial intelligence, via the application of machine or deep learning, via the use of artificial intelligence results to perform coregistration, etc.), etc. In one or more embodiments, at least one artificial intelligence, computer-implemented task may be co-registration between images acquired by one or more imaging modalities, where one image is an angiography image that is acquired during intravascular imaging of a sample or object, such as, but not limited to, the coronary arteries, using an OCT probe (pullback of OCT probe upon contrast agent application, for example), and where the other intravascular imaging may be, but is not limited to, IVUS, OCT, etc. In one or more embodiments, at least another artificial intelligence, computer-implemented task may be a specific machine learning task: keypoint detection, where the keypoint is a radiopaque marker that has been “introduced” into one or more images (e.g., angiography images, OCT images, etc.) to facilitate detection for a catheter (or other imaging device).
[0107] One or more embodiments of the present disclosure may use other artificial intelligence technique(s) or method(s) for performing training, for splitting data into different groups (e.g., training group, validation group, test group, etc.), or other artificial intelligence technique(s) or method(s), such as, but not limited to, embodiment(s) as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties. For example, turning to the details of FIG. 6 of the present disclosure (see e.g., FIG. 2 of WO 2021/055837 A9 and related text in the subject publication), one or more methods or processes of the present disclosure may include one or more of the following steps (starting at step S101 in FIG. 6): (i) acquiring angiography image data (see step S102 in FIG. 6); (ii) establishing a ground truth for all of the acquired angiography data/images (see step S103 in FIG. 6); (iii) splitting the acquired angiography data/image set (examples of images and/or corresponding ground truths) into training, validation, and test groups or sets (see step S104 in FIG. 6); (iv) choosing the hyper-parameters for model training, including, but not limited to, the model architecture, the learning rate, and the initialization of parameter values (see step S105 in FIG. 6); (v) training a model with data in the training group or training set and evaluating it with data in the validation group or validation set (see step S106 in FIG. 6); (vi) determining whether the performance of the trained model is good or sufficient (see step S107 in FIG. 6); (vii) in the event that step S107 results in a “No”, then return to before step S105 and repeat steps S105-S106, or in the event that step S107 results in a “Yes”, then proceed to step S108; (viii) estimating a generalization error of the trained model with data in the test group or test set (see step S108 in FIG. 6); and (ix) saving the trained model to a memory (see step S109 in FIG. 6) (and then ending the process at step S110 in FIG. 6). The steps shown in FIG. 6 may be performed in any logical sequence and may be omitted in part in one or more embodiments. In one or more embodiments, step S109 may involve saving the trained model to the memory or a disk, and may automatically save the trained model or may prompt a user (one or more times) to save the trained model. In one or more embodiments, a model may be selected based on its performance on the validation set, and the generalization error may be estimated on the test set using the selected model. In one or more embodiments, an apparatus, system, method, or storage medium may have multiple models to be saved, which have already been trained previously, and the apparatus, system, method, or storage medium may select a model for further training based on a previous or prior success rate. In one or more embodiments, any trained model works for any angio apparatus or system with a same or similar success rate; in a situation where more data exists from different angio apparatuses or systems, one model may work better for a certain angio apparatus or system whereas another model may work better for a different angio apparatus or system.
In this scenario, one or more embodiments may create test or validation data set(s) for specific angio apparatus(es) or system(s), and may identify which model works best for a specific angio apparatus(es) or system(s) with the test set(s) and/or validation set(s).
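A minimal sketch of the FIG. 6 training loop is shown below; the `train_fn`, `evaluate_fn`, and `save_fn` callables and the performance target are hypothetical placeholders for framework-specific code, not disclosed implementations.

```python
def select_and_save_model(configs, train_fn, evaluate_fn, save_fn,
                          train_set, val_set, test_set, target=0.95):
    """Illustrative FIG. 6 loop over hyper-parameter configurations;
    returns (model, generalization_error) or None if no config suffices."""
    best_model, best_score = None, -1.0
    for config in configs:                    # step S105: choose hyper-parameters
        model = train_fn(train_set, config)   # step S106: train on the training set
        score = evaluate_fn(model, val_set)   # step S106: evaluate on validation set
        if score > best_score:
            best_model, best_score = model, score
    if best_score < target:                   # step S107: performance sufficient?
        return None                           # i.e., revisit S105 with new choices
    gen_error = 1.0 - evaluate_fn(best_model, test_set)  # step S108 on the test set
    save_fn(best_model)                       # step S109: save the trained model
    return best_model, gen_error
```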
[0108] FIG. 7 shows at least one embodiment of a catheter 120 that may be used in one or more embodiments of the present disclosure for obtaining images; for using and/or controlling multiple imaging modalities, that apply machine learning, especially deep learning, to identify a catheter connection or disconnection in an angiography image frame with greater or maximum success; and for using the results to perform coregistration more efficiently or with maximum efficiency. FIG. 7 shows an embodiment of the catheter 120 including a sheath 121, a coil 122, a protector 123 and an optical probe 124. As shown schematically in FIGS. 9A-9C (discussed further below), the catheter 120 may be connected to a patient interface unit (PIU) 110 to spin the coil 122 with pullback (e.g., at least one embodiment of the PIU 110 operates to spin the coil 122 with pullback). The coil 122 delivers torque from a proximal end to a distal end thereof (e.g., via or by a rotational motor in the PIU 110). In one or more embodiments, the coil 122 is fixed with/to the optical probe 124 so that a distal tip of the optical probe 124 also spins to see an omnidirectional view of the object (e.g., a biological organ, sample or material being evaluated, such as, but not limited to, hollow organs such as vessels, a heart, a coronary artery, etc.). For example, fiber optic catheters and endoscopes may reside in the sample arm (such as the sample arm 103 as shown in one or more of FIGS. 9A-9C discussed below) of an OCT interferometer in order to provide access to internal organs, such as vessels (for intravascular images), the gastrointestinal tract, or any other narrow area, that are difficult to access. As the beam of light through the optical probe 124 inside of the catheter 120 or endoscope is rotated across the surface of interest, cross-sectional images of one or more objects are obtained. In order to acquire imaging data or three-dimensional data, the optical probe 124 is simultaneously translated longitudinally during the rotational spin, resulting in a helical scanning pattern. This translation is most commonly performed by pulling the tip of the probe 124 back towards the proximal end and is therefore referred to as a pullback.
[0109] The catheter 120, which, in one or more embodiments, comprises the sheath 121, the coil 122, the protector 123 and the optical probe 124 as aforementioned (and as shown in FIG. 7), may be connected to the PIU 110. In one or more embodiments, the optical probe 124 may comprise an optical fiber connector, an optical fiber and a distal lens. The optical fiber connector may be used to engage with the PIU 110. The optical fiber may operate to deliver light to the distal lens. The distal lens may operate to shape the optical beam and to illuminate light to the object (e.g., the object 106 (e.g., a vessel) discussed herein), and to collect light from the sample (e.g., the object 106 (e.g., a vessel) discussed herein) efficiently. While the target, sample, or object 106 may be a vessel in one or more embodiments, the target, sample, or object 106 may be different from a vessel (and not limited thereto) depending on the particular use(s) or application(s) being employed with the catheter 120.
[0110] As aforementioned, in one or more embodiments, the coil 122 delivers torque from a proximal end to a distal end thereof (e.g., via or by a rotational motor in the PIU 110). There may be a mirror at the distal end so that the light beam is deflected outward. In one or more embodiments, the coil 122 is fixed with/to the optical probe 124 so that a distal tip of the optical probe 124 also spins to see an omnidirectional view of an object (e.g., a biological organ, sample or material being evaluated, such as, but not limited to, hollow organs such as vessels, a heart, a coronary artery, etc.). In one or more embodiments, the optical probe 124 may include a fiber connector at a proximal end, a double clad fiber, and a lens at a distal end. The fiber connector operates to be connected with the PIU 110. The double clad fiber may operate to transmit and collect OCT light through the core and, in one or more embodiments, to collect Raman and/or fluorescence from an object (e.g., the object 106 (e.g., a vessel) discussed herein, an object and/or a patient (e.g., a vessel in the patient), etc.) through the clad. The lens may be used for focusing and collecting light to and/or from the object (e.g., the object 106 (e.g., a vessel) discussed herein). In one or more embodiments, the scattered light through the clad is relatively higher than that through the core because the size of the core is much smaller than the size of the clad.
[0111] While experiments were conducted using the following architecture details, the subject examples are not limiting, and other architectures may be employed (other methods are being tested as well). A representative sampling of the experimental data and results is shown in FIG. 8 of the present disclosure. [0112] Experiment Summary:
[0113] Data was collected from pullbacks that had a valid catheter connection or disconnection as well as an invalid catheter connection or disconnection. The data was then used to train a binary classification neural network to identify, from a single image, whether the image indicated a connected catheter, for example. The resulting neural network was able to accurately predict catheter connection or disconnection status. While not limited thereto, in this experiment, experts (e.g., a physician, a user of an imaging device, an expert regarding optics or catheter connections, etc.) were used to determine catheter connection status. In one or more embodiments, disconnected/broken catheter imaging may appear as random static whereas connected catheter imaging may appear with distinct characteristics (e.g., such as, but not limited to, lumen size and shape, lumen dimensions, a size and shape of an object being imaged, geometric aspects or characteristics of an object being imaged, other characteristics of an object being imaged, etc.). In one or more embodiments, images where one or more distinct characteristics may not be shown and/or where static (e.g., random static) may occur may indicate that a catheter is not validly connected or validly disconnected or that a catheter or a portion thereof may be broken. In a case where the catheter or a portion of the catheter is broken, the neural network may be used to troubleshoot and identify the broken portion of the catheter so that: (i) the catheter may be fixed and may achieve a proper or valid connection or disconnection state(s) or mode(s); and (ii) an image or images being obtained may show a proper or valid connection/disconnection of the catheter and/or may have little or no static in the image or images. In the experiments, a binary classifier was used since a catheter may be either connected (fully connected, partially connected in cases where a partial connection is intended by a user of the imaging device or catheter, etc.), disconnected (fully disconnected, partially disconnected in a case where a complete connection is intended by a user of the imaging device or catheter, etc.), or broken. While not limited thereto, a number of pullbacks that may be used is about 40 (as done for one or more of the experiments). That said, using human expertise, imaging applications, desired features, etc., hundreds of pullbacks, thousands of pullbacks, or more pullbacks may be used.
[0114] Data Collection Procedure for Experiments:
[0115] During the experiments, the imaging device or system used was an MM-OCT apparatus. That said, the various technique(s), structure(s), and method(s) are not limited thereto, and one or more other imaging modalities, or other type(s) of imaging devices or systems (e.g., a robot, a continuum robot, or other robot apparatus or system discussed further below) may be used with one or more features of the present disclosure. [0116] The experiments involving the MM-OCT application saved data in the scan(s) of case files automatically when a pullback was completed. This feature was used to collect a series of pullbacks in different lighting, calibration, and environmental conditions with an invalid catheter attachment (dummy handle) and a valid catheter attachment.
[0117] While the possible Invalid Catheter Connection/disconnection states are not limited thereto, the experiments tested the following Invalid Catheter Connection/disconnection states:
o Standard connection (control)
o Flashlight illuminated and aimed at optical sensor
o Max MDL calibration position
o Min MDL calibration position
o PIU laid on side
o PIU optical sensor pointed at fluorescent light on ceiling
[0118] While the possible Valid Catheter Connection/disconnection states are not limited thereto, the experiments tested the following Valid Catheter Connection/disconnection states:
o Standard connection (control)
o Max MDL calibration position
o Min MDL calibration position
o Calibrated calibration position
o +/- 10% calibration position
o In Catheter Sheath
o In Linear Phantom
o In Air
[0119] The resulting case files were then read through the use of the resizeLabelAndSaveCase method(s) contained in dataProcessing.py, such methods operating to read the MM-OCT case files and extract the selected data (in this case, Processed Polar OCT Data) formatted as a two-dimensional (2-D) array. The extracted data was then resized (in the case of the subject experiments, the data was resized to 25 x 25 pixels) to allow the neural net to be trained more efficiently. The appropriate label indicating known catheter connection/disconnection status was then appended to the resized data, and the new data was flattened into a single array. The subject data was then written out to a Comma Separated Values (CSV) file to be used for training the neural network. Each training Epoch represents passing data or all of the data in the training set through the neural network to train the model. The loss value is a measure of how efficient the model is as it is getting trained (as shown in the data, the loss value gets lower over the course of the training, which confirms the accuracy and efficiency of the one or more features of the instant application). In one or more embodiments, a goal may be set to get the loss value as close to zero as possible. While not limited to this embodiment, one way to achieve a loss value as close to zero as possible is to have as many Epochs as necessary to get a slope of the loss values to flatten or substantially flatten. In this experiment, 10 Epochs would have been sufficient (instead of the 100 Epochs used) to achieve a desired loss value because the loss value stabilized so quickly. That said, in one or more embodiments, a number of Epochs that may be used may be one or more of the following: 10 Epochs, more than 10 Epochs, 20 Epochs, 30 Epochs, 40 Epochs, 50 Epochs, 60 Epochs, 70 Epochs, 80 Epochs, 90 Epochs, 100 Epochs, more than 100 Epochs, etc. Accuracy, precision, and recall are all measures of an effectiveness of a model at predictions based on the test data that was given. In one or more embodiments, a preferred goal is to have a 1.00 value, which represents 100%. For example, the data for the 100 Epoch experiment showed an initial accuracy of 0.43 (or 43%), and, as the training went on in the first Epoch, that accuracy increased to 0.94 (94%). As shown in the experimental data of FIG. 8 (for an experiment having 10 Epochs), initial accuracy increased to 0.9518 (95.18%) as the training went on in the first Epoch, and an accuracy of 1.0000 (100%) was obtained by the second Epoch. In the experiments, since a goal of catheter connection determination may be viewed or defined as discrete, 100% accuracy, 100% precision, and 100% recall were obtained quickly.
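The actual resizeLabelAndSaveCase implementation in dataProcessing.py is not reproduced in the disclosure; the following is a hedged reconstruction of the resize/label/flatten/CSV steps it describes, with the case-file reading abstracted away and the function name chosen for illustration.

```python
import csv
import numpy as np
from PIL import Image

def resize_label_and_save(polar_frames, label, out_path, size=(25, 25)):
    """Sketch of the preprocessing described above: resize each 2-D polar OCT
    frame to 25 x 25 pixels, append the known connection/disconnection label,
    flatten to a single row, and append the rows to a CSV training file."""
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for frame in polar_frames:                # each frame: a 2-D array
            img = Image.fromarray(np.asarray(frame, dtype=np.uint8))
            small = np.asarray(img.resize(size))  # downsample for faster training
            row = small.flatten().tolist() + [label]  # 625 pixels, then the label
            writer.writerow(row)
```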
[o 120] Creating and Training the Neural Network:
[0121] The file containing the data gathered and labeled in the data collection was read. The data was then split into a training population and a test population, each comprising image data and a known catheter connection/disconnection status. During the experiments, the training and test data were taken from the same population, and were selected at random from the original population to ensure that there was no bias between the test and training populations.
[0122] A neural network was then compiled comprising or consisting of multiple layers. The input layer to the neural network had the same number of nodes as the resized image had pixels, to ensure that each pixel was taken into consideration (although the method(s) discussed herein are not limited thereto, and the neural network may have a different number of nodes as compared with a number of pixels of a resized image). One or more additional hidden layers were added to allow for the weights of the neural network to be significant enough for accurate prediction. In one or more embodiments, the first layer preferably may be the size of the inputs (in the case of the experiment, 625 nodes, one for each pixel of the 25 x 25 image). In one or more embodiments, hidden layer(s) may be up to the discretion of the engineer, neural network user or expert, or imaging device expert or user. One consideration to keep in mind may be that, in one or more embodiments, additional hidden layers may increase accuracy but come with an additional cost in performance. In one or more embodiments, an engineer, neural network expert/user, or imaging device expert/user may start with one or two hidden layers, and may add additional layers if desired or useful. The final layer only had a single neuron activated by a sigmoid function. In the experiments, a sigmoid function was used with binary classification because, when the sigmoid output is thresholded at 0.5, values less than or equal to 0.5 map to zero (0) and values larger than 0.5 map to one (1). The sigmoid function used during the experiment may be found online at https://www.tensorflow.org/api_docs/python/tf/keras/activations/sigmoid. Other activation functions, such as a linear function, a binary step function, a Rectified Linear Unit (ReLU), etc., may be used as alternatives. That said, the sigmoid function was used because the results of this neural net are binary, which requires only a single node for indication, and a sigmoid activation function allowed for clearer indication of a binary result than other potential activation functions. In one or more embodiments, the results may be non-binary depending on how a user of the imaging device defines the catheter connection(s) or disconnection(s) (e.g., there may be multiple types of connections, such as, but not limited to, fully connected status, partially connected status in a case where a partial connection is acceptable, etc.; and there may be multiple types of disconnections, such as, but not limited to, fully disconnected status, partially disconnected status in a case where a partial disconnection is not acceptable, etc.).
[0123] The training image data and image labels were then passed to the neural network for training. The resulting training adjusted the weights of the different nodes to activate when or in a case where a catheter connection/disconnection was detected. Once training was complete, the model was evaluated using the testing image data and image labels to determine how accurately the neural net predicted the correct label.
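A minimal sketch of this training and evaluation step is shown below, assuming a Keras-style binary classifier over 625-pixel rows like those written out above; the random arrays are placeholders for the labeled populations, and the layer, epoch, and batch settings are illustrative assumptions, not disclosed values.

```python
import numpy as np
import tensorflow as tf

# Placeholder arrays standing in for the labeled CSV data; in the experiments,
# each row held 625 resized pixel values plus a connection/disconnection label.
x_train, y_train = np.random.rand(1000, 625), np.random.randint(0, 2, 1000)
x_test, y_test = np.random.rand(200, 625), np.random.randint(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(625,)),
    tf.keras.layers.Dense(625, activation="relu"),   # hidden layer (size assumed)
    tf.keras.layers.Dense(1, activation="sigmoid"),  # single output neuron
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(name="precision"),
                       tf.keras.metrics.Recall(name="recall")])

model.fit(x_train, y_train, epochs=10, batch_size=32)    # weights adjust per Epoch
print(model.evaluate(x_test, y_test, return_dict=True))  # loss/accuracy/precision/recall
```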
[0124] Potential Issues:
[0125] Data was collected while in a pullback state. As such, in one or more embodiments, data obtained in live mode (which may be at a lower resolution) may not be as accurate as the pullback data. One or more ways to improve the accuracy of such live mode data may include one or more of the following: filtering or smoothing the data to improve the resolution or image quality of the data, collecting data in a pullback state and in a live mode state to compare the accuracy of the live mode state with the accuracy of the pullback state, etc. In the case of the experiments, all of the images were transformed to 25 x 25 pixels. As such, any difference in resolution between live mode and a pullback state/mode did not detrimentally affect the results since the evaluation image was much smaller than the 800 x 800 pixel values that both the live mode and pullback state/mode were in during the experiments. Depending on processing capabilities of one or more imaging devices in one or more embodiments, images may be larger than 25 x 25 pixels.
[0126] Experimental Results: [0127] The training was successful, and the neural network performed with 100% accuracy against a test population of 1,350 images. The results were reviewed multiple times to ensure that the accuracy was correct (and not in error).
[0128] Additional Experimental Data and Results:
[0129] Two additional experiments were conducted to highlight the efficiency of the feature(s) and technique(s) of the present disclosure. As discussed at length above, more than 10 Epochs may be used to improve accuracy, precision, and recall as shown in the data. Indeed, the experimental data above used 100 Epochs. An additional experiment was conducted and the data and results were obtained. As shown in the obtained data, the improved/maximized accuracy, precision, and recall were achieved quickly (showing a value of 1.0 for each of accuracy, precision, and recall by Epoch 2 of the 100 Epochs; the loss value kept reducing and improving as the Epochs continued). That said, embodiments are not limited to 100 Epochs as aforementioned, and any other predetermined or set number of Epochs may be used. For example, as shown in FIG. 8, another experiment was conducted. The data of FIG. 8 shows the efficiency of the technique(s), feature(s), and/or method(s) discussed herein because values of 1.0 were achieved for accuracy, precision, and recall by Epoch 2, and the loss value continued to decrease and improve from Epoch 1 through Epoch 10.
[0130] Continuum Robot or Robot (e.g., continuum robot or other type of robot) Application(s):
[0131] Just like MM-OCT devices or systems may benefit from establishing catheter connections, robot or continuum robot devices or systems, or other types of robot devices or systems, that may use the same or similar connections may benefit from accurately detecting component connections and/or disconnections. The majority of the connection or disconnection detection method(s) may be reused, including the neural network features. Where changes may be made, for example, would be in how the initial data is collected and transformed in one or more embodiments. The MM-OCT application(s) may have mechanisms for saving data, and the continuum robot or robot application(s) may also use saving data mechanisms in one or more embodiments. Additionally, the robot, continuum robot (e.g., a chip-on-tip or other camera may be passed through a tool channel of a robot or continuum robot and such a chip-on-tip or other camera may be connected to return imaging data), or other robot camera(s), may be or may include a color camera, and the images collected by the imaging application(s) may be greyscale or may include greyscale images, so a shift from color to grayscale may also be employed for imaging application(s) and for considering imaging quality and related data in one or more embodiments (see the grayscale conversion sketch below). [0132] In one or more embodiments where a tool channel is used, the neural network may be trained to detect the end of the tool channel. This detection may occur by training the neural network to recognize the difference between the signals present in the tool channel environment and the signals that are present outside of the tool channel or channel environment. In the event that the tool chamber or channel restricts a field of perception or a field of view of the sensor, the neural net may be trained to identify a case or cases where the field of perception or the field of view expands.
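As referenced above, a minimal sketch of the color-to-grayscale shift for robot camera frames is shown below; the ITU-R BT.601 luma weights are a standard choice, not one mandated by the disclosure, and the frame source is assumed to be an RGB array.

```python
import numpy as np

def to_grayscale(rgb_frame):
    """Convert a color robot-camera frame (H x W x 3, RGB) into the greyscale
    format the imaging application collects, using ITU-R BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])  # standard luminance weighting
    return np.asarray(rgb_frame, dtype=np.float32) @ weights
```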
[0133] One or more other embodiments may use a neural net to recognize a unique marker placed at the end of the tool channel during manufacturing. This marker may span the circumference of the inside of the tool chamber or tool channel or may include one or more icons placed along the circumference of the tool chamber or tool channel. In some embodiments, multiple markers may be used to signify distance to the end of the tool chamber or the tool channel.
[0134] One or more embodiments of the present disclosure may use one or more different types of models, such as those discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties. For example, in one or more embodiments, selecting a model (segmentation model (classification model), object or keypoint detection model (regression model), or a combination thereof) may depend on a success rate of coregistration, which may be affected by a catheter connection or disconnection detection success rate, in the setting of a final application on validation and/or test data set(s). Such consideration(s) may be balanced with time (e.g., a predetermined time period, a desired time period, an available time period, a target time period, etc.) for processing/predicting and user interaction. There are many factors to consider when choosing a model, such as, but not limited to, the catheter connection or disconnection detection success rate and/or coregistration success rate, etc., and because success rates may vary from method to method depending on the conditions for such methods, such success rate(s) may be calculated in many different ways. While the catheter connection or disconnection detection success rate may be calculated in various ways, one example is to calculate the number of frames for which the predicted and the true catheter connections or disconnections are considered the same (e.g., when the distance between predicted and true catheter connections or disconnections is within a certain tolerance or below a pre-defined distance threshold, which is defined by a user or pre-defined in the system; etc.) divided by the total number of frames obtained, received, or imaged during the OCT pullback. Several success rates may highlight success rate variation(s): for example, according to a first method, a user specifies a pullback region on one frame; according to a second method, a user points out the catheter location on several or multiple frames; and according to a third method, a user specifies a pullback region on multiple frames. By applying machine or deep learning as discussed herein, catheter connection or disconnection detection success rates and coregistration success rates may be improved or maximized. The success rate of catheter connection or disconnection detection (and consequently the success rate of coregistration) may depend on how good the prediction of a catheter location, connection, or disconnection is across all frames. As such, by improving estimation of the catheter location, the success rate of the catheter connection or disconnection detection may be improved, and likewise the success rate of coregistration may be improved.
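By way of a non-limiting, illustrative example only, the following sketch implements the example success-rate calculation described above. The NumPy usage and the names are assumptions made for illustration only.

    # Illustrative sketch only; assumes NumPy is available.
    import numpy as np

    def detection_success_rate(predicted_locations: np.ndarray,
                               true_locations: np.ndarray,
                               tolerance: float) -> float:
        """Fraction of pullback frames for which the predicted catheter
        connection or disconnection location lies within `tolerance` of the
        true location (the example calculation described above).

        Both arrays have shape (num_frames, 2), holding (x, y) per frame.
        """
        distances = np.linalg.norm(predicted_locations - true_locations, axis=1)
        return float(np.mean(distances <= tolerance))

For example, a returned rate of 0.95 would indicate that the predicted location fell within the tolerance on 95% of the frames obtained during the OCT pullback.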
[0135] For the segmentation model (also referred to as a classification model or a semantic segmentation model) architecture, one or more certain area(s) of an image are predicted to belong to one or more classes in one or more embodiments. There are many different segmentation model architectures or ways to formulate or frame the image segmentation task or issue, such as, but not limited to, embodiment(s) as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties. For example, one or more embodiments of a semantic segmentation model may be performed using the One-Hundred Layers Tiramisu method discussed in “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jegou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017 (https://arxiv.org/pdf/1611.09326.pdf), which is incorporated by reference herein in its entirety. Convolutional Neural Networks (CNNs) may be used for one or more features of the present invention, including, but not limited to, artificial intelligence feature(s), detecting one or more catheter connections or disconnections, using the catheter connection or disconnection detection results to perform coregistration, image classification, semantic image segmentation, etc. For example, while other architectures may be employed, one or more embodiments may combine U-net, ResNet, and DenseNet architectural components to perform segmentation. U-net is a popular convolutional neural network architecture for image segmentation, ResNet improves training of deep convolutional neural network models due to its skip connections, and DenseNet has reliable and good feature extractors because of its compact internal representations and reduced feature redundancy. In one or more embodiments, a network may be trained by slicing the training data set, and not downsampling the data (in other words, image resolution may be preserved or maintained). By applying the One-Hundred Layers Tiramisu method(s) as aforementioned, one or more features, such as, but not limited to, convolution, concatenation, transition up, transition down, dense block, etc., may be employed by slicing the training data set. While not limited to only or by only these embodiment examples, in one or more embodiments, a slicing size may be one or more of the following: 100 x 100, 224 x 224, and 512 x 512. A batch size (of images in a batch) that is larger typically performs better (e.g., with greater accuracy). In one or more embodiments, 16 images/batch may be used. The optimization of all of these hyper-parameters depends on the size of the available data set as well as the available computer/computing resources; thus, once more data is available, different hyper-parameter values may be chosen. In one or more embodiments, a convolutional autoencoder (CAE) may be used.
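By way of a non-limiting, illustrative example only, the following sketch shows a greatly simplified segmentation network combining the U-net skip-connection and concatenation ideas mentioned above. It is not the One-Hundred Layers Tiramisu network itself; the layer counts, channel sizes, and class count are assumptions made for illustration only.

    # Illustrative sketch only; assumes PyTorch and single-channel input images
    # whose height and width are divisible by 2.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        )

    class TinyUNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.down1 = conv_block(1, 16)
            self.down2 = conv_block(16, 32)
            self.pool = nn.MaxPool2d(2)              # transition down
            self.up = nn.Upsample(scale_factor=2)    # transition up
            self.up1 = conv_block(32 + 16, 16)       # skip-connection concatenation
            self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class scores

        def forward(self, x):
            d1 = self.down1(x)
            d2 = self.down2(self.pool(d1))
            u1 = self.up1(torch.cat([self.up(d2), d1], dim=1))
            return self.head(u1)

The pooling and upsampling steps correspond to the transition-down and transition-up operations mentioned above, and the torch.cat call is the concatenation; the output preserves the input resolution, consistent with training by slicing rather than downsampling.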
[0136] In addition to detection of the catheter connection or disconnection, a segmentation model may be used to demarcate regions of interest in an image representing a blood vessel. Since we know that the catheter may be located inside a vessel (intravascular OCT imaging probe) in one or more embodiments, demarcation of vessels may be used to improve the accuracy and precision of catheter detection.
[0137] In one or more embodiments, the segmentation model with post-processing may be used with one or more features from “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jegou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017 (https://arxiv.org/pdf/1611.09326.pdf), which is incorporated by reference herein in its entirety.
[0138] For the object detection model (also referred to as the regression model or keypoint detection model as aforementioned) architecture, one or more embodiments may use an angio image or images as an input and may predict the catheter location or catheter connection or disconnection in the form of a spatial coordinate. This approach/architecture has advantages over semantic segmentation because the object detection model predicts the catheter location or catheter connection or disconnection directly, and may avoid post-processing in one or more embodiments. The object detection model architecture may be created or built by using or combining convolutional layers, maxpooling layers, fully-connected dense layers, and/or multi-scale image or feature pyramids. Different combinations may be used to determine the best performance test result. The performance test result(s) may be compared with other model architecture test results to determine which architecture to use for a given application or applications.
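By way of a non-limiting, illustrative example only, the following sketch shows a regression/keypoint model built from the convolutional, maxpooling, and fully-connected dense layers described above; the layer sizes and the model name are assumptions made for illustration only.

    # Illustrative sketch only; assumes PyTorch and single-channel input images.
    import torch
    import torch.nn as nn

    class CatheterKeypointNet(nn.Module):
        """Takes an angio image and regresses the catheter location directly
        as a spatial (x, y) coordinate, avoiding segmentation post-processing."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(8),
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, 2),  # predicted (x, y) coordinate
            )

        def forward(self, x):
            return self.regressor(self.features(x))

Training such a model with a distance-based loss (e.g., mean squared error between the predicted and true coordinates) yields a detector that outputs the location directly.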
[0139] One or more embodiments of architecture model(s) discussed herein may be used with one or more of: a neural network, a convolutional neural network, and/or a random forest.
[0140] One or more embodiments may use convolutional neural network architectures with residual connections as discussed in “Deep Residual Learning for Image Recognition” by Kaiming He, et al., Microsoft Research, December 10, 2015 (https://arxiv.org/pdf/1512.03385.pdf), which is incorporated by reference herein in its entirety.
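By way of a non-limiting, illustrative example only, the following sketch shows a residual (identity-shortcut) block of the kind introduced in the reference above; the channel sizes and layer choices are assumptions made for illustration only and are not the specific network used in the experiments.

    # Illustrative sketch only; assumes PyTorch.
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )
            self.relu = nn.ReLU()

        def forward(self, x):
            # The identity shortcut (x + ...) is the residual connection that
            # eases the training of deep networks.
            return self.relu(x + self.body(x))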
[0141] In one or more embodiments, a different neural network architecture may be used. For example, one or more embodiment examples of a neural network architecture may use feature pyramids as described in “Feature Pyramid Networks for Object Detection” by Tsung-Yi Lin, et al., Facebook Al Research (FAIR), April 19, 2017 (https://arxiv.org/abs/1612.03144). Again, the machine learning algorithm or model architecture is not limited to the structures or details discussed herein.
[0142] One or more embodiments may use a recurrent convolutional neural network object detection model with long short-term memory (see e.g., “long short-term memory” as discussed in “Long Short-Term Memory” by Hochreiter, et al., Neural Computation, Volume 9, Issue 8, November 1997 (https://dl.acm.org/doi/10.1162/neco.1997.9.8.1735); as discussed in “Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network” by Alex Sherstinsky, Elsevier Journal “Physica D: Nonlinear Phenomena”, Volume 404, March 2020 (https://arxiv.org/abs/1808.03314); as discussed in “Sequence to Sequence Learning with Neural Networks”, by Sutskever, et al., December 2014 (https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf); etc.) that enables consideration of spatial and temporal information for predicting marker locations. Since a radiopaque marker moves in a certain direction during the pullback, utilizing that information may improve the success rate of marker detection and/or catheter connection or disconnection detection. In this case, the model input is a sequence of multiple frames, and the model output is a sequence of spatial coordinates for marker and/or catheter locations in each of the given images.
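By way of a non-limiting, illustrative example only, the following sketch shows one way to combine a per-frame convolutional encoder with a long short-term memory layer so that the input is a sequence of frames and the output is a sequence of per-frame (x, y) coordinates, as described above. The PyTorch usage, layer sizes, and names are assumptions made for illustration only.

    # Illustrative sketch only; assumes PyTorch and input of shape
    # (batch, time, 1, H, W) with H and W divisible by 2.
    import torch
    import torch.nn as nn

    class SequenceKeypointNet(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(  # per-frame spatial features
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
            # LSTM carries temporal context (marker motion during pullback)
            self.lstm = nn.LSTM(16 * 4 * 4, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)

        def forward(self, frames):
            b, t = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out)  # (batch, time, 2) coordinates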
[0143] One or more embodiments may use a neural network model that is created by transfer learning. Transfer learning is a method of using a model with pre-trained (instead of randomly initialized) parameters that have been optimized for the same or a different objective (e.g., to solve a different image recognition or computer vision issue) on a different data set with a potentially different underlying data distribution. The model architecture may be adapted or used to solve new objective(s) or issue(s), for example, by adding, removing, or replacing one or more layers of the neural network, and the potentially modified model is then further trained (fine-tuned) on the new data set. Under the assumption that lower-level features, such as edge detector(s), are transferrable from one objective or issue domain to another, this learning approach may help improve the performance of the model, especially when the size of the available data set is small. In this specific application, by using a pre-trained model with residual learning, the success rate improves by about 30%.
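By way of a non-limiting, illustrative example only, the following sketch shows the transfer learning steps described above using a pre-trained residual network. The torchvision usage and the choice of ResNet-18 are assumptions made for illustration only (the weights enumeration shown requires a recent torchvision version).

    # Illustrative sketch only; assumes PyTorch and torchvision are available.
    import torch.nn as nn
    from torchvision import models

    # Load a model with pre-trained (instead of randomly initialized) parameters.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Optionally freeze the transferrable lower-level feature extractors.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classification head for the new objective
    # (e.g., two classes: connected vs. disconnected).
    model.fc = nn.Linear(model.fc.in_features, 2)
    # The modified model is then further trained (fine-tuned) on the new,
    # potentially small, data set.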
[0144] In one or more embodiments, evaluation metric(s) may be used for model evaluation such as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties.
[0145] Several non-limiting examples of differences between using a segmentation model and an object detection model are discussed herein. As discussed above, an object detection model may not have enough resolution for accurate prediction of the marker location. That said, in one or more embodiments, a sufficiently optimized object detection model may achieve better or maximized performance. On the other hand, while a segmentation model may provide better resolution than at least one embodiment of an object detection model, as aforementioned, at least one embodiment of a segmentation model may use post-processing to obtain a coordinate of predicted marker location (which may lead to a lower marker detection success rate in one or more embodiments).
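By way of a non-limiting, illustrative example only, the following sketch shows one form the post-processing mentioned above may take: reducing a segmentation model's per-pixel output to a single coordinate. The NumPy usage, the centroid choice, and the threshold value are assumptions made for illustration only.

    # Illustrative sketch only; assumes NumPy and a 2D per-pixel probability map.
    import numpy as np

    def marker_coordinate(prob_map: np.ndarray, threshold: float = 0.5):
        """Reduce a per-pixel marker/catheter probability map to one (x, y)
        prediction by taking the centroid of the above-threshold region;
        returns None when nothing exceeds the threshold (one way this
        post-processing step can lower the detection success rate)."""
        ys, xs = np.nonzero(prob_map >= threshold)
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())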
[0146] As discussed further herein, there are multiple options that may be used to improve or address the above differences between segmentation and object detection models. By way of a couple of non-limiting, non-exhaustive examples: (i) a combination model may be employed, which, for example, involves running a semantic segmentation model and then applying an object detection model to an area with higher probability from the segmentation model (one or more features of such combined approaches may be used in one or more embodiments of the present disclosure, including, but not limited to, those as discussed in “Mask R-CNN” to Kaiming He, et al., Facebook AI Research (FAIR), January 24, 2018 (https://arxiv.org/pdf/1703.06870.pdf), which is incorporated by reference herein in its entirety); and/or (ii) running an object detection model with a bigger normalized range, applying the object detection model, and then applying the object detection model again with a higher probability area from the first object detection model.
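By way of a non-limiting, illustrative example only, the following sketch outlines option (i) above: a segmentation model proposes a high-probability region, and an object detection model refines the coordinate within that region. Both model callables, the crop size, and the function names are assumptions made for illustration only.

    # Illustrative sketch only; assumes NumPy and two trained model callables.
    import numpy as np

    def combined_detection(image, seg_model, det_model, crop=64):
        """seg_model: image -> per-pixel probability map (same H x W as image).
        det_model: image crop -> (x, y) offset within the crop."""
        prob_map = seg_model(image)
        cy, cx = np.unravel_index(np.argmax(prob_map), prob_map.shape)
        half = crop // 2
        y0, x0 = max(cy - half, 0), max(cx - half, 0)
        region = image[y0:y0 + crop, x0:x0 + crop]
        dx, dy = det_model(region)       # refined location inside the crop
        return x0 + dx, y0 + dy          # map back to full-image coordinates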
[0147] After making improvements to one or more architecture models as discussed herein, specific advantages may include, but are not limited to, one or more of the following: higher resolution leading to a more accurate prediction result; lower computational memory and/or processing may be utilized (less resource(s) used, faster processing achieved, etc.); and no user interaction is needed (while one or more embodiments may involve user interaction).
[0148] Visualization, PCI procedure planning, and physiological assessment may be combined to perform complete PCI planning beforehand, and to perform complete assessment after the procedure. Once a 3D structure is constructed or reconstructed and a user specifies an interventional device, e.g., a stent, that is planned to be used, virtual PCI may be performed in a computer simulation (e.g., by one or more of the computers discussed herein, such as, but not limited to, the computer 2, the processor or computer 1200, the processor or computer 1200’, any other processor discussed herein, etc.). Then, another physiological assessment may be performed based on the result of the virtual PCI. This approach allows a user to find the best device (e.g., interventional device, implant, stent, etc.) for each patient before or during the procedure.
[0149] While a few examples of GUIs have been discussed herein and shown in one or more of the figures of the present disclosure, other GUI features, imaging modality features, or other imaging features may be used in one or more embodiments of the present disclosure, such as the GUI feature(s), imaging feature(s), and/or imaging modality feature(s) disclosed in U.S. Pat. App. No. 16/401,390, filed May 2, 2019, and disclosed in U.S. Pat. Pub. No. 2019/0029624 and WO 2019/023375, which application(s) and publication(s) are incorporated by reference herein in their entireties.
[0150] One or more methods or algorithms for calculating stent expansion/underexpansion or apposition/malapposition may be used in one or more embodiments of the present disclosure, including, but not limited to, the expansion/underexpansion and apposition/malapposition methods or algorithms discussed in U.S. Pat. Pub. Nos. 2019/0102906 and 2019/0099080, which publications are incorporated by reference herein in their entireties.
[0151] One or more methods or algorithms for calculating or evaluating cardiac motion using an angiography image and/or for displaying anatomical imaging may be used in one or more embodiments of the present disclosure, including, but not limited to, the methods or algorithms discussed in U.S. Pat. Pub. No. 2019/0029623, U.S. Pat. Pub. No. 2018/0271614, and WO 2019/023382, which publications are incorporated by reference herein in their entireties.
[0152] One or more methods or algorithms for performing co-registration and/or imaging may be used in one or more embodiments of the present disclosure, including, but not limited to, the methods or algorithms discussed in U.S. Pat. App. No. 62/798,885, filed on January 30, 2019, and discussed in U.S. Pat. Pub. No. 2019/0029624, which application(s) and publication(s) are incorporated by reference herein in their entireties.
[0153] Such information and other features discussed herein may be applied to other applications, such as, but not limited to, co-registration, other modalities, etc. Indeed, the useful applications of the features of the present disclosure and of the aforementioned applications and patent publications are not limited to the discussed modalities, images, or medical procedures. Additionally, depending on the involved modalities, images, or medical procedures, one or more control bars may be contoured, curved, or have any other configuration desired or set by a user. For example, in an embodiment using a touch screen as discussed herein, a user may define or create the size and shape of a control bar based on a user moving a pointer, a finger, a stylus, another tool, etc. on the touch screen (or alternatively by moving a mouse or other input tool or device regardless of whether a touch screen is used or not).
[0154] One or more embodiments of the present disclosure may include taking multiple views (e.g., OCT image, ring view, tomo view, anatomical view, etc.), and one or more embodiments may highlight or emphasize NIRAF. In one or more embodiments, two handles may operate as endpoints that may bound the color extremes of the NIRAF data in one or more embodiments. In addition to the standard tomographic view, the user may select to display multiple longitudinal views. When connected to an angiography system, the Graphical User Interface (GUI) may also display angiography images.
[0155] In accordance with one or more aspects of the present disclosure, the aforementioned features are not limited to being displayed or controlled using any particular GUI. In general, the aforementioned imaging modalities may be used in various ways, including with or without one or more features of aforementioned embodiments of a GUI or GUIs. For example, a GUI may show an OCT image with a tool or marker to change the image view as aforementioned even if not presented with a GUI (or with one or more other components of a GUI; in one or more embodiments, the display may be simplified for a user to display set or desired information).
[0156] The procedure to select the region of interest and the position of a marker, an angle, a plane, etc. (for example, using a touch screen, a GUI (or one or more components of a GUI; in one or more embodiments, the display may be simplified for a user to display the set or desired information), or a processor (e.g., processor or computer 2, 1200, 1200’, or any other processor discussed herein)) may involve, in one or more embodiments, a single press with a finger and dragging on the area to make the selection or modification. The new orientation and updates to the view may be calculated upon release of the finger or pointer.
[0157] For one or more embodiments using a touch screen, two simultaneous touch points may be used to make a selection or modification, and may update the view based on calculations upon release.
[0158] One or more functions may be controlled with one of the imaging modalities, such as the angiography image view or the OCT image view, to centralize user attention, maintain focus, and allow the user to see all relevant information in a single moment in time.
[0159] In one or more embodiments, one imaging modality may be displayed or multiple imaging modalities may be displayed.
[0160] One or more procedures may be used in one or more embodiments to select a region of choice or a region of interest for a view. For example, after a single touch is made on a selected area (e.g., by using a touch screen, by using a mouse or other input device to make a selection, etc.), the semi-circle (or other geometric shape used for the designated area) may automatically adjust to the selected region of choice or interest. Two (2) single touch points may operate to connect/draw the region of choice or interest.
[0161] FIG. 9A shows an OCT system 100 (also referred to herein as “system 100” or “the system 100”) which may be used for one or more imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared autofluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) co-registration, catheter connection/disconnection detection, etc.) in accordance with one or more aspects of the present disclosure. The system 100 comprises a light source 101, a reference arm 102, a sample arm 103, a deflected or deflecting section 108, a reference mirror (also referred to as a “reference reflection”, “reference reflector”, “partially reflecting mirror” and a “partial reflector”) 105, and one or more detectors 107 (which may be connected to a computer 1200). In one or more embodiments, the system 100 may include a patient interface device or unit (“PIU”) 110 and a catheter 120 (see e.g., embodiment examples of a PIU and a catheter as shown in FIGS. 1A-1B, FIG. 7 and/or FIGS. 9A-9C), and the system 100 may interact with an object 106, a patient (e.g., a blood vessel of a patient) 106, a sample, etc. (e.g., via the catheter 120 and/or the PIU 110). In one or more embodiments, the system 100 includes an interferometer, or an interferometer is defined by one or more components of the system 100, such as, but not limited to, at least the light source 101, the reference arm 102, the sample arm 103, the deflecting section 108 and the reference mirror 105.
[0162] In accordance with one or more further aspects of the present disclosure, bench top systems may be utilized for one or more imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared autofluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, OCT-NIRF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) co-registration, catheter connection/disconnection detection, marker detection, etc.) in accordance with one or more aspects of the present disclosure. FIG. 9B shows an example of a system that can utilize the one or more imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared auto-fluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, OCT-NIRF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) coregistration, catheter connection/disconnection detection, marker detection, etc.) in accordance with one or more aspects of the present disclosure discussed herein for a bench-top arrangement, such as for ophthalmic applications. Light from a light source 101 is delivered and split into a reference arm 102 and a sample arm 103 with a deflecting section 108. A reference beam goes through a length adjustment section 904 and is reflected from a reference mirror (such as or similar to the reference mirror or reference reflection 105 shown in FIG. 9A) in the reference arm 102 while a sample beam is reflected or scattered from an object, a patient (e.g., blood vessel of a patient), etc. 106 in the sample arm 103 (e.g., via the PIU 110 and the catheter 120). In one embodiment, both beams combine at the deflecting section 108 and generate interference patterns. In one or more embodiments, the beams go to the combiner 903, and the combiner 903 combines both beams via the circulator 901 and the deflecting section 108, and the combined beams are delivered to one or more detectors (such as the one or more detectors 107). The output of the interferometer is continuously acquired with one or more detectors, such as the one or more detectors 107. The electrical analog signals are converted to digital signals to analyze them with a computer, such as, but not limited to, the computer 1200 (see FIGS. 9A-9C; also shown in FIGS. 11 and 13 discussed further below), the computer 1200’ (see e.g., FIGS. 12 and 13 discussed further below), the computer 2 (see FIG. 1A), the processors 26, 36, 50 (see FIG. 1B), any other computer or processor discussed herein, etc. Additionally or alternatively, one or more of the computers, CPUs, processors, etc. discussed herein may be used to process, control, update, emphasize, and/or change one or more of the imaging modalities, and/or process the related techniques, functions or methods, or may process the electrical signals as discussed above.
[0163] The electrical analog signals may be converted to digital signals to analyze them with a computer, such as, but not limited to, the computer 1200 (see FIGS. 1B and 9A-9C; also shown in FIGS. 11 and 13 discussed further below), the computer 1200’ (see e.g., FIGS. 12-13 discussed further below), the computer 2 (see FIG. 1A), any other processor or computer discussed herein, etc. Additionally or alternatively, one or more of the computers, CPUs, processors, etc. discussed herein may be used to process, control, update, emphasize, and/or change one or more imaging modalities, and/or process the related techniques, functions or methods, or may process the electrical signals as discussed above. In one or more embodiments (see e.g., FIG. 9B), the sample arm 103 includes the PIU 110 and the catheter 120 so that the sample beam is reflected or scattered from the object, patient (e.g., blood vessel of a patient), etc. 106 as discussed herein. In one or more embodiments, the PIU 110 may include one or more motors to control the pullback operation of the catheter 120 (or one or more components thereof) and/or to control the rotation or spin of the catheter 120 (or one or more components thereof) (see e.g., the motor M of FIG. 1B). For example, as best seen in FIG. 9B, the PIU 110 may include a pullback motor (PM) and a spin motor (SM), and/or may include a motion control unit 112 that operates to perform the pullback and/or rotation features using the pullback motor PM and/or the spin motor SM. As discussed herein, the PIU 110 may include a rotary junction (e.g., rotary junction RJ as shown in FIGS. 9B and 9C). The rotary junction RJ may be connected to the spin motor SM so that the catheter 120 may obtain one or more views or images of the object, patient (e.g., blood vessel of a patient), etc. 106. The computer 1200 (or the computer 1200’, computer 2, any other computer or processor discussed herein, etc.) may be used to control one or more of the pullback motor PM, the spin motor SM and/or the motion control unit 112. An OCT system may include one or more of a computer (e.g., the computer 1200, the computer 1200’, computer 2, any other computer or processor discussed herein, etc.), the PIU 110, the catheter 120, a monitor (such as the display 1209), etc. One or more embodiments of an OCT system may interact with one or more external systems, such as, but not limited to, an angio system, external displays, one or more hospital networks, external storage media, a power supply, a bedside controller (e.g., which may be connected to the OCT system using Bluetooth technology or other methods known for wireless communication), etc.
[0164] In one or more embodiments including the deflecting or deflected section 108 (best seen in FIGS. 9A-9C), the deflected section 108 may operate to deflect the light from the light source 101 to the reference arm 102 and/or the sample arm 103, and then send light received from the reference arm 102 and/or the sample arm 103 towards the at least one detector 107 (e.g., a spectrometer, one or more components of the spectrometer, another type of detector, etc.). In one or more embodiments, the deflected section (e.g., the deflected section 108 of the system 100, 100’, 100”, any other system discussed herein, etc.) may include or may comprise one or more interferometers or optical interference systems that operate as described herein, including, but not limited to, a circulator, a beam splitter, an isolator, a coupler (e.g., fusion fiber coupler), a partially severed mirror with holes therein, a partially severed mirror with a tap, etc. In one or more embodiments, the interferometer or the optical interference system may include one or more components of the system 100 (or any other system discussed herein) such as, but not limited to, one or more of the light source 101, the deflected section 108, the rotary junction RJ, a PIU 110, a catheter 120, etc. One or more features of the aforementioned configurations of at least FIGS. 1-9B (and/or any other configurations discussed below) may be incorporated into one or more of the systems, including, but not limited to, the system 100, 100’, 100”, etc. discussed herein.
[0165] In accordance with one or more further aspects of the present disclosure, one or more other systems may be utilized with one or more of the multiple imaging modalities and related method(s) as disclosed herein. FIG. 9C shows an example of a system 100” that may utilize the one or more multiple imaging modalities, such as, but not limited to, angiography, Optical Coherence Tomography (OCT), Multi-modality OCT (MM-OCT), near-infrared auto-fluorescence (NIRAF), near-infrared fluorescence (NIRF), OCT-NIRAF, OCT-NIRF, etc., and/or for employing one or more additional features discussed herein, including, but not limited to, artificial intelligence processes (e.g., machine or deep learning, residual learning, artificial intelligence (“AI”) co-registration, catheter connection/disconnection detection, marker detection, etc.) and/or related technique(s) or method(s), such as for ophthalmic applications, in accordance with one or more aspects of the present disclosure. FIG. 9C shows an exemplary schematic of an OCT-fluorescence imaging system 100”, according to one or more embodiments of the present disclosure. Light from an OCT light source 101 (e.g., with a 1.3 μm wavelength) is delivered and split into a reference arm 102 and a sample arm 103 with a deflector or deflected section (e.g., a splitter) 108, creating a reference beam and sample beam, respectively. The reference beam from the OCT light source 101 is reflected by a reference mirror 105 while a sample beam is reflected or scattered from an object (e.g., an object to be examined, an object, a target, a patient, etc.) 106 through a circulator 901, a rotary junction 90 (“RJ”) and a catheter 120. In one or more embodiments, the fiber between the circulator 901 and the reference mirror or reference reflection 105 may be coiled to adjust the length of the reference arm 102 (best seen in FIG. 9C). Optical fibers in the sample arm 103 may be made of double clad fiber (“DCF”). Excitation light for the fluorescence may be directed to the RJ 90 and the catheter 120, and illuminate the object (e.g., an object to be examined, an object, a patient, etc.) 106. The light from the OCT light source 101 may be delivered through the core of the DCF while the fluorescence light emitted from the object (e.g., an object to be examined, an object, a target, a patient, etc.) 106 may be collected through the cladding of the DCF. For pullback imaging, the RJ 90 may be moved with a linear stage to achieve helical scanning of the object (e.g., an object to be examined, an object, a target, a patient, etc.) 106. In one or more embodiments, the RJ 90 may include any one or more features of an RJ as discussed herein. Dichroic filters DF1, DF2 may be used to separate excitation light and the rest of the fluorescence and OCT lights. For example (and while not limited to this example), in one or more embodiments, DF1 may be a long pass dichroic filter with a cutoff wavelength of ~1000 nm, and the OCT light, which may be longer than the cutoff wavelength of DF1, may go through DF1 while fluorescence excitation and emission, which have shorter wavelengths than the cutoff, reflect at DF1. In one or more embodiments, for example (and while not limited to this example), DF2 may be a short pass dichroic filter; the excitation wavelength may be shorter than the fluorescence emission light such that the excitation light, which has a wavelength shorter than a cutoff wavelength of DF2, may pass through DF2, and the fluorescence emission light reflects at DF2.
In one embodiment, both beams combine at the deflecting section 108 and generate interference patterns. In one or more embodiments, the beams go to the coupler or combiner 903, and the coupler or combiner 903 combines both beams via the circulator 901 and the deflecting section 108, and the combined beams are delivered to one or more detectors (such as the one or more detectors 107; see e.g., the first detector 107 connected to the coupler or combiner 903 in FIG. 9C).
[0166] In one or more embodiments, the optical fiber in the catheter 120 operates to rotate inside the catheter 120, and the OCT light and excitation light may be emitted from a side angle of a tip of the catheter 120. After interacting with the object or patient 106, the OCT light may be delivered back to an OCT interferometer (e.g., via the circulator 901 of the sample arm 103), which may include the coupler or combiner 903, and combined with the reference beam (e.g., via the coupler or combiner 903) to generate interference patterns. The output of the interferometer is detected with a first detector 107, wherein the first detector 107 may include one or more photodiodes or multi-array cameras, and then may be recorded to a computer (e.g., the computer 2, the computer 1200 as shown in FIG. 9C, the computer 1200’, or any other computer discussed herein) through a first data-acquisition unit or board (“DAQ1”).
[0167] Simultaneously or at a different time, the fluorescence intensity may be recorded through a second detector 107 (e.g., a photomultiplier) through a second data-acquisition unit or board (“DAQ2”). The OCT signal and fluorescence signal may then be processed by the computer (e.g., the computer 2, the computer 1200 as shown in FIG. 9C, the computer 1200’, or any other computer discussed herein) to generate an OCT-fluorescence data set 140, which includes or is made of multiple frames of helically scanned data. Each set of frames includes or is made of multiple data elements of co-registered OCT and fluorescence data, which correspond to the rotational angle and pullback position.
[0168] Detected fluorescence or auto-fluorescence signals may be processed or further processed as discussed in U.S. Pat. App. No. 62/861,888, filed on June 14, 2019, the disclosure of which is incorporated herein by reference in its entirety, and/or as discussed in U.S. Pat. App. No. 16/368,510, filed March 28, 2019, the disclosure of which is incorporated herein by reference herein in its entirety.
[0169] While not limited to such arrangements, configurations, devices or systems, one or more embodiments of the devices, apparatuses, systems, methods, storage mediums, GUI’s, etc. discussed herein may be used with an apparatus or system as aforementioned, such as, but not limited to, for example, the system 100, the system 100’, the system 100”, the devices, apparatuses, or systems of FIGS. 1A-1B and 9A-17, any other device, apparatus or system discussed herein, etc. In one or more embodiments, one user may perform the method(s) discussed herein. In one or more embodiments, one or more users may perform the method(s) discussed herein. In one or more embodiments, one or more of the computers, CPUs, processors, etc. discussed herein may be used to process, control, update, emphasize, and/or change one or more of the imaging modalities, and/or process the related techniques, functions or methods, or may process the electrical signals as discussed above.
[0170] The light source 101 may include a plurality of light sources or may be a single light source. The light source 101 may be a broadband light source, and may include one or more of a laser, an organic light emitting diode (OLED), a light emitting diode (LED), a halogen lamp, an incandescent lamp, a supercontinuum light source pumped by a laser, and/or a fluorescent lamp. The light source 101 may be any light source that provides light which may then be dispersed to provide light which is then used for imaging, performing control, viewing, changing, emphasizing methods for imaging modalities, constructing or reconstructing 3D structure(s), and/or any other method discussed herein. The light source 101 may be fiber coupled or may be free space coupled to the other components of the apparatus and/or system 100, 100’, 100”, the devices, apparatuses or systems of FIGS. 1A-1B and 12A-17, or any other embodiment discussed herein. As aforementioned, the light source 101 may be a swept-source (SS) light source.
[0171] Additionally or alternatively, the one or more detectors 107 may be a linear array, a charge-coupled device (CCD), a plurality of photodiodes or some other method of converting the light into an electrical signal. The detector(s) 107 may include an analog to digital converter (ADC). The one or more detectors may be detectors having structure as shown in one or more of FIGS. 1A-1B and 12A-17 and as discussed herein.
[0172] In accordance with one or more aspects of the present disclosure, one or more methods for performing imaging are provided herein. FIG. 10 illustrates a flow chart of at least one embodiment of a method for performing imaging. The method(s) may include one or more of the following: (i) splitting or dividing light into a first light and a second reference light (see step S4000 in FIG. 10); (ii) receiving reflected or scattered light of the first light after the first light travels along a sample arm and irradiates an object (see step S4001 in FIG. 10); (iii) receiving the second reference light after the second reference light travels along a reference arm and reflects off of a reference reflection (see step S4002 in FIG. 10); and (iv) generating interference light by causing the reflected or scattered light of the first light and the reflected second reference light to interfere with each other (for example, by combining or recombining and then interfering, by interfering, etc.), the interference light generating one or more interference patterns (see step S4003 in FIG. 10). One or more methods may further include using low frequency monitors to update or control high frequency content to improve image quality. For example, one or more embodiments may use multiple imaging modalities, related methods or techniques for same, etc. to achieve improved image quality. In one or more embodiments, an imaging probe may be connected to one or more systems (e.g., the system 100, the system 100’, the system 100”, the devices, apparatuses or systems of FIGS. 1A-1B and 12A-17, any other system or apparatus discussed herein, etc.) with a connection member or interface module. For example, when the connection member or interface module is a rotary junction for an imaging probe, the rotary junction may be at least one of: a contact rotary junction, a lenseless rotary junction, a lens-based rotary junction, or other rotary junction known to those skilled in the art. The rotary junction may be a one channel rotary junction or a two channel rotary junction. In one or more embodiments, the illumination portion of the imaging probe may be separate from the detection portion of the imaging probe. For example, in one or more applications, a probe may refer to the illumination assembly, which includes an illumination fiber (e.g., single mode fiber, a GRIN lens, a spacer and the grating on the polished surface of the spacer, etc.). In one or more embodiments, a scope may refer to the illumination portion which, for example, may be enclosed and protected by a drive cable, a sheath, and detection fibers (e.g., multimode fibers (MMFs)) around the sheath. Grating coverage is optional on the detection fibers (e.g., MMFs) for one or more applications. The illumination portion may be connected to a rotary joint and may be rotating continuously at video rate. In one or more embodiments, the detection portion may include one or more of: a detection fiber, a detector (e.g., the one or more detectors 107, a spectrometer, etc.), the computer 1200, the computer 1200’, the computer 2, any other computer or processor discussed herein, etc. The detection fibers may surround the illumination fiber, and the detection fibers may or may not be covered by a grating, a spacer, a lens, an end of a probe or catheter, etc.
[0173] The one or more detectors 107 may transmit the digital or analog signals to a processor or a computer such as, but not limited to, an image processor, a processor or computer 1200, 1200’ (see e.g., FIGS. 9A-9C and 11-13), a computer 2 (see e.g., FIG. 1A), any other processor or computer discussed herein, a combination thereof, etc. The image processor may be a dedicated image processor or a general purpose processor that is configured to process images. In at least one embodiment, the computer 1200, 1200’, 2 or any other processor or computer discussed herein may be used in place of, or in addition to, the image processor. In an alternative embodiment, the image processor may include an ADC and receive analog signals from the one or more detectors 107. The image processor may include one or more of a CPU, DSP, FPGA, ASIC, or some other processing circuitry. The image processor may include memory for storing image, data, and instructions. The image processor may generate one or more images based on the information provided by the one or more detectors 107. A computer or processor discussed herein, such as, but not limited to, a processor of the devices, apparatuses or systems of FIGS. 1-9C, the computer 1200, the computer 1200’, the computer 2, the image processor, may also include one or more components further discussed herein below (see e.g., FIGS. 11-13).
[0174] In at least one embodiment, a console or computer 1200, 1200’, a computer 2, any other computer or processor discussed herein, etc. operates to control motions of the RJ via the motion control unit (MCU) 112 or a motor M, acquires intensity data from the detector(s) in the one or more detectors 107, and displays the scanned image (e.g., on a monitor or screen such as a display, screen or monitor 1209 as shown in the console or computer 1200 of any of FIGS. 9A-9C and FIGS. 11 and 13 and/or the console 1200’ of FIGS. 12-13 as further discussed below; the computer 2 of FIG. 1A; any other computer or processor discussed herein; etc.). In one or more embodiments, the MCU 112 or the motor M operates to change a speed of a motor of the RJ and/or of the RJ. The motor may be a stepping or a DC servo motor to control the speed and increase position accuracy (e.g., compared to when not using a motor, compared to when not using an automated or controlled speed and/or position change device, compared to a manual control, etc.).
[0175] The output of the one or more components of any of the systems discussed herein may be acquired with the at least one detector 107, e.g., such as, but not limited to, photodiodes, photomultiplier tube(s) (PMTs), line scan camera(s), or multi-array camera(s). Electrical analog signals obtained from the output of the system 100, 100’, 100”, and/or the detector(s) 107 thereof, and/or from the devices, apparatuses, or systems of FIGS. 1-9C and/or 11-17, are converted to digital signals to be analyzed with a computer, such as, but not limited to, the computer 1200, 1200’. In one or more embodiments, the light source 101 may be a radiation source or a broadband light source that radiates in a broad band of wavelengths. In one or more embodiments, a Fourier analyzer including software and electronics may be used to convert the electrical analog signals into an optical spectrum.
[0176] Unless otherwise discussed herein, like numerals indicate like elements. For example, while variations or differences exist between the systems, such as, but not limited to, the system 100, the system 100’, the system 100”, or any other device, apparatus or system discussed herein, one or more features thereof may be the same or similar to each other, such as, but not limited to, the light source 101 or other component(s) thereof (e.g., the console 1200, the console 1200’, etc.). Those skilled in the art will appreciate that the light source 101, the motor or MCU 112, the RJ, the at least one detector 107, and/or one or more other elements of the system 100 may operate in the same or similar fashion to those like-numbered elements of one or more other systems, such as, but not limited to, the devices, apparatuses or systems of FIGS. 1-9C and/or 11-17, the system 100’, the system 100”, or any other system discussed herein. Those skilled in the art will appreciate that alternative embodiments of the devices, apparatuses or systems of FIGS. 1-9C and/or 11-17, the system 100’, the system 100”, any other device, apparatus or system discussed herein, etc., and/or one or more like-numbered elements of one of such systems, while having other variations as discussed herein, may operate in the same or similar fashion to the like-numbered elements of any of the other systems (or components thereof) discussed herein. Indeed, while certain differences exist between the system 100 of FIG. 9A and one or more embodiments shown in any of FIGS. 1-8, 9B-9C, and 11-17, for example, as discussed herein, there are similarities. Likewise, while the console or computer 1200 may be used in one or more systems (e.g., the system 100, the system 100’, the system 100”, the devices, apparatuses or systems of any of FIGS. 1-17, or any other system discussed herein, etc.), one or more other consoles or computers, such as the console or computer 1200’, any other computer or processor discussed herein, etc., may be used additionally or alternatively.
[0177] There are many ways, digital as well as analog, to compute intensity, viscosity, or resolution (including increasing resolution of one or more images), to use one or more imaging modalities, to construct or reconstruct 3D structure(s), and/or to detect catheter connection or disconnection and/or perform related methods for same, as discussed herein. In at least one embodiment, a computer, such as the console or computer 1200, 1200’, may be dedicated to control and monitor the imaging (e.g., OCT, single mode OCT, multimodal OCT, multiple imaging modalities, etc.) devices, systems, methods and/or storage mediums described herein.
[0178] The electric signals used for imaging may be sent to one or more processors, such as, but not limited to, a computer or processor 2 (see e.g., FIG. 1A), a computer 1200 (see e.g., FIGS. 9A-9B, 11, and 13), a computer 1200’ (see e.g., FIGS. 12 and 13), etc. as discussed further below, via cable(s) or wire(s), such as, but not limited to, the cable(s) or wire(s) 113 (see FIG. 11). Additionally or alternatively, the electric signals, as aforementioned, may be processed in one or more embodiments as discussed above by any other computer or processor or components thereof. The computer or processor 2 as shown in FIG. 1A may be used instead of any other computer or processor discussed herein (e.g., computer or processors 1200, 1200’, etc.), and/or the computer or processor 1200, 1200’ may be used instead of any other computer or processor discussed herein (e.g., computer or processor 2). In other words, the computers or processors discussed herein are interchangeable, and may operate to perform any of the multiple imaging modalities feature(s) and method(s) discussed herein, including using, controlling, and changing a GUI or multiple GUI’s.
[0179] Various components of a computer system 1200 are provided in FIG. 11. A computer system 1200 may include a central processing unit (“CPU”) 1201, a ROM 1202, a RAM 1203, a communication interface 1205, a hard disk (and/or other storage device) 1204, a screen (or monitor interface) 1209, a keyboard (or input interface; may also include a mouse or other input device in addition to the keyboard) 1210 and a BUS (or “Bus”) or other connection lines (e.g., connection line 1213) between one or more of the aforementioned components (e.g., including but not limited to, being connected to the console, the probe, the imaging apparatus or system, any motor discussed herein, a light source, etc.). In addition, the computer system 1200 may comprise one or more of the aforementioned components. For example, a computer system 1200 may include a CPU 1201, a RAM 1203, an input/output (I/O) interface (such as the communication interface 1205) and a bus (which may include one or more lines 1213 as a communication system between components of the computer system 1200; in one or more embodiments, the computer system 1200 and at least the CPU 1201 thereof may communicate with the one or more aforementioned components of a device or system, such as, but not limited to, an apparatus or system using one or more imaging modalities and related method(s) as discussed herein), and one or more other computer systems 1200 may include one or more combinations of the other aforementioned components (e.g., the one or more lines 1213 of the computer 1200 may connect to other components via line 113). The CPU 1201 is configured to read and perform computer-executable instructions stored in a storage medium. The computer-executable instructions may include those for the performance of the methods and/or calculations described herein. The system 1200 may include one or more additional processors in addition to CPU 1201, and such processors, including the CPU 1201, may be used for tissue or object characterization, diagnosis, evaluation, imaging and/or construction or reconstruction. The system 1200 may further include one or more processors connected via a network connection (e.g., via network 1206). The CPU 1201 and any additional processor being used by the system 1200 may be located in the same telecom network or in different telecom networks (e.g., performing feature(s), function(s), technique(s), method(s), etc. discussed herein may be controlled remotely).
[0180] The I/O or communication interface 1205 provides communication interfaces to input and output devices, which may include a light source, a spectrometer, a microphone, a communication cable and a network (either wired or wireless), a keyboard 1210, a mouse (see e.g., the mouse 1211 as shown in FIG. 12), a touch screen or screen 1209, a light pen and so on. The communication interface of the computer 1200 may connect to other components discussed herein via line 113 (as diagrammatically shown in FIG. 11). The Monitor interface or screen 1209 provides communication interfaces thereto.
[0181] Any methods and/or data of the present disclosure, such as the methods for performing tissue or object characterization, diagnosis, examination, imaging (including, but not limited to, increasing image resolution, performing imaging using one or more imaging modalities, viewing or changing one or more imaging modalities and related methods (and/or option(s) or feature(s)), etc.), and/or catheter connection and/or disconnection detection, for example, as discussed herein, may be stored on a computer-readable storage medium. A computer-readable and/or writable storage medium used commonly, such as, but not limited to, one or more of a hard disk (e.g., the hard disk 1204, a magnetic disk, etc.), a flash memory, a CD, an optical disc (e.g., a compact disc (“CD”), a digital versatile disc (“DVD”), a Blu-ray™ disc, etc.), a magneto-optical disk, a random-access memory (“RAM”) (such as the RAM 1203), a DRAM, a read only memory (“ROM”), a storage of distributed computing systems, a memory card, or the like (e.g., other semiconductor memory, such as, but not limited to, a non-volatile memory card, a solid state drive (SSD) (see SSD 1207 in FIG. 12), SRAM, etc.), an optional combination thereof, a server/database, etc. may be used to cause a processor, such as the processor or CPU 1201 of the aforementioned computer system 1200, to perform the steps of the methods disclosed herein. The computer-readable storage medium may be a non-transitory computer-readable medium, and/or the computer-readable medium may comprise all computer-readable media, with the sole exception being a transitory, propagating signal in one or more embodiments. The computer-readable storage medium may include media that store information for predetermined, limited, or short period(s) of time and/or only in the presence of power, such as, but not limited to, Random Access Memory (RAM), register memory, processor cache(s), etc. Embodiment(s) of the present disclosure may also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a “non-transitory computer-readable storage medium”) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
[0182] In accordance with at least one aspect of the present disclosure, the methods, systems, and computer-readable storage mediums related to the processors, such as, but not limited to, the processor of the aforementioned computer 1200, etc., as described above may be achieved utilizing suitable hardware, such as that illustrated in the figures. Functionality of one or more aspects of the present disclosure may be achieved utilizing suitable hardware, such as that illustrated in FIG. 11. Such hardware may be implemented utilizing any of the known technologies, such as standard digital circuitry, any of the known processors that are operable to execute software and/or firmware programs, one or more programmable digital devices or systems, such as programmable read only memories (PROMs), programmable array logic devices (PALs), etc. The CPU 1201 (as shown in FIG. 11), the processor or computer 2 (as shown in FIG. 1A) and/or the computer or processor 1200’ (as shown in FIG. 12) may also include and/or be made of one or more microprocessors, nanoprocessors, one or more graphics processing units (“GPUs”; also called a visual processing unit (“VPU”)), one or more Field Programmable Gate Arrays (“FPGAs”), or other types of processing components (e.g., application specific integrated circuit(s) (ASIC)). Still further, the various aspects of the present disclosure may be implemented by way of software and/or firmware program(s) that may be stored on suitable storage medium (e.g., computer-readable storage medium, hard drive, etc.) or media (such as floppy disk(s), memory chip(s), etc.) for transportability and/or distribution. The computer may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The computers or processors (e.g., 2, 1200, 1200’, etc.) may include the aforementioned CPU structure, or may be connected to such CPU structure for communication therewith.
[0183] As aforementioned, hardware structure of an alternative embodiment of a computer or console 1200’ is shown in FIG. 12 (see also, FIG. 13). The computer 1200’ includes a central processing unit (CPU) 1201, a graphical processing unit (GPU) 1215, a random access memory (RAM) 1203, a network interface device 1212, an operation interface 1214 such as a universal serial bus (USB) and a memory such as a hard disk drive or a solid state drive (SSD) 1207. The computer or console 1200’ may include a display 1209. The computer 1200’ may connect with a motor, a console, or any other component of the device(s) or system(s) discussed herein via the operation interface 1214 or the network interface 1212 (e.g., via a cable or fiber, such as the cable or fiber 113 as similarly shown in FIG. 11). A computer, such as the computer 1200’, may include a motor or motion control unit (MCU) in one or more embodiments. The operation interface 1214 is connected with an operation unit such as a mouse device 1211, a keyboard 1210 or a touch panel device. The computer 1200’ may include two or more of each component.
[0184] At least one computer program is stored in the SSD 1207, and the CPU 1201 loads the at least one program onto the RAM 1203 and executes the instructions in the at least one program to perform one or more processes described herein, as well as the basic input, output, calculation, memory writing, and memory reading processes.

[0185] The computer, such as the computer 2, the computer 1200, 1200’, (or other component(s) such as, but not limited to, the PCU, etc.), etc. may communicate with an MCU, an interferometer, a spectrometer, a detector, etc. to perform imaging, and may reconstruct an image from the acquired intensity data. The monitor or display 1209 displays the reconstructed image, and may display other information about the imaging condition or about an object to be imaged. The monitor 1209 also provides a graphical user interface for a user to operate any system discussed herein. An operation signal is input from the operation unit (e.g., such as, but not limited to, a mouse device 1211, a keyboard 1210, a touch panel device, etc.) into the operation interface 1214 in the computer 1200’, and corresponding to the operation signal the computer 1200’ instructs any system discussed herein to set or change the imaging condition (e.g., improving resolution of an image or images), and to start or end the imaging. A light or laser source and a spectrometer and/or detector may have interfaces to communicate with the computers 1200, 1200’ to send and receive the status information and the control signals.
[0186] As shown in FIG. 13, one or more processors or computers 1200, 1200’ (or any other processor discussed herein) may be part of a system in which the one or more processors or computers 1200, 1200’ (or any other processor discussed herein) communicate with other devices (e.g., a database 1603, a memory 1602 (which may be used with or replaced by any other type of memory discussed herein or known to those skilled in the art), an input device 1600, an output device 1601, etc.). In one or more embodiments, one or more models may have been trained previously and stored in one or more locations, such as, but not limited to, the memory 1602, the database 1603, etc. In one or more embodiments, it is possible that one or more models and/or data discussed herein (e.g., training data, testing data, validation data, imaging data, etc.) may be input or loaded via a device, such as the input device 1600. In one or more embodiments, a user may employ an input device 1600 (which may be a separate computer or processor, a keyboard such as the keyboard 1210, a mouse such as the mouse 1211, a microphone, a screen or display 1209 (e.g., a touch screen or display), or any other input device known to those skilled in the art). In one or more system embodiments, an input device 1600 may not be used (e.g., where user interaction is eliminated by one or more artificial intelligence features discussed herein). In one or more system embodiments, the output device 1601 may receive one or more outputs discussed herein to perform the marker detection, the coregistration, and/or any other process discussed herein. In one or more system embodiments, the database 1603 and/or the memory 1602 may have outputted information (e.g., trained model(s), detected marker information, image data, test data, validation data, training data, coregistration result(s), segmentation model information, object detection/regression model information, combination model information, etc.) stored therein. That said, one or more embodiments may include several types of data stores, memory, storage media, etc. as discussed above, and such storage media, memory, data stores, etc. may be stored locally or remotely.

[0187] Additionally, unless otherwise specified, the term “subset” of a corresponding set does not necessarily represent a proper subset and may be equal to the corresponding set.
[0188] While one or more embodiments of the present disclosure include various details regarding a neural network model architecture and optimization approach, in one or more embodiments, any other model architecture, machine learning algorithm, or optimization approach may be employed. One or more embodiments may utilize hyper-parameter combination(s). One or more embodiments may employ data capture, selection, and annotation, as well as model evaluation (e.g., computation of loss and validation metrics), since data may be domain and application specific. In one or more embodiments, the model architecture may be modified and optimized to address a variety of computer vision issues (discussed below).
[0189] One or more embodiments of the present disclosure may automatically detect (predict a spatial location of) a radiodense OCT marker in a time series of X-ray images to co-register the X-ray images with the corresponding OCT images (at least one example of a reference point of two different coordinate systems). One or more embodiments may use deep (recurrent) convolutional neural network(s), which may improve marker detection, catheter connection and/or disconnection detection, and image co-registration significantly. One or more embodiments may employ segmentation and/or object/keypoint detection architectures to solve one or more computer vision issues in other domain areas in one or more applications. One or more embodiments employ several novel materials and methods to solve one or more computer vision or other issues (e.g., radiodense OCT marker detection in time series of X-ray images, for instance; catheter connection or disconnection detection; etc.).
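By way of illustration only, the following Python sketch (not part of the published application; all names, layer sizes, and dimensions are illustrative assumptions) shows one way a per-frame convolutional encoder may be combined with a recurrent layer so that temporal relationships across frames of a time series are taken into account:

```python
import torch
import torch.nn as nn

class RecurrentMarkerDetector(nn.Module):
    """Per-frame CNN encoder followed by an LSTM across frames, so each
    frame's (x, y) prediction can use temporal context from the series."""

    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 64-dim feature per frame
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)             # (x, y) per frame

    def forward(self, frames):                        # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                     # temporal context across frames
        return self.head(out)                         # (B, T, 2)

# e.g., two series of 8 frames, 256x256 each -> per-frame coordinate predictions
coords = RecurrentMarkerDetector()(torch.randn(2, 8, 1, 256, 256))
```

In such a design, the recurrent layer lets the prediction for each frame be informed by neighboring frames, which may stabilize detection during pullback.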
[0190] One or more embodiments employ data capture and selection. In one or more embodiments, the data is what makes such an application unique and distinguishes this application from other applications. For example, images may include a radiodense marker that is specifically used in one or more procedures (e.g., added to the OCT capsule, used in catheters/probes with a similar marker to that of an OCT marker, used in catheters/probes with a similar or same marker even in a case where the catheters/probes use an imaging modality different from OCT, etc.) to facilitate computational detection of a marker and/or catheter connection or disconnection in one or more images (e.g., X-ray images). One or more embodiments couple a software device or features (model) to hardware (e.g., an OCT probe, a probe/catheter using an imaging modality different from OCT while using a marker that is the same as or similar to the marker of an OCT probe/catheter, etc.). One or more embodiments may utilize animal data in addition to patient data. Training deep learning may use a large amount of data, which may be difficult to obtain from clinical studies. Inclusion of image data from pre-clinical studies in animals into a training set may improve model performance. Training and evaluation of a model may be highly data dependent (e.g., a way in which frames are selected (e.g., pullback only), split into training/validation/test sets, and grouped into batches, as well as the order in which the frames, sets, and/or batches are presented to the model, any other data discussed herein, etc.). In one or more embodiments, such parameters may be more important or significant than some of the model hyper-parameters (e.g., batch size, number of convolution layers, any other hyper-parameter discussed herein, etc.). One or more embodiments may use a collection or collections of user annotations after introduction of a device/apparatus, system, and/or method(s) into a market, and may use post-market surveillance, retraining of a model or models with new data collected (e.g., in clinical use), and/or a continuously adaptive algorithm/method(s).
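As one hedged illustration of the data-selection point above, the sketch below groups frames by their source acquisition before splitting, so that frames from a single pullback cannot leak between the training, validation, and test sets; the field name pullback_id and the split fractions are assumptions for illustration only:

```python
import random
from collections import defaultdict

def split_by_pullback(frames, train_frac=0.7, val_frac=0.15, seed=42):
    """Split frames into train/validation/test sets grouped by their
    source pullback, so one acquisition never spans multiple sets."""
    groups = defaultdict(list)
    for frame in frames:
        groups[frame["pullback_id"]].append(frame)   # assumed field name

    pullback_ids = sorted(groups)
    random.Random(seed).shuffle(pullback_ids)

    n_train = int(len(pullback_ids) * train_frac)
    n_val = int(len(pullback_ids) * val_frac)

    def gather(ids):
        return [f for pid in ids for f in groups[pid]]

    return (gather(pullback_ids[:n_train]),
            gather(pullback_ids[n_train:n_train + n_val]),
            gather(pullback_ids[n_train + n_val:]))
```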
[0191] One or more embodiments employ data annotation. For example, one or more embodiments may label pixel(s) representing a marker or a catheter connection or disconnection, as well as pixels representing blood vessel(s) at different phase(s) of a procedure/method (e.g., different levels of contrast due to intravascular contrast agent), for frame(s) acquired during pullback.
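A minimal sketch of such per-pixel annotation, assuming point annotations for a marker (or connection region) and an optional vessel polyline; the label scheme (0/1/2) and radius are illustrative assumptions:

```python
import numpy as np

def annotations_to_mask(shape, marker_xy, vessel_polyline=None, radius=3):
    """Rasterize annotations into a per-pixel label mask:
    0 = background, 1 = marker/connection region, 2 = vessel."""
    mask = np.zeros(shape, dtype=np.uint8)
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    mx, my = marker_xy
    mask[(yy - my) ** 2 + (xx - mx) ** 2 <= radius ** 2] = 1
    if vessel_polyline is not None:
        for px, py in vessel_polyline:
            if mask[int(py), int(px)] == 0:          # marker label takes priority
                mask[int(py), int(px)] = 2
    return mask

# e.g., a 512x512 frame with a marker annotated at (100, 200)
mask = annotations_to_mask((512, 512), (100, 200))
```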
[0192] One or more embodiments employ incorporation of prior knowledge. For example, in one or more embodiments, a marker location may be known inside a vessel and/or inside a catheter or probe. As such, simultaneous localization of the vessel and marker may be used to improve marker detection and/or catheter connection or disconnection detection. In one or more embodiments, a marker may move during a pullback inside a vessel, and such prior knowledge may be incorporated into the machine learning algorithm or the loss function.
[0193] One or more embodiments employ loss (cost) and evaluation function(s)/metric(s). For example, temporal information may be used for model training and evaluation in one or more embodiments. One or more embodiments may evaluate a distance between prediction and ground truth per frame as well as consider a trajectory of predictions across multiple frames of a time series.
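For illustration, the sketch below evaluates the per-frame distance between prediction and ground truth and adds a trajectory term that penalizes implausibly large frame-to-frame jumps, reflecting the pullback-motion prior of paragraph [0192]; the max_step and lam values are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def per_frame_distance(pred_xy, true_xy):
    """Euclidean distance between predicted and ground-truth coordinates
    for each frame of a time series (both arrays of shape (N, 2))."""
    return np.linalg.norm(np.asarray(pred_xy) - np.asarray(true_xy), axis=1)

def trajectory_penalty(pred_xy, max_step=20.0):
    """Penalize implausibly large frame-to-frame jumps, reflecting the
    prior that the marker moves smoothly during pullback."""
    steps = np.linalg.norm(np.diff(np.asarray(pred_xy), axis=0), axis=1)
    if len(steps) == 0:
        return 0.0
    return float(np.clip(steps - max_step, 0.0, None).mean())

def evaluate(pred_xy, true_xy, lam=0.1):
    # combined metric: mean per-frame error plus weighted trajectory penalty
    return per_frame_distance(pred_xy, true_xy).mean() + lam * trajectory_penalty(pred_xy)
```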
[0194] Application of machine learning
[0195] Application of machine learning may be used in one or more embodiment(s), as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties. For example, at least one embodiment of an overall process of machine learning is shown below:
i. Create a dataset that contains both images and corresponding ground truth labels;
ii. Split the dataset into a training set and a testing set;
iii. Select a model architecture and other hyper-parameters;
iv. Train the model with the training set;
v. Evaluate the trained model with the validation set; and
vi. Repeat iv and v with new dataset(s).
[0196] Based on the testing results, steps i and iii may be revisited in one or more embodiments.
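A runnable toy stand-in for steps i-vi, using scikit-learn on synthetic data purely for illustration (the actual embodiments would use image frames and the neural network models discussed herein):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# i. dataset: flattened "images" with connected(1)/disconnected(0) labels
X = np.random.rand(200, 64)
y = np.random.randint(0, 2, 200)

# ii. split into training, validation, and test sets
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_model, best_score = None, 0.0
for c in (0.1, 1.0, 10.0):                   # iii. candidate hyper-parameters
    model = LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)   # iv. train
    score = model.score(X_val, y_val)                                # v. evaluate
    if score > best_score:
        best_model, best_score = model, score
# vi. repeat with new dataset(s); finally estimate generalization on the test set
print("validation:", best_score, "test:", best_model.score(X_te, y_te))
```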
[0197] One or more models may be used in one or more embodiment(s) to detect a catheter connection or disconnection, such as, but not limited to, the one or more models as discussed in PCT/US2020/051615, filed on September 18, 2020 and published as WO 2021/055837 A9 on March 25, 2021, and as discussed in U.S. Pat. App. No. 17/761,561, filed on March 17, 2022, the applications and publications of which are incorporated by reference herein in their entireties. For example, one or more embodiments may use a segmentation model, a regression model, a combination thereof, etc.
[0198] For regression model(s), the input may be the entire image frame or frames, and the output may be the centroid coordinates of radiopaque markers (target marker and stationary marker, if necessary/desired) and/or coordinates of a portion of a catheter or probe to be used in determining the connection or disconnection status. As shown diagrammatically in FIGS. 14-16, an example of an input image on the left side of FIGS. 14-16 and a corresponding output image on the right side of FIGS. 14-16 are illustrated for regression model(s). At least one architecture of a regression model is shown in FIG. 14. In at least the embodiment of FIG. 14, the regression model may use a combination of one or more convolution layers 900, one or more max-pooling layers 901, and one or more fully connected dense layers 902. The regression model is not limited to the Kernel size, Width/Number of filters (output size), and Stride sizes shown for each layer (e.g., in the left convolution layer of FIG. 14, the Kernel size is “3x3”, the Width/# of filters (output size) is “64”, and the Stride size is “2”). In one or more embodiments, another hyper-parameter search with a fixed optimizer and with a different width may be performed, and at least one embodiment example of a model architecture for a convolutional neural network for this scenario is shown in FIG. 15. One or more embodiments may use one or more features for a regression model as discussed in “Deep Residual Learning for Image Recognition” to Kaiming He, et al., Microsoft Research, December 10, 2015 (https://arxiv.org/pdf/1512.03385.pdf), which is incorporated by reference herein in its entirety. FIG. 16 shows at least a further embodiment example of a created architecture of or for a regression model(s).
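A minimal PyTorch sketch loosely following the FIG. 14 description (3x3 convolutions with stride 2 and 64 filters, max-pooling, and fully connected dense layers regressing centroid coordinates); the exact depth, widths, and head sizes here are illustrative assumptions, not the architecture of FIG. 14 itself:

```python
import torch
import torch.nn as nn

class MarkerRegressionNet(nn.Module):
    """Toy regression model: stacked 3x3 convolutions (stride 2),
    max-pooling, then dense layers outputting (x, y) coordinates of the
    marker / catheter-connection landmark."""

    def __init__(self, in_channels=1, num_coords=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_coords),   # regressed (x, y) per image
        )

    def forward(self, x):
        return self.head(self.features(x))

# e.g., a batch of four 512x512 grayscale frames -> four (x, y) predictions
coords = MarkerRegressionNet()(torch.randn(4, 1, 512, 512))
```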
[0199] Since the output from a segmentation model, in one or more embodiments, is a “probability” of each pixel that may be categorized as a catheter connection or disconnection, post-processing after prediction via the trained segmentation model may be developed to better define, determine, or locate the final coordinate of the catheter location (or a marker location where the marker is a part of the catheter) and/or determine the connection or disconnection status of the catheter. One or more embodiments of a semantic segmentation model may be performed using the One-Hundred Layers Tiramisu method discussed in “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jegou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017 (https://arxiv.org/pdf/1611.09326.pdf), which is incorporated by reference herein in its entirety. A segmentation model may be used in one or more embodiments, for example, as shown in FIG. 17. At least one embodiment may utilize an input 600 as shown to obtain an output 605 of at least one embodiment of a segmentation model method. For example, by applying the One-Hundred Layers Tiramisu method(s), one or more features, such as, but not limited to, convolution 601, concatenation 603, transition up 605, transition down 604, dense block 602, etc., may be employed by slicing the training data set. While not limited to only or by only these embodiment examples, in one or more embodiments, a slicing size may be one or more of the following: 100 x 100, 224 x 224, 512 x 512, and, in one or more of the experiments performed, a slicing size of 224 x 224 performed the best. A batch size (of images in a batch) may be one or more of the following: 2, 4, 8, 16, and, from the one or more experiments performed, a bigger batch size typically performs better (e.g., with greater accuracy). In one or more embodiments, 16 images/batch may be used. The optimization of all of these hyper-parameters depends on the size of the available data set as well as the available computer/computing resources; thus, once more data is available, different hyper-parameter values may be chosen. Additionally, in one or more embodiments, steps/epoch may be 100, and the epochs may be greater than (>) 1000. In one or more embodiments, a convolutional autoencoder (CAE) may be used.
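One possible form of such post-processing, sketched with SciPy purely for illustration: threshold the per-pixel probability map, keep the largest connected component, and report its centroid together with a detection flag; the threshold and minimum-area values are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def postprocess_probability_map(prob, threshold=0.5, min_area=10):
    """Turn a per-pixel 'connection/marker' probability map from a
    segmentation model into a final coordinate and a detection flag."""
    binary = prob >= threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return None, False                       # nothing detected
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    biggest = int(np.argmax(sizes)) + 1          # label of largest component
    if sizes[biggest - 1] < min_area:
        return None, False                       # reject tiny specks (noise)
    cy, cx = ndimage.center_of_mass(binary, labels, biggest)
    return (cx, cy), True
```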
[0200] In one or more embodiments, hyper-parameters may include, but are not limited to, one or more of the following: Depth (i.e., # of layers), Width (i.e., # of filters), Batch size (i.e., # of training images/step): may be >4 in one or more embodiments, Learning rate (i.e., a hyper-parameter that controls how fast the weights of a neural network (the coefficients of a regression model) are adjusted with respect to the loss gradient), Dropout (i.e., % of neurons (filters) that are dropped at each layer), and/or Optimizer: for example, Adam optimizer or Stochastic gradient descent (SGD) optimizer. In one or more embodiments, other hyper-parameters may be fixed or constant values, such as, but not limited to, for example, one or more of the following: Input size (e.g., 1024 pixel x 1024 pixel, 512 pixel x 512 pixel, another preset or predetermined number or value set, etc.), Epochs: 100, 200, 300, 400, 500, another preset or predetermined number, etc. (for additional training, iteration may be set as 3000 or higher), and/or Number of models trained with different hyper-parameter configurations (e.g., 10, 20, another preset or predetermined number, etc.).
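The hyper-parameters above may be organized, for example, as a search space for random search; the sketch below uses the values named in the text where given (e.g., batch size > 4, 10 models trained) and illustrative values elsewhere:

```python
import random

# Tunable hyper-parameters named in the text (specific values illustrative)
search_space = {
    "depth": [4, 8, 16],              # number of layers
    "width": [32, 64, 128],           # number of filters
    "batch_size": [8, 16, 32],        # > 4 per the text
    "learning_rate": [1e-4, 1e-3],
    "dropout": [0.0, 0.2, 0.5],       # fraction of filters dropped per layer
    "optimizer": ["adam", "sgd"],
}
fixed = {"input_size": (512, 512), "epochs": 100}   # fixed per the text

# Random search: sample a number of configurations to train and compare
random.seed(0)
configs = [{k: random.choice(v) for k, v in search_space.items()}
           for _ in range(10)]
```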
[0201] One or more features discussed herein may be determined using a convolutional autoencoder, Gaussian filters, Haralick features, and/or thickness or shape of the sample or object.
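By way of example only, Gaussian filtering and Haralick-style gray-level co-occurrence (GLCM) statistics may be computed as follows (assuming an 8-bit input image; the particular feature set is an illustrative assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops

def handcrafted_features(image_u8):
    """Gaussian-smoothed intensity statistics plus Haralick-style GLCM
    texture measures, as one possible handcrafted feature vector."""
    smooth = gaussian_filter(image_u8.astype(float), sigma=2.0)
    glcm = graycomatrix(image_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.array([
        smooth.mean(), smooth.std(),
        graycoprops(glcm, "contrast")[0, 0],
        graycoprops(glcm, "homogeneity")[0, 0],
        graycoprops(glcm, "energy")[0, 0],
    ])

# e.g., features for a random 8-bit test image
feats = handcrafted_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
```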
[0202] One or more embodiments of the present disclosure may use machine learning to determine marker location; to determine, detect, or evaluate catheter connection or disconnection; to perform coregistration; and/or to perform any other feature discussed herein. Machine learning is a field of computer science that gives processors the ability to learn, via artificial intelligence. Machine learning may involve one or more algorithms that allow processors or computers to learn from examples and to make predictions for new unseen data points. In one or more embodiments, such one or more algorithms may be stored as software or one or more programs in at least one memory or storage medium, and the software or one or more programs allow a processor or computer to carry out operation(s) of the processes described in the present disclosure.
[0203] Similarly, the present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with optical coherence tomography probes. Such probes include, but are not limited to, the OCT imaging systems disclosed in U.S. Pat. Nos. 6,763,261; 7,366,376; 7,843,572; 7,872,759; 8,289,522; 8,676,013; 8,928,889; 9,087,368; 9,557,154; 10,912,462; 9,795,301; and 9,332,942 to Tearney et al., and U.S. Pat. Pub. Nos. 2014/0276011 and 2017/0135584; and WO 2016/015052 to Tearney et al., and arrangements and methods of facilitating photoluminescence imaging, such as those disclosed in U.S. Pat. No. 7,889,348 to Tearney et al., as well as the disclosures directed to multimodality imaging disclosed in U.S. Pat. No. 9,332,942, and U.S. Patent Publication Nos. 2010/0092389, 2011/0292400, 2012/0101374, and 2016/0228097, and WO 2016/144878, each of which patents and patent publications are incorporated by reference herein in their entireties.
[0204] The present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with OCT imaging systems and/or catheters and catheter systems, such as, but not limited to, those disclosed in U.S. Pat. Nos. 9,869,828; 10,323,926; 10,558,001; 10,601,173; 10,606,064; 10,743,749; 10,884,199; 10,895,692; and 11,175,126, as well as U.S. Patent Publication Nos. 2019/0254506; 2020/0390323; 2021/0121132; 2021/0174125; 2022/0040454; and 2022/0044428, and WO 2021/055837, each of which patents and patent publications are incorporated by reference herein in their entireties.
[0205] Further, the present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with continuum robotic systems and catheters, such as, but not limited to, those described in U.S. Patent Publication Nos. 2019/0105468; 2021/0369085; 2020/0375682; 2021/0121162; 2021/0121051; and 2022/0040450, each of which patents and/or patent publications are incorporated by reference herein in their entireties.
[0206] Although the disclosure herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure (and are not limited thereto), and the invention is not limited to the disclosed embodiments. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present disclosure. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims

WHAT IS CLAIMED IS:
1. An artificial intelligence training apparatus comprising: a memory; one or more processors in communication with the memory, the one or more processors operating to: acquire or receive image data, the image data including image data for a catheter to be connected to or disconnected from an imaging device or system; train a model with a portion of the image data; use one or more images of the image data along with the trained model to determine whether the catheter has established a valid or complete connection to the imaging device or system or a valid or complete connection or disconnection from the imaging device or system; determine whether the performance of the trained model is sufficient; and in the event that the performance of the trained model is not sufficient, then repeat the procedure of model training and determining, or, in the event that the performance of the trained model is sufficient, select the trained model and save the trained model to the memory.
2. The artificial intelligence training apparatus of claim 1, wherein the one or more processors further operate to: split the acquired image data into training, validation, and test sets or groups; and train the model with data in the training set or group and evaluate the model with data in the validation set or group to evaluate whether the trained model is sufficient based on a performance of the model on the validation set or group.
3. The artificial intelligence training apparatus of claim 1, wherein one or more of the following:
(i) the saved, trained model is used as a created identifier or detector for identifying or detecting a catheter connection or disconnection in image data;
(ii) the model is one or a combination of the following: a neural net model or neural network model, a deep convolutional neural network model, a recurrent neural network model with long short-term memory that can take temporal relationships across images or frames into account, a model that can take temporal relationships across images or frames into account, a model that can take temporal relationships into account including catheter movement(s) or location(s) during pullback in a vessel and/or including catheter connection or disconnection data during pullback in a vessel, a model that can use prior knowledge about a procedure and incorporate the prior knowledge into the machine learning algorithm or a loss function, a model using feature pyramid(s) that can take different image resolutions into account, and/or a model using residual learning technique(s); a segmentation model, a segmentation model with post-processing, a model with pre-processing, a model with post-processing, a segmentation model with pre-processing, a deep learning or machine learning model, a semantic segmentation model or classification model, an object detection or regression model, an object detection or regression model with pre-processing or post-processing, a combination of a semantic segmentation model and an object detection or regression model, a model using repeated segmentation model technique(s), a model using feature pyramid(s), a genetic algorithm that operates to breed multiple models for improved performance, and/or a model using repeated object detection or regression model technique(s);
(iii) the one or more processors further operate to use one or more neural networks or convolutional neural networks to one or more of: train a model, determine whether the performance of the trained model is sufficient or not, and/or to detect the catheter connection or disconnection data, select a model, and estimate a generalization error of the model;
(iv) the one or more processors further operate to estimate a generalization error of the trained model with image data placed or included in a test set or group; and/or
(v) the one or more processors further operate to estimate a generalization error of multiple trained models with data in the test set or group, and to select one model based on a performance of the selected model on the validation set or group.
4. The artificial intelligence training apparatus of claim 1, wherein the one or more processors further operate to: detect a case where a catheter or a new catheter is connected to the imaging device or system; receive or obtain an image or images; use a neural network or other AI compatible or AI-ready network to evaluate the received or obtained image or images; determine whether the catheter is connected or disconnected or whether a connection or disconnection for the catheter is detected; and set a connection or disconnection status for the catheter based on the determination.
5. The artificial intelligence training apparatus of claim 4, wherein the one or more processors further operate to one or more of the following: set the connection or disconnection status as “Yes” in a case where one or more of the following: a connection between the catheter and the imaging device or system is detected, a partial connection between the catheter and the imaging device or system is detected, a complete connection between the catheter and the imaging device or system is detected; or set the connection or disconnection status as “No” in a case where no connection of any kind between the catheter and the imaging device or system is detected or where a disconnection between the catheter and the imaging device or system is detected; and/or save the connection or disconnection status in one or more memories and/or use the connection or disconnection status to evaluate the trained model and/or to detect one or more additional catheter connections or disconnections.
6. The artificial intelligence training apparatus of claim 4, wherein one or more of the following:
(i) use of the trained model results in one or more of the following: improved safety of the catheter and/or of the imaging device or system, improved imaging accuracy, and/or improved accuracy or success of the connection or disconnection status, wherein the improved safety, the improved imaging accuracy, and/or the improved accuracy or success is compared with a case where the trained model is not used;
(ii) use of the trained model in a case where the received or obtained image or images includes one image or image frame results in one or more of the following: improved safety of the catheter and/or of the imaging device or system, improved imaging accuracy, and/or improved accuracy or success of the connection or disconnection status, wherein the improved safety, the improved imaging accuracy, and/or the improved accuracy or success is compared with a case where the trained model is not used;
(iii) use of the trained model in a case where the received or obtained image or images includes a plurality of images or image frames results in one or more of the following: improved safety of the catheter and/or of the imaging device or system, improved imaging accuracy, and/or improved accuracy or success of the connection or disconnection status, wherein the improved safety, the improved imaging accuracy, and/or the improved accuracy or success is compared with a case where the trained model is not used;
(iv) re-use the trained model from a case where the received or obtained image or images includes one image or image frame for a case where the received or obtained image or images includes a plurality of images or image frames such that the re-use of the trained model results in one or more of the following: improved safety of the catheter and/or of the imaging device or system, improved imaging accuracy, and/or improved accuracy or success of the connection or disconnection status, wherein the improved safety, the improved imaging accuracy, and/or the improved accuracy or success is compared across a preset or predetermined number of images or frames with an improved safety of the catheter and/or of the imaging device or system, an improved imaging accuracy, and/or an improved accuracy or success of the connection or disconnection status for the case where the trained model is used for one image or image frame; and/or
(v) the received or obtained image or images comprises a plurality of received or obtained images or image frames such that one or more of the following is achieved: improved safety of the catheter and/or of the imaging device or system as compared with a case where the received or obtained image or images comprises a single image or image frame, improved imaging accuracy as compared with a case where the received or obtained image or images comprises a single image or image frame, and/or improved accuracy or success of the connection or disconnection status as compared with a case where the received or obtained image or images comprises a single image or image frame.
7. The artificial intelligence training apparatus of claim 4, wherein, in a case where the received or obtained image or images comprises or is to comprise a plurality of images or image frames, the one or more processors further operate to: evaluate or determine whether a set, predetermined, or minimum amount of images or frames have been collected to perform the evaluation of the plurality of images or frames; and in a case where the evaluation or determination of the set, predetermined, or minimum amount of images or frames is “NO”, obtain or receive one or more additional images or frames, or, in a case where the evaluation or determination of the set, predetermined, or minimum amount of images or frames is “YES”, proceed to use the neural network or other AI compatible or AI-ready network to evaluate the received or obtained images or image frames, determine whether the catheter is connected or disconnected or whether the connection or disconnection for the catheter is detected; and set the connection or disconnection status for the catheter based on the determination.
8. The artificial intelligence training apparatus of claim 4, wherein the one or more processors further operate to send a message or warning to a display, the message or warning indicating one or more of the following: the catheter has been connected, the catheter has been disconnected, the imaging device or system is in a start-up phase where a catheter is not attached yet, and/or the imaging device or system is in a start-up phase where a connection or disconnection of a catheter is not detected yet.
9. The artificial intelligence training apparatus of claim 4, wherein the one or more processors further operate to: detect whether the catheter has been disconnected from the imaging device or system; receive or obtain one or more additional images; evaluate the received or obtained one or more additional images using the neural network or other AI compatible or AI-ready network; determine whether the catheter is connected or disconnected or whether a connection or disconnection for the catheter is detected; and update the connection or disconnection status, or set an updated connection or disconnection status, for the catheter based on the determination.
10. The artificial intelligence training apparatus of claim 1, wherein the acquired or received image data includes one or more of the following: Optical Coherence Tomography (OCT) data, Multimodality Optical Coherence Tomography (MM-OCT) data, angiography data, and/or data for one or more imaging modalities, wherein the one or more imaging modalities include one or more of the following: tomography imaging; Optical Coherence Tomography (OCT) imaging; fluorescence imaging; near-infrared auto-fluorescence (NIRAF) imaging; near-infrared auto-fluorescence (NIRAF) imaging in a predetermined view, a carpet view, and/or an indicator view; near-infrared fluorescence (NIRF) imaging; near-infrared fluorescence (NIRF) imaging in a predetermined view, a carpet view, and/or an indicator view; three-dimensional (3D) rendering or imaging; 3D rendering or imaging of a vessel; 3D rendering or imaging of a vessel in a half-pipe view or display; 3D rendering or imaging of an object, target, or sample; lumen profile imaging; lumen diameter display imaging; longitudinal view imaging; computer tomography (CT) imaging; imaging for Magnetic Resonance Imaging (MRI); Intravascular Ultrasound (IVUS) imaging; X-ray imaging or view(s); angiography imaging or view(s); and/or medical imaging.
11. The artificial intelligence training apparatus of claim 1, wherein one or more of the following: the connection or disconnection status of the catheter is: calculated or determined without using another piece of equipment, calculated or determined automatically, and/or calculated or determined without using or involving a trained operator or operators of the imaging device or system; the one or more processors make the evaluation(s) or determination(s) during a normal course of operation(s) of the imaging device or system or in real time such that the one or more processors avoid separate or additional operation or operations; the one or more processors employ one image or image frame of the received or obtained image or images or employ a plurality of images or image frames of the received or obtained images to evaluate the connection or disconnection status of the catheter; and/or the one or more processors further operate to calculate a coregistration success rate and/or determine whether a location of the detected catheter or detected catheter connection or disconnection is correct based on the trained model.
12. The artificial intelligence training apparatus of claim 1, wherein one or more of the following:
(i) the one or more processors further operate to improve a performance of the trained model by subsequently adding more training data and retraining the trained model to create a new instance of the model with better or optimized performance; and/or
(ii) the one or more processors further operate to improve a performance of the trained model by subsequently adding more training data and retraining the trained model to create a new instance of the model with better or optimized performance, wherein the additional training data includes one or more of the following: data based on one or more inputs of a user of the imaging device or system, and/or data from a user of the imaging device or system that identifies or corrects a location of a catheter in an image or images.
13. The artificial intelligence training apparatus of claim 1, wherein one or more of the following:
(i) the trained model and/or sets of training data or images are obtained by collecting a series of images in the image data, the series of images including one or more images with a catheter connected and the series of images including one or more images without the catheter being connected;
(ii) the image data includes hundreds or thousands of images that are captured and labeled to establish ground truth data and to split the image data into training, validation, and test sets or groups;
(iii) after the trained model is obtained, test set(s) or group(s) of the image data are fed or inserted through a neural network or other AI compatible or AI-ready network to evaluate an accuracy of the trained model based on results of the test set(s) or group(s);
(iv) the received or obtained image data is used to establish ground truth data; and/or
(v) the one or more processors further operate to one or more of the following:
(a) calculate or improve a connection or disconnection detection success rate using application of machine learning or deep learning;
(b) decide on the model to be trained based on a connection or disconnection detection success rate associated with the model as compared with a connection or disconnection detection success rate associated with one or more other models;
(c) determine whether a connection or disconnection determination is correct based on the trained model;
(d) evaluate the connection or disconnection detection success rate; and/or
(e) use the received or obtained image data to establish ground truth data, and to split the ground truth data into sets or groups for training, validation, and testing.
14. The artificial intelligence training apparatus of claim 1, wherein one or more of the following: the one or more processors further operate to receive or obtain the image data during a pullback operation of the catheter; the catheter is an intravascular imaging catheter or an intravascular imaging device; the catheter operates to obtain, receive, or acquire image data of an object, target, or sample; and/or the catheter operates to obtain, receive, or acquire image data of an object, target, or sample, where the object, target, or sample includes one or more of the following: a vessel, a target specimen or object, and a patient.
15. An artificial intelligence detection apparatus comprising: one or more processors that operate to: acquire or receive image data; receive a trained model or load a trained model from a memory; apply the trained model to the acquired or received image data; select one image or frame of the acquired or received image data; detect a connection or disconnection of a catheter and an imaging device or system with the trained model, the detected connection or disconnection of the catheter defining detected results or a detected connection or disconnection status; check whether the detected connection or disconnection of the catheter is correct or accurate; in an event that the detected connection or disconnection of the catheter is not correct or accurate, then modify the detected results or the detected connection or disconnection status of the catheter, and repeat the check as to whether the detected connection or disconnection of the catheter is correct or accurate, or in an event that the connection or disconnection of the catheter is correct or accurate, then check whether all of the frames of the acquired or received image data have been checked for correctness or accuracy; and in an event that all of the frames have not been checked for correctness or accuracy, then select another frame and repeat the detection of a connection or disconnection of the catheter and the check of whether the connection or disconnection of the catheter is correct or accurate or not for the another frame.
16. The apparatus of claim 15, wherein the one or more processors further operate to use one or more neural networks or convolutional neural networks to one or more of: load the trained model, select the image or frame, detect the connection or disconnection of the catheter for each frame, determine whether the detected connection or disconnection of the catheter is accurate or correct, determine whether the catheter or a portion of the catheter is broken in a case where the connection or disconnection is accurate or not accurate, modify the detected results or the detected connection or disconnection status for each frame, display the checked results for the connection or disconnection of the catheter on a display, and/or acquire or receive the image data during a pullback operation of the catheter.
17. The apparatus of claim 15, wherein the loaded, trained model is one or a combination of the following: a neural net model or neural network model, a deep convolutional neural network model, a recurrent neural network model with long short-term memory that can take temporal relationships across images or frames into account, a model that can take temporal relationships across images or frames into account, a model that can take temporal relationships into account including catheter movement(s) or location(s) during pullback in a vessel and/or including catheter connection or disconnection data during pullback in a vessel, a model that can use prior knowledge about a procedure and incorporate the prior knowledge into the machine learning algorithm or a loss function, a model using feature pyramid(s) that can take different image resolutions into account, and/or a model using residual learning technique(s); a segmentation model, a segmentation model with post-processing, a model with pre-processing, a model with post-processing, a segmentation model with pre-processing, a deep learning or machine learning model, a semantic segmentation model or classification model, an object detection or regression model, an object detection or regression model with pre-processing or post-processing, a combination of a semantic segmentation model and an object detection or regression model, a model using repeated segmentation model technique(s), a model using feature pyramid(s), a genetic algorithm that operates to breed multiple models for improved performance, and/or a model using repeated object detection or regression model technique(s).
18. The apparatus of claim 15, wherein the acquired or received image data includes one or more of the following: Optical Coherence Tomography (OCT) data, Multi-modality Optical Coherence Tomography (MM-OCT) data, angiography data, and/or data for one or more imaging modalities, wherein the one or more imaging modalities include one or more of the following: tomography imaging; Optical Coherence Tomography (OCT) imaging; fluorescence imaging; near-infrared auto-fluorescence (NIRAF) imaging; near-infrared auto-fluorescence (NIRAF) imaging in a predetermined view, a carpet view, and/or an indicator view; near-infrared fluorescence (NIRF) imaging; near-infrared fluorescence (NIRF) imaging in a predetermined view, a carpet view, and/or an indicator view; three-dimensional (3D) rendering or imaging; 3D rendering or imaging of a vessel; 3D rendering or imaging of a vessel in a half-pipe view or display; 3D rendering or imaging of an object, target, or sample; lumen profile imaging; lumen diameter display imaging; longitudinal view imaging; computer tomography (CT) imaging; imaging for Magnetic Resonance Imaging (MRI); Intravascular Ultrasound (IVUS) imaging; X-ray imaging or view(s); angiography imaging or view(s); and/or medical imaging.
19. The apparatus of claim 15, wherein one or more of the following: the one or more processors further operate to receive or obtain the image data during a pullback operation of the catheter; the catheter is an intravascular imaging catheter or an intravascular imaging device; the catheter operates to obtain, receive, or acquire image data of an object, target, or sample; the catheter operates to obtain, receive, or acquire image data of an object, target, or sample, where the object, target, or sample includes one or more of the following: a vessel, a target specimen or object, and a patient; and/or in a case where the catheter includes a tool channel or a tool chamber and/or in a case where the tool channel or the tool chamber includes one or more markers that operate to indicate an end of the tool channel or the tool chamber, the one or more processors further operate to: detect an end of the tool channel or chamber, detect the one or more markers, detect one or more signals in the tool chamber or the tool channel, detect one or more signals located outside of the tool chamber or the tool channel, and/or identify where a field of view of the catheter or of one or more sensors or detectors of the catheter changes or expands.
20. A method for training a model using artificial intelligence, the method comprising: acquiring or receiving image data, the image data including image data for a catheter to be connected to or disconnected from an imaging device or system; training a model with a portion of the image data; using one or more images of the image data along with the trained model to determine whether the catheter has established a valid or complete connection to the imaging device or system or a valid or complete connection or disconnection from the imaging device or system; determining whether the performance of the trained model is sufficient; and in the event that the performance of the trained model is not sufficient, then repeating the procedure of model training and determining, or, in the event that the performance of the trained model is sufficient, selecting the trained model and saving the trained model to a memory.
21. The method of claim 20, further including or using the apparatus of one or more of claims 6-19 and/or including or using the apparatus of one or more of claims 20-24.
22. A non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for training a model using artificial intelligence, the method comprising: acquiring or receiving image data, the image data including image data for a catheter to be connected to or disconnected from an imaging device or system; training a model with a portion of the image data; using one or more images of the image data along with the trained model to determine whether the catheter has established a valid or complete connection to the imaging device or system or a valid or complete connection or disconnection from the imaging device or system; determining whether the performance of the trained model is sufficient; and in the event that the performance of the trained model is not sufficient, then repeating the procedure of model training and determining, or, in the event that the performance of the trained model is sufficient, selecting the trained model and saving the trained model to a memory.
23. A method for detecting a connection or disconnection of a catheter, the method comprising: acquiring or receiving image data; receiving a trained model or loading a trained model from a memory; applying the trained model to the acquired or received image data; selecting one image or frame of the acquired or received image data; detecting a connection or disconnection of a catheter with or to an imaging device or system with the trained model, the detected connection or disconnection of the catheter defining detected results or a detected connection or disconnection status; checking whether the detected connection or disconnection of the catheter is correct or accurate; in an event that the detected connection or disconnection of the catheter is not correct or accurate, then modifying the detected results or the detected connection or disconnection status of the catheter, and repeating the check as to whether the detected connection or disconnection of the catheter is correct or accurate, or in an event that the connection or disconnection of the catheter is correct or accurate, then checking whether all of the frames of the acquired or received image data have been checked for correctness or accuracy; and in an event that all of the frames have not been checked for correctness or accuracy, then selecting another frame and repeating the detection of a connection or disconnection of the catheter and the check of whether the connection or disconnection of the catheter is correct or accurate or not for the another frame.
24. The method of claim 23, further comprising: using one or more neural networks or convolutional neural networks to one or more of: load the trained model, select the image or frame, detect the connection or disconnection of the catheter for each frame, determine whether the detected connection or disconnection of the catheter is accurate or correct, determine whether the catheter or a portion of the catheter is broken in a case where the connection or disconnection is accurate or not accurate, modify the detected results or the detected connection or disconnection status for each frame, display the checked results for the connection or disconnection of the catheter on a display, and/or acquire or receive the image data during a pullback operation of the catheter.
25. The method of claim 23, further including or using one or more of the following:
(i) the apparatus of one or more of claims 1-14;
(ii) the apparatus of one or more of claims 15-19;
(iii) the method of one or more of claims 20-21; and/or
(iv) the storage medium of claim 22.
26. A non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for detecting a connection or disconnection of a catheter, the method comprising: acquiring or receiving image data; receiving a trained model or loading a trained model from a memory; applying the trained model to the acquired or received image data; selecting one image or frame of the acquired or received image data; detecting a connection or disconnection of a catheter with or to an imaging device or system with the trained model, the detected connection or disconnection of the catheter defining detected results or a detected connection or disconnection status; checking whether the detected connection or disconnection of the catheter is correct or accurate; in an event that the detected connection or disconnection of the catheter is not correct or accurate, then modifying the detected results or the detected connection or disconnection status of the catheter, and repeating the check as to whether the detected connection or disconnection of the catheter is correct or accurate, or in an event that the connection or disconnection of the catheter is correct or accurate, then checking whether all of the frames of the acquired or received image data have been checked for correctness or accuracy; and in an event that all of the frames have not been checked for correctness or accuracy, then selecting another frame and repeating the detection of a connection or disconnection of the catheter and the check of whether the connection or disconnection of the catheter is correct or accurate or not for the another frame.
PCT/US2023/021695 2022-05-12 2023-05-10 Artificial intelligence catheter optical connection or disconnection evaluation, including deep machine learning and using results thereof WO2023220150A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263341233P 2022-05-12 2022-05-12
US63/341,233 2022-05-12

Publications (1)

Publication Number Publication Date
WO2023220150A1 true WO2023220150A1 (en) 2023-11-16

Family

ID=88730889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/021695 WO2023220150A1 (en) 2022-05-12 2023-05-10 Artificial intelligence catheter optical connection or disconnection evaluation, including deep machine learning and using results thereof

Country Status (1)

Country Link
WO (1) WO2023220150A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119199204A (en) * 2024-11-26 2024-12-27 嘉兴翼波电子有限公司 A radio frequency coaxial probe assembly and a pulse characteristic measurement method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190029624A1 (en) * 2017-07-26 2019-01-31 Canon U.S.A., Inc. Method for co-registering and displaying multiple imaging modalities
US20190110753A1 (en) * 2017-10-13 2019-04-18 Ai Technologies Inc. Deep learning-based diagnosis and referral of ophthalmic diseases and disorders
WO2021055837A1 (en) * 2019-09-20 2021-03-25 Canon U.S.A., Inc. Artificial intelligence coregistration and marker detection, including machine learning and using results thereof
JP2021052999A (en) * 2019-09-27 2021-04-08 国立大学法人三重大学 Evaluation system, evaluation method, learning method, learned model and program
WO2021193007A1 (en) * 2020-03-27 2021-09-30 テルモ株式会社 Program, information processing method, information processing device, and model generating method

Similar Documents

Publication Publication Date Title
US12161426B2 (en) Artificial intelligence coregistration and marker detection, including machine learning and using results thereof
JP7626704B2 System for classifying arterial image regions and their features and method of operation thereof
US12109056B2 (en) Constructing or reconstructing 3D structure(s)
US12076177B2 (en) Apparatuses, systems, methods and storage mediums for performance of co-registration
JP2018519018A (en) Intravascular imaging system interface and stent detection method
WO2023220150A1 (en) Artificial intelligence catheter optical connection or disconnection evaluation, including deep machine learning and using results thereof
US11972561B2 (en) Auto-pullback triggering method for intracoronary imaging apparatuses or systems using blood clearing
US20230289956A1 (en) System and Method for identifying potential false positive and blind spot locations in catheter-based multimodal images
US20240108224A1 (en) Angiography image/video synchronization with pullback and angio delay measurement
US12076118B2 (en) Devices, systems, and methods for detecting external elastic lamina (EEL) from intravascular OCT images
US12112472B2 (en) Artifact removal from multimodality OCT images
WO2024071322A1 (en) Information processing method, learning model generation method, computer program, and information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23804194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE