US20220164951A1 - Systems and methods for using ai to identify regions of interest in medical images - Google Patents
- Publication number
- US20220164951A1 (application US 17/531,177)
- Authority
- US
- United States
- Prior art keywords
- environment
- algorithm
- data
- model
- roi
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
-
- G06K9/6256—
-
- G06K9/6262—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/211—Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30068—Mammography; Breast
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- Deep learning enables systems to automatically discover the information required to perform feature detection or classification using raw data. Deep learning requires access to large amounts of accurately labeled data. Typically, the data labeling is primarily a manual process, which can be prohibitively costly in terms of time and human/financial resources. Moreover, data privacy concerns are often a consideration.
- medical report data and/or corresponding medical images may be provided to a first service or application in a first environment.
- the first service/application may use the medical report data/medical images to train a natural language processing (NLP)-based algorithm to identify within the medical images the location of findings described in the medical report data.
- the output of the NLP-based algorithm may be stored in an ROI repository in the first environment.
- a request to train a user-specific model or an algorithm may be received by a second service or application in a second environment.
- one or more data objects for the requested user-specific model/algorithm may be provided to the first service/application in the first environment.
- the first service/application may use data in the ROI repository to populate the data objects and train the user-specific model/algorithm.
- the trained user-specific model/algorithm may then be provided to the second service/application in the second environment, where the trained user-specific model/algorithm may be tested, stored, and/or provided to the user.
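The exchange summarized in the bullets above can be sketched as a minimal Python flow. All names here (`RoiRecord`, `TrainingRequest`, `first_environment_train`) are illustrative assumptions, not identifiers from the disclosure; the point is that the second environment sends only empty data objects and receives back only a trained artifact.

```python
from dataclasses import dataclass, field

@dataclass
class RoiRecord:
    """Output of the NLP-based algorithm, kept in the first environment."""
    image_id: str
    finding: str          # e.g. "mass"
    location: str         # e.g. "upper outer quadrant, left breast"
    bbox: tuple           # (row, col, height, width) label on the image

@dataclass
class TrainingRequest:
    """Sent by the second environment; carries only empty data objects."""
    task: str
    data_objects: list = field(default_factory=list)

def first_environment_train(request, roi_repository):
    """Populate the request's data objects from the ROI repository and
    return only the trained artifact -- no repository data leaves."""
    relevant = [r for r in roi_repository if request.task in r.finding]
    request.data_objects = relevant
    # Stand-in for actual training: record only aggregate information.
    return {"task": request.task, "trained_on": len(relevant)}

repo = [
    RoiRecord("img-1", "mass", "left breast, upper outer quadrant", (40, 60, 20, 20)),
    RoiRecord("img-2", "calcification", "right breast", (10, 15, 5, 5)),
]
model = first_environment_train(TrainingRequest(task="mass"), repo)
```

Note that the returned artifact contains no patient-level records, mirroring the separation between the two environments described above.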
- FIG. 1 illustrates an overview of an example system for using AI to identify ROI in medical images, as described herein.
- FIG. 2 is a diagram of an example process flow for using AI to identify ROI in medical images, as described herein.
- FIG. 3 illustrates an example method for training an NLP-based model to generate medical images comprising labeled ROI, as described herein.
- FIG. 4 illustrates an example method for using AI to identify ROI in medical images, as described herein.
- FIG. 5 illustrates one example of a suitable operating environment in which one or more of the present embodiments may be implemented.
- Medical imaging has become a widely used tool for identifying and diagnosing abnormalities, such as cancers or other conditions, within the human body.
- Medical imaging processes such as mammography and tomosynthesis are particularly useful tools for imaging breasts to screen for, or diagnose, cancer or other lesions within the breasts.
- Tomosynthesis systems are mammography systems that allow high resolution breast imaging based on limited angle tomosynthesis.
- Tomosynthesis generally produces a plurality of X-ray images, each of a discrete layer or slice of the breast, through the entire thickness thereof.
- a tomosynthesis system acquires a series of X-ray projection images, each projection image obtained at a different angular displacement as the X-ray source moves along a path, such as a circular arc, over the breast.
- Unlike computed tomography (CT), tomosynthesis is typically based on projection images obtained at limited angular displacements of the X-ray source around the breast. Tomosynthesis reduces or eliminates the problems caused by tissue overlap and structure noise present in 2D mammography imaging.
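The limited-angle acquisition described above can be illustrated with a short geometric sketch. The projection count, angular range, and source-to-center distance below are assumed, system-dependent values chosen only for illustration; they are not taken from the disclosure.

```python
import math

def source_positions(num_projections=15, half_angle_deg=7.5, radius=660.0):
    """Illustrative geometry only: evenly spaced angular displacements of
    the X-ray source along a circular arc over the breast. All parameter
    values are assumptions; real systems vary."""
    positions = []
    for i in range(num_projections):
        # Evenly spaced angles from -half_angle to +half_angle.
        theta = math.radians(
            -half_angle_deg + i * (2 * half_angle_deg) / (num_projections - 1)
        )
        # (x, z) position of the source relative to the rotation center.
        positions.append((radius * math.sin(theta), radius * math.cos(theta)))
    return positions

arcs = source_positions()
```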
- Although artificial intelligence (AI) tools, such as machine learning methods (e.g., deep learning), are highly accurate and efficient, these tools must be trained to perform specific tasks.
- the training requires access to a large amount of accurately labeled data.
- the data labeling process is performed manually.
- a clinical professional must read medical report documents (physician notes, radiology reports, biopsy reports, etc.) to identify ROI associated with a patient.
- the clinical professional labels the identified ROI on medical images associated with the medical documents.
- the quality of the labeling varies among clinical professionals based on various factors, such as experience, ability, fatigue, etc.
- the labeled medical images are provided as input to an AI component.
- the AI component is trained to identify the labeled ROI in medical images subsequently provided to the trained AI component.
- if the clinical professional intends to train the AI component to identify a new ROI, or a new aspect of an ROI for which the AI component was previously trained, the entire process must be repeated.
- the data labeling process is often time-consuming, cumbersome, expensive, and potentially inaccurate.
- the large amount of accurately labeled data includes patient records and other sensitive information that is protected by various laws and regulations, including those governing data security and the confidential handling of protected health information.
- before data can be exported from a medical facility for labeling purposes, it must first be de-identified by removing information that identifies a particular patient. Such de-identification is time-consuming and often done manually.
- a first computing environment may comprise sensitive physical and/or electronic data, such as the medical report data, medical images, patient records, and other hospital information system (HIS) data.
- the first computing environment may correspond to a healthcare facility or a section or department of a healthcare facility.
- At least a portion of the medical report data and/or medical images may be provided as input to a first service or application in the first computing environment.
- the first service or application may use the input to train an AI model or algorithm to identify ROI within the medical images based on the medical report data.
- the model or algorithm may use NLP techniques to identify language that describes the locations of findings in the medical report data.
- the model or algorithm may use the identified language to provide output including image overlays for the medical images or annotated versions of the medical images that include labeled locations of the findings identified by the identified language.
- the labeled locations may include textual labels, numerical values, highlighting, encircling (and/or other types of content enclosing), arrows or pointers, font or style modifications, etc.
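As a rough illustration of the language-identification step, the rule-based sketch below pulls a finding, laterality, and quadrant out of a report sentence. The vocabulary, regular expression, and function name are invented for illustration; the disclosure contemplates a trained NLP-based model, not fixed rules.

```python
import re

# Illustrative vocabulary; a real system would learn this, not hard-code it.
FINDINGS = r"(mass|calcification|asymmetry|distortion)"
LATERALITY = r"(left|right)"
QUADRANT = r"(upper outer|upper inner|lower outer|lower inner) quadrant"

def extract_roi_language(sentence):
    """Return (finding, laterality, quadrant) if the sentence describes a
    localized finding, else None."""
    pattern = rf"{FINDINGS}.*?{QUADRANT}.*?{LATERALITY} breast"
    m = re.search(pattern, sentence, flags=re.IGNORECASE)
    if not m:
        return None
    # Group 1: finding, group 2: quadrant, group 3: laterality.
    return (m.group(1).lower(), m.group(3).lower(), m.group(2).lower())

result = extract_roi_language(
    "There is a spiculated mass in the upper outer quadrant of the left breast."
)
```

The extracted tuple could then be mapped to image coordinates to produce the labels described above.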
- the output of the model or algorithm may be stored in at least one data repository in the first computing environment.
- the data repository may also store one or more portions of the medical report data and/or the patient records.
- a second computing environment may include a second service or application for training and storing user-requested models or algorithms.
- the second computing environment may be physically and/or logically separate from the first computing environment.
- the second service or application may provide data objects and/or training requirements for the requested user-specific model or algorithm to a training component in the first computing environment.
- the training component may search the data repository to identify information relevant to the requested user-specific model or algorithm.
- the training component may use the identified information to train the requested user-specific model or algorithm.
- the trained user-specific model or algorithm may be provided to the second service or application in the second computing environment without allowing the second computing environment access to the sensitive data in the first computing environment.
- the second service or application may evaluate the model to determine a set of performance metrics.
- the set of performance metrics may represent the accuracy or effectiveness of the trained user-specific model or algorithm.
- the second service or application may use the set of performance metrics to iteratively tune/train the trained user-specific model or algorithm.
- the present disclosure provides a plurality of technical benefits including but not limited to: training an NLP-based model to detect text relating to ROI locations, using NLP-based model output to train specific AI models, improving data security/privacy during model creation, improving the accuracy of labeled data, improving the efficiency of generating labeled data, enabling self-learning AI systems within client or sensitive environments.
- FIG. 1 illustrates an overview of an example system for using AI to identify regions of interest (ROI) in medical images as described herein.
- Example system 100 as presented is a combination of interdependent components that interact to form an integrated system.
- Components of system 100 may be hardware components or software components implemented on and/or executed by hardware components of the system.
- System 100 may provide one or more operating environments for software components to execute according to operating constraints, resources, and facilities of system 100 .
- the operating environment(s) and/or software components may be provided by a single processing device, as depicted in FIG. 5.
- the operating environment(s) and software components may be distributed across multiple devices. For instance, input may be entered on a user device and information may be processed or accessed using other devices in a network, such as one or more network devices and/or server devices.
- system 100 comprises environments 101 and 121 and network 110 .
- scale of systems such as system 100 may vary and may include more or fewer environments and/or components than those described in FIG. 1 .
- at least a portion of the functionality and components of environments 101 and 121 may be integrated into a single environment, processing system, or device. Alternately, the functionality and components of environments 101 and/or 121 may be distributed across multiple environments or processing systems.
- Environment 101 may comprise user devices 102 A, 102 B, and 102 C (collectively “user devices 102 ”), server device 104 , and data store(s) 106 .
- environment 101 may represent a cloud-based or distributed computing environment.
- User devices 102 may be configured to receive or collect input from one or more users or alternate devices. Examples of user devices 102 include, but are not limited to, personal computers (PCs), server devices, mobile devices (e.g., smartphones, tablets, laptops, personal digital assistants (PDAs)), and wearable devices (e.g., smart watches, smart eyewear, fitness trackers, smart clothing, body-mounted devices).
- User devices 102 may include sensors, applications, and/or services for receiving or collecting input.
- Example sensors include microphones, touch-based sensors, keyboards, pointing/selection tools, optical/magnetic scanners, accelerometers, magnetometers, gyroscopes, etc.
- the collected input may include, for example, voice input, touch input, text-based input, gesture input, video input, and/or image input.
- Server device 104 may be configured to receive collected input from user devices 102 .
- Examples of server device 104 include, but are not limited to, application servers, web servers, file servers, database servers, and mail servers.
- server device 104 may provide access to data and one or more services/applications.
- the data and services/applications may be stored remotely from server device 104 and accessed by server device 104 via network 110 .
- the data and services/applications may be stored and accessed locally on server device 104 using a data store, such as data store(s) 106 .
- Examples of data store(s) 106 include, but are not limited to, databases, file systems, directories, flat files, and email storage systems.
- data store(s) 106 may comprise data objects and/or sets of instructions for one or more algorithms and/or models.
- a model may refer to a predictive or statistical utility or program that may be used to determine a probability distribution over one or more character sequences, classes, objects, result sets or events, and/or to predict a response value from one or more predictors.
- a model may be based on, or incorporate, one or more rule sets, machine learning (ML), a neural network, or the like.
- the algorithms and/or models may be proprietary and/or subject to trade secret protections by the owners of the algorithms and/or models.
- server device 104 may collect or receive one or more data objects and/or sets of instructions relating to a specific task or set of tasks from data store(s) 106 .
- Server device 104 may identify a task and/or corresponding data objects/instructions based on one or more terms in or associated with the collected input.
- server device 104 may parse the collected input to identify query terms or input terms. The identified terms may be used to search the data (e.g., algorithm names, data object text, instruction text) in data store(s) 106 for similar or matching terms using search techniques, such as pattern matching, regular expressions, fuzzy matching, etc. When one or more matches are identified, the corresponding algorithm(s)/model(s) may be selected and server device 104 may collect or receive one or more data objects and/or sets of instructions relating to the selected algorithm(s)/model(s). Server device 104 may provide one or more data objects and/or sets of instructions to environment 121 based on the collected input.
- Server device 104 may be further configured to evaluate response data received from environment 121 .
- the response data may be provided by environment 121 in response to one or more data objects and/or sets of instructions provided to environment 121 .
- server device 104 may comprise or provide access to an execution environment (not pictured).
- the execution environment may comprise or utilize functionality for evaluating the response data.
- the response data corresponds to a trained user-requested model or algorithm.
- the evaluated response data may be stored in one or more data stores, such as data store(s) 106 .
- the response data may be provided to a user in response to receiving the collected input.
- Environment 121 may comprise server device 124 , data store(s) 126 , and feature store(s) 128 .
- environment 121 may represent a computing environment comprising sensitive data, such as a healthcare computing environment comprising patient data.
- Server device 124 may be configured to collect data from the one or more data sources, such as data store(s) 126 and/or feature store(s) 128 .
- data store(s) 126 and feature store(s) 128 include, but are not limited to, databases, file systems, directories, flat files, and email storage systems.
- the collected data may correspond to medical report data, medical images, patient records, and/or other sensitive medically related information.
- the collected data may be used to train an NLP-based algorithm or model (not pictured). At least a portion of the output of the trained NLP-based algorithm or model and/or the collected data may be stored in feature store(s) 128 .
- Server device 124 may be further configured to receive one or more data objects and/or sets of instructions from environment 101 .
- Server device 124 may identify a specific task associated with the received one or more data objects and/or sets of instructions.
- the identified specific task may be used to search feature store(s) 128 for stored data relevant to performing the specific task.
- the stored data may correspond to labeled or annotated image data, text terms or phrases from medical report data, and/or feature data associated with image data or medical report data.
- Stored data identified to be relevant may be provided to a training component (not pictured) within environment 121 .
- the training component may be a hardware device, a software component within server device 124 , or a software component within a separate hardware device of environment 121 .
- the training component may be implemented as a black box that provides separation between environment 101 and environment 121 .
- the separation may prevent environment 101 (and other environments external to environment 121 ) from accessing the sensitive data of environment 121 from outside of environment 121 .
- the separation may also prevent environment 121 from unauthorized access of the models and/or algorithms stored in data store(s) 106 .
- because the models and/or algorithms may be proprietary to owners who are third parties with respect to environment 101, it may be desirable for the owners to keep the algorithms secure from users in environment 101.
- the training component may be configured to train a user-requested model or algorithm.
- the stored data identified to be relevant may be provided to the training component.
- the training component may use the relevant stored data to train a user-requested model or algorithm that is operable to perform the identified specific task.
- the trained user-requested model or algorithm may then be provided as response data to environment 101 .
- the user-requested model or algorithm may be trained and provided to environment 101 such that sensitive data in environment 121 is not exposed to environment 101 .
- the patient data contained in any sensitive medically related information used to train the user-requested model or algorithm does not need to be de-identified, because that data is never removed from environment 121 and stays on site within environment 121. This saves significant time otherwise spent gathering, processing, exporting, and storing information, which previously may have been done manually by a highly skilled medical technician.
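The separation property described above can be sketched as a training routine whose only return value is an aggregate artifact, so no patient-level record crosses the environment boundary. The toy threshold "model" and the sample records are invented for illustration.

```python
def train_inside_environment(records):
    """records: list of (feature_value, label) pairs that must not leave
    the sensitive environment. Returns only an aggregate artifact (here,
    a toy decision threshold) -- never the records themselves."""
    positives = [x for x, y in records if y == 1]
    negatives = [x for x, y in records if y == 0]
    # Midpoint between class means as a stand-in for real training.
    threshold = (
        sum(positives) / len(positives) + sum(negatives) / len(negatives)
    ) / 2
    return {"threshold": threshold}  # no raw records in the artifact

sensitive_records = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
artifact = train_inside_environment(sensitive_records)
```

Only `artifact` would be handed back across the boundary, analogous to the trained user-requested model being provided to environment 101.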
- FIG. 2 is a diagram of an example process flow for using AI to identify regions of interest (ROI) in medical images, as described herein.
- Process flow 200 comprises environments 201 and 221 .
- environment 201 may represent a healthcare facility, such as a hospital, an imaging and radiology center, an urgent care facility, a medical clinic or medical offices, an outpatient surgical facility, a physical rehabilitation center, etc.
- Environment 201 may comprise sensitive or private information associated with a healthcare facility, healthcare patients, and/or healthcare personnel.
- Environment 221 may represent a web-based, cloud-based, or distributed computing environment associated with environment 201 . Environment 221 may be publicly or selectively accessible and may implement security procedures to enable the secure access of environment 201 .
- environment 221 may not store or have access to sensitive or private information comprised by environment 201 .
- Environments 201 and 221 may be physically and/or logically separated.
- environments 201 and 221 may be separated by firewalls and authentication protocols that ensure safe handling of the sensitive medical information in environment 201.
- Environment 201 may comprise ROI analysis engine 202 , medical data 204 A, 204 B, and 204 C (collectively “medical data 204 ”), ROI repository 206 , orchestration engine 214 , and training engine 216 .
- Environment 221 may comprise user(s) 208 , application 210 , algorithm repository 212 , and model repository 218 .
- One of skill in the art will appreciate that the number and type of environments and/or components associated with environment 201 , environment 221 , and process flow 200 may vary from those described in FIG. 2 .
- ROI analysis engine 202 may be provided, or may have access to, medical data associated with one or more patients, such as medical data 204 .
- ROI analysis engine 202 may be configured to identify ROI associated with medical data 204 .
- medical data 204 include, but are not limited to, medical report data 204 A (e.g., radiology reports, biopsy reports, audio reports, healthcare professional notes and documents), medical image data 204 B (e.g., X-ray images, CT images, MRI images, ultrasound images), and electronic medical record (EMR) data 204 C (e.g., patient records, medical and treatment history information, patient health data).
- ROI analysis engine 202 may use medical data 204 to train an AI model/algorithm (not pictured) within environment 201 .
- the AI model/algorithm may be stored by ROI analysis engine 202 or elsewhere within environment 201 .
- the AI model/algorithm may be configured to identify ROI within the medical image data based on corresponding medical report data.
- the AI model/algorithm may implement NLP techniques to identify text and/or speech in medical report data that describes the locations of one or more findings within the patient.
- the AI model/algorithm may use the identified text and/or speech to identify the findings in corresponding medical image data.
- the AI model/algorithm may label the identified finding within the medical image data by generating image overlays or annotated versions of the medical image data.
- the medical image data labeled by the AI model/algorithm, image feature data relating to the medical image data, and the corresponding identified text and/or speech may be stored in a data store, such as ROI repository 206 .
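The labeling step might be sketched as building an overlay mask that boxes an identified finding for rendering on top of a medical image. The mask representation, image size, and coordinates below are illustrative assumptions.

```python
def roi_overlay(height, width, bbox):
    """Return a height x width mask with 1s along the ROI box border,
    suitable for compositing over an image of the same size.
    bbox = (top, left, box_height, box_width)."""
    top, left, bh, bw = bbox
    mask = [[0] * width for _ in range(height)]
    for c in range(left, left + bw):
        mask[top][c] = 1                  # top edge
        mask[top + bh - 1][c] = 1         # bottom edge
    for r in range(top, top + bh):
        mask[r][left] = 1                 # left edge
        mask[r][left + bw - 1] = 1        # right edge
    return mask

mask = roi_overlay(8, 8, (2, 3, 4, 4))
```

A real implementation would render such overlays (or annotated image copies) with textual labels, arrows, highlighting, and the other marking styles the disclosure lists.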
- a user in or interfacing with environment 221 may access application 210 .
- user(s) 208 may include one or more manufacturers of algorithms designed to detect different types of medical conditions or abnormalities, such as cancers which may be diagnosed by healthcare professionals from medical images.
- Application 210 may be configured to receive, store, and/or process user requests to train a user-specific algorithm to perform a specific task.
- application 210 may access algorithm repository 212 .
- Algorithm repository 212 may be configured to store and provide various algorithms relating to environment 201 .
- the algorithms of algorithm repository 212 may relate to various topics, concepts, or areas. For example, a first algorithm may be used to identify a first type of cancer, a second algorithm may be used to identify a second type of cancer, and a third algorithm may be used to identify images having poor image quality.
- Algorithm repository 212 may be configured to store and provide data objects and/or instructions for training the stored algorithms. Algorithms in algorithm repository 212 may be proprietary and subject to trade secret protections, and it may be desirable for the owners of the algorithms to keep them secure. As discussed above, environments 221 and 201 may be physically and logically separated and protected by firewalls and other security measures. By separating environments 221 and 201, access to the algorithms is secured and can be managed by the owners, as the algorithms reside in environments subject to the owners' control.
- Application 210 may use terms and keywords in the request from user(s) 208 to identify a context (e.g., a topic, a concept, or an area) associated with the request.
- Application 210 may use the identified context to search algorithm repository 212 for relevant algorithms.
- the identified algorithm, one or more data objects, and/or instructions for training the identified algorithm may be provided to orchestration engine 214 .
- orchestration engine 214 may be configured to monitor environment 221 and/or application 210 to detect when a user request to train a user-specific algorithm is received by application 210 .
- the monitoring may include the implementation of monitoring services or software used to transmit periodic queries to application 210, receive notifications from application 210, intercept messages between user(s) 208 and application 210, etc.
- orchestration engine 214 may cause algorithm repository 212 to provide the identified algorithm, one or more data objects, and/or instructions for training the identified algorithm to orchestration engine 214 and/or training engine 216 .
- orchestration engine 214 may request the access path and/or credentials for algorithm repository 212 .
- Orchestration engine 214 may use the access path and/or credentials to retrieve the identified algorithm, data objects, and/or instructions.
- orchestration engine 214 may provide the access path and/or credentials to training engine 216 and training engine 216 may use the access path and/or credentials to retrieve the identified algorithm, data objects, and/or instructions.
- Orchestration engine 214 and/or training engine 216 may also be configured to initiate the training of the identified algorithm within environment 201 .
- Orchestration engine 214 may provide the identified algorithm, one or more data objects, and/or instructions for training the identified algorithm to training engine 216 .
- orchestration engine 214 may provide a command (including parameters) for initiating the training of the identified algorithm to the training engine 216 .
- Training engine 216 may be configured to search ROI repository 206 for data (e.g., medical image data, image feature data, identified text and/or speech) associated with the identified algorithm, and to train a model based on the data.
- training engine 216 may be implemented in a manner that provides separation between environment 201 and environment 221 .
- training engine 216 may prevent users and devices in environment 221 (and other environments external to environment 201 ) from accessing the sensitive or secure data of environment 201 , such as medical data 204 , from outside of environment 201 . Further, training engine 216 may prevent users and devices in environment 201 (and other environments external to environment 221 ) from directly accessing the algorithms stored in algorithm repository 212 . For instance, training engine 216 may implement security features or policies that prevent users and devices in environment 201 and environment 221 from viewing or accessing the data (e.g., ROI repository 206 data or algorithm repository 212 ) received by training engine 216 .
- training engine 216 may train a model based on the identified algorithm.
- orchestration engine 214 or training engine 216 may provide the trained model to model repository 218 .
- orchestration engine 214 or training engine 216 may provide the trained model to application 210 and application 210 may provide the trained model to model repository 218 .
- Model repository 218 may be configured to store various trained models and associated data, such as creation/modification data, a description of the model, testing data, result accuracy data, keywords or terms associated with the model, version/iteration number, etc.
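As a rough illustration of the kind of record model repository 218 might hold, the following sketch bundles the metadata enumerated above; the field names and example values are assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    description: str
    created: str                       # ISO-8601 creation/modification date
    accuracy: float                    # result accuracy on test data
    keywords: list = field(default_factory=list)
    version: int = 1                   # version/iteration number

repository = {}

def store_model(repo: dict, record: ModelRecord) -> None:
    # Key on (name, version) so iterations of the same model can coexist.
    repo[(record.name, record.version)] = record

store_model(repository, ModelRecord(
    name="lesion-detector",
    description="Detects lesions in tomosynthesis images",
    created="2021-11-19T00:00:00Z",
    accuracy=0.91,
    keywords=["lesion", "breast"],
))
```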
- application 210 may also be configured to provide a testing environment (not pictured) to test the trained model.
- the testing environment may implement tools for evaluating the performance metrics for the trained model.
- the performance metrics may relate to receiver operating characteristics (ROCs) and/or free-response receiver operating characteristics (FROCs), such as sensitivity, specificity, precision, hit rate, accuracy, etc.
- Evaluating the performance metrics for the trained model may include using the trained model to perform a specific task intended by the user and/or comparing the performance metrics for the trained model to a set of baseline performance metrics.
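The ROC/FROC-related metrics named above all derive from a confusion matrix. The following sketch (not code from the disclosure; the counts and baseline values are hypothetical) shows the standard formulas and a comparison against a set of baseline performance metrics:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard ROC-related metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # a.k.a. hit rate / recall
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a trained ROI model on a test set.
m = classification_metrics(tp=80, fp=10, tn=90, fn=20)

# Comparing against baseline performance metrics, as described above.
baseline = {"sensitivity": 0.75, "specificity": 0.85}
meets_baseline = all(m[k] >= baseline[k] for k in baseline)
```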
- the trained model may be used to identify image data or aspects thereof. Based on the performance metrics for the trained model, the trained model may be provided to training engine 216 , as described above, to be refined/retrained. A set of training parameters for refining/retraining may also be provided to training engine 216 . Training engine 216 may refine/retrain the trained model based on the set of training parameters. The refined/retrained model may be provided to application 210 and/or model repository 218 . The testing environment of application 210 may be used to evaluate the performance metrics of the refined/retrained model. In some aspects, the performance metrics of the trained model and the refined/retrained model may be compared to determine whether the trained model or the refined/retrained model is more accurate. Based on the comparison, the trained model and/or the refined/retrained model may be stored in or removed from the model repository 218 . Additionally, the refined/retrained model may be further refined/retrained using the process described above.
- methods 300 and 400 may be executed by a system, such as system 100 of FIG. 1 . However, methods 300 and 400 are not limited to such examples. In other aspects, methods 300 and 400 may be performed by a single device comprising multiple computing environments. In at least one aspect, methods 300 and 400 may be executed (e.g., computer-implemented operations) by one or more components of a distributed network, such as a web service/distributed network service (e.g. cloud service).
- FIG. 3 illustrates an example method for training an NLP-based model to generate images comprising labeled ROI, as described herein.
- Example method 300 begins at operation 302 , where medical data may be received.
- the medical data may be received by an analysis component (e.g., ROI analysis engine 202 ) located in a secure environment.
- the secure environment may correspond to a client environment of a healthcare facility or of another location comprising sensitive data.
- the analysis component may receive or have access to medical data from one or more sources, such as data store(s) 106 .
- the medical data may include medical report data, medical image data, EMR data, and other HIS data.
- text describing the location of ROI may be identified.
- the analysis component may apply one or more NLP techniques to the medical data.
- Example NLP techniques include, but are not limited to, named entity recognition, sentiment analysis, tokenization, sentence segmentation, and stemming and lemmatization.
- the NLP techniques may be used to identify significant terms and/or phrases in text data of the medical data.
- the significant terms and/or phrases may correspond to terms and/or phrases of a standardized (or semi-standardized) lexicon used for reporting the outcomes of image review.
- the NLP techniques may be applied to medical report data (e.g., radiology reports and/or biopsy reports) to identify text describing one or more findings or ROI (e.g., lesions, asymmetric breast tissue, macrocalcifications, asymmetry density, distortion mass, or adenopathy) resulting from a mammographic exam.
- the text may include features of the findings or ROI, such as size, location, texture, density, symmetry, etc.
- the NLP techniques may identify a sentence in a radiology report that indicates a lesion was detected in the superior medial portion of a patient's left breast.
- the NLP techniques may also identify another sentence in the radiology report that indicates the size and density of the lesion and the approximate location of the lesion within the superior medial quadrant.
- the text associated with each sentence may be extracted by the analysis component.
- the extracted text may be labeled (e.g., superior medial lesion) and stored with text relating to similar findings. For instance, all text describing findings or ROI in the superior medial quadrant of a breast may be stored under the category “Superior Medial Findings.”
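A toy keyword-based sketch of this extraction and categorization step follows; a real system would rely on trained NLP models, and the vocabulary lists and function names here are illustrative assumptions:

```python
QUADRANTS = ["superior medial", "superior lateral",
             "inferior medial", "inferior lateral"]
FINDINGS = ["lesion", "asymmetry", "distortion", "adenopathy"]

def extract_finding(sentence: str):
    """Pull quadrant, laterality, and finding type out of a report sentence."""
    s = sentence.lower()
    quadrant = next((q for q in QUADRANTS if q in s), None)
    finding = next((f for f in FINDINGS if f in s), None)
    side = "left" if "left" in s else ("right" if "right" in s else None)
    if quadrant is None or finding is None:
        return None
    return {
        # e.g., "Superior Medial Findings", matching the category above
        "category": quadrant.title() + " Findings",
        "side": side,
        "finding": finding,
        "text": sentence,
    }

sentence = ("A lesion was detected in the superior medial portion "
            "of the patient's left breast.")
record = extract_finding(sentence)
```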
- an NLP-based model may be trained.
- the significant terms and/or phrases identified in the text data of the medical data may be provided as input to an NLP-based model located within the secure environment.
- the NLP-based model may be generated and/or maintained by the analysis component or by another component within the secure environment.
- Image data corresponding to the identified significant terms and/or phrases may also be provided as input to an NLP-based model.
- the input may be used to train the NLP-based model to match the identified significant terms and/or phrases to corresponding locations of ROI in the image data. Matching the identified significant terms and/or phrases to the corresponding locations may include generating labeled image data comprising labels and/or annotations of the ROI.
- various text strings from a radiology report and one or more corresponding tomosynthesis computer-aided detection (CAD) images may be provided to an NLP-based model.
- the NLP-based model may evaluate the CAD image(s) to identify images of the patient's left breast.
- the NLP-based model may evaluate the superior medial quadrant of the breast in the CAD image to identify ROI corresponding to the text string.
- the evaluation may include the use of unified vectors of location features.
- the NLP-based model may label the ROI on the CAD image and/or create a labeled version of the CAD image. For instance, the NLP-based model may generate an overlay in which the identified ROI is encircled or otherwise highlighted.
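A minimal sketch of the encircling overlay follows; the coordinates, image size, and function name are assumptions, and a deployed system would draw onto full-resolution medical images rather than a toy grid:

```python
import math

def circle_overlay(shape, center, radius, thickness=2):
    """Return a binary mask with a ring highlighting the ROI."""
    h, w = shape
    cy, cx = center
    overlay = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Mark pixels whose distance from the centre is close to the radius.
            if abs(math.hypot(y - cy, x - cx) - radius) <= thickness:
                overlay[y][x] = 1
    return overlay

# Ring around a hypothetical ROI centred at (32, 32) in a 64x64 image.
overlay = circle_overlay((64, 64), center=(32, 32), radius=10)
```

Compositing such a mask over the CAD image (or exporting it as a separate layer) yields the labeled version described above.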
- the NLP-based model data may be stored in a repository.
- content generated or output by the NLP-based model may be stored in a data repository, such as ROI repository 206 .
- the content may include labeled or otherwise annotated image data, unlabeled/unannotated image data, feature vectors, terms and/or phrases, and/or medical data available to the analysis component.
- the data repository may also be located in the secure environment such that the NLP-based model may be trained, and the content generated therefrom may be stored without exposing sensitive data to entities outside of the secure environment.
- FIG. 4 illustrates an example method for using AI to identify ROI in medical images as described herein.
- an application or service providing component (“application component”), such as application 210 may be located in a first computing environment, such as environment 221 .
- the first computing environment may correspond to a cloud-based or web-based environment that may be publicly or selectively accessible.
- the first computing environment may provide access to a second computing environment, such as environment 201 .
- the second computing environment may correspond to a secure, client environment of a healthcare facility or of another location comprising private or sensitive data.
- Example method 400 begins at operation 402 , where a request to train a user-selected algorithm is detected.
- a user in or accessing the first computing environment may access a user interface provided by the application component.
- the user interface may provide the user with an option to identify an algorithm to be trained to perform one or more tasks. Identifying the algorithm may comprise selecting an algorithm from a list of algorithms in an algorithm store, such as algorithm repository 212 . Alternately, identifying an algorithm may comprise providing one or more algorithm characteristics (e.g., intended function or type/category) to the user interface.
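The characteristics-based path can be sketched as a lookup against the algorithm store; the store entries and characteristic keys below are hypothetical, not taken from the disclosure:

```python
algorithm_store = [
    {"name": "metastatic-detector", "function": "detect",   "category": "oncology"},
    {"name": "density-classifier",  "function": "classify", "category": "screening"},
]

def resolve_algorithm(store, **characteristics):
    """Return the first algorithm whose entry matches every given characteristic."""
    for entry in store:
        if all(entry.get(key) == value for key, value in characteristics.items()):
            return entry["name"]
    return None  # no match; the UI might then ask the user to refine the request

selected = resolve_algorithm(algorithm_store, function="detect", category="oncology")
```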
- the application component may provide the identified algorithm, one or more data objects associated with the identified algorithm, and/or instructions for training the identified algorithm (collectively, “algorithm container”).
- the algorithm container may be provided to the second computing environment.
- the application component may send the algorithm container to one or more components in the second computing environment in response to receiving the user request.
- the application component may send the algorithm container to an algorithm training orchestration component, such as orchestration engine 214 , or to an algorithm training component, such as training engine 216 .
- the algorithm training orchestration component of the second computing environment may monitor the application component in the first computing environment. Upon detecting that a request to train a user-selected algorithm has been received by the application component, the orchestration component or the algorithm training component may request the algorithm container from the application component or the algorithm store.
- the application component may provide the algorithm container to the orchestration component or the algorithm training component.
- the application component may provide information for the algorithm container (e.g., identifier, location/path, access credentials) to the orchestration component or algorithm training component.
- the orchestration component or algorithm training component may use the information for the algorithm container to retrieve the algorithm container.
- the algorithm container may be used to identify content related to the user-selected algorithm.
- one or more identifiers (e.g., terms, phrases, topics, contexts) may be determined for the algorithm container.
- the identifiers may be used to search a data repository, such as ROI repository 206 , for content related (e.g., relevant) to the algorithm container.
- the content in the data repository may include, for example, labeled or otherwise annotated image data, unlabeled/unannotated image data, ROI feature vectors, terms and/or phrases describing ROI, and/or other medical data available in the second computing environment. Searching the data repository may include using pattern matching techniques, such as regular expressions, fuzzy logic, pattern recognition models, etc.
- Any content determined to be related to the algorithm container may be identified and extracted from the data repository.
- an algorithm container for detecting metastatic breast cancer may be titled as or comprise the term “Metastatic.” Based on identifying the term “metastatic” in/for the algorithm container, image data comprising ROI that include instances of metastatic breast cancer may be identified.
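A sketch of that search, combining a regular-expression match with a fuzzy fallback (so "metastatic" also surfaces content described as "metastases"); the repository contents and the cutoff value are illustrative assumptions:

```python
import difflib
import re

repository = [
    {"id": 1, "description": "ROI containing metastatic breast cancer instances"},
    {"id": 2, "description": "benign calcification overlays"},
    {"id": 3, "description": "images showing breast metastases"},
]

def search(identifier: str, records, cutoff: float = 0.6):
    """Return ids of records related to the identifier (regex or fuzzy match)."""
    pattern = re.compile(identifier, re.IGNORECASE)
    hits = []
    for rec in records:
        words = rec["description"].lower().split()
        fuzzy = difflib.get_close_matches(identifier.lower(), words,
                                          n=1, cutoff=cutoff)
        if pattern.search(rec["description"]) or fuzzy:
            hits.append(rec["id"])
    return hits
```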
- content related to the algorithm container may be used to train a model.
- the algorithm container and/or the content related to the algorithm container may be provided as input to the training component in the second computing environment.
- the training component may use the input to train a model corresponding to the algorithm container.
- the training component may use overlay image data in the related content to populate or otherwise configure one or more data objects in the algorithm container according to a set of instructions and/or parameters in the algorithm container.
- the populated/configured data objects may be used to construct a model representing the algorithm the user requested to be trained.
- the model may be trained such that data used to train the model in the second computing environment is not exposed to the first computing environment.
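Conceptually, the guarantee is that only fitted parameters cross the environment boundary. This toy sketch (all names invented, with a deliberately trivial "model") illustrates the shape of that contract:

```python
def train_in_secure_environment(records):
    """Runs inside the second computing environment; returns parameters only."""
    # Toy "model": a threshold at the mean intensity of the training records.
    threshold = sum(r["intensity"] for r in records) / len(records)
    return {"threshold": threshold}  # no patient records leave with the model

secure_records = [
    {"intensity": 2, "patient_id": "p1"},  # sensitive fields stay behind
    {"intensity": 6, "patient_id": "p2"},
]
exported_model = train_in_secure_environment(secure_records)
```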
- the trained model may be provided to the first computing environment.
- the orchestration component may receive or collect the trained model from the training component.
- the orchestration component may provide the trained model to the first computing environment.
- the orchestration component may provide the trained model to the application component and/or to a model store of the first computing environment, such as model repository 218 .
- the training component may provide the trained model to the first computing environment.
- the trained model may be stored in the model store and/or presented to the user using the user interface.
- the user interface may enable the user to execute, modify, or otherwise interact with the trained model.
- the trained model may be evaluated.
- the first computing environment or a component thereof, such as the application component may comprise a test operating environment.
- the test operating environment may provide one or more tools for evaluating the trained model.
- the evaluation may include identifying performance metrics for the trained model and/or comparing the identified performance metrics to a set of baseline or default performance metrics.
- the test operating environment may enable the iterative training of a model. For example, after evaluating a trained model in the test operating environment, an updated algorithm container may be manually or automatically selected from the algorithm store or may otherwise be acquired.
- the updated algorithm container may be selected by, for example, the application component based on predefined testing constraints or according to a test script or executable test file for the selected algorithm or algorithm type.
- the trained model and the updated algorithm container may be provided to the training component in the second computing environment.
- the updated algorithm container may comprise an updated set of instructions and/or parameters for training the trained model.
- the training component may update/(re)train the trained model.
- the updated trained model may be provided to the first computing environment.
- the test operating environment may be used to evaluate performance metrics for the updated trained model.
- the performance metrics for the trained model and the performance metrics for the updated trained model may then be compared to determine which model (e.g., trained model or updated trained model) is more accurate. Based on the comparison, the more accurate model may be selected, and a newly updated algorithm container may be selected or obtained.
- the process may continue as described above until a set of performance metrics meeting or exceeding a threshold value/level is acquired, or until a defined set of criteria is met.
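The iterative loop described above can be sketched as follows; `retrain` and `evaluate` stand in for the training component and the test operating environment, and all names and values are assumptions rather than details from the disclosure:

```python
def iterate_training(model, retrain, evaluate, threshold=0.9, max_rounds=5):
    """Retrain until the metrics threshold is met or rounds are exhausted."""
    best, best_score = model, evaluate(model)
    for _ in range(max_rounds):
        if best_score >= threshold:
            break  # defined criteria met; stop iterating
        candidate = retrain(best)          # apply an updated algorithm container
        score = evaluate(candidate)
        if score > best_score:             # keep whichever model is more accurate
            best, best_score = candidate, score
    return best, best_score

# Toy stand-ins: the "model" is a number and retraining nudges it upward.
final_model, final_score = iterate_training(
    model=0.5,
    retrain=lambda m: min(1.0, m + 0.2),
    evaluate=lambda m: m,
)
```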
- FIG. 5 illustrates an exemplary operating environment suitable for implementing the techniques described in FIG. 1 .
- operating environment 500 typically includes at least one processing unit 502 and memory 504 .
- memory 504 may store, among other things, instructions to perform the techniques disclosed herein.
- memory 504 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two.
- This most basic configuration is illustrated in FIG. 5 by dashed line 506 .
- environment 500 may also include storage devices (removable, 508 , and/or non-removable, 510 ) including, but not limited to, magnetic or optical disks or tape.
- environment 500 may also have input device(s) 514 such as keyboard, mouse, pen, voice input, etc. and/or output device(s) 516 such as a display, speakers, printer, etc.
- Also included in the environment may be one or more communication connections 512 , such as LAN, WAN, point to point, etc. In embodiments, the connections may be operable to facilitate point-to-point communications, connection-oriented communications, connectionless communications, etc.
- Operating environment 500 typically includes at least some form of computer readable media.
- Computer readable media can be any available media that can be accessed by processing unit 502 or other devices comprising the operating environment.
- Computer readable media may comprise computer storage media and communication media.
- Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
- Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium which can be used to store the desired information.
- Computer storage media does not include communication media.
- Communication media embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, microwave, and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
- the operating environment 500 may be a single computer operating in a networked environment using logical connections to one or more remote computers.
- the remote computer may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above as well as others not so mentioned.
- the logical connections may include any method supported by available communications media.
- Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
Abstract
Examples of the present disclosure describe systems and methods for using AI to identify regions of interest (ROI) in medical images. In aspects, medical reports and images may be provided to a first environment. The first environment may use the medical report data/medical images to train a natural language processing (NLP)-based algorithm to identify the location in images of ROI described in the medical report data. The output of the NLP-based algorithm may be stored in an ROI repository in the first environment. After the NLP-based algorithm has been trained, a request to train a user-specific model may be received in a second environment. Data objects for the requested user-specific model may be provided to the first environment, which uses the ROI repository to train the model. The trained model may be provided to the second environment, where the trained user-specific model/algorithm may be tested and stored.
Description
- This application claims priority to and the benefit of U.S. Provisional Application No. 63/116,162, filed on Nov. 20, 2020, entitled “Systems and Methods for Using AI to Identify Regions of Interest in Medical Images,” the disclosure of which is hereby incorporated by reference in its entirety.
- In the realm of artificial intelligence (AI), deep learning enables systems to automatically discover the information required to perform feature detection or classification using raw data. Deep learning requires access to large amounts of accurately labeled data. Typically, the data labeling is primarily a manual process, which can be prohibitively costly in terms of time and human/financial resources. Moreover, data privacy concerns are often a consideration.
- It is with respect to these and other general considerations that the aspects disclosed herein have been made. Also, although relatively specific problems may be discussed, it should be understood that the examples should not be limited to solving the specific problems identified in the background or elsewhere in the present disclosure.
- Examples of the present disclosure describe systems and methods for using AI to identify regions of interest (ROI) in medical images. In aspects, medical report data and/or corresponding medical images may be provided to a first service or application in a first environment. The first service/application may use the medical report data/medical images to train a natural language processing (NLP)-based algorithm to identify within the medical images the location of findings described in the medical report data. The output of the NLP-based algorithm may be stored in an ROI repository in the first environment. After the NLP-based algorithm has been trained, a request to train a user-specific model or an algorithm may be received by a second service or application in a second environment. In response to the request, one or more data objects for the requested user-specific model/algorithm may be provided to the first service/application in the first environment. The first service/application may use data in the ROI repository to populate the data objects and train the user-specific model/algorithm. The trained user-specific model/algorithm may then be provided to the second service/application in the second environment, where the trained user-specific model/algorithm may be tested, stored, and/or provided to the user.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Additional aspects, features, and/or advantages of examples will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the disclosure.
- Non-limiting and non-exhaustive examples are described with reference to the following figures.
- FIG. 1 illustrates an overview of an example system for using AI to identify ROI in medical images, as described herein.
- FIG. 2 is a diagram of an example process flow for using AI to identify ROI in medical images, as described herein.
- FIG. 3 illustrates an example method for training an NLP-based model to generate medical images comprising labeled ROI, as described herein.
- FIG. 4 illustrates an example method for using AI to identify ROI in medical images, as described herein.
- FIG. 5 illustrates one example of a suitable operating environment in which one or more of the present embodiments may be implemented.
- Medical imaging has become a widely used tool for identifying and diagnosing abnormalities, such as cancers or other conditions, within the human body. Medical imaging processes such as mammography and tomosynthesis are particularly useful tools for imaging breasts to screen for, or diagnose, cancer or other lesions within the breast. Tomosynthesis systems are mammography systems that allow high resolution breast imaging based on limited angle tomosynthesis. Tomosynthesis, generally, produces a plurality of X-ray images, each of discrete layers or slices of the breast, through the entire thickness thereof. In contrast to conventional two-dimensional (2D) mammography systems, a tomosynthesis system acquires a series of X-ray projection images, each projection image obtained at a different angular displacement as the X-ray source moves along a path, such as a circular arc, over the breast. In contrast to conventional computed tomography (CT), tomosynthesis is typically based on projection images obtained at limited angular displacements of the X-ray source around the breast. Tomosynthesis reduces or eliminates the problems caused by tissue overlap and structure noise present in 2D mammography imaging.
- In recent times, artificial intelligence (AI) has been increasingly used to evaluate the image data generated using medical imaging. In particular, machine learning methods, such as deep learning, provide powerful tools for evaluating image data. Although such tools are highly accurate and efficient, these tools must be trained to perform specific tasks. The training requires access to a large amount of accurately labeled data. Generally, the data labeling process is performed manually. For example, a clinical professional must read medical report documents (physician notes, radiology reports, biopsy reports, etc.) to identify ROI associated with a patient. The clinical professional labels the identified ROI on medical images associated with the medical documents. Often, the quality of the labeling varies among clinical professionals based on various factors, such as experience, ability, fatigue, etc. The labeled medical images are provided as input to an AI component. Based on the input, the AI component is trained to identify the labeled ROI in medical images subsequently provided to the trained AI component. When the clinical professional intends to train the AI component to identify a new ROI or a new aspect of an ROI for which the AI component was previously trained, the entire process must be repeated. Thus, the data labeling process is often time-consuming, cumbersome, expensive, and potentially inaccurate.
- In addition, the large amount of accurately labeled data includes patient records and other sensitive personal information that is protected by various laws and regulations, including those governing data security and the confidential handling of protected health information. As such, to comply with these laws and regulations, data intended for labeling must first be deidentified by removing information identifying a particular patient prior to the export of the data from a medical facility. Such deidentification is time-consuming and often done manually.
- To address such issues with data labeling for AI training, the present disclosure describes systems and methods for using AI to identify ROI in medical images. In aspects, a first computing environment may comprise sensitive physical and/or electronic data, such as the medical report data, medical images, patient records, and other hospital information system (HIS) data. The first computing environment may correspond to a healthcare facility or a section or department of a healthcare facility. At least a portion of the medical report data and/or medical images may be provided as input to a first service or application in the first computing environment. The first service or application may use the input to train an AI model or algorithm to identify ROI within the medical images based on the medical report data. In at least one example, the model or algorithm may use NLP techniques to identify language that describes the locations of findings in the medical report data. The model or algorithm may use the identified language to provide output including image overlays for the medical images or annotated versions of the medical images that include labeled locations of the findings described by the identified language. The labeled locations may include textual labels, numerical values, highlighting, encircling (and/or other types of content enclosing), arrows or pointers, font or style modifications, etc. The output of the model or algorithm may be stored in at least one data repository in the first computing environment. The data repository may also store one or more portions of the medical report data and/or the patient records.
- In aspects, a second computing environment may include a second service or application for training and storing user-requested models or algorithms. The second computing environment may be physically and/or logically separate from the first computing environment. In response to receiving a request to train a user-requested model or algorithm, the second service or application may provide data objects and/or training requirements for the requested user-specific model or algorithm to a training component in the first computing environment. The training component may search the data repository to identify information relevant to the requested user-specific model or algorithm. The training component may use the identified information to train the requested user-specific model or algorithm. The trained user-specific model or algorithm may be provided to the second service or application in the second computing environment without allowing the second computing environment access to the sensitive data in the first computing environment. Thus, the integrity and security of the sensitive data may be maintained throughout the training process. Upon receiving the trained user-specific model or algorithm, the second service or application may evaluate the model to determine a set of performance metrics. The set of performance metrics may represent the accuracy or effectiveness of the trained user-specific model or algorithm. In at least one aspect, the second service or application may use the set of performance metrics to iteratively tune/train the trained user-specific model or algorithm.
- Accordingly, the present disclosure provides a plurality of technical benefits including but not limited to: training an NLP-based model to detect text relating to ROI locations, using NLP-based model output to train specific AI models, improving data security/privacy during model creation, improving the accuracy of labeled data, improving the efficiency of generating labeled data, and enabling self-learning AI systems within client or sensitive environments.
-
FIG. 1 illustrates an overview of an example system for using AI to identify regions of interest (ROI) in medical images as described herein. Example system 100 as presented is a combination of interdependent components that interact to form an integrated system. Components of system 100 may be hardware components or software components implemented on and/or executed by hardware components of the system. System 100 may provide one or more operating environments for software components to execute according to operating constraints, resources, and facilities of system 100. In one example, the operating environment(s) and/or software components may be provided by a single processing device, as depicted inFIG. 6 . In another example, the operating environment(s) and software components may be distributed across multiple devices. For instance, input may be entered on a user device and information may be processed or accessed using other devices in a network, such as one or more network devices and/or server devices. - As one example, system 100 comprises environments 101 and 121 and network 110. One of skill in the art will appreciate that the scale of systems such as system 100 may vary and may include more or fewer environments and/or components than those described in
FIG. 1 . For instance, in some examples, at least a portion of the functionality and components of environments 101 and 121 may be integrated into a single environment, processing system, or device. Alternately, the functionality and components of environments 101 and/or 121 may be distributed across multiple environments or processing systems. - Environment 101 may comprise user devices 102A, 102B, and 102C (collectively "user devices 102"), server device 104, and data store(s) 106. In at least one aspect, environment 101 may represent a cloud-based or distributed computing environment. User devices 102 may be configured to receive or collect input from one or more users or alternate devices. Examples of user devices 102 include, but are not limited to, personal computers (PCs), server devices, mobile devices (e.g., smartphones, tablets, laptops, personal digital assistants (PDAs)), and wearable devices (e.g., smart watches, smart eyewear, fitness trackers, smart clothing, body-mounted devices). User devices 102 may include sensors, applications, and/or services for receiving or collecting input. Example sensors include microphones, touch-based sensors, keyboards, pointing/selection tools, optical/magnetic scanners, accelerometers, magnetometers, gyroscopes, etc. The collected input may include, for example, voice input, touch input, text-based input, gesture input, video input, and/or image input.
- Server device 104 may be configured to receive collected input from user devices 102. Examples of server device 104 include, but are not limited to, application servers, web servers, file servers, database servers, and mail servers. Upon receiving collected input, server device 104 may provide access to data and one or more services/applications. The data and services/applications may be stored remotely from server device 104 and accessed by server device 104 via network 110. Alternately, the data and services/applications may be stored and accessed locally on server device 104 using a data store, such as data store(s) 106. Examples of data store(s) 106 include, but are not limited to, databases, file systems, directories, flat files, and email storage systems. In some aspects, data store(s) 106 may comprise data objects and/or sets of instructions for one or more algorithms and/or models. A model, as used herein, may refer to a predictive or statistical utility or program that may be used to determine a probability distribution over one or more character sequences, classes, objects, result sets or events, and/or to predict a response value from one or more predictors. A model may be based on, or incorporate, one or more rule sets, machine learning (ML), a neural network, or the like. In at least one aspect, the algorithms and/or models may be proprietary and/or subject to trade secret protections by the owners of the algorithms and/or models.
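By way of a non-limiting sketch, a "model" in the sense described above can be pictured as a utility that maps one or more predictors to a probability distribution over classes. The logistic form, the weights, and the class names below are illustrative assumptions, not part of the disclosed system:

```python
# Illustrative sketch only: a minimal "model" that predicts a
# probability distribution over two classes from predictor values.
# Weights and class names are invented for illustration.
import math

class TinyModel:
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def predict_proba(self, predictors):
        """Return P(class) via a logistic function of the predictors."""
        score = self.bias + sum(w * x for w, x in zip(self.weights, predictors))
        p = 1.0 / (1.0 + math.exp(-score))
        return {"positive": p, "negative": 1.0 - p}

model = TinyModel(weights=[2.0, -1.0], bias=0.0)
probs = model.predict_proba([1.0, 0.5])
print(round(probs["positive"], 3))  # -> 0.818
```

A rule set, an ML pipeline, or a neural network could stand in the same position; the only contract assumed here is "predictors in, probability distribution out."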
- The algorithms and/or models in data store(s) 106 may be used to perform one or more specific tasks, such as identifying a type of cancer, a category of disease, image anomalies, etc. Although references to specific types of tasks are described herein, it is contemplated that the algorithms and/or models may be used to perform alternate types of tasks and used in alternate types of environments. In response to receiving the collected input, server device 104 may collect or receive one or more data objects and/or sets of instructions relating to a specific task or set of tasks from data store(s) 106. Server device 104 may identify a task and/or corresponding data objects/instructions based on one or more terms in or associated with the collected input. For example, server device 104 may parse the collected input to identify query terms or input terms. The identified terms may be used to search the data (e.g., algorithm names, data object text, instruction text) in data store(s) 106 for similar or matching terms using search techniques, such as pattern matching, regular expressions, fuzzy matching, etc. When one or more matches are identified, the corresponding algorithm(s)/model(s) may be selected and server device 104 may collect or receive one or more data objects and/or sets of instructions relating to the selected algorithm(s)/model(s). Server device 104 may provide one or more data objects and/or sets of instructions to environment 121 based on the collected input.
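The term-based search described above can be sketched, purely for illustration, with standard-library fuzzy matching. The catalog contents and the `select_algorithm` helper are hypothetical stand-ins for data store(s) 106 and server device 104:

```python
# Illustrative sketch (not the patented implementation): selecting the
# algorithm whose name/description best matches terms parsed from the
# collected input, using stdlib fuzzy matching.
import difflib
import re

# Hypothetical catalog standing in for data store(s) 106.
ALGORITHM_CATALOG = {
    "metastatic_breast_cancer_detector": "identify metastatic breast cancer in mammograms",
    "lung_nodule_classifier": "classify lung nodules in CT images",
    "image_quality_checker": "flag images having poor image quality",
}

def select_algorithm(collected_input: str, cutoff: float = 0.6):
    """Parse query terms from input and fuzzy-match them against the catalog."""
    terms = re.findall(r"[a-z]+", collected_input.lower())
    best_name, best_score = None, 0.0
    for name, description in ALGORITHM_CATALOG.items():
        corpus = re.findall(r"[a-z]+", (name + " " + description).lower())
        for term in terms:
            match = difflib.get_close_matches(term, corpus, n=1, cutoff=cutoff)
            if match:
                score = difflib.SequenceMatcher(None, term, match[0]).ratio()
                if score > best_score:
                    best_name, best_score = name, score
    return best_name

print(select_algorithm("train a model to find metastatic cancer"))
```

A production system might combine several of the named techniques (pattern matching, regular expressions, fuzzy matching) rather than this single heuristic.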
- Server device 104 may be further configured to evaluate response data received from environment 121. The response data may be provided by environment 121 in response to one or more data objects and/or sets of instructions provided to environment 121. In aspects, server device 104 may comprise or provide access to an execution environment (not pictured). The execution environment may comprise or utilize functionality for evaluating the response data. In at least one aspect, the response data corresponds to a trained user-requested model or algorithm. The evaluated response data may be stored in one or more data stores, such as data store(s) 106. The response data may be provided to a user in response to receiving the collected input.
- Environment 121 may comprise server device 124, data store(s) 126, and feature store(s) 128. In at least one aspect, environment 121 may represent a computing environment comprising sensitive data, such as a healthcare computing environment comprising patient data. Server device 124 may be configured to collect data from the one or more data sources, such as data store(s) 126 and/or feature store(s) 128. Examples of data store(s) 126 and feature store(s) 128 include, but are not limited to, databases, file systems, directories, flat files, and email storage systems. In at least one aspect, the collected data may correspond to medical report data, medical images, patient records, and/or other sensitive medically related information. The collected data may be used to train an NLP-based algorithm or model (not pictured). At least a portion of the output of the trained NLP-based algorithm or model and/or the collected data may be stored in feature store(s) 128.
- Server device 124 may be further configured to receive one or more data objects and/or sets of instructions from environment 101. Server device 124 may identify a specific task associated with the received one or more data objects and/or sets of instructions. The identified specific task may be used to search feature store(s) 128 for stored data relevant to performing the specific task. In at least one aspect, the stored data may correspond to labeled or annotated image data, text terms or phrases from medical report data, and/or feature data associated with image data or medical report data. Stored data identified to be relevant may be provided to a training component (not pictured) within environment 121. The training component may be a hardware device, a software component within server device 124, or a software component within a separate hardware device of environment 121. In examples, the training component may be implemented as a black box that provides separation between environment 101 and environment 121. The separation may prevent environment 101 (and other environments external to environment 121) from accessing the sensitive data of environment 121 from outside of environment 121. The separation may also prevent unauthorized access by environment 121 to the models and/or algorithms stored in data store(s) 106. For instance, as the models and/or algorithms may be proprietary to owners who are third parties with respect to environment 101, it may be desirable for the owners to keep the algorithms secure from users in environment 101.
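The task-driven search of feature store(s) 128 might be sketched as follows. The record format, the crude prefix stemming, and the `relevant_records` helper are assumptions made only for illustration:

```python
# Illustrative sketch: using keywords from an identified task to search
# stored records for relevant training data, as with feature store(s) 128.
import re

# Hypothetical stand-in for records held in a feature store.
FEATURE_STORE = [
    {"id": 1, "tags": "labeled overlay metastatic breast cancer ROI"},
    {"id": 2, "tags": "unlabeled lung screening CT series"},
    {"id": 3, "tags": "annotated mammogram metastasis suspected"},
]

def relevant_records(task_keywords, records):
    """Return records whose tags match any keyword, tolerating suffix
    variation (e.g., 'metastatic' vs. 'metastasis') via a crude 7-char stem."""
    stems = [re.escape(k.lower()[:7]) for k in task_keywords]
    return [r for r in records
            if any(re.search(s, r["tags"].lower()) for s in stems)]

hits = relevant_records(["Metastatic"], FEATURE_STORE)
print([r["id"] for r in hits])  # -> [1, 3]
```

The prefix stem is a deliberate simplification; a real system could use the pattern matching, regular expression, or fuzzy-matching techniques named elsewhere in this disclosure.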
- The training component may be configured to train a user-requested model or algorithm. In examples, the stored data identified to be relevant may be provided to the training component. The training component may use the relevant stored data to train a user-requested model or algorithm that is operable to perform the identified specific task. The trained user-requested model or algorithm may then be provided as response data to environment 101. In aspects, the user-requested model or algorithm may be trained and provided to environment 101 such that sensitive data in environment 121 is not exposed to environment 101. As such, the patient data contained in the sensitive medically related information used to train the user-requested model or algorithm does not need to be de-identified, because that information is never removed from environment 121 and stays on site in environment 121. This saves significant time otherwise spent gathering, processing, exporting, and storing information, tasks that previously may have been performed manually by a highly skilled medical technician.
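The privacy property described above, in which only the trained artifact leaves the secure environment, can be illustrated with a toy example: training returns fitted parameters rather than patient records. The single-feature threshold "model" is a stand-in, not the disclosed training component:

```python
# Illustrative sketch: train inside the sensitive environment, return
# only fitted parameters as response data, so no patient rows leave.
def train_threshold_model(patient_values, patient_labels):
    """Pick the cutoff on a single feature that maximizes training accuracy."""
    best_cutoff, best_correct = None, -1
    for cutoff in sorted(set(patient_values)):
        correct = sum(1 for v, y in zip(patient_values, patient_labels)
                      if (v >= cutoff) == bool(y))
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    # Response data: parameters only, no patient-level records.
    return {"cutoff": best_cutoff, "training_accuracy": best_correct / len(patient_labels)}

# Hypothetical in-environment data (never exported).
values = [0.2, 0.4, 0.6, 0.8, 0.9]
labels = [0, 0, 1, 1, 1]
print(train_threshold_model(values, labels))  # -> {'cutoff': 0.6, 'training_accuracy': 1.0}
```

The same separation applies however complex the model is: the response data is the fitted artifact, while the patient data used to fit it stays in place.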
-
FIG. 2 is a diagram of an example process flow for using AI to identify regions of interest (ROI) in medical images, as described herein. Process flow 200, as presented, comprises environments 201 and 221. In examples, environment 201 may represent a healthcare facility, such as a hospital, an imaging and radiology center, an urgent care facility, a medical clinic or medical offices, an outpatient surgical facility, a physical rehabilitation center, etc. Environment 201 may comprise sensitive or private information associated with a healthcare facility, healthcare patients, and/or healthcare personnel. Environment 221 may represent a web-based, cloud-based, or distributed computing environment associated with environment 201. Environment 221 may be publicly or selectively accessible and may implement security procedures to enable the secure access of environment 201. Generally, environment 221 may not store or have access to sensitive or private information comprised by environment 201. Environments 201 and 221 may be physically and/or logically separated. In addition, environments 201 and 221 may be separated by firewalls and authentication protocols ensuring safe handling of the sensitive medical information comprised in environment 201. -
Environment 201 may comprise ROI analysis engine 202, medical data 204A, 204B, and 204C (collectively "medical data 204"), ROI repository 206, orchestration engine 214, and training engine 216. Environment 221 may comprise user(s) 208, application 210, algorithm repository 212, and model repository 218. One of skill in the art will appreciate that the number and type of environments and/or components associated with environment 201, environment 221, and process flow 200 may vary from those described in FIG. 2 . - In aspects,
ROI analysis engine 202 may be provided with, or may have access to, medical data associated with one or more patients, such as medical data 204. ROI analysis engine 202 may be configured to identify ROI associated with medical data 204. Examples of medical data 204 include, but are not limited to, medical report data 204A (e.g., radiology reports, biopsy reports, audio reports, healthcare professional notes and documents), medical image data 204B (e.g., X-ray images, CT images, MRI images, ultrasound images), and electronic medical record (EMR) data 204C (e.g., patient records, medical and treatment history information, patient health data). Although specific references to medical data and procedures are described, it is contemplated that the systems and methods described herein may be implemented with alternate types of data, procedures, and environments. - Upon receiving the medical data,
ROI analysis engine 202 may use medical data 204 to train an AI model/algorithm (not pictured) within environment 201. The AI model/algorithm may be stored by ROI analysis engine 202 or elsewhere within environment 201. The AI model/algorithm may be configured to identify ROI within the medical image data based on corresponding medical report data. For example, the AI model/algorithm may implement NLP techniques to identify text and/or speech in medical report data that describes the locations of one or more findings within the patient. The AI model/algorithm may use the identified text and/or speech to identify the findings in corresponding medical image data. The AI model/algorithm may label the identified finding within the medical image data by generating image overlays or annotated versions of the medical image data. The medical image data labeled by the AI model/algorithm, image feature data relating to the medical image data, and the corresponding identified text and/or speech may be stored in a data store, such as ROI repository 206. - After the AI model/algorithm has been trained, a user in or interfacing with
environment 221, such as user(s) 208, may access application 210. Examples of user(s) 208 may include one or more manufacturers of algorithms designed to detect different types of medical conditions or abnormalities, such as cancers which may be diagnosed by healthcare professionals from medical images. Application 210 may be configured to receive, store, and/or process user requests to train a user-specific algorithm to perform a specific task. Upon receiving a request from user(s) 208 to train a new user-specific algorithm, application 210 may access algorithm repository 212. Algorithm repository 212 may be configured to store and provide various algorithms relating to environment 201. The algorithms of algorithm repository 212 may relate to various topics, concepts, or areas. For example, a first algorithm may be used to identify a first type of cancer, a second algorithm may be used to identify a second type of cancer, and a third algorithm may be used to identify images having poor image quality. Algorithm repository 212 may be configured to store and provide data objects and/or instructions for training the stored algorithms. Algorithms in the algorithm repository 212 may be proprietary and subject to trade secret protections. It may be desirable for the owners of the algorithms to keep the algorithms secure. As discussed above, environments 221 and 201 may be physically and logically separated and protected by firewalls and other security measures. By separating environments 221 and 201, access to the algorithms is secured and can be managed by the owners, as the algorithms reside in environments subject to the owners' control. -
Application 210 may use terms and keywords in the request from user(s) 208 to identify a context (e.g., a topic, a concept, or an area) associated with the request. Application 210 may use the identified context to search algorithm repository 212 for relevant algorithms. When a relevant algorithm is identified in algorithm repository 212, the identified algorithm, one or more data objects, and/or instructions for training the identified algorithm may be provided to orchestration engine 214. In some examples, orchestration engine 214 may be configured to monitor environment 221 and/or application 210 to detect when a user request to train a user-specific algorithm is received by application 210. The monitoring may include the implementation of monitoring services or software used to transmit periodic queries to application 210, receive notifications from application 210, intercept messages between user(s) 208 and application 210, etc. When a user request to train a user-specific algorithm is detected, orchestration engine 214 may cause algorithm repository 212 to provide the identified algorithm, one or more data objects, and/or instructions for training the identified algorithm to orchestration engine 214 and/or training engine 216. For example, orchestration engine 214 may request the access path and/or credentials for algorithm repository 212. Orchestration engine 214 may use the access path and/or credentials to retrieve the identified algorithm, data objects, and/or instructions. Alternately, orchestration engine 214 may provide the access path and/or credentials to training engine 216, and training engine 216 may use the access path and/or credentials to retrieve the identified algorithm, data objects, and/or instructions. -
Orchestration engine 214 and/or training engine 216 may also be configured to initiate the training of the identified algorithm within environment 201. Orchestration engine 214 may provide the identified algorithm, one or more data objects, and/or instructions for training the identified algorithm to training engine 216. Alternately or additionally, orchestration engine 214 may provide a command (including parameters) for initiating the training of the identified algorithm to the training engine 216. Training engine 216 may be configured to search ROI repository 206 for data (e.g., medical image data, image feature data, identified text and/or speech) associated with the identified algorithm, and to train a model based on the data. In aspects, training engine 216 may be implemented in a manner that provides separation between environment 201 and environment 221. For example, training engine 216 may prevent users and devices in environment 221 (and other environments external to environment 201) from accessing the sensitive or secure data of environment 201, such as medical data 204, from outside of environment 201. Further, training engine 216 may prevent users and devices in environment 201 (and other environments external to environment 221) from directly accessing the algorithms stored in algorithm repository 212. For instance, training engine 216 may implement security features or policies that prevent users and devices in environment 201 and environment 221 from viewing or accessing the data (e.g., ROI repository 206 data or algorithm repository 212) received by training engine 216. - Upon receiving the identified algorithm, one or more data objects, instructions for training the identified algorithm, and/or command (including parameters) for initiating the training of the identified algorithm,
training engine 216 may train a model based on the identified algorithm. When the model has been trained, orchestration engine 214 or training engine 216 may provide the trained model to model repository 218. Alternately, orchestration engine 214 or training engine 216 may provide the trained model to application 210 and application 210 may provide the trained model to model repository 218. Model repository 218 may be configured to store various trained models and associated data, such as creation/modification data, a description of the model, testing data, result accuracy data, keywords or terms associated with the model, version/iteration number, etc. - In aspects, after the trained model has been provided to
model repository 218 and/or application 210, user(s) 208 may interact with the trained model using application 210. For example, application 210 may also be configured to provide a testing environment (not pictured) to test the trained model. The testing environment may implement tools for evaluating the performance metrics for the trained model. In examples, the performance metrics may relate to receiver operating characteristics (ROCs) and/or free-response receiver operating characteristics (FROCs), such as sensitivity, specificity, precision, hit rate, accuracy, etc. Evaluating the performance metrics for the trained model may include using the trained model to perform a specific task intended by the user and/or comparing the performance metrics for the trained model to a set of baseline performance metrics. For example, the trained model may be used to identify image data or aspects thereof. Based on the performance metrics for the trained model, the trained model may be provided to training engine 216, as described above, to be refined/retrained. A set of training parameters for refining/retraining may also be provided to training engine 216. Training engine 216 may refine/retrain the trained model based on the set of training parameters. The refined/retrained model may be provided to application 210 and/or model repository 218. The testing environment of application 210 may be used to evaluate the performance metrics of the refined/retrained model. In some aspects, the performance metrics of the trained model and the refined/retrained model may be compared to determine whether the trained model or the refined/retrained model is more accurate. Based on the comparison, the trained model and/or the refined/retrained model may be stored or removed from model repository 218. Additionally, the refined/retrained model may be further refined/retrained using the process described above.
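The performance metrics named above (for example, sensitivity, specificity, and accuracy) and the comparison between a trained model and a refined/retrained model can be sketched as follows. The prediction vectors are made-up illustrative data:

```python
# Illustrative sketch: computing sensitivity, specificity, and accuracy
# from binary predictions, then keeping whichever model version scores
# higher, as in the testing environment described above.
def performance_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "accuracy": (tp + tn) / len(y_true),
    }

def more_accurate(metrics_a, metrics_b):
    """Return which metric set ('a' or 'b') has the higher accuracy."""
    return "a" if metrics_a["accuracy"] >= metrics_b["accuracy"] else "b"

truth     = [1, 1, 1, 0, 0, 0, 0, 1]
trained   = [1, 0, 1, 0, 1, 0, 0, 1]  # first-pass model predictions (toy data)
retrained = [1, 1, 1, 0, 0, 0, 0, 0]  # refined model predictions (toy data)

m1 = performance_metrics(truth, trained)
m2 = performance_metrics(truth, retrained)
print(m1["accuracy"], m2["accuracy"], more_accurate(m1, m2))  # -> 0.75 0.875 b
```

ROC/FROC analysis would sweep such counts across decision thresholds; this sketch shows only the single-threshold case.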
- Having described a system and process flow that may employ the techniques disclosed herein, the present disclosure will now describe one or more methods that may be performed by various aspects of the present disclosure. In aspects,
methods 300 and 400 may be executed by a system, such as system 100 of FIG. 1 . However, methods 300 and 400 are not limited to such examples. In other aspects, methods 300 and 400 may be performed by a single device comprising multiple computing environments. In at least one aspect, methods 300 and 400 may be executed (e.g., computer-implemented operations) by one or more components of a distributed network, such as a web service/distributed network service (e.g., cloud service). -
FIG. 3 illustrates an example method for training an NLP-based model to generate images comprising labeled ROI, as described herein. Example method 300 begins at operation 302, where medical data may be received. In aspects, an analysis component, such as ROI analysis engine 202, may be located in a secure environment, such as environment 201. The secure environment may correspond to a client environment of a healthcare facility or of another location comprising sensitive data. The analysis component may receive or have access to medical data from one or more sources, such as data store(s) 106. The medical data may include medical report data, medical image data, EMR data, and other HIS data. - At
operation 304, text describing the location of ROI may be identified. In aspects, the analysis component may apply one or more NLP techniques to the medical data. Example NLP techniques include, but are not limited to, named entity recognition, sentiment analysis, tokenization, sentence segmentation, and stemming and lemmatization. The NLP techniques may be used to identify significant terms and/or phrases in text data of the medical data. The significant terms and/or phrases may correspond to terms and/or phrases of a standardized (or semi-standardized) lexicon used for reporting the outcomes of image review. As one example, the NLP techniques may be applied to medical report data (e.g., radiology reports and/or biopsy reports) to identify text describing one or more findings or ROI (e.g., lesions, asymmetric breast tissue, macrocalcifications, asymmetry density, distortion mass, or adenopathy) resulting from a mammographic exam. The text may include features of the findings or ROI, such as size, location, texture, density, symmetry, etc. As a specific example, the NLP techniques may identify a sentence in a radiology report that indicates a lesion was detected in the superior medial portion of a patient's left breast. The NLP techniques may also identify another sentence in the radiology report that indicates the size and density of the lesion and the approximate location of the lesion within the superior medial quadrant. The text associated with each sentence may be extracted by the analysis component. The extracted text may be labeled (e.g., superior medial lesion) and stored with text relating to similar findings. For instance, all text describing findings or ROI in the superior medial quadrant of a breast may be stored under the category "Superior Medial Findings." - At operation 306, an NLP-based model may be trained.
In aspects, the significant terms and/or phrases identified in the text data of the medical data (and in other medical data) may be provided as input to an NLP-based model located within the secure environment. The NLP-based model may be generated and/or maintained by the analysis component or by another component within the secure environment. Image data corresponding to the identified significant terms and/or phrases may also be provided as input to an NLP-based model. The input may be used to train the NLP-based model to match the identified significant terms and/or phrases to corresponding locations of ROI in the image data. Matching the identified significant terms and/or phrases to the corresponding locations may include generating labeled image data comprising labels and/or annotations of the ROI. For example, various text strings from a radiology report and one or more corresponding tomosynthesis computer-aided design (CAD) images may be provided to an NLP-based model. In response to the text string “a lesion was detected in the superior medial portion of a patient's left breast,” the NLP-based model may evaluate the CAD image(s) to identify images of the patient's left breast. For each identified CAD image of the patient's left breast, the NLP-based model may evaluate the superior medial quadrant of the breast in the CAD image to identify ROI corresponding to the text string. The evaluation may include the use of unified vectors of location features. For each identified ROI, the NLP-based model may label the ROI on the CAD image and/or create a labeled version of the CAD image. For instance, the NLP-based model may generate an overlay in which the identified ROI is encircled or otherwise highlighted.
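One way to picture the text-to-image matching step, under strongly simplifying assumptions, is a fixed mapping from quadrant phrases to image regions. The quadrant-to-box convention and image dimensions below are invented for illustration; the disclosed model would instead localize ROI using learned image features (e.g., unified vectors of location features):

```python
# Illustrative sketch: match a report phrase such as "superior medial"
# to a quadrant of the image and emit a labeled overlay box.
def find_quadrant(text: str):
    """Return the first named quadrant phrase found in the text, if any."""
    for q in ("superior medial", "superior lateral", "inferior medial", "inferior lateral"):
        if q in text.lower():
            return q
    return None

def quadrant_box(quadrant: str, width: int, height: int):
    """Map a quadrant name to an (x0, y0, x1, y1) half-by-half box,
    assuming (for illustration) that 'medial' is the right half of a
    left-breast image and 'superior' is the top half."""
    half_w, half_h = width // 2, height // 2
    x0 = half_w if "medial" in quadrant else 0
    y0 = 0 if "superior" in quadrant else half_h
    return (x0, y0, x0 + half_w, y0 + half_h)

sentence = "A lesion was detected in the superior medial portion of the left breast."
quadrant = find_quadrant(sentence)
overlay = {"label": quadrant + " ROI", "box": quadrant_box(quadrant, 1000, 1200)}
print(overlay["box"])  # -> (500, 0, 1000, 600)
```

The resulting overlay record stands in for the encircled/highlighted ROI overlay described above; a trained model would refine the coarse quadrant box to the actual finding.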
- At
operation 308, the NLP-based model data may be stored in a repository. In aspects, content generated or output by the NLP-based model may be stored in a data repository, such as ROI repository 206. The content may include labeled or otherwise annotated image data, unlabeled/unannotated image data, feature vectors, terms and/or phrases, and/or medical data available to the analysis component. The data repository may also be located in the secure environment such that the NLP-based model may be trained, and the content generated therefrom may be stored, without exposing sensitive data to entities outside of the secure environment. -
FIG. 4 illustrates an example method for using AI to identify ROI in medical images as described herein. In aspects, an application or service providing component ("application component"), such as application 210, may be located in a first computing environment, such as environment 221. The first computing environment may correspond to a cloud-based or web-based environment that may be publicly or selectively accessible. The first computing environment may provide access to a second computing environment, such as environment 201. The second computing environment may correspond to a secure, client environment of a healthcare facility or of another location comprising private or sensitive data. -
Example method 400 begins at operation 402, where a request to train a user-selected algorithm is detected. In aspects, a user in or accessing the first computing environment may access a user interface provided by the application component. The user interface may provide the user with an option to identify an algorithm to be trained to perform one or more tasks. Identifying the algorithm may comprise selecting an algorithm from a list of algorithms in an algorithm store, such as algorithm repository 212. Alternately, identifying an algorithm may comprise providing one or more algorithm characteristics (e.g., intended function or type/category) to the user interface. In response to a user identifying an algorithm, the application component may provide the identified algorithm, one or more data objects associated with the identified algorithm, and/or instructions for training the identified algorithm (collectively, "algorithm container"). - At
operation 404, the algorithm container may be provided to the second computing environment. In some aspects, the application component may send the algorithm container to one or more components in the second computing environment in response to receiving the user request. For example, the application component may send the algorithm container to an algorithm training orchestration component, such as orchestration engine 214, or to an algorithm training component, such as training engine 216. In other aspects, the algorithm training orchestration component of the second computing environment may monitor the application component in the first computing environment. Upon detecting that a request to train a user-selected algorithm has been received by the application component, the orchestration component or the algorithm training component may request the algorithm container from the application component or the algorithm store. In response to the request by the orchestration component, the application component may provide the algorithm container to the orchestration component or the algorithm training component. Alternately, the application component may provide information for the algorithm container (e.g., identifier, location/path, access credentials) to the orchestration component or algorithm training component. The orchestration component or algorithm training component may use the information for the algorithm container to retrieve the algorithm container. - At
operation 406, the algorithm container may be used to identify content related to the user-selected algorithm. In aspects, one or more identifiers (e.g., terms, phrases, topics, contexts) associated with the received algorithm container may be identified. The identifiers may be used to search a data repository, such as ROI repository 206, for content related (e.g., relevant) to the algorithm container. The content in the data repository may include, for example, labeled or otherwise annotated image data, unlabeled/unannotated image data, ROI feature vectors, terms and/or phrases describing ROI, and/or other medical data available in the second computing environment. Searching the data repository may include using pattern matching techniques, such as regular expressions, fuzzy logic, pattern recognition models, etc. Any content determined to be related to the algorithm container may be identified and extracted from the data repository. As one specific example, an algorithm container for detecting metastatic breast cancer may be titled as or comprise the term "Metastatic." Based on identifying the term "metastatic" in/for the algorithm container, image data comprising ROI that include instances of metastatic breast cancer may be identified. - At
operation 408, content related to the algorithm container may be used to train a model. In aspects, the algorithm container and/or the content related to the algorithm container may be provided as input to the training component in the second computing environment. The training component may use the input to train a model corresponding to the algorithm container. For example, the training component may use overlay image data in the related content to populate or otherwise configure one or more data objects in the algorithm container according to a set of instructions and/or parameters in the algorithm container. The populated/configured data objects may be used to construct a model representing the algorithm the user requested to be trained. In examples, the model may be trained such that data used to train the model in the second computing environment is not exposed to the first computing environment. - At
operation 410, the trained model may be provided to the first computing environment. In aspects, the orchestration component may receive or collect the trained model from the training component. The orchestration component may provide the trained model to the first computing environment. For example, the orchestration component may provide the trained model to the application component and/or to a model store of the first computing environment, such as model repository 218. Alternately, the training component may provide the trained model to the first computing environment. The trained model may be stored in the model store and/or presented to the user using the user interface. The user interface may enable the user to execute, modify, or otherwise interact with the trained model. - At
operation 412, the trained model may be evaluated. In aspects, the first computing environment or a component thereof, such as the application component, may comprise a test operating environment. The test operating environment may provide one or more tools for evaluating the trained model. The evaluation may include identifying performance metrics for the trained model and/or comparing the identified performance metrics to a set of baseline or default performance metrics. In some aspects, the test operating environment may enable the iterative training of a model. For example, after evaluating a trained model in the test operating environment, an updated algorithm container may be manually or automatically selected from the algorithm store or may otherwise be acquired. The updated algorithm container may be selected by, for example, the application component based on predefined testing constraints or according to a test script or executable test file for the selected algorithm or algorithm type. The trained model and the updated algorithm container may be provided to the training component in the second computing environment. The updated algorithm container may comprise an updated set of instructions and/or parameters for training the trained model. Based on the updated algorithm container, the training component may update/(re)train the trained model. The updated trained model may be provided to the first computing environment. The test operating environment may be used to evaluate performance metrics for the updated trained model. The performance metrics for the trained model and the performance metrics for the updated trained model may then be compared to determine which model (e.g., trained model or updated trained model) is more accurate. Based on the comparison, the most accurate model may be selected, and a newly updated algorithm container may be selected or obtained. 
The process may continue as described above until a set of performance metrics meeting or exceeding a threshold value/level is acquired, or until a defined set of criteria is met. -
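The iterative train/evaluate cycle of operation 412 can be summarized as a loop that keeps whichever model scores higher each round and stops once a metric threshold or round limit is reached. The callables below (`train`, `evaluate`, `next_container`) are hypothetical stand-ins for the training component, test operating environment, and algorithm store; nothing here is prescribed by the patent.

```python
def iterative_evaluation(train, evaluate, next_container, first_container,
                         threshold, max_rounds=10):
    """Train a model, then repeatedly retrain with updated algorithm
    containers, retaining the more accurate model each round, until a
    performance metric meets the threshold or the round limit is hit."""
    best_model = train(first_container)
    best_score = evaluate(best_model)       # baseline performance metrics
    container = first_container
    for _ in range(max_rounds):
        if best_score >= threshold:         # defined criteria met; stop
            break
        container = next_container(container)  # updated algorithm container
        candidate = train(container)           # (re)train in second environment
        score = evaluate(candidate)            # test operating environment
        if score > best_score:                 # keep the more accurate model
            best_model, best_score = candidate, score
    return best_model, best_score

# toy usage: each updated container adds model capacity until accuracy >= 0.9
model, score = iterative_evaluation(
    train=lambda c: {"capacity": c},
    evaluate=lambda m: min(1.0, m["capacity"] / 4),
    next_container=lambda c: c + 1,
    first_container=1,
    threshold=0.9,
)
```

The loop structure directly mirrors the description: evaluate, compare against the best-so-far, select a newly updated container, and repeat until the threshold value/level or another defined criterion is satisfied.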
FIG. 5 illustrates an exemplary operating environment suitable for the automated clinical workflow decision techniques described in FIG. 1. In its most basic configuration, operating environment 500 typically includes at least one processing unit 502 and memory 504. Depending on the exact configuration and type of computing device, memory 504 (storing instructions to perform the techniques disclosed herein) may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 5 by dashed line 506. Further, environment 500 may also include storage devices (removable, 508, and/or non-removable, 510) including, but not limited to, magnetic or optical disks or tape. Similarly, environment 500 may also have input device(s) 514 such as a keyboard, mouse, pen, voice input, etc. and/or output device(s) 516 such as a display, speakers, printer, etc. Also included in the environment may be one or more communication connections 512, such as LAN, WAN, point-to-point, etc. In embodiments, the connections may be operable to facilitate point-to-point communications, connection-oriented communications, connectionless communications, etc. -
Operating environment 500 typically includes at least some form of computer readable media. Computer readable media can be any available media that can be accessed by processing unit 502 or other devices comprising the operating environment. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium which can be used to store the desired information. Computer storage media does not include communication media. - Communication media embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, microwave, and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
- The operating
environment 500 may be a single computer operating in a networked environment using logical connections to one or more remote computers. The remote computer may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above, as well as others not mentioned. The logical connections may include any method supported by available communications media. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. - The embodiments described herein may be employed using software, hardware, or a combination of software and hardware to implement and perform the systems and methods disclosed herein. Although specific devices have been recited throughout the disclosure as performing specific functions, one of skill in the art will appreciate that these devices are provided for illustrative purposes, and other devices may be employed to perform the functionality disclosed herein without departing from the scope of the disclosure.
- This disclosure describes some embodiments of the present technology with reference to the accompanying drawings, in which only some of the possible embodiments are shown. Other aspects may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the possible embodiments to those skilled in the art.
- Although specific embodiments are described herein, the scope of the technology is not limited to those specific embodiments. One skilled in the art will recognize other embodiments or improvements that are within the scope and spirit of the present technology. Therefore, the specific structure, acts, or media are disclosed only as illustrative embodiments. The scope of the technology is defined by the following claims and any equivalents therein.
Claims (3)
1. A method for using AI to identify ROI in medical images, the method comprising:
detecting, by an orchestration engine in a first environment, a request to train an algorithm to identify the ROI, wherein the request is provided to a second environment external to the first environment;
receiving an algorithm container for the algorithm, wherein the algorithm container comprises one or more data objects for the algorithm;
identifying, in a ROI repository in the first environment, content related to the algorithm container, wherein the content comprises one or more images of the ROI;
training, by a training engine in the first environment, a model using the content; and
providing the trained model to the second environment.
2. A system for using AI to identify ROI in medical images, the system comprising:
a processor; and
memory coupled to the processor, the memory comprising computer executable instructions that, when executed, perform a method comprising:
detecting, by an orchestration engine in a first environment, a request to train an algorithm to identify the ROI, wherein the request is provided to a second environment external to the first environment;
receiving an algorithm container for the algorithm, wherein the algorithm container comprises one or more data objects for the algorithm;
identifying, in a ROI repository in the first environment, content related to the algorithm container, wherein the content comprises one or more images of the ROI;
training, by a training engine in the first environment, a model using the content; and
providing the trained model to the second environment.
3. A system for using AI to identify ROI in medical images, the system comprising:
a processor; and
memory coupled to the processor, the memory comprising computer executable instructions that, when executed, perform a method comprising:
receiving, by an application in a first environment, a request to train an algorithm to identify the ROI;
providing, to a second environment, an algorithm container for the algorithm, wherein the algorithm container comprises one or more data objects for the algorithm;
receiving a trained model from the second environment, wherein the trained model is based on the algorithm container; and
evaluating performance metrics of the trained model using a test operating environment of the second environment.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/531,177 US20220164951A1 (en) | 2020-11-20 | 2021-11-19 | Systems and methods for using ai to identify regions of interest in medical images |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202063116162P | 2020-11-20 | 2020-11-20 | |
| US17/531,177 US20220164951A1 (en) | 2020-11-20 | 2021-11-19 | Systems and methods for using ai to identify regions of interest in medical images |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20220164951A1 true US20220164951A1 (en) | 2022-05-26 |
Family
ID=81657157
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/531,177 Abandoned US20220164951A1 (en) | 2020-11-20 | 2021-11-19 | Systems and methods for using ai to identify regions of interest in medical images |
| US17/532,286 Active 2043-01-25 US12530860B2 (en) | 2020-11-20 | 2021-11-22 | Systems and methods for using AI to identify regions of interest in medical images |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/532,286 Active 2043-01-25 US12530860B2 (en) | 2020-11-20 | 2021-11-22 | Systems and methods for using AI to identify regions of interest in medical images |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US20220164951A1 (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230074950A1 (en) * | 2021-08-24 | 2023-03-09 | Nvidia Corporation | Object characterization using one or more neural networks |
| US11694792B2 (en) | 2019-09-27 | 2023-07-04 | Hologic, Inc. | AI system for predicting reading time and reading complexity for reviewing 2D/3D breast images |
| US11883206B2 (en) | 2019-07-29 | 2024-01-30 | Hologic, Inc. | Personalized breast imaging system |
| US20240095976A1 (en) * | 2022-09-20 | 2024-03-21 | United Imaging Intelligence (Beijing) Co., Ltd. | Systems and methods associated with breast tomosynthesis |
| US20240120114A1 (en) * | 2021-02-09 | 2024-04-11 | Talking Medicines Limited | Medicine evaluation system |
| US12530860B2 (en) | 2020-11-20 | 2026-01-20 | Hologic, Inc. | Systems and methods for using AI to identify regions of interest in medical images |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180018590A1 (en) * | 2016-07-18 | 2018-01-18 | NantOmics, Inc. | Distributed Machine Learning Systems, Apparatus, and Methods |
| US11853401B1 (en) * | 2018-06-05 | 2023-12-26 | Amazon Technologies, Inc. | Machine learning model creation via user-configured model building blocks |
Family Cites Families (564)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4054402B2 (en) | 1997-04-25 | 2008-02-27 | 株式会社東芝 | X-ray tomography equipment |
| US3502878A (en) | 1967-09-22 | 1970-03-24 | Us Health Education & Welfare | Automatic x-ray apparatus for limiting the field size of a projected x-ray beam in response to film size and to source-to-film distance |
| US7050611B2 (en) | 2001-05-29 | 2006-05-23 | Mevis Breastcare Gmbh Co. Kg | Method and computer system for screening of medical cases |
| US3863073A (en) | 1973-04-26 | 1975-01-28 | Machlett Lab Inc | Automatic system for precise collimation of radiation |
| US3908293A (en) | 1974-09-19 | 1975-09-30 | Stretch Devices Inc | Screen tensioning and printing frame |
| US3971950A (en) | 1975-04-14 | 1976-07-27 | Xerox Corporation | Independent compression and positioning device for use in mammography |
| US4160906A (en) | 1977-06-23 | 1979-07-10 | General Electric Company | Anatomically coordinated user dominated programmer for diagnostic x-ray apparatus |
| DE2838901C2 (en) | 1978-09-06 | 1986-11-06 | Siemens AG, 1000 Berlin und 8000 München | Catapult drawer |
| FR2512024A1 (en) | 1981-08-27 | 1983-03-04 | Adir | TRICYCLIC ETHERS, PREPARATION THEREOF AND PHARMACEUTICAL COMPOSITIONS CONTAINING THEM |
| FR2549248B1 (en) | 1983-06-24 | 1986-01-31 | Thomson Csf | RETRACTABLE CASSETTE HOLDER FOR RADIOLOGICAL AND RADIOGRAPHIC EXAMINATION APPARATUS |
| DE3339775A1 (en) | 1983-11-03 | 1985-05-15 | Siemens AG, 1000 Berlin und 8000 München | X-RAY DIAGNOSTIC DEVICE WITH RADIATION FILTERS |
| JPS60129034A (en) | 1983-12-16 | 1985-07-10 | 横河メディカルシステム株式会社 | Operation table of x-ray tomographic apparatus |
| US4706269A (en) | 1985-03-11 | 1987-11-10 | Reina Leo J | Anti-scatter grid structure |
| US4773087A (en) | 1986-04-14 | 1988-09-20 | University Of Rochester | Quality of shadowgraphic x-ray images |
| USRE33634E (en) | 1986-09-23 | 1991-07-09 | Method and structure for optimizing radiographic quality by controlling X-ray tube voltage, current focal spot size and exposure time | |
| US4821727A (en) | 1986-10-30 | 1989-04-18 | Elscint Ltd. | Mammographic biopsy needle holder system |
| US4819258A (en) | 1986-11-28 | 1989-04-04 | Bennett X-Ray Corp. | Auto-setting of KV in an x-ray machine after selection of technic factors |
| US4907156A (en) | 1987-06-30 | 1990-03-06 | University Of Chicago | Method and system for enhancement and detection of abnormal anatomic regions in a digital image |
| US5051904A (en) | 1988-03-24 | 1991-09-24 | Olganix Corporation | Computerized dynamic tomography system |
| US5740270A (en) | 1988-04-08 | 1998-04-14 | Neuromedical Systems, Inc. | Automated cytological specimen classification system and method |
| DK654488A (en) | 1988-11-23 | 1990-05-24 | Nordisk Roentgen Tech App | ROENTGENAPPARAT |
| US5099846A (en) | 1988-12-23 | 1992-03-31 | Hardy Tyrone L | Method and apparatus for video presentation from a variety of scanner imaging sources |
| FR2645006A1 (en) | 1989-03-29 | 1990-10-05 | Gen Electric Cgr | MAMMOGRAPH HAVING INTEGRATED STEREOTAXIC VIEWING DEVICE AND METHOD OF USING SUCH A MAMMOGRAPHER |
| FR2646340A1 (en) | 1989-04-28 | 1990-11-02 | Gen Electric Cgr | ADJUSTABLE CASSETTE HOLDER IN DIMENSION AND POSITION FOR MAMMOGRAPHY |
| DE58908415D1 (en) | 1989-07-03 | 1994-10-27 | Siemens Ag | X-ray diagnostic device for mammography images. |
| US5133020A (en) | 1989-07-21 | 1992-07-21 | Arch Development Corporation | Automated method and system for the detection and classification of abnormal lesions and parenchymal distortions in digital medical images |
| US4969174A (en) | 1989-09-06 | 1990-11-06 | General Electric Company | Scanning mammography system with reduced scatter radiation |
| CA2014918A1 (en) | 1989-09-06 | 1991-03-06 | James A. Mcfaul | Scanning mammography system with improved skin line viewing |
| US5240011A (en) | 1991-11-27 | 1993-08-31 | Fischer Imaging Corporation | Motorized biopsy needle positioner |
| US5415169A (en) | 1989-11-21 | 1995-05-16 | Fischer Imaging Corporation | Motorized mammographic biopsy apparatus |
| US5078142A (en) | 1989-11-21 | 1992-01-07 | Fischer Imaging Corporation | Precision mammographic needle biopsy system |
| WO1991007922A1 (en) | 1989-11-27 | 1991-06-13 | Bard International, Inc. | Puncture guide for computer tomography |
| US5199056A (en) | 1989-11-28 | 1993-03-30 | Darrah Carol J | Mammography compression paddle |
| US5481623A (en) | 1990-04-19 | 1996-01-02 | Fuji Photo Film Co., Ltd. | Apparatus for determining an image position on imaging media |
| FR2668359B1 (en) | 1990-10-24 | 1998-02-20 | Gen Electric Cgr | MAMMOGRAPH PROVIDED WITH A PERFECTED NEEDLE HOLDER. |
| US5409497A (en) | 1991-03-11 | 1995-04-25 | Fischer Imaging Corporation | Orbital aiming device for mammo biopsy |
| US5129911A (en) | 1991-03-11 | 1992-07-14 | Siczek Bernard W | Orbital aiming device |
| US5279309A (en) | 1991-06-13 | 1994-01-18 | International Business Machines Corporation | Signaling device and method for monitoring positions in a surgical operation |
| US5220867A (en) | 1991-07-15 | 1993-06-22 | Carpenter Robert C | Adjustable tension silk screen frame |
| US5163075A (en) | 1991-08-08 | 1992-11-10 | Eastman Kodak Company | Contrast enhancement of electrographic imaging |
| US5941832A (en) | 1991-09-27 | 1999-08-24 | Tumey; David M. | Method and apparatus for detection of cancerous and precancerous conditions in a breast |
| US5289520A (en) | 1991-11-27 | 1994-02-22 | Lorad Corporation | Stereotactic mammography imaging system with prone position examination table and CCD camera |
| US5594769A (en) | 1991-11-27 | 1997-01-14 | Thermotrex Corporation | Method and apparatus for obtaining stereotactic mammographic guided needle breast biopsies |
| US5289374A (en) | 1992-02-28 | 1994-02-22 | Arch Development Corporation | Method and system for analysis of false positives produced by an automated scheme for the detection of lung nodules in digital chest radiographs |
| US5343390A (en) | 1992-02-28 | 1994-08-30 | Arch Development Corporation | Method and system for automated selection of regions of interest and detection of septal lines in digital chest radiographs |
| US5359637A (en) | 1992-04-28 | 1994-10-25 | Wake Forest University | Self-calibrated tomosynthetic, radiographic-imaging system, method, and device |
| US5537485A (en) | 1992-07-21 | 1996-07-16 | Arch Development Corporation | Method for computer-aided detection of clustered microcalcifications from digital mammograms |
| US5386447A (en) | 1992-09-23 | 1995-01-31 | Fischer Imaging Corporation | Mammographic screening and biopsy apparatus |
| US5596200A (en) | 1992-10-14 | 1997-01-21 | Primex | Low dose mammography system |
| US5495576A (en) | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
| FR2703237B1 (en) | 1993-03-29 | 1995-05-19 | Ge Medical Syst Sa | Mammograph equipped with a stereotaxic camera with digital detector and method of using such a mammograph. |
| US5491627A (en) | 1993-05-13 | 1996-02-13 | Arch Development Corporation | Method and system for the detection of microcalcifications in digital mammograms |
| US5479603A (en) | 1993-07-21 | 1995-12-26 | Xerox Corporation | Method and apparatus for producing a composite second image in the spatial context of a first image |
| US5878746A (en) | 1993-08-25 | 1999-03-09 | Lemelson; Jerome H. | Computerized medical diagnostic system |
| US5365562A (en) | 1993-09-20 | 1994-11-15 | Fischer Imaging Corporation | Digital imaging apparatus |
| US6075879A (en) | 1993-09-29 | 2000-06-13 | R2 Technology, Inc. | Method and system for computer-aided lesion detection using information from multiple images |
| US5474072A (en) | 1993-10-29 | 1995-12-12 | Neovision Corporation | Methods and apparatus for performing sonomammography |
| US5526394A (en) | 1993-11-26 | 1996-06-11 | Fischer Imaging Corporation | Digital scan mammography apparatus |
| US5452367A (en) | 1993-11-29 | 1995-09-19 | Arch Development Corporation | Automated method and system for the segmentation of medical images |
| CA2113752C (en) | 1994-01-19 | 1999-03-02 | Stephen Michael Rooks | Inspection system for cross-sectional imaging |
| JP4127725B2 (en) | 1994-03-15 | 2008-07-30 | 株式会社東芝 | Processing time prediction system for hospital operations |
| DE4414689C2 (en) | 1994-04-26 | 1996-08-29 | Siemens Ag | X-ray diagnostic device |
| US5499097A (en) | 1994-09-19 | 1996-03-12 | Neopath, Inc. | Method and apparatus for checking automated optical system performance repeatability |
| AU3371395A (en) | 1994-09-20 | 1996-04-19 | Neopath, Inc. | Biological specimen analysis system processing integrity checking apparatus |
| US5557097A (en) | 1994-09-20 | 1996-09-17 | Neopath, Inc. | Cytological system autofocus integrity checking apparatus |
| US5647025A (en) | 1994-09-20 | 1997-07-08 | Neopath, Inc. | Automatic focusing of biomedical specimens apparatus |
| US5553111A (en) | 1994-10-26 | 1996-09-03 | The General Hospital Corporation | Apparatus and method for improved tissue imaging |
| US5649032A (en) | 1994-11-14 | 1997-07-15 | David Sarnoff Research Center, Inc. | System for automatically aligning images to form a mosaic image |
| US5712890A (en) | 1994-11-23 | 1998-01-27 | Thermotrex Corp. | Full breast digital mammography device |
| US5506877A (en) | 1994-11-23 | 1996-04-09 | The General Hospital Corporation | Mammography breast compression device and method |
| US5657362A (en) | 1995-02-24 | 1997-08-12 | Arch Development Corporation | Automated method and system for computerized detection of masses and parenchymal distortions in medical images |
| US5729471A (en) | 1995-03-31 | 1998-03-17 | The Regents Of The University Of California | Machine dynamic selection of one video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene |
| US5671288A (en) | 1995-05-31 | 1997-09-23 | Neopath, Inc. | Method and apparatus for assessing slide and specimen preparation quality |
| US6216540B1 (en) | 1995-06-06 | 2001-04-17 | Robert S. Nelson | High resolution device and method for imaging concealed objects within an obscuring medium |
| US5820623A (en) | 1995-06-20 | 1998-10-13 | Ng; Wan Sing | Articulated arm for medical procedures |
| US5642433A (en) | 1995-07-31 | 1997-06-24 | Neopath, Inc. | Method and apparatus for image contrast quality evaluation |
| US5642441A (en) | 1995-10-24 | 1997-06-24 | Neopath, Inc. | Separation apparatus and method for measuring focal plane |
| US5818898A (en) | 1995-11-07 | 1998-10-06 | Kabushiki Kaisha Toshiba | X-ray imaging apparatus using X-ray planar detector |
| US5693948A (en) | 1995-11-21 | 1997-12-02 | Loral Fairchild Corporation | Advanced CCD-based x-ray image sensor system |
| US5627869A (en) | 1995-11-22 | 1997-05-06 | Thermotrex Corporation | Mammography apparatus with proportional collimation |
| FI955636A0 (en) | 1995-11-23 | 1995-11-23 | Planmed Oy | Foerfarande och system Foer styrning av funktionerna av en mammografiaanordning |
| JP2000501184A (en) | 1995-11-30 | 2000-02-02 | クロマビジョン メディカル システムズ,インコーポレイテッド | Method and apparatus for automatic image analysis of biological specimens |
| US5769086A (en) | 1995-12-06 | 1998-06-23 | Biopsys Medical, Inc. | Control system and method for automated biopsy device |
| JPH09198490A (en) | 1996-01-22 | 1997-07-31 | Hitachi Medical Corp | Three-dimensional discrete data projector |
| JPH09238934A (en) | 1996-03-11 | 1997-09-16 | Toshiba Medical Eng Co Ltd | Image display system |
| DE19619915A1 (en) | 1996-05-17 | 1997-11-20 | Siemens Ag | Process for creating tomosynthesis images |
| DE19619925C2 (en) | 1996-05-17 | 1999-09-09 | Sirona Dental Systems Gmbh | X-ray diagnostic device for tomosynthesis |
| DE19619913C2 (en) | 1996-05-17 | 2001-03-15 | Sirona Dental Systems Gmbh | X-ray diagnostic device for tomosynthesis |
| DE19619924A1 (en) | 1996-05-17 | 1997-11-20 | Siemens Ag | Tomosynthetic image generating method |
| US5835079A (en) | 1996-06-13 | 1998-11-10 | International Business Machines Corporation | Virtual pointing device for touchscreens |
| US6067079A (en) | 1996-06-13 | 2000-05-23 | International Business Machines Corporation | Virtual pointing device for touchscreens |
| US5841124A (en) | 1996-06-19 | 1998-11-24 | Neopath, Inc. | Cytological system autofocus integrity checking apparatus |
| US6263092B1 (en) | 1996-07-10 | 2001-07-17 | R2 Technology, Inc. | Method and apparatus for fast detection of spiculated lesions in digital mammograms |
| US6198838B1 (en) | 1996-07-10 | 2001-03-06 | R2 Technology, Inc. | Method and system for detection of suspicious lesions in digital mammograms using a combination of spiculation and density signals |
| US5851180A (en) | 1996-07-12 | 1998-12-22 | United States Surgical Corporation | Traction-inducing compression assembly for enhanced tissue imaging |
| US6075905A (en) | 1996-07-17 | 2000-06-13 | Sarnoff Corporation | Method and apparatus for mosaic image construction |
| DE69739995D1 (en) | 1996-07-23 | 2010-10-28 | Gen Hospital Corp | ARRANGEMENT TO MAMMOGRAPHY BY TOMOSYNTHESIS |
| JPH1033523A (en) | 1996-07-24 | 1998-02-10 | Hitachi Medical Corp | X-ray ct device |
| US5776062A (en) | 1996-10-15 | 1998-07-07 | Fischer Imaging Corporation | Enhanced breast imaging/biopsy system employing targeted ultrasound |
| US5986662A (en) | 1996-10-16 | 1999-11-16 | Vital Images, Inc. | Advanced diagnostic viewer employing automated protocol selection for volume-rendered imaging |
| US6293282B1 (en) | 1996-11-05 | 2001-09-25 | Jerome Lemelson | System and method for treating select tissue in living being |
| US6104840A (en) | 1996-11-08 | 2000-08-15 | Ricoh Company, Ltd. | Method and system for generating a composite image from partially overlapping adjacent images taken along a plurality of axes |
| JP3878259B2 (en) | 1996-11-13 | 2007-02-07 | 東芝医用システムエンジニアリング株式会社 | Medical image processing device |
| US6137527A (en) | 1996-12-23 | 2000-10-24 | General Electric Company | System and method for prompt-radiology image screening service via satellite |
| US6314310B1 (en) | 1997-02-14 | 2001-11-06 | Biosense, Inc. | X-ray guided surgical location system with extended mapping volume |
| US7117098B1 (en) | 1997-02-27 | 2006-10-03 | Cellomics, Inc. | Machine-readable storage medium for analyzing distribution of macromolecules between the cell membrane and the cell cytoplasm |
| US6249616B1 (en) | 1997-05-30 | 2001-06-19 | Enroute, Inc | Combining digital images based on three-dimensional relationships between source image data sets |
| US6044181A (en) | 1997-08-01 | 2000-03-28 | Microsoft Corporation | Focal length estimation method and apparatus for construction of panoramic mosaic images |
| US5999639A (en) | 1997-09-04 | 1999-12-07 | Qualia Computing, Inc. | Method and system for automated detection of clustered microcalcifications from digital mammograms |
| US6128108A (en) | 1997-09-03 | 2000-10-03 | Mgi Software Corporation | Method and system for compositing images |
| US20030135115A1 (en) | 1997-11-24 | 2003-07-17 | Burdette Everette C. | Method and apparatus for spatial registration and mapping of a biopsy needle during a tissue biopsy |
| US6442288B1 (en) | 1997-12-17 | 2002-08-27 | Siemens Aktiengesellschaft | Method for reconstructing a three-dimensional image of an object scanned in the context of a tomosynthesis, and apparatus for tomosynthesis |
| JP3554172B2 (en) | 1998-01-09 | 2004-08-18 | キヤノン株式会社 | Radiography equipment |
| US6175117B1 (en) | 1998-01-23 | 2001-01-16 | Quanta Vision, Inc. | Tissue analysis apparatus |
| US6289235B1 (en) | 1998-03-05 | 2001-09-11 | Wake Forest University | Method and system for creating three-dimensional images using tomosynthetic computed tomography |
| US6081577A (en) | 1998-07-24 | 2000-06-27 | Wake Forest University | Method and system for creating task-dependent three-dimensional images |
| US6375352B1 (en) | 1999-10-01 | 2002-04-23 | General Electric Company | Apparatus and method for obtaining x-ray tomosynthesis data for mammography |
| US6141398A (en) | 1998-08-25 | 2000-10-31 | General Electric Company | Protocol driven image reconstruction, display, and processing in a multislice imaging system |
| US6359617B1 (en) | 1998-09-25 | 2002-03-19 | Apple Computer, Inc. | Blending arbitrary overlaying images into panoramas |
| US6101236A (en) | 1998-10-02 | 2000-08-08 | University Of Iowa Research Foundation | Iterative method and apparatus for x-ray computed tomographic fluoroscopy |
| EP1143845A4 (en) | 1998-11-25 | 2004-10-06 | Fischer Imaging Corp | USER INTERFACE SYSTEM FOR MAMMOGRAPHIC IMAGER |
| FR2786388B1 (en) | 1998-11-27 | 2001-02-16 | Ge Medical Syst Sa | METHOD FOR DETECTING FABRIC OF A SPECIFIC NATURE IN DIGITAL RADIOLOGY AND ITS USE FOR ADJUSTING THE EXPOSURE PARAMETERS |
| US6149301A (en) | 1998-12-30 | 2000-11-21 | General Electric Company | X-ray target centering apparatus for radiographic imaging system |
| JP2000200340A (en) | 1999-01-06 | 2000-07-18 | Ge Yokogawa Medical Systems Ltd | Method and device for displaying image and ct system |
| CN1338139A (en) | 1999-01-29 | 2002-02-27 | 美国超导体公司 | Utility power system with superconducting magnetic energy storage |
| US6424332B1 (en) | 1999-01-29 | 2002-07-23 | Hunter Innovations, Inc. | Image comparison apparatus and method |
| US6233473B1 (en) | 1999-02-16 | 2001-05-15 | Hologic, Inc. | Determining body composition using fan beam dual-energy x-ray absorptiometry |
| US6272207B1 (en) | 1999-02-18 | 2001-08-07 | Creatv Microtech, Inc. | Method and apparatus for obtaining high-resolution digital X-ray and gamma ray images |
| WO2000057767A2 (en) | 1999-03-31 | 2000-10-05 | Ultraguide Ltd. | Apparatus and methods for medical diagnostic and for medical guided interventions and therapy |
| US6256370B1 (en) | 2000-01-24 | 2001-07-03 | General Electric Company | Method and apparatus for performing tomosynthesis |
| US6689142B1 (en) | 1999-04-26 | 2004-02-10 | Scimed Life Systems, Inc. | Apparatus and methods for guiding a needle |
| US6292530B1 (en) | 1999-04-29 | 2001-09-18 | General Electric Company | Method and apparatus for reconstructing image data acquired by a tomosynthesis x-ray imaging system |
| DE19922346C2 (en) | 1999-05-14 | 2003-06-18 | Siemens Ag | X-ray diagnostic device for tomosynthesis or layering |
| US6243441B1 (en) | 1999-07-13 | 2001-06-05 | Edge Medical Devices | Active matrix detector for X-ray imaging |
| US20020173721A1 (en) | 1999-08-20 | 2002-11-21 | Novasonics, Inc. | User interface for handheld imaging devices |
| US6987831B2 (en) | 1999-11-18 | 2006-01-17 | University Of Rochester | Apparatus and method for cone beam volume computed tomography breast imaging |
| US6480565B1 (en) | 1999-11-18 | 2002-11-12 | University Of Rochester | Apparatus and method for cone beam volume computed tomography breast imaging |
| US6245028B1 (en) | 1999-11-24 | 2001-06-12 | Marconi Medical Systems, Inc. | Needle biopsy system |
| US6633674B1 (en) | 1999-11-24 | 2003-10-14 | General Electric Company | Picture archiving and communication system employing improved data compression |
| US6645520B2 (en) | 1999-12-16 | 2003-11-11 | Dermatrends, Inc. | Transdermal administration of nonsteroidal anti-inflammatory drugs using hydroxide-releasing agents as permeation enhancers |
| FR2803069B1 (en) | 1999-12-28 | 2002-12-13 | Ge Medical Syst Sa | METHOD AND SYSTEM FOR COMPENSATING THE THICKNESS OF AN ORGAN |
| US8352289B2 (en) | 1999-12-30 | 2013-01-08 | Dhi Computing, Inc. | Systems and methods for providing and maintaining electronic medical records |
| US6411836B1 (en) | 1999-12-30 | 2002-06-25 | General Electric Company | Method and apparatus for user preferences configuring in an image handling system |
| WO2001054463A1 (en) | 2000-01-24 | 2001-07-26 | Mamea Imaging Ab | Method and arrangement relating to an x-ray imaging apparatus |
| US6901156B2 (en) | 2000-02-04 | 2005-05-31 | Arch Development Corporation | Method, system and computer readable medium for an intelligent search workstation for computer assisted interpretation of medical images |
| US6744848B2 (en) | 2000-02-11 | 2004-06-01 | Brandeis University | Method and system for low-dose three-dimensional imaging of a scene |
| GB0006598D0 (en) | 2000-03-17 | 2000-05-10 | Isis Innovation | Three-dimensional reconstructions from images |
| US6725095B2 (en) | 2000-04-13 | 2004-04-20 | Celsion Corporation | Thermotherapy method for treatment and prevention of cancer in male and female patients and cosmetic ablation of tissue |
| US6351660B1 (en) | 2000-04-18 | 2002-02-26 | Litton Systems, Inc. | Enhanced visualization of in-vivo breast biopsy location for medical documentation |
| AU2000246082A1 (en) | 2000-05-21 | 2001-12-03 | Transscan Medical Ltd. | Apparatus for impedance imaging coupled with another modality |
| US6683934B1 (en) | 2000-06-05 | 2004-01-27 | General Electric Company | Dual energy x-ray imaging system and method for radiography and mammography |
| US6327336B1 (en) | 2000-06-05 | 2001-12-04 | Direct Radiography Corp. | Radiogram showing location of automatic exposure control sensor |
| US6389104B1 (en) | 2000-06-30 | 2002-05-14 | Siemens Corporate Research, Inc. | Fluoroscopy based 3-D neural navigation based on 3-D angiography reconstruction data |
| JP2002052018A (en) | 2000-08-11 | 2002-02-19 | Canon Inc | Image display device, image display method, and storage medium |
| US20020186818A1 (en) | 2000-08-29 | 2002-12-12 | Osteonet, Inc. | System and method for building and manipulating a centralized measurement value database |
| EP1267722A1 (en) | 2000-10-20 | 2003-01-02 | Koninklijke Philips Electronics N.V. | Tomosynthesis in a limited angular range |
| US6758824B1 (en) | 2000-11-06 | 2004-07-06 | Suros Surgical Systems, Inc. | Biopsy apparatus |
| WO2002069808A2 (en) | 2000-11-06 | 2002-09-12 | Suros Surgical Systems, Inc. | Biopsy apparatus |
| US6468226B1 (en) | 2000-11-22 | 2002-10-22 | Mcintyre, Iv John J. | Remote tissue biopsy apparatus and associated methods |
| US7103205B2 (en) | 2000-11-24 | 2006-09-05 | U-Systems, Inc. | Breast cancer screening with ultrasound image overlays |
| US7556602B2 (en) | 2000-11-24 | 2009-07-07 | U-Systems, Inc. | Breast cancer screening with adjunctive ultrasound mammography |
| US7597663B2 (en) | 2000-11-24 | 2009-10-06 | U-Systems, Inc. | Adjunctive ultrasound processing and display for breast cancer screening |
| US7615008B2 (en) | 2000-11-24 | 2009-11-10 | U-Systems, Inc. | Processing and displaying breast ultrasound information |
| US6650928B1 (en) | 2000-11-27 | 2003-11-18 | Ge Medical Systems Global Technology Company, Llc | Color parametric and composite maps for CT perfusion |
| US6501819B2 (en) | 2000-12-18 | 2002-12-31 | Ge Medical Systems Global Technology Company, Llc | Medical diagnostic method and apparatus to control dual energy exposure techniques based on image information |
| FR2818116B1 (en) | 2000-12-19 | 2004-08-27 | Ge Med Sys Global Tech Co Llc | Mammography apparatus |
| WO2002052507A1 (en) | 2000-12-22 | 2002-07-04 | Koninklijke Philips Electronics N.V. | Stereoscopic viewing of a region between clipping planes |
| US6463181B2 (en) | 2000-12-22 | 2002-10-08 | The United States Of America As Represented By The Secretary Of The Navy | Method for optimizing visual display of enhanced digital images |
| US7914453B2 (en) | 2000-12-28 | 2011-03-29 | Ardent Sound, Inc. | Visual imaging system for ultrasonic probe |
| WO2002065480A1 (en) | 2001-02-01 | 2002-08-22 | Creatv Microtech, Inc. | Anti-scatter grids and collimator designs, and their motion, fabrication and assembly |
| US7030861B1 (en) | 2001-02-10 | 2006-04-18 | Wayne Carl Westerman | System and method for packing multi-touch gestures onto a hand |
| US6486764B2 (en) | 2001-02-16 | 2002-11-26 | Delphi Technologies, Inc. | Rotary position sensor |
| US20020188466A1 (en) | 2001-04-18 | 2002-12-12 | Barrette Pierre Philip | Secure digital medical intellectual property (IP) distribution, market applications, and mobile devices |
| US6620111B2 (en) | 2001-04-20 | 2003-09-16 | Ethicon Endo-Surgery, Inc. | Surgical biopsy device having automatic rotation of the probe for taking multiple samples |
| US6965793B2 (en) | 2001-06-28 | 2005-11-15 | Chemimage Corporation | Method for Raman chemical imaging of endogenous chemicals to reveal tissue lesion boundaries in tissue |
| US6611575B1 (en) | 2001-07-27 | 2003-08-26 | General Electric Company | Method and system for high resolution 3D visualization of mammography images |
| US20030048260A1 (en) | 2001-08-17 | 2003-03-13 | Alec Matusis | System and method for selecting actions based on the identification of user's fingers |
| AU2002332758A1 (en) | 2001-08-31 | 2003-03-18 | Analogic Corporation | Image positioning method and system for tomosynthesis in a digital x-ray radiography system |
| US20030072478A1 (en) | 2001-10-12 | 2003-04-17 | Claus Bernhard Erich Hermann | Reconstruction method for tomosynthesis |
| WO2003037046A2 (en) | 2001-10-19 | 2003-05-01 | Hologic, Inc. | Mammography system and method employing offset compression paddles, automatic collimation, and retractable anti-scatter grid |
| US6626849B2 (en) | 2001-11-01 | 2003-09-30 | Ethicon Endo-Surgery, Inc. | MRI compatible surgical biopsy device |
| DE60135559D1 (en) | 2001-11-19 | 2008-10-09 | St Microelectronics Srl | Method for mixing digital images to produce a digital image with extended dynamic range |
| US7054473B1 (en) | 2001-11-21 | 2006-05-30 | R2 Technology, Inc. | Method and apparatus for an improved computer aided diagnosis system |
| US20030097055A1 (en) | 2001-11-21 | 2003-05-22 | Philips Medical Systems(Cleveland), Inc. | Method of reviewing tomographic scans with a large number of images |
| US6751285B2 (en) | 2001-11-21 | 2004-06-15 | General Electric Company | Dose management system for mammographic tomosynthesis |
| US6895077B2 (en) | 2001-11-21 | 2005-05-17 | University Of Massachusetts Medical Center | System and method for x-ray fluoroscopic imaging |
| JP4099984B2 (en) | 2001-12-14 | 2008-06-11 | コニカミノルタホールディングス株式会社 | Abnormal shadow detection apparatus and image output apparatus |
| US6978040B2 (en) | 2001-12-19 | 2005-12-20 | Canon Kabushiki Kaisha | Optical recovery of radiographic geometry |
| US6647092B2 (en) | 2002-01-18 | 2003-11-11 | General Electric Company | Radiation imaging system and method of collimation |
| FR2835731B1 (en) | 2002-02-12 | 2004-10-22 | Ge Med Sys Global Tech Co Llc | Mammography apparatus |
| SE524458C2 (en) | 2002-03-01 | 2004-08-10 | Mamea Imaging Ab | Protective device for an X-ray apparatus |
| US7346381B2 (en) | 2002-11-01 | 2008-03-18 | Ge Medical Systems Global Technology Company Llc | Method and apparatus for medical intervention procedure planning |
| US6878115B2 (en) | 2002-03-28 | 2005-04-12 | Ultrasound Detection Systems, Llc | Three-dimensional ultrasound computed tomography imaging system |
| US6882700B2 (en) | 2002-04-15 | 2005-04-19 | General Electric Company | Tomosynthesis X-ray mammogram system and method with automatic drive system |
| US7218766B2 (en) | 2002-04-15 | 2007-05-15 | General Electric Company | Computer aided detection (CAD) for 3D digital mammography |
| US20030194050A1 (en) | 2002-04-15 | 2003-10-16 | General Electric Company | Multi modality X-ray and nuclear medicine mammography imaging system and method |
| US7139000B2 (en) | 2002-05-13 | 2006-11-21 | Ge Medical Systems Global Technology Company, Llc | Method, system and computer product for displaying axial images |
| US7295691B2 (en) | 2002-05-15 | 2007-11-13 | Ge Medical Systems Global Technology Company, Llc | Computer aided diagnosis of an image set |
| US11275405B2 (en) | 2005-03-04 | 2022-03-15 | Apple Inc. | Multi-functional hand-held device |
| US7599579B2 (en) | 2002-07-11 | 2009-10-06 | Ge Medical Systems Global Technology Company, Llc | Interpolated image filtering method and apparatus |
| US7450747B2 (en) | 2002-07-12 | 2008-11-11 | Ge Medical Systems Global Technology Company, Llc | System and method for efficiently customizing an imaging system |
| US7134080B2 (en) | 2002-08-23 | 2006-11-07 | International Business Machines Corporation | Method and system for a user-following interface |
| US20040036680A1 (en) | 2002-08-26 | 2004-02-26 | Mark Davis | User-interface features for computers with contact-sensitive displays |
| US6898331B2 (en) | 2002-08-28 | 2005-05-24 | Bae Systems Aircraft Controls, Inc. | Image fusion system and method |
| US6748044B2 (en) | 2002-09-13 | 2004-06-08 | Ge Medical Systems Global Technology Company, Llc | Computer assisted analysis of tomographic mammography data |
| US6574304B1 (en) | 2002-09-13 | 2003-06-03 | Ge Medical Systems Global Technology Company, Llc | Computer aided acquisition of medical images |
| US6940943B2 (en) | 2002-10-07 | 2005-09-06 | General Electric Company | Continuous scan tomosynthesis system and method |
| US7347829B2 (en) | 2002-10-07 | 2008-03-25 | Suros Surgical Systems, Inc. | Introduction system for minimally invasive surgical instruments |
| US6825838B2 (en) | 2002-10-11 | 2004-11-30 | Sonocine, Inc. | 3D modeling system |
| US8594410B2 (en) | 2006-08-28 | 2013-11-26 | Definiens Ag | Context driven image mining to generate image-based biomarkers |
| US7366333B2 (en) | 2002-11-11 | 2008-04-29 | Art, Advanced Research Technologies, Inc. | Method and apparatus for selecting regions of interest in optical imaging |
| KR100443552B1 (en) | 2002-11-18 | 2004-08-09 | 한국전자통신연구원 | System and method for embodying virtual reality |
| US20040171933A1 (en) | 2002-11-25 | 2004-09-02 | Milton Stoller | Mammography needle biopsy system and method |
| US8571289B2 (en) | 2002-11-27 | 2013-10-29 | Hologic, Inc. | System and method for generating a 2D image from a tomosynthesis data set |
| US7760924B2 (en) | 2002-11-27 | 2010-07-20 | Hologic, Inc. | System and method for generating a 2D image from a tomosynthesis data set |
| US7577282B2 (en) | 2002-11-27 | 2009-08-18 | Hologic, Inc. | Image handling and display in X-ray mammography and tomosynthesis |
| US7831296B2 (en) | 2002-11-27 | 2010-11-09 | Hologic, Inc. | X-ray mammography with tomosynthesis |
| US7616801B2 (en) | 2002-11-27 | 2009-11-10 | Hologic, Inc. | Image handling and display in x-ray mammography and tomosynthesis |
| US7123684B2 (en) | 2002-11-27 | 2006-10-17 | Hologic, Inc. | Full field mammography with tissue exposure control, tomosynthesis, and dynamic field of view processing |
| US6597762B1 (en) | 2002-11-27 | 2003-07-22 | Ge Medical Systems Global Technology Co., Llc | Method and apparatus of lesion detection and validation based on multiple reviews of a CT image |
| US7406150B2 (en) | 2002-11-29 | 2008-07-29 | Hologic, Inc. | Distributed architecture for mammographic image acquisition and processing |
| US7110490B2 (en) | 2002-12-10 | 2006-09-19 | General Electric Company | Full field digital tomosynthesis method and apparatus |
| US7904824B2 (en) | 2002-12-10 | 2011-03-08 | Siemens Medical Solutions Usa, Inc. | Medical imaging programmable custom user interface system and method |
| US7634308B2 (en) | 2002-12-17 | 2009-12-15 | Kabushiki Kaisha Toshiba | Method and system for X-ray diagnosis of object in which X-ray contrast agent is injected |
| US7356113B2 (en) | 2003-02-12 | 2008-04-08 | Brandeis University | Tomosynthesis imaging system and method |
| US7333644B2 (en) | 2003-03-11 | 2008-02-19 | Siemens Medical Solutions Usa, Inc. | Systems and methods for providing automatic 3D lesion segmentation and measurements |
| JP4497837B2 (en) | 2003-05-12 | 2010-07-07 | キヤノン株式会社 | Radiation imaging equipment |
| US7912528B2 (en) | 2003-06-25 | 2011-03-22 | Siemens Medical Solutions Usa, Inc. | Systems and methods for automated diagnosis and decision support for heart related diseases and conditions |
| WO2005001740A2 (en) | 2003-06-25 | 2005-01-06 | Siemens Medical Solutions Usa, Inc. | Systems and methods for automated diagnosis and decision support for breast imaging |
| US6885724B2 (en) | 2003-08-22 | 2005-04-26 | Ge Medical Systems Global Technology Company, Llc | Radiographic tomosynthesis image acquisition utilizing asymmetric geometry |
| US8090164B2 (en) | 2003-08-25 | 2012-01-03 | The University Of North Carolina At Chapel Hill | Systems, methods, and computer program products for analysis of vessel attributes for diagnosis, disease staging, and surgical planning |
| US7424141B2 (en) | 2003-08-29 | 2008-09-09 | Agilent Technologies, Inc. | System and method for performing auto-focused tomosynthesis |
| US7578781B2 (en) | 2003-09-18 | 2009-08-25 | Wisconsin Alumni Research Foundation | Device for placement of needles and radioactive seeds in radiotherapy |
| US7869862B2 (en) | 2003-10-15 | 2011-01-11 | Varian Medical Systems, Inc. | Systems and methods for functional imaging using contrast-enhanced multiple-energy computed tomography |
| US20050089205A1 (en) | 2003-10-23 | 2005-04-28 | Ajay Kapur | Systems and methods for viewing an abnormality in different kinds of images |
| JP2005149107A (en) | 2003-11-14 | 2005-06-09 | Konica Minolta Medical & Graphic Inc | Medical image management system |
| DE10353611B4 (en) | 2003-11-17 | 2013-01-17 | Siemens Aktiengesellschaft | X-ray diagnostic device for mammography examinations |
| US20050108643A1 (en) | 2003-11-17 | 2005-05-19 | Nokia Corporation | Topographic presentation of media files in a media diary application |
| US8768026B2 (en) | 2003-11-26 | 2014-07-01 | Hologic, Inc. | X-ray imaging with x-ray markers that provide adjunct information but preserve image quality |
| US8265728B2 (en) | 2003-11-26 | 2012-09-11 | University Of Chicago | Automated method and system for the evaluation of disease and registration accuracy in the subtraction of temporally sequential medical images |
| ATE551660T1 (en) | 2003-11-26 | 2012-04-15 | Koninkl Philips Electronics Nv | OPTIMIZING WORKFLOW FOR A HIGH-THROUGHPUT IMAGING ENVIRONMENT |
| US20070118550A1 (en) | 2003-11-27 | 2007-05-24 | Yang Guo L | Method and apparatus for building a multi-discipline and multi-media personal medical image library |
| US7727151B2 (en) | 2003-11-28 | 2010-06-01 | U-Systems Inc. | Navigation among multiple breast ultrasound volumes |
| US7773721B2 (en) | 2003-12-03 | 2010-08-10 | The General Hospital Corporation | Multi-segment cone-beam reconstruction system and method for tomosynthesis imaging |
| US9237929B2 (en) | 2003-12-22 | 2016-01-19 | Koninklijke Philips N.V. | System for guiding a medical instrument in a patient body |
| US20050135555A1 (en) | 2003-12-23 | 2005-06-23 | Claus Bernhard Erich H. | Method and system for simultaneously viewing rendered volumes |
| US7653229B2 (en) | 2003-12-23 | 2010-01-26 | General Electric Company | Methods and apparatus for reconstruction of volume data from projection data |
| US7787936B2 (en) | 2004-01-23 | 2010-08-31 | Traxyz Medical, Inc. | Methods and apparatus for performing procedures on target locations in the body |
| US7298881B2 (en) | 2004-02-13 | 2007-11-20 | University Of Chicago | Method, system, and computer software product for feature-based correlation of lesions from multiple images |
| US7289825B2 (en) | 2004-03-15 | 2007-10-30 | General Electric Company | Method and system for utilizing wireless voice technology within a radiology workflow |
| US7142633B2 (en) | 2004-03-31 | 2006-11-28 | General Electric Company | Enhanced X-ray imaging system and method |
| US7699783B2 (en) | 2004-04-08 | 2010-04-20 | Techniscan, Inc. | Method for imaging and treating a breast |
| EP1750584B1 (en) | 2004-05-14 | 2020-10-14 | Philips Intellectual Property & Standards GmbH | System and method for diagnosing breast cancer |
| GB0411402D0 (en) | 2004-05-21 | 2004-06-23 | Tissuomics Ltd | Penetrating radiation measurements |
| EP1782168A4 (en) | 2004-07-23 | 2009-01-07 | Learning Tree Int Inc | System and method for electronic presentations |
| US7835562B2 (en) | 2004-07-23 | 2010-11-16 | General Electric Company | Methods and apparatus for noise reduction filtering of images |
| FR2873835A1 (en) | 2004-07-29 | 2006-02-03 | Gen Electric | Method and device for X-ray imaging with contrast agent for enhanced visualization |
| WO2006020874A2 (en) | 2004-08-10 | 2006-02-23 | The Research Foundation | Flat-panel detector with avalanche gain |
| US20060074287A1 (en) | 2004-09-30 | 2006-04-06 | General Electric Company | Systems, methods and apparatus for dual mammography image detection |
| US7725153B2 (en) | 2004-10-04 | 2010-05-25 | Hologic, Inc. | Estimating visceral fat by dual-energy x-ray absorptiometry |
| US7505555B2 (en) | 2004-11-02 | 2009-03-17 | Biolucent, Llc | Pads for mammography and methods for making and using them |
| EP1815388B1 (en) | 2004-11-15 | 2013-03-06 | Hologic, Inc. | Matching geometry generation and display of mammograms and tomosynthesis images |
| US7869563B2 (en) | 2004-11-26 | 2011-01-11 | Hologic, Inc. | Integrated multi-mode mammography/tomosynthesis x-ray system and method |
| WO2006062802A2 (en) | 2004-12-08 | 2006-06-15 | Wang Paul C | Device for non-surgical correction of congenital inverted nipples and/or collection of nipple aspirate fluid |
| US20060132508A1 (en) | 2004-12-16 | 2006-06-22 | Navid Sadikali | Multi-planar image viewing system and method |
| US7616793B2 (en) | 2004-12-30 | 2009-11-10 | Hologic, Inc. | Medical image review workstation with integrated content-based resource retrieval |
| US9760214B2 (en) | 2005-02-23 | 2017-09-12 | Zienon, Llc | Method and apparatus for data entry input |
| US7859549B2 (en) | 2005-03-08 | 2010-12-28 | Agfa Inc. | Comparative image review system and method |
| US20060210131A1 (en) | 2005-03-15 | 2006-09-21 | Wheeler Frederick W Jr | Tomographic computer aided diagnosis (CAD) with multiple reconstructions |
| JP5038643B2 (en) | 2005-04-06 | 2012-10-03 | 株式会社東芝 | Image display device |
| US8373652B2 (en) | 2005-04-06 | 2013-02-12 | Kabushiki Kaisha Toshiba | Image display apparatus and image display method |
| US7517318B2 (en) | 2005-04-26 | 2009-04-14 | Biosense Webster, Inc. | Registration of electro-anatomical map with pre-acquired image using ultrasound |
| WO2006116700A2 (en) | 2005-04-28 | 2006-11-02 | Bruce Reiner | Method and apparatus for automated quality assurance in medical imaging |
| US10492749B2 (en) | 2005-05-03 | 2019-12-03 | The Regents Of The University Of California | Biopsy systems for breast computed tomography |
| DE102005022543A1 (en) | 2005-05-17 | 2006-11-23 | Siemens Ag | Mammography procedure and mammography device |
| US7606801B2 (en) | 2005-06-07 | 2009-10-20 | Varonis Inc. | Automatic management of storage access control |
| US7809175B2 (en) | 2005-07-01 | 2010-10-05 | Hologic, Inc. | Displaying and navigating computer-aided detection results on a review workstation |
| AU2006272553B2 (en) | 2005-07-25 | 2012-07-12 | U-Systems, Inc. | Compressive surfaces for ultrasonic tissue scanning |
| US7245694B2 (en) | 2005-08-15 | 2007-07-17 | Hologic, Inc. | X-ray mammography/tomosynthesis of patient's breast |
| US7889896B2 (en) | 2005-08-18 | 2011-02-15 | Hologic, Inc. | Patient worklist management in digital radiography review workstations |
| US8081165B2 (en) | 2005-08-30 | 2011-12-20 | Jesterrad, Inc. | Multi-functional navigational device and method |
| DE202005013910U1 (en) | 2005-09-02 | 2005-11-24 | Siemens Ag | Mammography unit has face shield moving within X-ray source head to provide withdrawn, protruding and transport positions |
| US20070052700A1 (en) | 2005-09-07 | 2007-03-08 | Wheeler Frederick W | System and method for 3D CAD using projection images |
| US10008184B2 (en) | 2005-11-10 | 2018-06-26 | Hologic, Inc. | System and method for generating a 2D image using mammography and/or tomosynthesis image data |
| US7342233B2 (en) | 2005-11-18 | 2008-03-11 | Sectra Mamea Ab | Method and arrangement relating to x-ray imaging |
| US20070118400A1 (en) | 2005-11-22 | 2007-05-24 | General Electric Company | Method and system for gesture recognition to drive healthcare applications |
| US8014576B2 (en) | 2005-11-23 | 2011-09-06 | The Medipattern Corporation | Method and system of computer-aided quantitative and qualitative analysis of medical images |
| US20070236490A1 (en) | 2005-11-25 | 2007-10-11 | Agfa-Gevaert | Medical image display and review system |
| DE102005058006A1 (en) | 2005-12-05 | 2007-06-06 | Siemens Ag | Method and peer network for determining the originating stage of a file within the peer network |
| EP1966762A2 (en) | 2005-12-29 | 2008-09-10 | Carestream Health, Inc. | Cross-time and cross-modality medical diagnosis |
| US20070156451A1 (en) | 2006-01-05 | 2007-07-05 | Gering David T | System and method for portable display of relevant healthcare information |
| US7581399B2 (en) | 2006-01-05 | 2009-09-01 | United Technologies Corporation | Damped coil pin for attachment hanger hinge |
| WO2007095330A2 (en) | 2006-02-15 | 2007-08-23 | Hologic Inc | Breast biopsy and needle localization using tomosynthesis systems |
| US20070223651A1 (en) | 2006-03-21 | 2007-09-27 | Wagenaar Douglas J | Dual modality mammography device |
| US7489761B2 (en) | 2006-03-27 | 2009-02-10 | Hologic, Inc. | Breast compression for digital mammography, tomosynthesis and other modalities |
| US8948845B2 (en) | 2006-03-31 | 2015-02-03 | Koninklijke Philips N.V. | System, methods, and instrumentation for image guided prostate treatment |
| DE602007012886D1 (en) | 2006-04-12 | 2011-04-14 | Nassir Navab | Virtual penetrating mirror for visualizing virtual objects in angiographic applications |
| WO2007120904A2 (en) | 2006-04-14 | 2007-10-25 | Fuzzmed, Inc. | System, method, and device for personal medical care, intelligent analysis, and diagnosis |
| US7945083B2 (en) | 2006-05-25 | 2011-05-17 | Carestream Health, Inc. | Method for supporting diagnostic workflow from a medical imaging apparatus |
| JP2007330334A (en) | 2006-06-12 | 2007-12-27 | Toshiba Corp | X-ray imaging apparatus and method thereof |
| US7974924B2 (en) | 2006-07-19 | 2011-07-05 | Mvisum, Inc. | Medical data encryption for communication over a vulnerable system |
| CN100444800C (en) | 2006-07-25 | 2008-12-24 | 倪湘申 | X-ray puncture positioning device and method for minimally invasive surgery |
| US20090080602A1 (en) | 2006-08-03 | 2009-03-26 | Kenneth Brooks | Dedicated breast radiation imaging/therapy system |
| WO2008018054A2 (en) | 2006-08-08 | 2008-02-14 | Keter Medical Ltd. | Imaging system |
| US20080043036A1 (en) | 2006-08-16 | 2008-02-21 | Mevis Breastcare Gmbh & Co. Kg | Method, apparatus and computer program for presenting cases comprising images |
| US8160677B2 (en) | 2006-09-08 | 2012-04-17 | Medtronic, Inc. | Method for identification of anatomical landmarks |
| JP2008068032A (en) | 2006-09-15 | 2008-03-27 | Toshiba Corp | Image display device |
| US20080139896A1 (en) | 2006-10-13 | 2008-06-12 | Siemens Medical Solutions Usa, Inc. | System and Method for Graphical Annotation of Anatomical Images Using a Touch Screen Display |
| CN101529475B (en) | 2006-10-17 | 2013-12-25 | 皇家飞利浦电子股份有限公司 | Presentation of 3D images in combination with 2D projection images |
| US8538776B2 (en) | 2006-10-25 | 2013-09-17 | Bruce Reiner | Method and apparatus of providing a radiation scorecard |
| JP4851296B2 (en) | 2006-10-26 | 2012-01-11 | 富士フイルム株式会社 | Radiation tomographic image acquisition apparatus and radiation tomographic image acquisition method |
| US20080114614A1 (en) | 2006-11-15 | 2008-05-15 | General Electric Company | Methods and systems for healthcare application interaction using gesture-based interaction enhanced with pressure sensitivity |
| US8280488B2 (en) | 2006-11-24 | 2012-10-02 | Huisman Henkjan J | Processing and displaying dynamic contrast-enhanced magnetic resonance imaging information |
| US7769219B2 (en) | 2006-12-11 | 2010-08-03 | Cytyc Corporation | Method for assessing image focus quality |
| US8044972B2 (en) | 2006-12-21 | 2011-10-25 | Sectra Mamea Ab | Synchronized viewing of tomosynthesis and/or mammograms |
| US8051386B2 (en) | 2006-12-21 | 2011-11-01 | Sectra Ab | CAD-based navigation of views of medical image data stacks or volumes |
| JP5052123B2 (en) | 2006-12-27 | 2012-10-17 | 富士フイルム株式会社 | Medical imaging system and method |
| US8091045B2 (en) | 2007-01-07 | 2012-01-03 | Apple Inc. | System and method for managing lists |
| US7676019B2 (en) | 2007-01-31 | 2010-03-09 | Sectra Mamea Ab | Compression arrangement |
| US10682107B2 (en) | 2007-01-31 | 2020-06-16 | Philips Digital Mammography Sweden Ab | Method and arrangement relating to x-ray imaging |
| JP4888165B2 (en) | 2007-03-12 | 2012-02-29 | 富士ゼロックス株式会社 | Image processing apparatus and program |
| US8155417B2 (en) | 2007-03-27 | 2012-04-10 | Hologic, Inc. | Post-acquisition adaptive reconstruction of MRI data |
| JP5656339B2 (en) | 2007-03-28 | 2015-01-21 | Jsr株式会社 | Protein-immobilized carrier and method for producing the same |
| US9597041B2 (en) | 2007-03-30 | 2017-03-21 | General Electric Company | Sequential image acquisition with updating method and system |
| US7936341B2 (en) | 2007-05-30 | 2011-05-03 | Microsoft Corporation | Recognizing selection regions from multiple simultaneous inputs |
| US9427201B2 (en) | 2007-06-30 | 2016-08-30 | Accuray Incorporated | Non-invasive method for using 2D angiographic images for radiosurgical target definition |
| WO2009012576A1 (en) | 2007-07-20 | 2009-01-29 | Resonant Medical Inc. | Methods and systems for guiding the acquisition of ultrasound images |
| FR2919747B1 (en) | 2007-08-02 | 2009-11-06 | Gen Electric | Method and system for displaying tomosynthesis images |
| WO2009026587A1 (en) | 2007-08-23 | 2009-02-26 | Fischer Medical Technologies, Inc. | Improved computed tomography breast imaging and biopsy system |
| CN101861563B (en) | 2007-09-14 | 2014-03-12 | 松下航空电子公司 | Portable user control device and method for vehicle information system |
| US8126226B2 (en) | 2007-09-20 | 2012-02-28 | General Electric Company | System and method to generate a selected visualization of a radiological image of an imaged subject |
| US7630533B2 (en) | 2007-09-20 | 2009-12-08 | Hologic, Inc. | Breast tomosynthesis with display of highlighted suspected calcifications |
| US7929743B2 (en) | 2007-10-02 | 2011-04-19 | Hologic, Inc. | Displaying breast tomosynthesis computer-aided detection results |
| JP5159242B2 (en) | 2007-10-18 | 2013-03-06 | キヤノン株式会社 | Diagnosis support device, diagnosis support device control method, and program thereof |
| EP2207483B1 (en) | 2007-10-19 | 2016-06-01 | Metritrack, Inc. | Three dimensional mapping display system for diagnostic ultrasound machines and method |
| US8107700B2 (en) | 2007-11-21 | 2012-01-31 | Merge Cad Inc. | System and method for efficient workflow in reading medical image data |
| FR2924010B1 (en) | 2007-11-23 | 2010-11-26 | Gen Electric | Improvements on mammographic devices |
| US20090138280A1 (en) | 2007-11-26 | 2009-05-28 | The General Electric Company | Multi-stepped default display protocols |
| US20090175408A1 (en) | 2007-12-04 | 2009-07-09 | Goodsitt Mitchell M | Compression paddle and methods for using the same in various medical procedures |
| ES2576644T3 (en) | 2007-12-21 | 2016-07-08 | Koning Corporation | Conical beam apparatus for CT imaging |
| JP5328146B2 (en) | 2007-12-25 | 2013-10-30 | キヤノン株式会社 | Medical image processing apparatus, medical image processing method and program |
| EP2231011A1 (en) | 2007-12-31 | 2010-09-29 | Real Imaging Ltd. | System and method for registration of imaging data |
| US20090167702A1 (en) | 2008-01-02 | 2009-07-02 | Nokia Corporation | Pointing device detection |
| JP5294654B2 (en) | 2008-02-29 | 2013-09-18 | 富士フイルム株式会社 | Image display method and apparatus |
| US20160328998A1 (en) | 2008-03-17 | 2016-11-10 | Worcester Polytechnic Institute | Virtual interactive system for ultrasound training |
| JP5558672B2 (en) | 2008-03-19 | 2014-07-23 | 株式会社東芝 | Image processing apparatus and X-ray computed tomography apparatus |
| KR100977385B1 (en) | 2008-04-10 | 2010-08-20 | 주식회사 팬택 | Mobile terminal capable of controlling a widget-type idle screen and a standby screen control method using the same |
| US20110178389A1 (en) | 2008-05-02 | 2011-07-21 | Eigen, Inc. | Fused image modalities guidance |
| US20100177053A2 (en) | 2008-05-09 | 2010-07-15 | Taizo Yasutake | Method and apparatus for control of multiple degrees of freedom of a display |
| BRPI0908290A2 (en) | 2008-05-09 | 2015-07-21 | Koninkl Philips Electronics Nv | Guideline-based clinical decision support system (CDSS) |
| JP5224451B2 (en) | 2008-06-03 | 2013-07-03 | 富士フイルム株式会社 | Projection image creation apparatus, method and program |
| US8031835B2 (en) | 2008-08-07 | 2011-10-04 | Xcision Medical Systems Llc | Method and system for translational digital tomosynthesis mammography |
| US9848849B2 (en) | 2008-08-21 | 2017-12-26 | General Electric Company | System and method for touch screen control of an ultrasound system |
| US7991106B2 (en) | 2008-08-29 | 2011-08-02 | Hologic, Inc. | Multi-mode tomosynthesis/mammography gain calibration and image correction using gain map information from selected projection angles |
| KR20110063659A (en) | 2008-09-04 | 2011-06-13 | 홀로직, 인크. | Integrated multi-mode mammography/tomosynthesis x-ray system and method |
| US8284170B2 (en) | 2008-09-30 | 2012-10-09 | Apple Inc. | Touch screen device, method, and graphical user interface for moving on-screen objects without using a cursor |
| US20100088346A1 (en) | 2008-10-08 | 2010-04-08 | General Electric Company | Method and system for attaching objects to a data repository |
| US7940891B2 (en) | 2008-10-22 | 2011-05-10 | Varian Medical Systems, Inc. | Methods and systems for treating breast cancer using external beam radiation |
| US20100131482A1 (en) | 2008-11-26 | 2010-05-27 | General Electric Company | Adaptive user interface systems and methods for healthcare applications |
| US8543415B2 (en) | 2008-11-26 | 2013-09-24 | General Electric Company | Mobile medical device image and series navigation |
| US8782552B2 (en) | 2008-11-28 | 2014-07-15 | Sinan Batman | Active overlay system and method for accessing and manipulating imaging displays |
| US8547402B2 (en) | 2009-10-07 | 2013-10-01 | Hologic, Inc. | Displaying computer-aided detection information with associated breast tomosynthesis image information |
| US9146663B2 (en) | 2008-12-08 | 2015-09-29 | Hologic, Inc. | Displaying computer-aided detection information with associated breast tomosynthesis image information |
| JP2010137004A (en) | 2008-12-15 | 2010-06-24 | Fujifilm Corp | Radiation image processing system and processing method |
| US8184890B2 (en) | 2008-12-26 | 2012-05-22 | Three Palm Software | Computer-aided diagnosis and visualization of tomosynthesis mammography data |
| US8942342B2 (en) | 2008-12-29 | 2015-01-27 | Analogic Corporation | Multi-modality image acquisition |
| JP2010188003A (en) | 2009-02-19 | 2010-09-02 | Fujifilm Corp | Image displaying system and image capturing and displaying system |
| JP5373450B2 (en) | 2009-03-31 | 2013-12-18 | 富士フイルム株式会社 | Biopsy device and method of operating biopsy device |
| US8300023B2 (en) | 2009-04-10 | 2012-10-30 | Qualcomm Incorporated | Virtual keypad generator with learning capabilities |
| US9113124B2 (en) | 2009-04-13 | 2015-08-18 | Linkedin Corporation | Method and system for still image capture from video footage |
| US8217357B2 (en) | 2009-04-13 | 2012-07-10 | Hologic, Inc. | Integrated breast X-ray and molecular imaging system |
| US9198640B2 (en) | 2009-05-06 | 2015-12-01 | Real Imaging Ltd. | System and methods for providing information related to a tissue region of a subject |
| US8366619B2 (en) | 2009-05-13 | 2013-02-05 | University Of Washington | Nodule screening using ultrasound elastography |
| US8677282B2 (en) | 2009-05-13 | 2014-03-18 | International Business Machines Corporation | Multi-finger touch adaptations for medical imaging systems |
| US9386942B2 (en) | 2009-06-26 | 2016-07-12 | Cianna Medical, Inc. | Apparatus, systems, and methods for localizing markers or tissue structures within a body |
| US8639056B2 (en) | 2009-06-29 | 2014-01-28 | Thomson Licensing | Contrast enhancement |
| EP2454720B1 (en) | 2009-07-17 | 2019-11-27 | Koninklijke Philips N.V. | Multi-modality breast imaging |
| FR2948481B1 (en) | 2009-07-27 | 2012-01-20 | Gen Electric | Imaging method for the production of a triple-energy model, and device for implementing such a method |
| US8644644B2 (en) | 2009-09-14 | 2014-02-04 | Adobe Systems Incorporated | Methods and apparatus for blending images |
| JP5572440B2 (en) | 2009-09-15 | 2014-08-13 | Fujifilm Corp | Diagnosis support system, diagnosis support program, and diagnosis support method |
| KR101616874B1 (en) | 2009-09-23 | 2016-05-02 | Samsung Electronics Co., Ltd. | Method and apparatus for blending multiple images |
| CN102648485B (en) | 2009-10-05 | 2015-10-07 | Koninklijke Philips Electronics N.V. | Interactive selection of a volume of interest in an image |
| EP2485651B1 (en) | 2009-10-08 | 2020-12-23 | Hologic, Inc. | Needle breast biopsy system |
| SG181057A1 (en) | 2009-11-25 | 2012-07-30 | Realtime Radiology Inc | System and method for management and distribution of diagnostic imaging |
| US9289183B2 (en) | 2009-11-27 | 2016-03-22 | Qview Medical, Inc. | Interactive display of computer aided detection results in combination with quantitative prompts |
| EP2503934B1 (en) | 2009-11-27 | 2021-10-20 | Hologic, Inc. | Systems and methods for tracking positions between imaging modalities and transforming a displayed three-dimensional image corresponding to a position and orientation of a probe |
| US20120014578A1 (en) | 2010-07-19 | 2012-01-19 | Qview Medical, Inc. | Computer Aided Detection Of Abnormalities In Volumetric Breast Ultrasound Scans And User Interface |
| EP2513865A2 (en) | 2009-12-17 | 2012-10-24 | Koninklijke Philips Electronics N.V. | Reconstructing an object of interest |
| US8027582B2 (en) | 2009-12-21 | 2011-09-27 | Sony Corporation | Autofocus with confidence measure |
| US9451924B2 (en) | 2009-12-30 | 2016-09-27 | General Electric Company | Single screen multi-modality imaging displays |
| US9201627B2 (en) | 2010-01-05 | 2015-12-01 | Rovi Guides, Inc. | Systems and methods for transferring content between user equipment and a wireless communications device |
| WO2011091300A2 (en) | 2010-01-24 | 2011-07-28 | Mistretta Medical, Llc | System and method for implementation of 4d time-energy subtraction computed tomography |
| US8559590B2 (en) | 2010-01-28 | 2013-10-15 | Varian Medical Systems, Inc. | Imaging breast cancerous lesions with microcalcifications |
| DE102010009295B4 (en) | 2010-02-25 | 2019-02-21 | Siemens Healthcare Gmbh | Method for displaying a region to be examined and / or treated |
| JP5340213B2 (en) | 2010-03-30 | 2013-11-13 | Fujifilm Corp | Image display system |
| US20110268339A1 (en) | 2010-04-30 | 2011-11-03 | Lana Volokh | Systems and methods for determining a location of a lesion in a breast |
| WO2011148371A1 (en) | 2010-05-23 | 2011-12-01 | Technion Research And Development Foundation Ltd. | Detection, staging and grading of benign and malignant tumors |
| US20110310126A1 (en) | 2010-06-22 | 2011-12-22 | Emil Markov Georgiev | Method and system for interacting with datasets for display |
| US9392960B2 (en) | 2010-06-24 | 2016-07-19 | Uc-Care Ltd. | Focused prostate cancer treatment system and method |
| US9782134B2 (en) | 2010-06-28 | 2017-10-10 | Koninklijke Philips N.V. | Lesion imaging optimization using a tomosynthesis/biopsy system |
| JP5654787B2 (en) | 2010-06-30 | 2015-01-14 | Fujifilm Corp | Radiographic imaging display method and system |
| KR101687971B1 (en) | 2010-07-19 | 2016-12-21 | Samsung Electronics Co., Ltd. | Apparatus and method for checking breast cancer |
| WO2012019162A1 (en) | 2010-08-06 | 2012-02-09 | Accuray, Inc. | Systems and methods for real-time tumor tracking during radiation treatment using ultrasound imaging |
| JP5650467B2 (en) | 2010-08-27 | 2015-01-07 | Fujifilm Corp | Radiation imaging system |
| JP2012061196A (en) | 2010-09-17 | 2012-03-29 | Fujifilm Corp | Tomographic image displaying method and apparatus |
| DE102010041920A1 (en) | 2010-10-04 | 2012-04-05 | Siemens Aktiengesellschaft | Method for representing concentration of contrast agent in predetermined volume portion of female breast, involves subtracting two dimensional low-energy image of female breast from two dimensional high-energy image |
| US20130222383A1 (en) | 2010-11-12 | 2013-08-29 | Hitachi Medical Corporation | Medical image display device and medical image display method |
| CA2817364C (en) | 2010-11-18 | 2018-10-02 | Hologic, Inc. | Table for performing medical procedures |
| US9146674B2 (en) | 2010-11-23 | 2015-09-29 | Sectra Ab | GUI controls with movable touch-control objects for alternate interactions |
| US8465413B2 (en) | 2010-11-25 | 2013-06-18 | Coloplast A/S | Method of treating Peyronie's disease |
| US9075903B2 (en) | 2010-11-26 | 2015-07-07 | Hologic, Inc. | User interface for medical image review workstation |
| WO2012073164A1 (en) | 2010-12-03 | 2012-06-07 | Koninklijke Philips Electronics N.V. | Device and method for ultrasound imaging |
| JP5170226B2 (en) | 2010-12-10 | 2013-03-27 | Casio Computer Co., Ltd. | Image processing apparatus, image processing method, and program |
| EP2651308B1 (en) | 2010-12-14 | 2020-03-11 | Hologic, Inc. | System and method for fusing three dimensional image data from a plurality of different imaging systems for use in diagnostic imaging |
| DE102011003137A1 (en) | 2011-01-25 | 2012-07-26 | Siemens Aktiengesellschaft | Imaging method with an improved representation of a tissue area |
| US9392986B2 (en) | 2011-02-14 | 2016-07-19 | University Of Rochester | Method and apparatus for cone beam breast CT image-based computer-aided detection and diagnosis |
| FR2971412B1 (en) | 2011-02-15 | 2014-01-17 | Gen Electric | Method of acquiring the morphology of a breast |
| IT1404617B1 (en) | 2011-02-25 | 2013-11-29 | I M S Internaz Medicoscientifica S R L | Equipment for tomosynthesis and mammography |
| EP2684157B1 (en) | 2011-03-08 | 2017-12-13 | Hologic Inc. | System and method for dual energy and/or contrast enhanced breast imaging for screening, diagnosis and biopsy |
| DE202011004071U1 (en) | 2011-03-16 | 2011-07-27 | Siemens Aktiengesellschaft | Compression plate for tomosynthesis |
| US20120256920A1 (en) | 2011-04-05 | 2012-10-11 | Julian Marshall | System and Method for Fusing Computer Assisted Detection in a Multi-Modality, Multi-Dimensional Breast Imaging Environment |
| US20120259230A1 (en) | 2011-04-11 | 2012-10-11 | Elven Riley | Tool for recording patient wound history |
| US8526763B2 (en) | 2011-05-27 | 2013-09-03 | Adobe Systems Incorporated | Seamless image composition |
| JP6134315B2 (en) | 2011-06-27 | 2017-05-24 | Koninklijke Philips N.V. | Anatomical tagging of findings in image data |
| CN102855483B (en) | 2011-06-30 | 2017-09-12 | Beijing Samsung Telecommunication Technology Research Co., Ltd. | Method and apparatus for processing ultrasound images, and breast cancer diagnosis apparatus |
| KR101477543B1 (en) | 2011-07-22 | 2014-12-31 | Samsung Electronics Co., Ltd. | Apparatus and method of photographing using X-ray |
| WO2013035026A1 (en) | 2011-09-07 | 2013-03-14 | Koninklijke Philips Electronics N.V. | Interactive live segmentation with automatic selection of optimal tomography slice |
| JP5439453B2 (en) | 2011-10-20 | 2014-03-12 | Toshiba Corp | Image display device |
| DE102011087127B4 (en) | 2011-11-25 | 2015-11-19 | Siemens Aktiengesellschaft | Determination of acquisition parameters in a dual-energy tomosynthesis |
| EP2782505B1 (en) | 2011-11-27 | 2020-04-22 | Hologic, Inc. | System and method for generating a 2d image using mammography and/or tomosynthesis image data |
| US9317920B2 (en) | 2011-11-30 | 2016-04-19 | Rush University Medical Center | System and methods for identification of implanted medical devices and/or detection of retained surgical foreign objects from medical images |
| US11109835B2 (en) | 2011-12-18 | 2021-09-07 | Metritrack Llc | Three dimensional mapping display system for diagnostic ultrasound machines |
| US8594407B2 (en) | 2012-02-03 | 2013-11-26 | Siemens Aktiengesellschaft | Plane-by-plane iterative reconstruction for digital breast tomosynthesis |
| EP3315072B1 (en) | 2012-02-13 | 2020-04-29 | Hologic, Inc. | System and method for navigating a tomosynthesis stack using synthesized image data |
| JP5745444B2 (en) | 2012-03-05 | 2015-07-08 | Fujifilm Corp | Medical image display device, medical image display method, and medical image display program |
| CN104160424B (en) | 2012-03-08 | 2017-09-19 | Koninklijke Philips N.V. | Smart landmark selection for improving registration accuracy in multi-modality image fusion |
| US12070365B2 (en) | 2012-03-28 | 2024-08-27 | Navigate Surgical Technologies, Inc | System and method for determining the three-dimensional location and orientation of identification markers |
| JP5244250B1 (en) | 2012-03-28 | 2013-07-24 | Panasonic Corp | Power supply device |
| US8842806B2 (en) | 2012-04-03 | 2014-09-23 | Carestream Health, Inc. | Apparatus and method for breast imaging |
| ITBO20120227A1 (en) | 2012-04-24 | 2013-10-25 | I M S Internaz Medicoscientifica S R L | Equipment to carry out an examination on the breast of a patient |
| JP2013244211A (en) | 2012-05-25 | 2013-12-09 | Toshiba Corp | Medical image processor, medical image processing method and control program |
| DE102012213910A1 (en) | 2012-08-06 | 2014-02-06 | Siemens Aktiengesellschaft | Control module and method for perspective determination in the rendering of medical image data sets |
| KR101479212B1 (en) | 2012-09-05 | 2015-01-06 | Samsung Electronics Co., Ltd. | X-ray image apparatus and x-ray image forming method |
| US8983156B2 (en) | 2012-11-23 | 2015-03-17 | Icad, Inc. | System and method for improving workflow efficiences in reading tomosynthesis medical image data |
| US9113781B2 (en) | 2013-02-07 | 2015-08-25 | Siemens Aktiengesellschaft | Method and system for on-site learning of landmark detection models for end user-specific diagnostic medical image reading |
| EP2967473B1 (en) | 2013-03-15 | 2020-02-19 | Hologic, Inc. | System and method for navigating a tomosynthesis stack including automatic focusing |
| EP2967479B1 (en) | 2013-03-15 | 2018-01-31 | Hologic Inc. | Tomosynthesis-guided biopsy in prone |
| KR102078335B1 (en) | 2013-05-03 | 2020-02-17 | Samsung Electronics Co., Ltd. | Medical imaging apparatus and control method for the same |
| US9129362B2 (en) | 2013-05-22 | 2015-09-08 | Siemens Aktiengesellschaft | Semantic navigation and lesion mapping from digital breast tomosynthesis |
| US10134148B2 (en) | 2013-05-30 | 2018-11-20 | H. Lee Moffitt Cancer Center And Research Institute, Inc. | Method of assessing breast density for breast cancer risk applications |
| EP3014577A1 (en) | 2013-06-28 | 2016-05-04 | Koninklijke Philips N.V. | Methods for generation of edge-preserving synthetic mammograms from tomosynthesis data |
| KR20150010515A (en) | 2013-07-19 | 2015-01-28 | Samsung Electronics Co., Ltd. | Apparatus and method for photographing a medical image |
| JP6253323B2 (en) | 2013-09-26 | 2017-12-27 | Canon Inc. | Subject information acquisition apparatus and control method thereof |
| US9668699B2 (en) | 2013-10-17 | 2017-06-06 | Siemens Healthcare Gmbh | Method and system for anatomical object detection using marginal space deep neural networks |
| KR102340594B1 (en) | 2013-10-24 | 2021-12-20 | Andrew P. Smith | System and method for navigating x-ray guided breast biopsy |
| CN105682556A (en) | 2013-10-30 | 2016-06-15 | Koninklijke Philips N.V. | Optimization of x-ray imaging during mammographic examination |
| US10978184B2 (en) | 2013-11-04 | 2021-04-13 | Terarecon, Inc. | Evolving contextual clinical data engine for medical information |
| US20150149206A1 (en) | 2013-11-27 | 2015-05-28 | General Electric Company | Systems and methods for intelligent radiology work allocation |
| WO2015092604A1 (en) | 2013-12-18 | 2015-06-25 | Koninklijke Philips N.V. | System and method for ultrasound and computed tomography image registration for sonothrombolysis treatment |
| US10835204B2 (en) | 2014-01-02 | 2020-11-17 | Metritrack, Inc. | System and method for tracking completeness of co-registered medical image data |
| US9152761B2 (en) | 2014-01-10 | 2015-10-06 | Heartflow, Inc. | Systems and methods for identifying medical image acquisition parameters |
| US10610182B2 (en) | 2014-01-15 | 2020-04-07 | Alara Systems, Inc | Converting low-dose to higher dose 3D tomosynthesis images through machine-learning processes |
| JP6495003B2 (en) | 2014-02-04 | 2019-04-03 | Canon Medical Systems Corp | Medical image processing apparatus, medical image diagnostic apparatus, and medical image processing method |
| CA2941001C (en) | 2014-02-28 | 2018-02-06 | 3DBiopsy LLC | Biopsy needle actuator assembly |
| WO2015130916A1 (en) | 2014-02-28 | 2015-09-03 | Hologic, Inc. | System and method for generating and displaying tomosynthesis image slabs |
| JP2017513662A (en) | 2014-03-28 | 2017-06-01 | Intuitive Surgical Operations, Inc. | Alignment of Q3D image with 3D image |
| EP2926736B1 (en) | 2014-03-31 | 2020-06-17 | Esaote S.p.A. | Apparatus and method for ultrasound image acquisition, generation and display |
| US10340041B2 (en) | 2014-05-09 | 2019-07-02 | Acupath Laboratories, Inc. | Biopsy mapping tools |
| US20150331995A1 (en) | 2014-05-14 | 2015-11-19 | Tiecheng Zhao | Evolving contextual clinical data engine for medical data processing |
| CA2891983A1 (en) | 2014-05-30 | 2015-11-30 | Intelerad Medical Systems Incorporated | Method and system for selecting readers for the analysis of radiology orders using due-in-time requirements of radiology orders |
| US20150375399A1 (en) | 2014-06-27 | 2015-12-31 | Hansen Medical, Inc. | User interface for medical robotics system |
| US20160000399A1 (en) | 2014-07-02 | 2016-01-07 | General Electric Company | Method and apparatus for ultrasound needle guidance |
| JP6026591B2 (en) | 2014-08-01 | 2016-11-16 | Canon Marketing Japan Inc. | Interpretation request management system and control method therefor, interpretation request management apparatus and control method therefor, and program |
| US9782152B2 (en) | 2014-08-18 | 2017-10-10 | Vanderbilt University | Method and system for real-time compression correction for tracked ultrasound and applications of same |
| US9569864B2 (en) | 2014-09-12 | 2017-02-14 | Siemens Aktiengesellschaft | Method and apparatus for projection image generation from tomographic images |
| WO2016057960A1 (en) | 2014-10-10 | 2016-04-14 | Radish Medical Solutions, Inc. | Apparatus, system and method for cloud based diagnostics and image archiving and retrieval |
| US10603002B2 (en) | 2014-11-07 | 2020-03-31 | Hologic, Inc. | Pivoting paddle apparatus for mammography/tomosynthesis X-ray system |
| US9855014B2 (en) | 2014-12-16 | 2018-01-02 | General Electric Company | Compression paddle for use in breast imaging |
| EP3236859B1 (en) | 2014-12-24 | 2021-03-31 | Koninklijke Philips N.V. | Needle trajectory prediction for target biopsy |
| US10613637B2 (en) | 2015-01-28 | 2020-04-07 | Medtronic, Inc. | Systems and methods for mitigating gesture input error |
| US20160228068A1 (en) | 2015-02-10 | 2016-08-11 | General Electric Company | Quality assurance for mri-guided breast biopsy |
| JP6383321B2 (en) | 2015-04-08 | 2018-08-29 | Exmedio Inc. | Diagnosis support system |
| JP6843073B2 (en) | 2015-05-18 | 2021-03-17 | Koninklijke Philips N.V. | Accuracy feedback during procedure for image-guided biopsy |
| WO2016193025A1 (en) | 2015-06-04 | 2016-12-08 | Koninklijke Philips N.V. | System and method for precision diagnosis and therapy augmented by cancer grade maps |
| US10169863B2 (en) | 2015-06-12 | 2019-01-01 | International Business Machines Corporation | Methods and systems for automatically determining a clinical image or portion thereof for display to a diagnosing physician |
| US20180214714A1 (en) | 2015-08-13 | 2018-08-02 | Siris Medical, Inc. | Result-driven radiation therapy treatment planning |
| US20180286504A1 (en) | 2015-09-28 | 2018-10-04 | Koninklijke Philips N.V. | Challenge value icons for radiology report selection |
| WO2017054775A1 (en) | 2015-09-30 | 2017-04-06 | Shanghai United Imaging Healthcare Co., Ltd. | System and method for determining a breast region in a medical image |
| US10762624B2 (en) | 2015-10-02 | 2020-09-01 | Curemetrix, Inc. | Cancer detection systems and methods |
| KR101822404B1 (en) | 2015-11-30 | 2018-01-26 | 임욱빈 | diagnostics system for cell using Deep Neural Network learning |
| US20170185904A1 (en) * | 2015-12-29 | 2017-06-29 | 24/7 Customer, Inc. | Method and apparatus for facilitating on-demand building of predictive models |
| US10127660B2 (en) | 2015-12-30 | 2018-11-13 | Case Western Reserve University | Radiomic features on diagnostic magnetic resonance enterography |
| US10586173B2 (en) * | 2016-01-27 | 2020-03-10 | Bonsai AI, Inc. | Searchable database of trained artificial intelligence objects that can be reused, reconfigured, and recomposed, into one or more subsequent artificial intelligence models |
| ES2634027B1 (en) | 2016-02-26 | 2018-07-02 | General Equipment For Medical Imaging, S.A. | Mobile molecular imaging system and intervention system comprising it |
| US9943280B2 (en) | 2016-03-07 | 2018-04-17 | General Electric Company | Breast tomosynthesis with flexible compression paddle |
| EP3427192A4 (en) | 2016-03-11 | 2019-03-27 | Magic Leap, Inc. | Structural learning in convolutional neural networks |
| US10413366B2 (en) | 2016-03-16 | 2019-09-17 | Synaptive Medical (Barbados) Inc. | Trajectory guidance alignment system and methods |
| JP6744123B2 (en) | 2016-04-26 | 2020-08-19 | Hitachi, Ltd. | Moving object tracking device and radiation irradiation system |
| TWI609674B (en) | 2016-05-12 | 2018-01-01 | Taihao Medical Inc. | Breast image matching method and image processing apparatus |
| BR102016011525A2 (en) | 2016-05-20 | 2017-12-05 | Pulse Participações S.A | System and related methods for correlating medical data with conditions of diagnosis and follow-up of the health treatment of patients monitored in real time |
| EP3465539B1 (en) | 2016-05-27 | 2024-04-17 | Hologic, Inc. | Synchronized surface and internal tumor detection |
| EP3472741A4 (en) | 2016-06-17 | 2020-01-01 | Algotec Systems Ltd. | System and method for sequencing medical imaging tasks |
| US9589374B1 (en) | 2016-08-01 | 2017-03-07 | 12 Sigma Technologies | Computer-aided diagnosis system for medical images using deep convolutional neural networks |
| US11610687B2 (en) | 2016-09-06 | 2023-03-21 | Merative Us L.P. | Automated peer review of medical imagery |
| US10733566B1 (en) | 2016-11-11 | 2020-08-04 | Iodine Software, LLC | High fidelity clinical documentation improvement (CDI) smart scoring systems and methods |
| US20180144244A1 (en) | 2016-11-23 | 2018-05-24 | Vital Images, Inc. | Distributed clinical workflow training of deep learning neural networks |
| EP3326535B1 (en) | 2016-11-25 | 2019-06-12 | ScreenPoint Medical | Displaying system for displaying digital breast tomosynthesis data |
| WO2018183549A1 (en) | 2017-03-30 | 2018-10-04 | Hologic, Inc. | System and method for synthesizing low-dimensional image data from high-dimensional image data using an object grid enhancement |
| EP3600052A1 (en) | 2017-03-30 | 2020-02-05 | Hologic, Inc. | System and method for targeted object enhancement to generate synthetic breast tissue images |
| CN110621231B (en) | 2017-03-30 | 2024-02-23 | Hologic, Inc. | Systems and methods for hierarchical multi-level feature image synthesis and representation |
| WO2018221689A1 (en) | 2017-06-01 | 2018-12-06 | Nidek Co., Ltd. | Medical information processing system |
| US11403483B2 (en) | 2017-06-20 | 2022-08-02 | Hologic, Inc. | Dynamic self-learning medical image method and system |
| NL2019410B1 (en) | 2017-08-10 | 2019-02-21 | Aidence B V | Computer-aided diagnostics using deep neural networks |
| US11361868B2 (en) | 2017-08-16 | 2022-06-14 | The Johns Hopkins University | Abnormal tissue detection via modal upstream data fusion |
| US10838505B2 (en) | 2017-08-25 | 2020-11-17 | Qualcomm Incorporated | System and method for gesture recognition |
| EP3685350B1 (en) | 2017-09-22 | 2025-07-16 | Nview Medical Inc. | Image reconstruction using machine learning regularizers |
| US11839507B2 (en) | 2017-11-08 | 2023-12-12 | Koninklijke Philips N.V. | Ultrasound system and method for correlation between ultrasound breast images and breast images of other imaging modalities |
| US10629305B2 (en) | 2017-11-09 | 2020-04-21 | General Electric Company | Methods and apparatus for self-learning clinical decision support |
| WO2019102917A1 (en) | 2017-11-21 | 2019-05-31 | Fujifilm Corp | Radiologist determination device, method, and program |
| US10831519B2 (en) * | 2017-11-22 | 2020-11-10 | Amazon Technologies, Inc. | Packaging and deploying algorithms for flexible machine learning |
| US10679345B2 (en) | 2017-12-20 | 2020-06-09 | International Business Machines Corporation | Automatic contour annotation of medical images based on correlations with medical reports |
| EP3509013B1 (en) | 2018-01-04 | 2025-12-24 | Augmedics Inc. | Identification of a predefined object in a set of images from a medical image scanner during a surgical procedure |
| EP3511941A1 (en) | 2018-01-12 | 2019-07-17 | Siemens Healthcare GmbH | Method and system for evaluating medical examination results of a patient, computer program and electronically readable storage medium |
| ES2973313T3 (en) | 2018-01-31 | 2024-06-19 | Fujifilm Corp | Ultrasound diagnostic device and control method for ultrasound diagnostic device |
| KR102014359B1 (en) | 2018-02-20 | 2019-08-26 | Hutom Co., Ltd. | Method and apparatus for providing camera location using surgical video |
| JP6832880B2 (en) | 2018-03-02 | 2021-02-24 | Fujifilm Corp | Learning data creation support device, learning data creation support method, and learning data creation support program |
| US12059580B2 (en) | 2018-03-08 | 2024-08-13 | Seetreat Pty Ltd | Method and system for guided radiation therapy |
| JP2019169049A (en) | 2018-03-26 | Fujifilm Corp | Medical image identification apparatus, method and program |
| JP2019170794A (en) | 2018-03-29 | Shimadzu Corp | Fluoroscope and fluoroscopic method |
| CN111936989A (en) | 2018-03-29 | 2020-11-13 | Google LLC | Similar medical image search |
| US11564769B2 (en) | 2018-04-27 | 2023-01-31 | St. Jude Medical International Holdings Sarl | Apparatus for fiducial-association as part of extracting projection parameters relative to a 3D coordinate system |
| CN108492874A (en) | 2018-04-28 | 2018-09-04 | Jiangsu Yixiang Information Technology Co., Ltd. | Intelligent critical-value diagnosis and treatment system |
| US12121304B2 (en) | 2018-05-04 | 2024-10-22 | Hologic, Inc. | Introducer and localization wire visualization |
| WO2019213532A1 (en) | 2018-05-04 | 2019-11-07 | Hologic, Inc. | Biopsy needle visualization |
| EP3801270B1 (en) | 2018-05-25 | 2025-09-10 | Hologic, Inc. | Systems for pivoting compression paddles |
| US11126649B2 (en) | 2018-07-11 | 2021-09-21 | Google Llc | Similar image search for radiology |
| EP3830748A4 (en) | 2018-08-02 | 2022-04-27 | Imedis AI Ltd | Systems and methods for enhanced medical imaging report generation and analysis |
| JP7223539B2 (en) | 2018-09-25 | 2023-02-16 | Canon Medical Systems Corp | Breast cancer diagnosis support device, breast cancer diagnosis support system, and breast cancer diagnosis support method |
| FR3088188A1 (en) | 2018-11-12 | 2020-05-15 | Pixee Medical | Cutting device for laying a knee prosthesis |
| US10755412B2 (en) | 2018-11-20 | 2020-08-25 | International Business Machines Corporation | Automated patient complexity classification for artificial intelligence tools |
| US11145059B2 (en) | 2018-11-21 | 2021-10-12 | Enlitic, Inc. | Medical scan viewing system with enhanced training and methods for use therewith |
| JP7664157B2 (en) | 2018-11-23 | 2025-04-17 | iCAD, Inc. | Systems and methods for assessing breast cancer risk using images |
| US10957442B2 (en) * | 2018-12-31 | 2021-03-23 | GE Precision Healthcare, LLC | Facilitating artificial intelligence integration into systems using a distributed learning platform |
| US20200286613A1 (en) | 2019-03-04 | 2020-09-10 | Hologic, Inc. | Detecting tube output roll off |
| US11499834B2 (en) | 2019-03-07 | 2022-11-15 | Mobileye Vision Technologies Ltd. | Aligning road information for navigation |
| JP7023254B2 (en) | 2019-03-27 | 2022-02-21 | Fujifilm Corp | Imaging support apparatus, method, and program |
| US10977796B2 (en) | 2019-03-29 | 2021-04-13 | Fujifilm Medical Systems U.S.A., Inc. | Platform for evaluating medical information and method for using the same |
| EP3962240A4 (en) | 2019-04-23 | 2022-06-22 | Shanghai United Imaging Healthcare Co., Ltd. | Method, system and device for acquiring radiological image, and storage medium |
| US11883206B2 (en) | 2019-07-29 | 2024-01-30 | Hologic, Inc. | Personalized breast imaging system |
| JP7653361B2 (en) | 2019-07-31 | 2025-03-28 | Hologic, Inc. | Systems and methods for automating clinical workflow decisions and generating preferred read indicators |
| CN110464326B (en) | 2019-08-19 | 2022-05-10 | Shanghai United Imaging Healthcare Co., Ltd. | Scanning parameter recommendation method, system, device and storage medium |
| US20210085387A1 (en) | 2019-09-22 | 2021-03-25 | Biosense Webster (Israel) Ltd. | Guiding cardiac ablation using machine learning (ml) |
| DE202020006045U1 (en) | 2019-09-27 | 2024-07-02 | Hologic Inc. | AI system to predict reading time and reading complexity for reviewing 2D/3D breast images |
| WO2021092032A1 (en) | 2019-11-05 | 2021-05-14 | Cianna Medical, Inc. | Systems and methods for imaging a body region using implanted markers |
| JP7537095B2 (en) | 2020-02-18 | 2024-08-21 | Ricoh Co., Ltd. | Information processing device, program, information generation method, and information processing system |
| US20230098785A1 (en) | 2020-02-21 | 2023-03-30 | Hologic, Inc. | Real-time ai for physical biopsy marker detection |
| JP2023519878A (en) | 2020-03-27 | 2023-05-15 | Hologic, Inc. | Systems and methods for correlating regions of interest in multiple imaging modalities |
| US11481038B2 (en) | 2020-03-27 | 2022-10-25 | Hologic, Inc. | Gesture recognition in controlling medical hardware or software |
| CN111584046B (en) | 2020-05-15 | 2023-10-27 | Zhou Lingxiao | AI processing method for medical image data |
| US11210848B1 (en) | 2020-06-14 | 2021-12-28 | International Business Machines Corporation | Machine learning model for analysis of 2D images depicting a 3D object |
| US12136481B2 (en) * | 2020-06-23 | 2024-11-05 | Virtual Radiologic Corporation | Medical imaging characteristic detection, workflows, and AI model management |
| US20230285081A1 (en) | 2020-08-11 | 2023-09-14 | Intuitive Surgical Operations, Inc. | Systems for planning and performing biopsy procedures and associated methods |
| US20220164951A1 (en) | 2020-11-20 | 2022-05-26 | Hologic, Inc. | Systems and methods for using ai to identify regions of interest in medical images |
| KR20240008894A (en) | 2021-05-18 | 2024-01-19 | Hologic, Inc. | Systems and methods for predicting optimal exposure technique based on machine learning for acquiring mammography images |
2021
- 2021-11-19 US US17/531,177 patent/US20220164951A1/en not_active Abandoned
- 2021-11-22 US US17/532,286 patent/US12530860B2/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180018590A1 (en) * | 2016-07-18 | 2018-01-18 | NantOmics, Inc. | Distributed Machine Learning Systems, Apparatus, and Methods |
| US11853401B1 (en) * | 2018-06-05 | 2023-12-26 | Amazon Technologies, Inc. | Machine learning model creation via user-configured model building blocks |
Non-Patent Citations (1)
| Title |
|---|
| Computer English Translation of Chinese Patent No. CN111584046 A, pages 1-14. (Year: 2020) * |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11883206B2 (en) | 2019-07-29 | 2024-01-30 | Hologic, Inc. | Personalized breast imaging system |
| US12226233B2 (en) | 2019-07-29 | 2025-02-18 | Hologic, Inc. | Personalized breast imaging system |
| US11694792B2 (en) | 2019-09-27 | 2023-07-04 | Hologic, Inc. | AI system for predicting reading time and reading complexity for reviewing 2D/3D breast images |
| US12119107B2 (en) | 2019-09-27 | 2024-10-15 | Hologic, Inc. | AI system for predicting reading time and reading complexity for reviewing 2D/3D breast images |
| US12530860B2 (en) | 2020-11-20 | 2026-01-20 | Hologic, Inc. | Systems and methods for using AI to identify regions of interest in medical images |
| US20240120114A1 (en) * | 2021-02-09 | 2024-04-11 | Talking Medicines Limited | Medicine evaluation system |
| US20230074950A1 (en) * | 2021-08-24 | 2023-03-09 | Nvidia Corporation | Object characterization using one or more neural networks |
| US20240095976A1 (en) * | 2022-09-20 | 2024-03-21 | United Imaging Intelligence (Beijing) Co., Ltd. | Systems and methods associated with breast tomosynthesis |
| US12450793B2 (en) * | 2022-09-20 | 2025-10-21 | United Imaging Intelligence (Beijing) Co., Ltd. | Systems and methods for processing breast slice images through an artificial neural network to predict abnormalities in breasts |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220164586A1 (en) | 2022-05-26 |
| US12530860B2 (en) | 2026-01-20 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12530860B2 (en) | Systems and methods for using AI to identify regions of interest in medical images | |
| US11282196B2 (en) | Automated patient complexity classification for artificial intelligence tools | |
| US11244755B1 (en) | Automatic generation of medical imaging reports based on fine grained finding labels | |
| US12340293B2 (en) | Machine learning model repository management and search engine | |
| US10679345B2 (en) | Automatic contour annotation of medical images based on correlations with medical reports | |
| JP5952835B2 (en) | Imaging protocol updates and / or recommenders | |
| US10037407B2 (en) | Structured finding objects for integration of third party applications in the image interpretation workflow | |
| US11069432B2 (en) | Automatic disease detection from unstructured textual reports | |
| US10667794B2 (en) | Automatic detection of disease from analysis of echocardiographer findings in echocardiogram videos | |
| CN111696642A (en) | System and method for generating a description of an abnormality in a medical image | |
| US20220028507A1 (en) | Workflow for automatic measurement of doppler pipeline | |
| US10892056B2 (en) | Artificial intelligence based alert system | |
| US11195600B2 (en) | Automatic discrepancy detection in medical data | |
| US20180107791A1 (en) | Cohort detection from multimodal data and machine learning | |
| US11763081B2 (en) | Extracting fine grain labels from medical imaging reports | |
| JP6875993B2 (en) | Methods and systems for contextual evaluation of clinical findings | |
| US10650923B2 (en) | Automatic creation of imaging story boards from medical imaging studies | |
| US11080326B2 (en) | Intelligently organizing displays of medical imaging content for rapid browsing and report creation | |
| WO2025111606A1 (en) | Methods and systems for vector embedding search for predicting abnormalities in medical images | |
| US20250246281A1 (en) | Multi-modal, multi-omic enterprise graph-based, semantic ontology-based recommender framework | |
| CN110582810A (en) | Summary of Clinical Documents Using Clinical Documentation Endpoints | |
| Ashwitha et al. | AI-Powered Disease Prediction Tool | |
| US12224047B2 (en) | Systems and methods of radiology report processing and display | |
| Kades | Attainment of the doctoral degree |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: HOLOGIC, INC., MASSACHUSETTS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHUI, HAILI;GKANATSIOS, NIKOLAOS;JING, ZHENXUE;AND OTHERS;SIGNING DATES FROM 20201120 TO 20201221;REEL/FRAME:058167/0717 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |