EP4114272A1 - Contextual multiplanar reconstruction of three-dimensional ultrasound imaging data and associated devices, systems, and methods
- Publication number
- EP4114272A1 (Application EP21707653.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- adjacent
- target image
- dimensional
- ultrasound
- Prior art date
- Legal status
- Withdrawn
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/48—Diagnostic techniques
- A61B8/483—Diagnostic techniques involving the acquisition of a 3D volume of data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/523—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for generating planar views from image data in a user selectable plane not corresponding to the acquisition plane
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/88—Sonar systems specially adapted for specific applications
- G01S15/89—Sonar systems specially adapted for specific applications for mapping or imaging
- G01S15/8906—Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
- G01S15/8993—Three dimensional imaging systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/13—Tomography
- A61B8/14—Echo-tomography
- A61B8/145—Echo-tomography characterised by scanning multiple planes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/52017—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
- G01S7/52053—Display arrangements
- G01S7/52057—Cathode ray tube displays
- G01S7/52073—Production of cursor lines, markers or indicia by electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Definitions
- the present disclosure relates generally to the acquisition and processing of ultrasound images and, in particular, to systems and methods for reconstructing two-dimensional images from three-dimensional ultrasound image data.
- Ultrasound imaging is frequently used to obtain images of internal anatomical structures of a patient.
- Ultrasound systems typically comprise an ultrasound transducer probe that includes a transducer array coupled to a probe housing.
- the transducer array is activated to vibrate at ultrasonic frequencies to transmit ultrasonic energy into the patient’s anatomy, and then receive ultrasonic echoes reflected or backscattered by the patient’s anatomy to create an image.
- Such transducer arrays may include various layers, including some with piezoelectric materials, which vibrate in response to an applied voltage to produce the desired pressure waves. These transducers may be used to successively transmit and receive several ultrasonic pressure waves through the various tissues of the body. The various ultrasonic responses may be further processed by an ultrasonic imaging system to display the various structures and tissues of the body.
- Physicians and sonographers often desire to obtain certain views of the patient’s body such that the imaging plane of the ultrasound transducer is aligned to obtain an image of a particular combination and orientation of anatomical structures.
- a sonographer obtains a variety of images or views of the fetal anatomy to perform measurements and assessments of the fetus’ development. Examples of these views include trans-ventricular, trans-cerebellar, and trans-thalamic image planes.
- these target views are achieved manually by an experienced sonographer positioning and orienting the ultrasound transducer while watching a real-time or near real-time image stream of the field of view of the ultrasound transducer.
- the sonographer may freeze or save the image frame to memory, and/or continue to move the probe around the target view to ensure that the desired view is actually achieved.
- the temporal and spatial information gained in conventional two-dimensional ultrasound imaging procedures can be useful in assessing an image, as it provides important context information in addition to a dedicated two-dimensional image to be used in the biometry assessment, for example.
- the sonographer has a spatial awareness of the location of the plane they are searching for based on their knowledge of the images of anatomy that are near or adjacent to the plane they want to capture. Intrinsically, the confidence that the target plane is correctly detected is based on the temporal context in the two-dimensional image stream.
- some ultrasound imaging systems allow for a target view to be obtained automatically from a three-dimensional ultrasound data set.
- Automated image reconstruction techniques can be beneficial because they depend less on the individual skills of the sonographer.
- an ultrasound transducer array may be used to sweep out a three-dimensional volume and obtain a three-dimensional ultrasound data set representative of the volume while the sonographer holds the ultrasound probe body at a fixed position and orientation.
- a multiplanar reconstruction (MPR) technique can be used to generate a two-dimensional image associated with a desired view.
- MPR imaging techniques beneficially allow for less-experienced sonographers to achieve a target view
- one drawback of MPR is that an image may not be correctly reconstructed by the system, and the user does not have the spatial/temporal context to confirm whether or not the reconstructed image is correct or optimal.
- conventional two-dimensional ultrasound imaging provides the sonographer with real-time feedback and spatial context as the user can make manual adjustments to the probe to evaluate the imaged volume around the target view.
- an ultrasound imaging system includes a processor circuit in communication with an ultrasound transducer configured to obtain a three-dimensional ultrasound data set.
- the processor circuit is configured to reconstruct, from the ultrasound data set, a target image or image slice corresponding to a target view or image plane, such as an apical view of the heart, trans-cerebellar, trans-thalamic, etc.
- the processor circuit is configured to reconstruct, from the same three-dimensional ultrasound data set, one or more adjacent images corresponding to planes that are adjacent to the target image plane.
- the adjacent images are reconstructed such that the adjacent images and the target image correspond to image planes lying on a common simulated motion path.
- the simulated motion path may be representative of a linear translation, a sweeping or fanning motion, a tilt, rocking motion, and/or a rotation of the ultrasound probe, with the target image plane extending from a point along the simulated motion path.
- the reconstructed adjacent images can be output to a display such that a user can view the adjacent images as part of a sequence with the target image to obtain spatial, contextual information about the target image.
- the user can scan through the target image and one or more adjacent images on the display as if the ultrasound transducer were being scanned along the simulated motion path.
- the user may determine that the target image reconstructed from the image data is correctly-aligned with the target image plane, or determine that adjustments could be made to reconstruct the target image to be correctly aligned with the target image plane.
- an ultrasound imaging apparatus includes a processor circuit configured to: receive, from an ultrasound probe communicatively coupled to the processor circuit, three-dimensional ultrasound data of an anatomy; generate, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy; generate, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data; output, to a display in communication with the processor circuit, the target image; receive a user input representative of a direction of motion along the simulated motion path; and output, to the display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
- the apparatus further includes the ultrasound probe.
- the processor circuit is configured to: interpolate between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image; and output the interpolated image to the display.
- the processor circuit is configured to: determine a direction of uncertainty relative to the target image plane; and determine the simulated motion path based on the determined direction of uncertainty.
- the processor circuit is configured to apply a covariance matrix to the three-dimensional ultrasound data to determine the direction of uncertainty.
- the processor circuit is configured to: identify, from the three-dimensional ultrasound data, an image plane of interest different from the target image and the adjacent image; and determine the simulated motion path based on the target image, the adjacent image, and the image of interest.
- the processor circuit is configured to output the image of interest in response to receiving the user input.
- the plurality of adjacent images comprises a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes.
- the processor circuit is configured to generate the target image to exclude an anatomical feature.
- the processor circuit is configured to generate the adjacent image to include the anatomical feature.
- the processor circuit is further configured to output, to the display, a graphical representation of an adjacent image plane associated with the adjacent image, wherein the graphical representation includes: a diagrammatic view of a body portion associated with the target image; and an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
- a method for reconstructing ultrasound images includes: receiving, three-dimensional ultrasound data of an anatomy obtained by an ultrasound probe; generating, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy; generating, from the three-dimensional ultrasound data, a plurality of adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the plurality of adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data; outputting, to a display, the target image; receiving a user input representative of a direction of motion along the simulated motion path; and outputting, to a display, an adjacent image of the plurality of adjacent images corresponding to the direction of motion.
- generating the plurality of adjacent images comprises interpolating between a position and orientation of the target image and a position and orientation of the adjacent image to generate an interpolated image.
- the method further comprises outputting the interpolated image to the display.
- generating the plurality of adjacent images comprises: determining a direction of uncertainty relative to the target image plane; and determining the simulated motion path based on the determined direction of uncertainty.
- determining the direction of uncertainty comprises applying a covariance matrix to the three-dimensional ultrasound data.
- the method further comprises identifying, from the three-dimensional ultrasound data, an image of interest different from the target image and the adjacent image; and determining the simulated motion path based on the target image, the adjacent image, and the image of interest.
- the method further comprises outputting the image of interest in response to receiving the user input.
- generating the plurality of adjacent images comprises generating a plurality of parallel adjacent images associated with a plurality of parallel adjacent image planes.
- generating the target image comprises generating the target image to exclude an anatomical feature.
- generating the adjacent image comprises generating the adjacent image to include the anatomical feature.
- the method further comprises outputting, to the display, a graphical representation of an adjacent image plane associated with the adjacent image.
- the graphical representation includes: a diagrammatic view of a body portion associated with the target image; and an indicator of the adjacent image plane overlaid on the diagrammatic view of the body portion.
- Fig. 1 is a schematic diagram of an ultrasound imaging system, according to embodiments of the present disclosure.
- FIG. 2 is a schematic diagram of a processor circuit, according to embodiments of the present disclosure.
- Fig. 3 is a diagrammatic view of image slices reconstructed from a three-dimensional ultrasound data set of a volume, according to aspects of the present disclosure.
- Fig. 4 is a flow diagram illustrating a method for generating and displaying contextual multiplanar image reconstructions from a three-dimensional ultrasound dataset, according to aspects of the present disclosure.
- Fig. 5 is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated motion path, according to aspects of the present disclosure.
- Fig. 6A is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated linear motion path, according to aspects of the present disclosure.
- Fig. 6B is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated curved motion path, according to aspects of the present disclosure.
- Fig. 6C is a diagrammatic view of a plurality of image planes or image slices corresponding to a simulated curved motion path, according to aspects of the present disclosure.
- Fig. 7 is a flow diagram illustrating a method for generating interpolated image slices for a contextual multiplanar image reconstruction sequence, according to aspects of the present disclosure.
- Fig. 8 is a flow diagram illustrating a method for reconstructing image slices based on a direction of uncertainty for a contextual multiplanar image reconstruction sequence, according to aspects of the present disclosure.
- Fig. 9 is a screen of a graphical user interface that includes a reconstructed image frame corresponding to a target view, according to aspects of the present disclosure.
- an ultrasound system 100 according to embodiments of the present disclosure is shown in block diagram form.
- An ultrasound imaging apparatus or ultrasound probe 10 has a transducer array 12 comprising a plurality of ultrasound transducer elements or acoustic elements.
- the array 12 may include any number of acoustic elements.
- the array 12 can include between 1 acoustic element and 100,000 acoustic elements, including values such as 2 acoustic elements, 4 acoustic elements, 36 acoustic elements, 64 acoustic elements, 128 acoustic elements, 300 acoustic elements, 812 acoustic elements, 3,000 acoustic elements, 9,000 acoustic elements, 30,000 acoustic elements, 65,000 acoustic elements, and/or other values both larger and smaller.
- the acoustic elements of the array 12 may be arranged in any suitable configuration, such as a linear array, a planar array, a curved array, a curvilinear array, a circumferential array, an annular array, a phased array, a matrix array, a one-dimensional (1D) array, a 1.X-dimensional array (e.g., a 1.5D array), or a two-dimensional (2D) array.
- the array of acoustic elements (e.g., one or more rows, one or more columns, and/or one or more orientations) can be configured to obtain one-dimensional, two-dimensional, and/or three-dimensional images of patient anatomy.
- aspects of the present disclosure can be implemented in any suitable ultrasound imaging probe or system, including external ultrasound probes and intraluminal ultrasound probes.
- aspects of the present disclosure can be implemented in ultrasound imaging systems using a mechanically-scanned external ultrasound imaging probe, an intracardiac echocardiography (ICE) catheter and/or a transesophageal echocardiography (TEE) probe, a rotational intravascular ultrasound (IVUS) imaging catheter, a phased-array IVUS imaging catheter, a transthoracic echocardiography (TTE) imaging device, or any other suitable type of ultrasound imaging device.
- the acoustic elements of the array 12 may comprise one or more piezoelectric/piezoresistive elements, lead zirconate titanate (PZT), piezoelectric micromachined ultrasound transducer (PMUT) elements, capacitive micromachined ultrasound transducer (CMUT) elements, and/or any other suitable type of acoustic elements.
- the one or more acoustic elements of the array 12 are in communication with (e.g., electrically coupled to) electronic circuitry 14.
- the electronic circuitry 14 can comprise a microbeamformer (μBF).
- the electronic circuitry comprises a multiplexer circuit (MUX).
- the electronic circuitry 14 is located in the probe 10 and communicatively coupled to the transducer array 12. In some embodiments, one or more components of the electronic circuitry 14 can be positioned in the probe 10. In some embodiments, one or more components of the electronic circuitry 14 can be positioned in a computing device or processing system 28.
- the computing device 28 may be or include a processor, such as one or more processors in communication with a memory. As described further below, the computing device 28 may include a processor circuit as illustrated in Fig. 2.
- some components of the electronic circuitry 14 are positioned in the probe 10 and other components of the electronic circuitry 14 are positioned in the computing device 28.
- the electronic circuitry 14 may comprise one or more electrical switches, transistors, programmable logic devices, or other electronic components configured to combine and/or continuously switch between a plurality of inputs to transmit signals from each of the plurality of inputs across one or more common communication channels.
- the electronic circuitry 14 may be coupled to elements of the array 12 by a plurality of communication channels.
- the electronic circuitry 14 is coupled to a cable 16, which transmits signals including ultrasound imaging data to the computing device 28. In the computing device 28, the signals are digitized and coupled to channels of a system beamformer 22, which appropriately delays each signal.
- System beamformers may comprise electronic hardware components, hardware controlled by software, or a microprocessor executing beamforming algorithms.
- the beamformer 22 may be referenced as electronic circuitry.
- the beamformer 22 can be a system beamformer, such as the system beamformer 22 of Fig. 1, or it may be a beamformer implemented by circuitry within the ultrasound probe 10.
- the system beamformer 22 works in conjunction with a microbeamformer (e.g., electronic circuitry 14) disposed within the probe 10.
- the beamformer 22 can be an analog beamformer in some embodiments, or a digital beamformer in some embodiments.
- the system includes A/D converters which convert analog signals from the array 12 into sampled digital echo data.
- the beamformer 22 generally will include one or more microprocessors, shift registers, and/or digital or analog memories to process the echo data into coherent echo signal data. Delays are effected by various means such as by the time of sampling of received signals, the write/read interval of data temporarily stored in memory, or by the length or clock rate of a shift register as described in U.S. Patent No. 4,173,007 to McKeighen et al., the entirety of which is hereby incorporated by reference herein. Additionally, in some embodiments, the beamformer can apply appropriate weight to each of the signals generated by the array 12.
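- For illustration, a minimal delay-and-sum sketch in Python is shown below; the array shapes, per-element sample delays, and apodization weights are assumptions chosen for the example, not the patented beamformer circuit.

```python
import numpy as np

def delay_and_sum(rf, delays_samples, weights):
    """Sketch of delay-and-sum beamforming: each receive channel is
    shifted by its per-element delay, weighted, and summed into one
    coherent beamformed line. rf: (n_elements, n_samples) echo data."""
    n_el, n_s = rf.shape
    out = np.zeros(n_s)
    for i in range(n_el):
        d = int(delays_samples[i])
        shifted = np.zeros(n_s)
        if d >= 0:
            shifted[d:] = rf[i, :n_s - d]   # delay channel i by d samples
        else:
            shifted[:n_s + d] = rf[i, -d:]  # advance channel i by |d| samples
        out += weights[i] * shifted          # apply apodization weight and sum
    return out
```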
- the beamformed signals from the image field are processed by a signal and image processor 24 to produce 2D or 3D images for display on an image display 30.
- the signal and image processor 24 may comprise electronic hardware components, hardware controlled by software, or a microprocessor executing image processing algorithms. It generally will also include specialized hardware or software which processes received echo data into image data for images of a desired display format such as a scan converter.
- beamforming functions can be divided between different beamforming components.
- the system 100 can include a microbeamformer located within the probe 10 and in communication with the system beamformer 22. The microbeamformer may perform preliminary beamforming and/or signal processing that can reduce the number of communication channels required to transmit the receive signals to the computing device 28.
- control of ultrasound system parameters such as scanning mode (e.g., B-mode, M-mode) and probe selection is done under control of a system controller 26, which is coupled to various modules of the system 100.
- the system controller 26 may be formed by application specific integrated circuits (ASICs) or microprocessor circuitry and software data storage devices such as RAMs, ROMs, or disk drives.
- control signals may be provided to the electronic circuitry 14 from the computing device 28 over the cable 16, conditioning the electronic circuitry 14 for operation of the array as required for the particular scanning procedure.
- the user inputs these operating parameters by means of a user interface device 20.
- the image processor 24 is configured to generate images of different modes to be further analyzed or output to the display 30.
- the image processor can be configured to compile a B-mode image, such as a live B-mode image, of an anatomy of the patient.
- the image processor 24 is configured to generate or compile an M-mode image.
- An M-mode image can be described as an image showing temporal changes in the imaged anatomy along a single scan line.
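- A minimal sketch of this idea (assumed array conventions, not the patent's implementation): an M-mode image can be formed by stacking the same scan line from successive B-mode frames over time.

```python
import numpy as np

def m_mode(frames, line_index):
    """Stack one scan line from each B-mode frame over time.
    frames: (n_frames, n_lines, n_depth); returns (n_depth, n_frames)
    with depth on the vertical axis and time on the horizontal axis."""
    return np.stack([f[line_index] for f in frames], axis=-1)
```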
- the computing device 28 may comprise hardware circuitry, such as a computer processor, application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), capacitors, resistors, and/or other electronic devices, software, or a combination of hardware and software.
- the computing device 28 is a single computing device. In other embodiments, the computing device 28 comprises separate computer devices in communication with one another.
- Fig. 2 is a schematic diagram of a processor circuit 150, according to embodiments of the present disclosure.
- the processor circuit 150 may be implemented in the computing device 28, the signal and image processor 24, the controller 26, and/or the probe 10 of Fig. 1.
- the processor circuit 150 may include a processor 160, a memory 164, and a communication module 168. These elements may be in direct or indirect communication with each other, for example via one or more buses.
- the processor 160 may include a central processing unit (CPU), a digital signal processor (DSP), an ASIC, a controller, an FPGA, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
- the processor 160 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- the memory 164 may include a cache memory (e.g., a cache memory of the processor 160), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory.
- the memory 164 includes a non-transitory computer-readable medium.
- the memory 164 may store instructions 166.
- the instructions 166 may include instructions that, when executed by the processor 160, cause the processor 160 to perform the operations described herein with reference to the processor 28 and/or the probe 10 (Fig. 1). Instructions 166 may also be referred to as code.
- the terms “instructions” and “code” should be interpreted broadly to include any type of computer-readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements.
- the communication module 168 can include any electronic circuitry and/or logic circuitry to facilitate direct or indirect communication of data between the processor 28, the probe 10, and/or the display 30.
- the communication module 168 can be an input/output (I/O) device.
- the communication module 168 facilitates direct or indirect communication between various elements of the processor circuit 150 and/or the processing system 28 (Fig. 1).
- an ultrasound imaging system comprises an ultrasound transducer comprising an array of one or more transducer elements, where the ultrasound transducer is configured to scan or sweep through a volume of the patient to obtain a three-dimensional ultrasound data set of the volume.
- This three-dimensional ultrasound data can be used to generate three-dimensional images or models of the anatomy.
- two-dimensional images corresponding to a variety of imaging planes that intersect the volume can be reconstructed from the three-dimensional ultrasound data.
- Fig. 3 shows a plurality of image slices 210, 220 forming a three-dimensional ultrasound data set of a volume 200.
- the slices 210, 220 are obtained as part of an image sequence using an ultrasound transducer array 12.
- the array 12 comprises a two-dimensional array of ultrasound transducer elements that can be controlled as a phased array to electronically steer ultrasonic beams of acoustic energy to sweep out the image slices 210, 220, in a plurality of corresponding image planes.
- the ultrasound transducer array 12 may comprise a single ultrasound transducer element, or a one-dimensional array of transducer elements, in which case the transducer 12 is manually scanned in one or more degrees of freedom to acquire the three-dimensional data set of the volume 200.
- the image slices 210, 220 may correspond to image planes of an image sequence performed by the ultrasound transducer.
- the three-dimensional data set of the volume 200 may be formed by acquiring ultrasound data in a step-wise fashion where an image slice is obtained by steering or sweeping the ultrasound beams or scan lines across the corresponding image plane, and combining the plurality of different image slices 210, 220 to form the three-dimensional data set.
- the slices 210, 220 may be combined into a three-dimensional image by determining interpolated intensity values from the different slices 210, 220, such that an intensity value can be determined for any location (e.g., in a Cartesian coordinate system) in the three-dimensional image.
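- As a rough sketch of this gridding step (one plausible approach; the patent does not prescribe a specific interpolator), the scattered slice samples can be resampled onto a regular Cartesian grid so that an intensity value is available at any location:

```python
import numpy as np
from scipy.interpolate import griddata

def slices_to_volume(sample_xyz, sample_intensity, grid_shape, spacing):
    """Resample scattered slice samples onto a regular Cartesian grid.
    sample_xyz: (n, 3) sample positions; sample_intensity: (n,) values."""
    nx, ny, nz = grid_shape
    gx, gy, gz = np.mgrid[0:nx, 0:ny, 0:nz].astype(float) * spacing
    # linear interpolation of the scattered samples; NaN is returned
    # outside the convex hull of the acquired data
    return griddata(sample_xyz, sample_intensity, (gx, gy, gz), method='linear')
```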
- the reconstructed image 230, in contrast to the image slices 210, 220, does not directly correspond to the two-dimensional segments of the three-dimensional imaging sequence, and does not represent an image plane that intersects the ultrasound transducer 12.
- the reconstructed image 230 may represent an image plane passing through the volume if the ultrasound transducer 12 had been moved to a different position and orientation relative to the volume 200.
- an image can be reconstructed from the three-dimensional data set that is representative of an imaging plane that does intersect the ultrasound transducer 12.
- multiplanar reconstruction can be used to reconstruct, from the three-dimensional ultrasound data, a target image corresponding to a target view or imaging plane of the patient (e.g., trans-ventricular, trans-cerebellar, trans-thalamic, apical view, etc.).
- MPR may be used to generate the reconstructed image 230 shown in Fig. 3.
- MPR can be performed using input from various image processing techniques, including artificial intelligence (A.I.), machine learning, and/or deep learning techniques, to identify a two-dimensional cross section in the three-dimensional ultrasound data that includes a combination and/or arrangement of anatomical structures associated with a particular view.
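- A minimal MPR sampling sketch is shown below, assuming the volume has already been gridded as described above; the plane parameterization (a center point plus two in-plane unit vectors) is an illustrative convention, not the patent's code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_mpr(volume, center, u, v, size=128, step=1.0):
    """Sample an oblique two-dimensional cross section from a voxel
    volume. center: voxel-space point on the plane; u, v: unit
    vectors spanning the plane."""
    center, u, v = (np.asarray(a, float) for a in (center, u, v))
    s = (np.arange(size) - size / 2.0) * step
    uu, vv = np.meshgrid(s, s, indexing='ij')
    # voxel coordinates of every pixel on the plane, shape (3, size, size)
    pts = center[:, None, None] + u[:, None, None] * uu + v[:, None, None] * vv
    return map_coordinates(volume, pts, order=1, mode='nearest')
```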
- plane estimation (PE) techniques may have some drawbacks.
- the sonographer may not be able to easily confirm that an image generated using PE and displayed as an MPR is the correct or optimal view.
- the sonographer may lack spatial context that would otherwise be gained if the sonographer was manually scanning the probe across the anatomy around the target view. Thus, confidence that the correct image has been acquired may be diminished.
- an incorrect or suboptimal image plane may be selected by the PE algorithm, and the sonographer may not be able to easily identify what adjustments are to be made to more completely achieve a desired view.
- the present disclosure provides devices, systems, and associated methods for providing contextual visualizations associated with automatically reconstructed two-dimensional images, such as images generated from PE.
- the present disclosure provides for automated image reconstruction techniques that allow for a target image to be generated and displayed, while still providing spatial contextual information to the sonographer so that the sonographer can confirm that the correct image plane has been chosen to automatically reconstruct the target image.
- the sonographer places an ultrasound probe configured to obtain three-dimensional ultrasound data on the skin of the patient at a location generally associated with the target image plane. A three- dimensional data set of a volume of the patient is acquired, and the sonographer can set the probe down.
- a processing system and/or processor circuit analyzes the three-dimensional ultrasound data using, for example, PE and/or MPR, to identify, reconstruct, or otherwise generate a target image associated with a target image plane or view, such as the trans-cerebellar view.
- the target image can then be displayed using MPR to the user, which may allow the user to make measurements or assessments of the anatomy in the target image.
- the processor circuit is configured to provide spatial context for the target image by generating a plurality of adjacent images that are associated with imaging planes adjacent to the target image plane.
- the adjacent images are generated such that the target image and the adjacent images all correspond to a common simulated motion path or trajectory.
- the simulated motion path is computed so that scanning through the adjacent images and the target image simulates physical scanning or panning of the ultrasound probe around the target image plane.
- the sonographer may scan through the adjacent images along the simulated motion path using a keyboard, track ball, or other interface device as though the user was manually moving the probe around the target image plane. Accordingly, the sonographer may have increased confidence that the processor circuit has correctly and/or optimally reconstructed the target image. Further, by scanning through the reconstructed adjacent image, the sonographer may be able to determine whether the target image reconstruction should be revised or redone.
- embodiments of the present disclosure provide for an experience of seeing two-dimensional images that are proximate or near to a target plane.
- the two-dimensional images can be created or reconstructed from a three-dimensional data set, thereby recreating a traditional experience in order to improve the sonographer’s confidence that the desired anatomical planes were correctly detected by the automated procedure.
- the systems and methods described herein enable the simulation of an ultrasound probe movement which allows a smooth transition from one automatically detected plane to another automatically detected plane.
- These smooth image transitions emulate the transducer movements, which are familiar to the sonographer from a standard two-dimensional ultrasound exam. Smooth transitions between different view planes can be achieved, for example, by interpolating the plane parameters (normal and offset vectors) and thus obtaining continuous spline trajectories for two-dimensional plane displacement in three-dimensional space.
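- One way to realize such a transition (a sketch consistent with the "normal and offset vectors" described above, not a definitive implementation) is to spherically interpolate the plane normals and linearly interpolate the offsets:

```python
import numpy as np

def interp_plane(n0, d0, n1, d1, t):
    """Blend two view planes: slerp the unit normals n0 -> n1 and
    lerp the offset vectors d0 -> d1 for a transition parameter
    t in [0, 1]."""
    omega = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))
    if omega < 1e-6:                # nearly parallel normals: plain lerp
        n = (1 - t) * n0 + t * n1
    else:                           # spherical linear interpolation
        n = (np.sin((1 - t) * omega) * n0 + np.sin(t * omega) * n1) / np.sin(omega)
    n = n / np.linalg.norm(n)
    d = (1 - t) * d0 + t * d1       # offset vector lerp
    return n, d
```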
- the sequence of image frames shown to the user may be a compilation of adjacent MPRs determined based on geometric spacing.
- the sequence of images or image frames shown includes a selection of geometrically adjacent images that have been selected, e.g., by an A.I. algorithm, to be somewhat similar to the target image plane.
- the proposed dynamic representation can be used to perform interpolation between respective image plane parameters. These interpolated images can be used to dynamically transition from one selected plane to the other, thus performing a transition through the three-dimensional volume in which the planes are located.
- adjacent image frames correspond to parallel planes to the target plane.
- a set of adjacent image frames can be created by interpolating between the planes of interest. For this purpose, interpolation can be done in a fixed order, e.g., always going from the trans-thalamic (TT) plane to the trans-ventricular (TV) plane, or by finding the two closest planes.
- an A.I.-based approach for plane detection provides a way of measuring uncertainty such that a direction of highest uncertainty may be identified. Images may be interpolated along the direction of highest uncertainty. This facilitates efficient manual correction of automatically determined images.
- the adjacent images can be generated such that at least one adjacent image includes the anatomical feature to be excluded, and the other adjacent images are generated to show the anatomical feature disappearing from the field of view (e.g., the cerebellum in the TV plane of the fetal brain).
- the adjacent image set can be displayed with a schematic anatomical view, since the three-dimensional volume together with the estimated plane geometries and located anatomical objects provides sufficient information for the registration to a three-dimensional anatomical model.
- Fig. 4 is a flow chart illustrating a method 300 for providing spatial context for automated ultrasound image plane reconstructions.
- a processor circuit receives, from an ultrasound probe communicatively coupled to the processor circuit, three-dimensional ultrasound data of an anatomy of a patient.
- the ultrasound data may be obtained during a fetal ultrasound procedure, a trans-thoracic ultrasound procedure, a cardiac ultrasound procedure, or any other suitable procedure.
- the ultrasound probe includes an ultrasound transducer having an array of one or more ultrasound transducer elements.
- the ultrasound transducer comprises a one-dimensional array, a 1.5-dimensional array, a 1.X-dimensional array, a two-dimensional array, or any other suitable type of array of ultrasound transducer elements.
- the array is mechanically scanned to obtain a plurality of image slices of the volume.
- the array is operated as a solid-state array, or a phased array configured to electronically scan across the volume.
- the three-dimensional ultrasound data is made up of a plurality of two- dimensional images or image slices.
- the three-dimensional ultrasound data is made up of a plurality of individual scan lines of ultrasound data.
- the ultrasound data received by the processor circuit comprises raw ultrasound signals or data. In other embodiments, the ultrasound data received by the processor circuit comprises beamformed, partially beamformed, and/or filtered ultrasound data. In some embodiments, the processor circuit receives the ultrasound data directly from the ultrasound probe. In some embodiments, the ultrasound data is first received by a memory, which stores the ultrasound data, and the processor circuit then receives the ultrasound data from the memory. Accordingly, in some embodiments, the processor circuit receives the ultrasound data indirectly from the ultrasound probe after being stored in the memory.
- step 310 includes generating a user instruction to be output to a user output device or interface (e.g., display, speaker, etc.) to first place the ultrasound probe in a general location such that a target view is within a three-dimensional field of view of the ultrasound probe.
- the user instruction may include a diagrammatic illustration of the patient’s body, and an indicator showing how to position and orient the probe relative to the patient’s body.
- the processor circuit generates, from the three-dimensional ultrasound data, a target image corresponding to a target image plane of the anatomy.
- the target image plane or view to be achieved is the trans-ventricular, trans- cerebellar, or trans-thalamic view of a fetus, an apical view of a patient’s heart, or any other suitable view.
- generating the target image comprises using a PE and/or MPR process to automatically reconstruct the target image.
- image processing and recognition techniques can be used, including A.I. techniques, machine learning techniques, deep learning techniques, state machines, etc.
- generating the target image may include comparing the three-dimensional image data to a model of the anatomy.
- generating the target image includes comparing various portions of the three-dimensional ultrasound data representative of different cross sections of the three-dimensional field of view to one or more exemplary images of the anatomy.
- in step 330, the processor circuit generates, from the three-dimensional ultrasound data, one or more adjacent images corresponding to image planes adjacent to the target image plane along a simulated motion path, wherein the target image and the one or more adjacent images comprise two-dimensional images based on the three-dimensional ultrasound data.
- the adjacent image(s) generated during step 330 can be used to provide contextual information to the user, which can accompany the display of the target image generated in step 320.
- the simulated motion path may be representative of a physical adjustment or movement of the ultrasound probe, such as a translation, linear movement, compression, rotation, fanning, sweeping, rocking, or any other suitable type of motion.
- step 330 includes determining or computing the simulated motion path.
- the processor circuit may determine or compute the simulated motion path based on the target view to be reconstructed.
- the simulated motion path may be a parameter for reconstructing the adjacent images, in that the simulated motion path or trajectory defines the spatial relationships between the adjacent images to be reconstructed from the three- dimensional ultrasound data of the volume.
- the path will be defined by a set of points which are smoothly connected into a curved line, e.g., a spline.
- the points set may be a set of anatomical landmarks, such as organs to be inspected, or may follow an anatomical structure such as the spine.
- the path may be constructed such that it shows, for educational purposes, the degrees of freedom available when maneuvering the transducer, such as translating the transducer position on the skin surface, or rotating/tilting the transducer around its three principal axes.
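- As a sketch of one plausible construction (the patent does not mandate a particular spline), a smooth path can be fitted through a set of landmark points with a chord-length-parameterized cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(points, n_samples=50):
    """Fit a smooth curve through distinct 3D path points (e.g.,
    anatomical landmarks). Returns the curve object, the sample
    parameters, and the sampled path positions."""
    points = np.asarray(points, dtype=float)          # shape (k, 3)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])       # chord-length parameter
    spline = CubicSpline(t, points, axis=0)
    ts = np.linspace(t[0], t[-1], n_samples)
    return spline, ts, spline(ts)
```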
- Fig. 5 shows a target image 410, an adjacent image 420, and an interpolated image 430 that correspond to a simulated motion path 440.
- the darker rectangles are representative of image planes 410, 420 reconstructed from the ultrasound data.
- the white rectangle is representative of an interpolated image plane 430 generated using the target image 410 and the adjacent image 420.
- the target image 410, adjacent image 420 and the interpolated image 430 are generated to correspond to a simulated motion path 440.
- the simulated motion path 440 may be determined or computed by the processor circuit, and may be representative of a simulated type of movement of the ultrasound probe (i.e., as if the ultrasound probe were to be physically scanned across a volume).
- the simulated motion path 440 is representative of a mixed tilting/rocking movement of the ultrasound probe.
- the orthogonal axes 442 of the images 410, 420, 430, are tangential to the simulated motion path 440 at the point at which the images 410, 420, 430 intersect the simulated motion path 440.
- Figs. 6A, 6B, and 6C are representative of image planes, both reconstructed and interpolated, that correspond to three different simulated motion paths.
- in Fig. 6A, the image slices include a reconstructed image slice 510, represented by a dark rectangle, and interpolated images 520, represented by the white rectangles.
- the image slices 510, 520 are associated with a simulated linear motion path. Accordingly, the images correspond to image planes that are parallel to each other and spaced from each other.
- the images 600 are associated with a simulated curved motion path, which includes a combination of simulated tilting and translation of the ultrasound probe.
- the images 700 are associated with a simulated curved motion path, which includes a simulated tilting of the ultrasound probe.
- the simulated motion path is determined or computed to include more than a single type of motion along its length.
- a simulated motion path may be computed such that multiple image planes of interest (e.g., TC, TV, TT) would be obtained by the simulated movement of the probe along the motion path.
- the simulated motion path may be computed to have different segments associated with different types of simulated movements (e.g., translation, rocking, tilting, rotation, compression, etc.).
- the processor circuit is configured to identify, from the three-dimensional ultrasound data, an image of interest different from the target image, and determine the simulated motion path based on the target image plane, the adjacent image plane, and the image plane of interest. The processor circuit can then output the image of interest to the display in response to receiving a user input, as further described below.
- the processor circuit may be configured to generate one or more interpolated images (e.g., 430, Fig. 5) to accompany the one or more adjacent images (410, 420, Fig. 5).
- three-dimensional image data of a volume may be relatively sparse.
- the ultrasound probe may be capable of obtaining and displaying approximately 30 frames per second.
- sonographers may be accustomed to the ability to view two-dimensional views with a high degree of temporal and spatial resolution.
- because ultrasound imaging procedures are constrained by the speed of sound, the number of scan lines that can be obtained in a given amount of time is limited.
- temporal and/or spatial resolution may be reduced compared to two-dimensional ultrasound imaging. Therefore, ultrasound data points (e.g., voxels) available for the reconstruction of adjacent images may be limited.
- Fig. 7 provides a flow chart illustrating a method 800 for generating and outputting an interpolated image. It will be understood that one or more steps of the method 800 may be performed by the ultrasound imaging system 100 shown in Fig. 1, and/or the processor circuit 150 shown in Fig. 2, for example.
- the processor circuit generates an adjacent image frame or image slice corresponding to a first image plane adjacent to the target image plane. Referring to Fig. 5, the target image and the adjacent image are reconstructed from the three-dimensional ultrasound data, as described above.
- the adjacent image frame may be generated from a three-dimensional image that was created by interpolating the intensity values from a set of two-dimensional image slices obtained of a volume by an ultrasound imaging system.
- the interpolation of the two-dimensional images is made with respect to a Cartesian coordinate system or grid.
- the target image and the adjacent image may be generated or reconstructed using MPR.
- the processor circuit interpolates the image position and orientation between the position and orientation of the target image 410 and the adjacent image 420 to then generate the image 430 as an MPR at the interpolated position and orientation.
- the MPR image 430 is generated based on the interpolated intensity values of the three-dimensional image, as described above.
- the adjacent images and the interpolated image are all generated to correspond to the simulated motion path 440. Accordingly, the orthogonal axis 442 of the interpolated image, like the reconstructed adjacent images, is tangential to the curved simulated motion path.
- the target image plane, the adjacent image plane, and the interpolated image plane are all orthogonal to the simulated motion path at the points at which the planes intersect the simulated motion path.
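- Continuing the hedged sketches above (assumed conventions, not the patent's code), a plane can be posed on the path so that its normal is the path tangent, i.e., the plane is orthogonal to the simulated motion path where they intersect:

```python
import numpy as np

def plane_pose_on_path(spline, t):
    """Pose a reconstruction plane at parameter t on the motion path:
    the path tangent serves as the plane normal."""
    center = spline(t)
    tangent = spline(t, 1)                 # first derivative of the spline
    normal = tangent / np.linalg.norm(tangent)
    helper = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(helper, normal)) > 0.9:  # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(normal, u)                # second in-plane axis
    return center, u, v                    # usable with extract_mpr above
```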
- the processor circuit outputs the interpolated image to the display.
- the processor circuit may be configured to generate one or more interpolated images between a target image and an adjacent image, and/or between adjacent images, and arrange the adjacent and interpolated images to form an image stack that can be scrolled or scanned through in response to the input from a user.
- the processor circuit incrementally updates the display based on the extent of the input.
- the speed at which the sonographer rotates the track ball or swipes across a track pad may determine the speed at which the processor circuit updates the display with successive adjacent and interpolated images.
- the sonographer may have a similar amount of control to scanning through adjacent image planes as the sonographer would have had using a traditional two-dimensional imaging approach (i.e., without MPR). This type of visualization may provide for a familiar imaging experience while leveraging the workflow advantages of automated image reconstruction.
- the interpolated image frames 520 can supplement the reconstructed adjacent image frames 510 to provide a smoother, more natural scanning visualization of the image planes adjacent to a target image plane.
- the processor circuit is configured to interpolate between a target image and an adjacent image, or between two adjacent images, to generate the interpolated image.
- the interpolated image can then be output to the display in response to a user input indicating a direction of motion along the simulated motion path.
- the interpolated image is generated by the processor circuit.
- the interpolated image is representative of an intermediate image plane in the volume between two reconstructed images generated directly from the three-dimensional ultrasound data.
- the interpolated image is associated with the same simulated motion path as the reconstructed adjacent image(s). Accordingly, as the user scans through the adjacent and interleaved interpolated images, the result is an image stream that more closely resembles manually scanning across a target region of an imaged volume, as performed in two-dimensional imaging procedures.
- the interpolated images may be obtained by interpolation of the plane normal vectors and the plane offset vectors.
- the interpolation may be performed based on the position and orientation associated with each of the reconstructed images to obtain an interpolated plane position and normal.
- the intensity values of the 3D grid corresponding to the reconstructed images can then be interpolated to generate the intensity values for the interpolated plane.
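- Tying the earlier sketches together (an illustrative composition only), an interpolated image can be produced by interpolating the plane parameters and then resampling the voxel grid at the resulting plane:

```python
import numpy as np

def interpolated_mpr(volume, n0, d0, n1, d1, t):
    """Interpolate between two plane poses, then sample the grid at
    the interpolated plane. Uses interp_plane and extract_mpr from
    the sketches above; the offset vector d is taken to locate a
    point on the plane (an assumed convention)."""
    n, d = interp_plane(n0, d0, n1, d1, t)
    helper = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(helper, n)) > 0.9:       # pick a non-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    return extract_mpr(volume, d, u, v)
```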
- motion vectors may be computed for individual pixels to generate the interpolated image.
- the processor circuit outputs the target image slice to a display in communication with the processor circuit.
- the processor circuit may output a graphical user interface to the display, wherein the GUI includes the target image.
- the processor circuit receives a user input representative of a direction of motion along the simulated motion path.
- the processor circuit may receive the user input via a user input device in communication with the processor circuit.
- the user input device may include a mouse, keyboard, trackball, microphone, touch-screen display, track pad, physical button, or any other suitable type of user input device.
- the user input may be provided by the sonographer by pressing an arrow key on a keyboard, where the arrow direction corresponds to a direction along the simulated motion path (e.g., forward, backward).
- the user input is received by a scrolling device, such as a mouse, track ball, or track pad.
- the direction of the scrolling may correspond to the direction along the simulated motion path.
- any suitable user input could be used to indicate the direction of motion along the simulated path.
- the processor circuit outputs an adjacent image from the plurality of adjacent images that corresponds to the direction of motion.
- the target image is first output to the display.
- the sonographer then uses the user input device to scan forward or backward along the simulated direction of motion, and the processor circuit outputs one or more adjacent images generated in step 330. Accordingly, the sonographer can control the display of the reconstructed images to scan forward and backward along the simulated motion path as if the ultrasound probe were being manually scanned along the motion path, but after the ultrasound image data has already been obtained and the ultrasound probe has been set down. Using this process, the sonographer is given the benefit of both (1) automated reconstruction of a target image from a three-dimensional data set, and (2) the ability to gain spatial context to confirm that the automatically reconstructed target image is correctly and/or optimally reconstructed.
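- A minimal sketch of this interaction loop follows (a stand-in console interface; the actual system uses the input devices listed above):

```python
def scroll_stack(stack, show):
    """Show the target image (mid-stack), then step forward/backward
    through the adjacent and interpolated images along the simulated
    motion path in response to user input."""
    idx = len(stack) // 2
    show(stack[idx])
    while True:
        key = input("f = forward, b = back, q = quit: ").strip().lower()
        if key == "q":
            break
        if key == "f" and idx < len(stack) - 1:
            idx += 1
        elif key == "b" and idx > 0:
            idx -= 1
        show(stack[idx])
```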
- Fig. 8 is a flow diagram of a method 900 for determining a simulated motion path based on a determination of a direction of uncertainty. It will be understood that one or more steps of the method 900 may be performed by the ultrasound imaging system 100 shown in Fig. 1, and/or the processor circuit 150 shown in Fig. 2, for example. In step 910, the processor circuit determines a direction of uncertainty relative to the target image plane.
- the processor circuit determines the direction of uncertainty relative to the target image plane by applying a covariance matrix to the three-dimensional ultrasound data.
- the simulated motion path is generated by the processor circuit based on the determined direction of uncertainty.
- the processor circuit may determine a plurality of directions of uncertainty.
- the processor circuit may determine a direction or vector of highest uncertainty.
- the processor circuit determines a direction of median uncertainty, average uncertainty, minimum uncertainty, and/or any other suitable relative measure of uncertainty selected from one or more determined directions or vectors of uncertainty.
- the processor circuit generates one or more adjacent images based on the simulated motion path determined in step 920, which is based on the determined direction of uncertainty.
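- A sketch of one way to extract such directions (assuming a 3x3 covariance matrix over the plane-pose estimate is available; how it is obtained is not shown here): the eigenvector with the largest eigenvalue gives the direction of highest uncertainty.

```python
import numpy as np

def uncertainty_directions(cov):
    """Eigen-decompose a symmetric covariance matrix; returns the
    eigenvalues and eigenvectors sorted from highest to lowest
    uncertainty (columns of the returned matrix are directions)."""
    w, v = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(w)[::-1]
    return w[order], v[:, order]

# e.g., the first returned column could orient the simulated motion
# path along the most uncertain direction
```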
- the sonographer may be more likely to identify incorrectly-aligned target images, or to determine probe movements and adjustments to generate the target image such that it is aligned with a target view or target image plane. Further details on determining a direction associated with uncertainty in MPR procedures can be found in Alexander Schmidt-Richberg, et al., “Offset regression networks for view plane estimation in 3D fetal ultrasound,” Proc. SPIE, Vol. 10949, id. 109493K (March 15, 2019), the entirety of which is incorporated by reference.
- Fig. 9 is a graphical user interface (GUI) 1000 of an ultrasound imaging system, where the GUI includes an anatomical diagram 1010 of a body portion, and an MPR image 1020 of an image plane within the body portion.
- the anatomical diagram 1010 comprises a graphical representation of an anatomy of the patient (e.g., head of the fetus), and is annotated with indicators showing how various target image planes intersect the anatomy.
- the body portion comprises a human head.
- the diagram 1010 may be associated with a head of a fetus for use during a fetal ultrasound procedure.
- a plurality of line indicators is shown overlaid on the diagram of anatomy 1010. The line indicators are representative of various image planes or views of interest.
- the views associated with the indicators include a trans-ventricular (TV) image plane, a trans-cerebellar (TC) image plane, and a trans-thalamic (TT) image plane.
- Another indicator is provided showing a reconstructed frame (MPR).
- the MPR indicator may be a dynamic indicator that is responsive to input from a user provided using a user interface device.
- the MPR indicator on the diagram 1010 corresponds to the MPR image 1020 shown beside the diagram 1010.
- the GUI 1000 may display a target image corresponding to one of the views shown in the diagram 1010.
- A user may interact with the GUI using a user interface device, such as a mouse, keyboard, track ball, voice input, touch screen, or any other suitable input.
- The input from the user indicates a direction of motion along a simulated motion path.
- For example, the user input may be received as a scroll of the mouse wheel, a press of an arrow key on a keyboard, and/or a scroll of a track ball.
- The MPR indicator and the displayed MPR image 1020 can be updated in response to the received input to show adjacent MPR images reconstructed along the simulated motion path.
- A user may scan/scroll forward and backward along the path, and the MPR indicator may be updated in real time to show the location and orientation of the image plane associated with the MPR image 1020.
- The MPR image 1020 is likewise updated based on the user input to show the image corresponding to the MPR image plane illustrated with respect to the diagram 1010.
- In this way, the sonographer can control the display of the MPR images as if manually scanning the ultrasound probe up and down a trajectory, even though the ultrasound image data has already been obtained.
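As a sketch of how this interaction might be wired up (hypothetical class and event names, not a specific GUI toolkit API), the scroll input reduces to moving an index through the precomputed adjacent frames:

```python
class MotionPathViewer:
    """Minimal sketch: scroll-driven navigation through precomputed
    adjacent frames along one simulated motion path."""

    def __init__(self, frames, target_index):
        self.frames = frames              # adjacent images along the path
        self.index = target_index         # start at the reconstructed target
        self.target_index = target_index  # frame currently tagged as target

    def on_scroll(self, delta):
        """Map a mouse-wheel/arrow-key/trackball delta to a frame index,
        clamped to the ends of the path."""
        self.index = max(0, min(len(self.frames) - 1, self.index + delta))
        return self.frames[self.index]    # caller redraws image and indicator
```

Scrolling forward or backward then simply returns the next reconstructed frame for display, mirroring manual probe translation.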
- The GUI 1000 may be provided after the sonographer has obtained the three-dimensional ultrasound data and replaced the ultrasound probe.
- The processor circuit is configured to display a looped playback of the target image and adjacent images to provide the appearance of periodically scanning forward and/or backward along the simulated motion path.
- The processor circuit may also be configured to scan through a plurality of adjacent images in response to a single user input (e.g., a key press, button press, etc.).
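Both behaviors could be sketched in a few lines on top of the hypothetical viewer above; the ping-pong ordering and the per-press frame count are illustrative assumptions:

```python
from itertools import chain, cycle

def ping_pong_playback(frames):
    """Yield frames forward then backward, endlessly, giving the
    appearance of periodically scanning along the motion path."""
    return cycle(chain(frames, list(reversed(frames[1:-1]))))

def scan_on_single_input(viewer, n_frames=10, direction=+1):
    """Step through several adjacent frames for one key/button press."""
    for _ in range(n_frames):
        yield viewer.on_scroll(direction)
```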
- The processor circuit is configured to receive a user input to correct or adjust the reconstructed target image.
- For example, the processor circuit may be configured to receive, from the sonographer via a user input device, a selection corresponding to an adjacent image. The processor circuit may then tag, save, or otherwise identify the selected adjacent image as the target image.
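Continuing the same hypothetical viewer, correcting the target could amount to re-tagging the currently displayed frame:

```python
def select_as_target(viewer):
    """Tag the currently displayed adjacent image as the corrected
    target image (extends the MotionPathViewer sketch; assumed API)."""
    viewer.target_index = viewer.index
    return viewer.frames[viewer.index]  # to be saved/exported as the target
```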
- In some embodiments, the target image comprises an interpolated image generated as described above.
- In some embodiments, the adjacent images are displayed successively, one by one. In other embodiments, the adjacent images are displayed alongside the target image.
- Multiple adjacent and/or interpolated images may be displayed simultaneously.
- Multiple different simulated motion paths may be determined. For example, motion paths corresponding to different directions of uncertainty may be provided.
- Intersecting simulated motion paths may also be determined, along with additional adjacent images associated with those intersecting paths.
- Different inputs received through a user interface (e.g., up/down or left/right key presses) correspond to adjacent images along different simulated motion paths.
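A hypothetical key binding, assuming one viewer per motion path as in the earlier sketch, might look like this:

```python
# Vertical keys step along one simulated motion path; horizontal keys
# step along an intersecting one (e.g., two directions of uncertainty).
KEY_TO_PATH_STEP = {
    "Up": ("path_a", +1), "Down": ("path_a", -1),
    "Right": ("path_b", +1), "Left": ("path_b", -1),
}

def on_key(key, viewers):
    """viewers: dict mapping path name -> MotionPathViewer for that path."""
    path, delta = KEY_TO_PATH_STEP[key]
    return viewers[path].on_scroll(delta)
```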
- Embodiments of the present disclosure provide a number of benefits for automated image reconstruction procedures. For example, generating and displaying the dynamic adjacent image stream provides a familiar imaging experience, a simple and intuitive way to correct the target image, and a simple workflow transition from two-dimensional to three-dimensional ultrasound image acquisition. Further, embodiments of the present disclosure provide a user-friendly way to navigate between multiple planes of interest in one anatomy (e.g., the TT, TV, and TC planes for fetal ultrasound), and efficient ways to provide important anatomical context information and renderings directly related to the clinical guidelines for standard plane selection, including which anatomical objects should or should not be included in a given plane. Further, embodiments of the present disclosure allow for enhanced clinical confidence in an automatically reconstructed image.
- The steps of the method 900 described above may be performed by one or more components of an ultrasound imaging system, such as a processor or processor circuit, a multiplexer, a beamformer, a signal processing unit, an image processing unit, or any other suitable component of the system.
- For example, one or more steps described above may be carried out by the processor circuit 150 described with respect to Fig. 2.
- The processing components of the system can be integrated within the ultrasound imaging device, contained within an external console, or provided as a separate component.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Biophysics (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062985536P | 2020-03-05 | 2020-03-05 | |
PCT/EP2021/054251 WO2021175629A1 (en) | 2020-03-05 | 2021-02-22 | Contextual multiplanar reconstruction of three-dimensional ultrasound imaging data and associated devices, systems, and methods |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4114272A1 true EP4114272A1 (en) | 2023-01-11 |
Family
ID=74701481
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21707653.8A Withdrawn EP4114272A1 (en) | 2020-03-05 | 2021-02-22 | Contextual multiplanar reconstruction of three-dimensional ultrasound imaging data and associated devices, systems, and methods |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230054610A1 (en) |
EP (1) | EP4114272A1 (en) |
JP (1) | JP2023517512A (en) |
CN (1) | CN115243621A (en) |
WO (1) | WO2021175629A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116135521B (en) * | 2023-04-14 | 2023-07-28 | 易加三维增材技术(杭州)有限公司 | Scanning path display method, device and system and electronic equipment |
EP4517648A1 (en) * | 2023-08-31 | 2025-03-05 | Koninklijke Philips N.V. | Oblique multiplanar reconstruction of three-dimensional medical images |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4173007A (en) | 1977-07-01 | 1979-10-30 | G. D. Searle & Co. | Dynamically variable electronic delay lines for real time ultrasonic imaging systems |
US6280387B1 (en) * | 1998-05-06 | 2001-08-28 | Siemens Medical Systems, Inc. | Three-dimensional tissue/flow ultrasound imaging system |
US6443896B1 (en) | 2000-08-17 | 2002-09-03 | Koninklijke Philips Electronics N.V. | Method for creating multiplanar ultrasonic images of a three dimensional object |
US7003175B2 (en) * | 2001-03-28 | 2006-02-21 | Siemens Corporate Research, Inc. | Object-order multi-planar reformatting |
US6589176B2 (en) * | 2001-12-05 | 2003-07-08 | Koninklijke Philips Electronics N.V. | Ultrasonic image stabilization system and method |
GB2395880B (en) * | 2002-11-27 | 2005-02-02 | Voxar Ltd | Curved multi-planar reformatting of three-dimensional volume data sets |
JP2006508729A (en) | 2002-12-04 | 2006-03-16 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | High frame rate 3D ultrasound imager |
US20070255139A1 (en) * | 2006-04-27 | 2007-11-01 | General Electric Company | User interface for automatic multi-plane imaging ultrasound system |
CN101868737B (en) | 2007-11-16 | 2013-04-24 | 皇家飞利浦电子股份有限公司 | Interventional navigation using 3d contrast-enhanced ultrasound |
US8494250B2 (en) * | 2008-06-06 | 2013-07-23 | Siemens Medical Solutions Usa, Inc. | Animation for conveying spatial relationships in three-dimensional medical imaging |
US8265366B2 (en) * | 2008-09-24 | 2012-09-11 | Koninklijke Philips Electronic N.V. | Generation of standard protocols for review of 3D ultrasound image data |
US20100121189A1 (en) * | 2008-11-12 | 2010-05-13 | Sonosite, Inc. | Systems and methods for image presentation for medical examination and interventional procedures |
KR101904518B1 (en) * | 2010-10-27 | 2018-10-04 | 솔리도 디자인 오토메이션 인코퍼레이티드 | Method and system for identifying rare-event failure rates |
US9107607B2 (en) * | 2011-01-07 | 2015-08-18 | General Electric Company | Method and system for measuring dimensions in volumetric ultrasound data |
US10575757B2 (en) | 2011-08-16 | 2020-03-03 | Koninklijke Philips N.V. | Curved multi-planar reconstruction using fiber optic shape data |
WO2013040673A1 (en) * | 2011-09-19 | 2013-03-28 | The University Of British Columbia | Method and systems for interactive 3d image segmentation |
WO2014018983A1 (en) * | 2012-07-27 | 2014-01-30 | The Board Of Trustees Of The Leland Stanford Junior University | Manipulation of imaging probe during medical procedure |
CA2949252C (en) * | 2013-03-15 | 2021-11-09 | Synaptive Medical (Barbados) Inc. | Planning, navigation and simulation systems and methods for minimally invasive therapy |
KR102245202B1 (en) * | 2014-03-17 | 2021-04-28 | 삼성메디슨 주식회사 | The method and apparatus for changing at least one of direction and position of plane selection line based on a predetermined pattern |
US10905396B2 (en) * | 2014-11-18 | 2021-02-02 | C. R. Bard, Inc. | Ultrasound imaging system having automatic image presentation |
EP3469553A1 (en) | 2016-06-10 | 2019-04-17 | Koninklijke Philips N.V. | Systems and methods for generating b-mode images from 3d ultrasound data |
- 2021-02-22 EP EP21707653.8A patent/EP4114272A1/en not_active Withdrawn
- 2021-02-22 JP JP2022552408A patent/JP2023517512A/en active Pending
- 2021-02-22 CN CN202180019152.4A patent/CN115243621A/en active Pending
- 2021-02-22 US US17/908,289 patent/US20230054610A1/en active Pending
- 2021-02-22 WO PCT/EP2021/054251 patent/WO2021175629A1/en unknown
Non-Patent Citations (1)
Title |
---|
Schmidt-Richberg, Alexander, et al., "Offset regression networks for view plane estimation in 3D fetal ultrasound," Progress in Biomedical Optics and Imaging, Proc. SPIE, Vol. 10949, id. 109493K (March 15, 2019), ISSN 1605-7422, ISBN 978-1-5106-0027-0, DOI: 10.1117/12.2512697, XP060120522 * |
Also Published As
Publication number | Publication date |
---|---|
US20230054610A1 (en) | 2023-02-23 |
CN115243621A (en) | 2022-10-25 |
WO2021175629A1 (en) | 2021-09-10 |
JP2023517512A (en) | 2023-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10617384B2 (en) | M-mode ultrasound imaging of arbitrary paths | |
US10835210B2 (en) | Three-dimensional volume of interest in ultrasound imaging | |
CN112867444B (en) | System and method for guiding the acquisition of ultrasound images | |
US20210321978A1 (en) | Fat layer identification with ultrasound imaging | |
JP7346586B2 (en) | Method and system for acquiring synthetic 3D ultrasound images | |
CN115426954A (en) | Biplane and three-dimensional ultrasound image acquisition for generating roadmap images and associated systems and devices | |
JP6835587B2 (en) | Motion-adaptive visualization in medical 4D imaging | |
US20230054610A1 (en) | Contextual multiplanar reconstruction of three-dimensional ultrasound imaging data and associated devices, systems, and methods | |
EP3547923B1 (en) | Ultrasound imaging system and method | |
US12092735B2 (en) | Method and apparatus for deep learning-based ultrasound beamforming | |
US20220160333A1 (en) | Optimal ultrasound-based organ segmentation | |
US20230094631A1 (en) | Ultrasound imaging guidance and associated devices, systems, and methods | |
US20180168536A1 (en) | Intervolume lesion detection and image preparation | |
CN113573645B (en) | Method and system for adjusting field of view of an ultrasound probe | |
EP4014884A1 (en) | Apparatus for use in analysing an ultrasound image of a subject | |
US20200222030A1 (en) | Ultrasound image apparatus and method of controlling the same | |
EP4515569A1 (en) | Analysing an ultrasound image feed |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20221005 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| | 17Q | First examination report despatched | Effective date: 20231006 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
| | 18W | Application withdrawn | Effective date: 20250102 |