WO2025190828A1 - Correction of an MRI image - Google Patents
Info
- Publication number
- WO2025190828A1 (PCT/EP2025/056377)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mri image
- new
- examination
- mri
- intensity value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R33/00—Arrangements or instruments for measuring magnetic variables
- G01R33/20—Arrangements or instruments for measuring magnetic variables involving magnetic resonance
- G01R33/44—Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
- G01R33/48—NMR imaging systems
- G01R33/54—Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
- G01R33/56—Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
- G01R33/5608—Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R33/00—Arrangements or instruments for measuring magnetic variables
- G01R33/20—Arrangements or instruments for measuring magnetic variables involving magnetic resonance
- G01R33/44—Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
- G01R33/48—NMR imaging systems
- G01R33/54—Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
- G01R33/56—Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
- G01R33/5601—Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution involving use of a contrast agent for contrast manipulation, e.g. a paramagnetic, super-paramagnetic, ferromagnetic or hyperpolarised contrast agent
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Definitions
- the European patent application EP4332601A1 proposes to generate a synthetic contrast-enhanced MRI image of an examination region of an examination object on the basis of a first MRI image and a second MRI image.
- the first MRI image is a native MRI image, i.e. it represents the examination region of the examination object without contrast agent.
- the second MRI image represents the examination region of the examination object after application of an amount of a contrast agent.
- the first MRI image (the native MRI image) is subtracted from the second MRI image.
- the result of the subtraction is subjected to noise suppression.
- the noise-suppressed result of the subtraction is added α-fold to the first MRI image.
- the result is a synthetic MRI image of the examination region of the examination object in which the contrast enhancement caused by the contrast agent is either reduced or increased compared to the second MRI image, depending on the size of the factor α.
- the method disclosed in EP4332601A1 is based on the assumption that a sub-region of the examination region into which no contrast agent penetrates is characterized by the same intensity values (e.g. grey values or colour values) in the first MRI image and in the second MRI image.
- the method disclosed in EP4332601A1 assumes that an image element in the first MRI image has the same grey or colour value as a corresponding image element of the second MRI image, wherein the corresponding image elements represent the same sub-region of the examination region and no contrast agent enters the sub-region.
- the method disclosed in EP4332601A1 assumes that sub-regions into which no contrast agent enters are always displayed in the same way in a dynamic contrast-enhanced MRI examination. Unfortunately, this assumption is not always justified.
- the present disclosure relates to a computer-implemented method for training a machine learning model, the method comprising: - providing a plurality of data sets, wherein each data set comprises at least two magnetic resonance imaging (MRI) images, a first MRI image and a second MRI image, wherein the first MRI image represents an examination region of an examination object in a first state, wherein the first MRI image is characterized by a first intensity value distribution, wherein the second MRI image represents the examination region of the examination object in a second state, wherein the second MRI image is characterized by a second intensity value distribution, - providing a reversible transformation operation, the transformation operation comprising at least one transformation parameter, wherein the transformation operation performs a transformation of the second intensity value distribution of the second MRI image when applied to the second intensity value distribution and/or to the second MRI image, - generating training data based on the plurality of data sets, wherein generating the training data comprises, for each data set: selecting at least one value of the at least one transformation parameter, generating a transformed second MRI image and/or a transformed second intensity value distribution by applying the transformation operation with the at least one selected value, and including (i) the first MRI image and the transformed second MRI image and/or their intensity value distributions as input data and (ii) the at least one selected value as target data, and - training the machine learning model on the training data.
- the present disclosure relates to a computer-implemented method for correcting an intensity value distribution of a new MRI image using the trained machine learning model, the method comprising: - providing the trained machine learning model, - receiving a new data set, wherein the new data set comprises at least two new MRI images, a new first MRI image and a new second MRI image, wherein the new first MRI image represents the examination region of a new examination object in the first state, wherein the new first MRI image is characterized by a first intensity value distribution, wherein the new second MRI image represents the examination region of the new examination object in the second state, wherein the new second MRI image is characterized by a second intensity value distribution, - inputting the new first MRI image and the new second MRI image and/or their intensity value distributions into the trained machine learning model, - receiving at least one predicted value as an output from the trained machine learning model, - providing an inverse transformation operation, wherein the inverse transformation operation comprises the at least one transformation parameter, wherein the inverse transformation operation reverses the transformation performed by the reversible transformation operation, - generating a corrected second MRI image by applying the inverse transformation operation with the at least one predicted value to the new second MRI image and/or its intensity value distribution.
- the present disclosure relates to a computer-implemented method for generating a synthetic MRI image of an examination region of an examination object, the method comprising: - providing a new first MRI image and a new second MRI image, wherein the new first MRI image represents the examination region of a new examination object without a contrast agent, wherein the new first MRI image is characterized by a first intensity value distribution, wherein the new second MRI image represents the examination region of the new examination object after application of the amount of the contrast agent, wherein the new second MRI image is characterized by a second intensity value distribution, - generating a corrected second MRI image based on the new second MRI image using the trained machine learning model and the inverse transformation operation, - generating a third MRI image based on the new first MRI image and the corrected second MRI image, wherein generating the third MRI image comprises: subtracting the new first MRI image from the corrected second MRI image in real space or in frequency space, optionally multiplying the result of the subtraction by a frequency-dependent weighting function in frequency space, and - generating a fourth MRI image, wherein generating the fourth MRI image comprises: multiplying the third MRI image by a gain factor and adding the result of the multiplication to the new first MRI image and/or the corrected second MRI image.
- the present disclosure provides a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processing unit of a computer system, causes the computer system to perform one or more of the computer-implemented methods disclosed above.
- the present disclosure provides a computer system comprising: a processing unit; and a memory storing a computer program configured to perform, when executed by the processing unit, one or more of the computer-implemented methods disclosed above.
- the present disclosure relates to a use of a contrast agent in a magnetic resonance imaging examination of an examination region of an examination object, the magnetic resonance imaging examination comprising: - providing a trained machine learning model, wherein the trained machine learning model is configured and was trained to determine at least one predicted value for at least one transformation parameter based on: model parameters, a first MRI image, and a second MRI image and/or their intensity value distributions, wherein training of the machine learning model comprised: providing a plurality of data sets, wherein each data set comprised at least two MRI images, a first MRI image and a second MRI image, o wherein the first MRI image represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image is characterized by a first intensity value distribution, o wherein the second MRI image represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image is characterized by a second intensity value distribution.
- the present disclosure provides a contrast agent for use in a magnetic resonance imaging examination of an examination region of an examination object, the magnetic resonance imaging examination comprising: - providing a trained machine learning model, wherein the trained machine learning model is configured and was trained to determine at least one predicted value for at least one transformation parameter based on: model parameters, a first MRI image, and a second MRI image and/or their intensity value distributions, wherein training of the trained machine learning model comprised: providing a plurality of data sets, wherein each data set comprised at least two MRI images, a first MRI image and a second MRI image, o wherein the first MRI image represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image is characterized by a first intensity value distribution, o wherein the second MRI image represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image is characterized by a second intensity value distribution.
- the present disclosure provides a kit comprising a contrast agent and a computer program that, when executed by a processing unit of a computer system, causes the computer system to execute the following steps: - providing a trained machine learning model, wherein the trained machine learning model is configured and was trained to determine at least one predicted value for at least one transformation parameter based on: model parameters, a first MRI image, and a second MRI image and/or their intensity value distributions, wherein training of the trained machine learning model comprised: providing a plurality of data sets, wherein each data set comprised at least two MRI images, a first MRI image and a second MRI image, o wherein the first MRI image represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image is characterized by a first intensity value distribution, o wherein the second MRI image represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image is characterized by a second intensity value distribution.
- Fig. 1 shows a schematic example of the generation of training data and the training of a machine learning model.
- Fig. 2 shows a schematic example of the generation of a corrected second MRI image using the trained machine learning model.
- Fig. 3 shows an embodiment of the computer-implemented method for training the machine learning model in the form of a flow chart.
- Fig. 4 shows an embodiment of the computer-implemented method for correcting an intensity value distribution of an MRI image using the trained machine learning model in the form of a flow chart.
- Fig. 5 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of an examination region of an examination object.
- Fig. 6 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of an examination region of an examination object in the form of a flow chart.
- Fig. 7 illustrates a computer system according to some example implementations of the present disclosure in more detail.
- Fig. 8 shows a first MRI image and a second MRI image together with their intensity value distributions.
- DETAILED DESCRIPTION
- The aspects of the present disclosure will be more particularly elucidated below without distinguishing between the aspects of the present disclosure (method, computer system, computer-readable storage medium, use, contrast agent for use, kit).
- the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more” and “at least one.”
- the singular forms of “a”, “an”, and “the” include plural referents, unless the context clearly dictates otherwise. Where only one item is intended, the term “one” or similar language is used.
- the terms “has”, “have”, “having”, or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based at least partially on” unless explicitly stated otherwise.
- the machine learning model can receive input data and provide output data based on that input data and on parameters of the machine learning model (model parameters).
- the machine learning model can learn a relation between input data and output data through training. In training, parameters of the machine learning model may be adjusted in order to provide a desired output for a given input.
- the process of training a machine learning model involves providing a machine learning algorithm (that is the learning algorithm) with training data to learn from.
- the term “trained machine learning model” refers to the model artifact that is created by the training process.
- the training data usually contains the correct answer, which is referred to as the target.
- the learning algorithm finds patterns in the training data that map input data to the target, and it outputs a trained machine learning model that captures these patterns.
- training data are inputted into the machine learning model and the machine learning model generates an output.
- the output is compared with the (known) target.
- Parameters of the machine learning model are modified in order to reduce the deviations between the output and the (known) target to a (defined) minimum.
- a loss function can be used for training, where the loss function can quantify the deviations between the output and the target.
- the loss function may be chosen in such a way that it rewards a wanted relation between output and target and/or penalizes an unwanted relation between an output and a target.
- Such a relation can be, e.g., a similarity, or a dissimilarity, or another relation.
- a loss function can be used to calculate a loss for a given pair of output and target.
- the aim of the training process can be to modify (adjust) parameters of the machine learning model in order to reduce the loss to a (defined) minimum.
- for scalar values, the loss function can be the absolute difference between the output value and the target value.
- a high absolute loss value can mean that one or more model parameters needs to be changed to a substantial degree.
- difference metrics between vectors such as the mean square error, a cosine distance, a norm of the difference vector such as a Euclidean distance, a Chebyshev distance, an Lp norm of a difference vector, a weighted norm or another type of difference metric of two vectors can be chosen as the loss function.
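For illustration only (not part of the disclosure), the following Python sketch computes several of the difference metrics mentioned above for a pair of output and target vectors; the function and argument names are placeholders.

```python
import numpy as np

def difference_metrics(output: np.ndarray, target: np.ndarray, p: float = 3.0) -> dict:
    """Illustrative loss candidates quantifying the deviation of an output
    vector from a target vector."""
    diff = output - target
    return {
        "absolute_difference": float(np.sum(np.abs(diff))),
        "mean_squared_error": float(np.mean(diff ** 2)),
        # Cosine distance: 1 minus the cosine similarity of the two vectors.
        "cosine_distance": float(
            1.0 - np.dot(output, target) / (np.linalg.norm(output) * np.linalg.norm(target))
        ),
        "euclidean_distance": float(np.linalg.norm(diff)),        # L2 norm of the difference vector
        "chebyshev_distance": float(np.max(np.abs(diff))),        # L-infinity norm
        "lp_norm": float(np.sum(np.abs(diff) ** p) ** (1.0 / p)), # general Lp norm
    }
```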
- training data for training the machine learning model is generated.
- the training data is generated based on a plurality of data sets.
- the term “plurality” means more than 10, or even more than 100.
- Each data set of the plurality of data sets is usually the result of a magnetic resonance imaging (MRI) examination of an examination region of an examination object.
- the first MRI image represents an examination region of an examination object in a first state.
- the second MRI image represents the examination region of the examination object in a second state. It is possible that data sets contain more than two MRI images, e.g. three or four or five or more than five.
- a third MRI image may represent the examination region of the examination object in a third state
- a fourth MRI image may represent the examination region of the examination object in a fourth state
- a fifth MRI image may represent the examination region of the examination object in a fifth state, and so forth.
- the states may indicate the amounts of a contrast agent administered to the examination object and/or different time points in a dynamic MRI examination and/or different contrast agents and/or different measurement parameters used to generate the respective MRI image.
- the first MRI image represents an examination region of an examination object without contrast agent and the second MRI image represents the examination region of the examination object after application of an amount of a contrast agent.
- the first MRI image represents an examination region of an examination object after application of a first amount of a contrast agent and the second MRI image represents the examination region of the examination object after application of a second amount of a contrast agent.
- the first MRI image represents an examination region of an examination object at a first point in time before or after application of a contrast agent
- the second MRI image represents the examination region of the examination object at a second point in time before or after application of the contrast agent.
- the first MRI image represents an examination region of an examination object at a first point in time after application of a first amount of a contrast agent
- the second MRI image represents the examination region of the examination object at a second point in time after application of a second amount of the contrast agent.
- the first MRI image represents an examination region of an examination object after application of an amount of a first contrast agent
- the second MRI image represents the examination region of the examination object after application of an amount of a second contrast agent.
- the first MRI image represents an examination region of an examination object at a first point in time of a dynamic MRI examination and the second MRI image represents the examination region of the examination object at a second point in time of the dynamic MRI examination.
- the first MRI image represents an examination region of an examination object as a result of an examination with first measurement parameters and the second MRI image represents the examination region of the examination object as a result of an examination with second measurement parameters.
- the “examination object” is normally a living being, e.g. a mammal, e.g. a human. In an embodiment of the present disclosure, the examination object is a human.
- the examination object is usually different in all data sets, but it does not have to be different; there may be one or more data sets in which the examination object is identical.
- the “examination region” is a part of the examination object, for example an organ or part of an organ or a plurality of organs or another part of the examination object.
- the examination region can be the same or different for all data sets.
- the examination region may be a liver, kidney, heart, lung, brain, stomach, bladder, prostate, intestine, pancreas, thyroid, breast, uterus or a part of said parts or another part of the body of a mammal (for example a human).
- the examination region includes a liver or part of a liver or the examination region is a liver or part of a liver of a mammal, e.g. a human.
- the examination region includes a brain or part of a brain or the examination region is a brain or part of a brain of a mammal, e.g. a human.
- the examination region includes a heart or part of a heart or the examination region is a heart or part of a heart of a mammal, e.g. a human.
- the examination region includes a thorax or part of a thorax or the examination region is a thorax or part of a thorax of a mammal, e.g. a human.
- the examination region includes a stomach or part of a stomach or the examination region is a stomach or part of a stomach of a mammal, e.g. a human.
- the examination region includes a pancreas or part of a pancreas or the examination region is a pancreas or part of a pancreas of a mammal, e.g. a human.
- the examination region includes a kidney or part of a kidney or the examination region is a kidney or part of a kidney of a mammal, e.g. a human. In a further embodiment, the examination region includes one or both lungs or part of a lung of a mammal, e.g. a human. In a further embodiment, the examination region includes a breast or part of a breast or the examination region is a breast or part of a breast of a female mammal, e.g. a female human. In a further embodiment, the examination region includes a prostate or part of a prostate or the examination region is a prostate or part of a prostate of a male mammal, e.g. a male human.
- the examination region, also referred to as the field of view (FOV), is in particular a volume that is imaged in MRI images.
- the examination region is typically defined by a radiologist, for example on an overview image. It is of course also possible for the examination region to be alternatively or additionally defined in an automated manner, for example on the basis of a selected protocol.
- the examination region is usually the same for all data sets, however, the examination region can also be different.
- the first MRI image and the second MRI image (and any further MRI image) can be two-dimensional (2D) or three-dimensional (3D) or four-dimensional (4D) MRI images, for example.
- Each MRI image usually comprises a multitude of image elements (e.g., pixels or voxels or doxels).
- Each image element usually represents a sub-region of the examination region.
- Each image element has an intensity value.
- This intensity value can, for example, be a grey value or a colour value that correlates with the intensity of a physical signal. However, the intensity value can also be the intensity value of the physical signal itself.
- each MRI image is characterized by an intensity value distribution.
- the intensity value distribution indicates how the intensity values are distributed across the image elements.
- the intensity value distribution indicates how many image elements (absolute or in relation to the total number of image elements) have a defined intensity value or have an intensity value in a defined range of intensity values.
- An intensity value distribution can be displayed graphically in the form of a histogram, for example. In Fig. 8, the intensity value distributions IVD1 and IVD2 are shown graphically in the form of histograms.
- the intensity values I are plotted on the x-axis (abscissa). In the example shown in Fig. 8, the intensity values I are grey values ranging from 0 to 255.
- a frequency F is plotted on the y-axis (ordinate). The frequency F indicates how often an intensity value I occurs in the respective MRI image.
- the frequency F can be an absolute frequency, i.e. indicate, for example, how many image elements of the MRI image have a certain intensity value, or a relative frequency, i.e. indicate the proportion of image elements of the MRI image that have a certain intensity value.
- the intensity value distributions IVD1 and IVD2 are shown as curves in Fig. 8; however, the values on which the intensity value distributions are based are discrete values.
- the intensity value distributions (histograms) can therefore also be displayed as a bar chart.
- the intensity values can also be grouped. For example, the intensity values 0 to 9 can be combined into a first group, the intensity values 10 to 19 into a second group, and so on. A histogram can then be used to display the frequencies of the image elements that belong to the respective groups.
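As a concrete sketch (assuming 8-bit grey values from 0 to 255, as in Fig. 8), an intensity value distribution with optional grouping of intensity values could be computed as follows; the function name and the grouping size are illustrative.

```python
import numpy as np

def intensity_value_distribution(image: np.ndarray, group_size: int = 1):
    """Histogram of an MRI image with grey values 0..255; with group_size=10,
    the intensity values 0-9 form the first group, 10-19 the second, and so on."""
    edges = np.arange(0, 256 + group_size, group_size)
    absolute, _ = np.histogram(image.ravel(), bins=edges)  # absolute frequency F per group
    relative = absolute / image.size                       # relative frequency per group
    return edges, absolute, relative
```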
- the first MRI image is characterized by a first intensity value distribution
- the second MRI image is characterized by a second intensity value distribution.
- if there is a third MRI image, it may be characterized by a third intensity value distribution.
- if there is a fourth MRI image, it may be characterized by a fourth intensity value distribution.
- if there is a fifth MRI image, it may be characterized by a fifth intensity value distribution.
- Each data set of the plurality of data sets originates from an MRI examination.
- the data sets used for training the machine learning model should be data sets in which the problem described in the introduction of this disclosure, or an analogous problem, does not occur.
- Such data sets can be determined and selected by analysing the first MRI image and the second MRI image (and any further MRI image if available). This is explained below using an example and applies mutatis mutandis to all other embodiments (and to any number of MRI images available).
- the first MRI image represents the examination region without contrast agent
- the second MRI image represents the examination region after application of an amount of contrast agent
- image elements of sub-regions of the examination region into which no contrast agent entered have equal intensity values in the first MRI image and the second MRI image, or the intensity values of corresponding image elements of the first MRI image and the second MRI image lie within a defined range.
- the second MRI image shows sub-regions of the examination region into which no contrast agent has penetrated in the same or at least a similar way as in the first MRI image.
- the deviations between the first MRI image and the second MRI image with respect to the intensity values of the image elements of sub-regions into which no contrast agent has reached or entered are smaller than a predetermined threshold value.
- the deviations can be determined, for example, by subtracting the intensity values of the corresponding image elements (or a selection thereof) of the first MRI image from those of the second MRI image.
- the absolute values of the individual differences can be added together and compared with a predefined threshold value.
- a mean value, e.g. the arithmetic mean of the absolute differences, can also be determined and compared with a predefined threshold value (see the sketch below). If the deviations are smaller than the predefined threshold value, the MRI images of the data set can be used to generate the training data; if the deviations are greater than or equal to the predefined threshold value, the MRI images of the data set can be discarded. Threshold values can be defined by a radiologist, for example. Usually, the lower the threshold value, the more suitable the MRI images of the data set are for generating training data. As described, the first MRI image and/or the second MRI image may represent the examination region of the examination object after the application of an amount of a contrast agent.
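Before turning to contrast agents, the following minimal sketch illustrates the data set selection just described. It assumes the relevant image elements (sub-regions without contrast agent) are given by a hypothetical boolean mask `no_agent_mask`; the threshold would be defined, e.g., by a radiologist.

```python
import numpy as np

def dataset_is_suitable(first: np.ndarray, second: np.ndarray,
                        no_agent_mask: np.ndarray, threshold: float) -> bool:
    """Return True if the mean absolute intensity difference in sub-regions
    into which no contrast agent entered is below the predefined threshold."""
    diff = second[no_agent_mask].astype(float) - first[no_agent_mask].astype(float)
    return float(np.mean(np.abs(diff))) < threshold
```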
- a “contrast agent” is a substance that is administered to an examination object to enhance the visibility of specific tissues and/or blood vessels within the examination region. These agents usually contain paramagnetic or superparamagnetic properties, which alter the magnetic resonance properties of tissues, leading to improved contrast in the resulting images. By highlighting certain structures, such as blood vessels or abnormal tissues, contrast agents help radiologists and physicians to better visualize and analyze specific areas of interest within the examination object, aiding in the diagnosis and treatment of various medical conditions.
- An example of a superparamagnetic contrast agent is iron oxide nanoparticles (SPIO, superparamagnetic iron oxide).
- Examples of paramagnetic contrast agents include gadolinium chelates such as gadopentetate dimeglumine (trade name: Magnevist® and others), gadoteric acid (Dotarem®, Dotagita®, Cyclolux®), gadodiamide (Omniscan®), gadoteridol (ProHance®), gadobutrol (Gadovist®), gadopiclenol (Elucirem®, Vueway®) and gadoxetic acid (Primovist®/Eovist®).
- the contrast agent is an agent that includes gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid (also referred to as gadolinium-DOTA or gadoteric acid).
- the contrast agent is an agent that includes gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid (Gd-EOB-DTPA); preferably, the contrast agent includes the disodium salt of gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid (also referred to as gadoxetic acid).
- the contrast agent is an agent that includes gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate (also referred to as gadopiclenol) (see for example WO2007/042504 and WO2020/030618 and/or WO2022/013454).
- the contrast agent is an agent that includes dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-) (also referred to as gadobenic acid).
- the contrast agent is an agent that includes tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate (also referred to as gadoquatrane) (see for example J. …).
- the contrast agent is an agent that includes a Gd3+ complex of a compound of the formula (I)
- Ar is a group selected from [the structures depicted in the application], where # is the linkage to X
- X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue
- R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3
- R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-
- the contrast agent is an agent that includes a Gd3+ complex of a compound of the formula (II)
- Ar is a group selected from [the structures depicted in the application], where # is the linkage to X, X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue, R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3; R8 is a group selected from C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-
- C1-C3 alkyl denotes a linear or branched, saturated monovalent hydrocarbon group having 1, 2 or 3 carbon atoms, for example methyl, ethyl, n-propyl or isopropyl.
- C2-C4 alkyl denotes a linear or branched, saturated monovalent hydrocarbon group having 2, 3 or 4 carbon atoms.
- C2-C4 alkoxy refers to a linear or branched, saturated monovalent group of the formula (C2-C4 alkyl)-O-, in which the term “C2-C4 alkyl” is as defined above, for example an ethoxy, n-propoxy or isopropoxy group.
- the contrast agent is an agent that includes gadolinium 2,2',2''-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate (see for example WO2022/194777, example 1).
- the contrast agent is an agent that includes gadolinium 2,2',2''-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 2).
- the contrast agent is an agent that includes gadolinium 2,2',2''-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 4).
- the contrast agent is an agent that includes gadolinium (2S,2'S,2''S)-2,2',2''-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate) (see for example WO2022/194777, example 15).
- the contrast agent is an agent that includes gadolinium 2,2',2''-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 31).
- the contrast agent is an agent that includes gadolinium 2,2',2''-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate.
- the contrast agent is an agent that includes gadolinium 2,2',2''-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate.
- the contrast agent is an agent that includes gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate (also referred to as gadodiamide).
- the contrast agent is an agent that includes gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate (also referred to as gadoteridol).
- the contrast agent is an agent that includes gadolinium(III) 2,2',2''-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate (also referred to as gadobutrol or Gd-DO3A-butrol).
- the amount of contrast agent administered to the examination object can be the standard amount; however, it can also be an amount that is smaller or larger than the standard amount.
- the standard amount is usually the amount recommended by the manufacturer and/or distributor of the contrast agent and/or approved by a regulatory authority and/or the amount specified in a package leaflet for the contrast agent.
- the standard amount of Primovist ® is 0.025 mmol Gd-EOB-DTPA disodium/kg body weight.
- the amount(s) of contrast agent can be the same or different for all data sets.
- the time between the administration of the contrast agent and the generation of the second MRI image can be the same or different for all data sets.
- to generate the training data, each second MRI image and/or its intensity value distribution (i.e., the respective second intensity value distribution) is subjected to a reversible transformation operation. If there are more than two MRI images, the further MRI images (except for the first MRI image) are also subjected to the reversible transformation operation.
- the first MRI image is usually the reference; it is usually not transformed and/or corrected.
- the transformation operation is non-linear.
- the transformation operation can be defined by an expert (e.g. a radiologist or an MRI scanner manufacturer). It is also possible to derive the transformation operation from the analysis of a large number of MRI images, e.g., with and without contrast agent.
- the reversible transformation operation can be applied to the second MRI image or the intensity value distribution of the second MRI image (i.e. the second intensity value distribution).
- the second MRI image and/or the second intensity value distribution of each data set is subjected to the reversible transformation operation as described. At least one value of at least one transformation parameter is selected. The selection can be random (e.g. in a predefined range).
- an upper limit value and lower limit value can be defined for at least one transformation parameter. Random numbers can be generated within the defined range. Each of these random numbers can be a value for the at least one transformation parameter. If the transformation operation has more than one transformation parameter, a value range can be defined for each transformation parameter, within which a selection of values can be made. Value limits and ranges can be defined by a radiologist, for example. Within a predefined range, a selection can also be made according to a predefined probability distribution, so that not all values are selected with the same probability.
- the training data comprises, for each data set of the plurality of data sets, (i) the first MRI image and the transformed second MRI image and/or their intensity value distributions (i.e., the first intensity value distribution and/or the second intensity value distribution) as input data for the machine learning model and (ii) the at least one selected value of the at least one transformation parameter as target data.
- the training data may also include, for each data set of the plurality of data sets, further MRI images, e.g. a transformed third MRI image, a transformed fourth MRI image, a transformed fifth MRI image and so on, and the respective selected values of the at least one transformation parameter.
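The following sketch shows one way to generate such training data. Purely for illustration, and consistent with the inverse operation described further below (subtract b, divide by a), it assumes a reversible transformation of the form T(I; a, b) = a·I + b; the parameter ranges are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(image: np.ndarray, a: float, b: float) -> np.ndarray:
    """Reversible transformation of the intensity values (illustrative affine form)."""
    return a * image + b

def generate_training_data(datasets, a_range=(0.5, 2.0), b_range=(-20.0, 20.0)):
    """For each data set (first image, second image): randomly sample a value
    for each transformation parameter within its predefined range, transform
    the second image, and yield (input data, target data)."""
    for first, second in datasets:
        a = rng.uniform(*a_range)
        b = rng.uniform(*b_range)
        yield (first, transform(second, a, b)), np.array([a, b], dtype=np.float32)
```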
- a machine learning model is provided.
- the machine learning model of the present disclosure can be or comprise a convolutional neural network.
- a convolutional neural network is a type of artificial neural network that is primarily designed to process structured grid data, such as images, and is especially well-suited to analysing visual imagery.
- a CNN comprises an input layer with input neurons, an output layer with output neurons, as well as multiple hidden layers between the input layer and the output layer.
- the nodes in the CNN input layer are usually organized into a set of “filters” (feature detectors), and the output of each set of filters is propagated to nodes in successive layers of the network.
- the computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter.
- Convolution is a specialized kind of mathematical operation performed by two functions to produce a third function that is a modified version of one of the two original functions.
- the first function of the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel.
- the output may be referred to as the feature map.
- the input to a convolution layer can be a multidimensional array of data that defines the various colour/greyscale components of an input image.
- the convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.
- the objective of the convolution operation is to extract features (such as, e.g., edges from an input image).
- the first convolutional layer is responsible for capturing the low-level features such as edges, colour, gradient orientation, etc.
- as further layers are added, the architecture adapts to high-level features as well, giving a network a comprehensive understanding of the images in the dataset.
- the pooling layer is responsible for reducing the spatial size of the feature maps. It is useful for extracting dominant features with some degree of rotational and positional invariance, thus supporting effective training of the model.
- Adding a fully-connected layer is a way of learning non-linear combinations of the high-level features as represented by the output of the convolutional part.
- the CNN can be set up as a regression model.
- the CNN may have fully connected layers. After several convolutional and pooling layers, the high-level reasoning in the neural network occurs via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer. In a regression CNN, the output layer usually has a single neuron (for single-valued regression) or multiple neurons (for multi-valued regression) with a linear or sometimes no activation function. This produces a continuous output, allowing the network to be used for regression tasks.
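A minimal sketch of such a regression CNN follows; the disclosure does not prescribe a specific architecture, so the layer sizes here are illustrative. It takes the first MRI image and the transformed second MRI image as two input channels and outputs predicted values for two transformation parameters.

```python
import torch
import torch.nn as nn

class RegressionCNN(nn.Module):
    """Convolution/pooling layers followed by fully connected layers with a
    linear output for multi-valued regression (two transformation parameters)."""
    def __init__(self, n_outputs: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # reduce spatial size of the feature maps
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),       # linear (no) activation -> continuous output
        )

    def forward(self, first: torch.Tensor, transformed_second: torch.Tensor) -> torch.Tensor:
        x = torch.cat([first, transformed_second], dim=1)  # stack images as input channels
        return self.head(self.features(x))
```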
- the machine learning model of the present disclosure does not have to be or comprise a CNN, it can also be or comprise another machine learning model that can perform regression based on images and/or intensity value distributions.
- One example is a vision transformer. Vision transformers are a recent advancement in the field of computer vision and have been shown to perform on par with or even better than CNNs on several benchmark datasets. They are based on the transformer architecture, originally designed for natural language processing tasks, and have been adapted for image processing tasks. Vision transformers are often used for classification.
- To use a vision transformer for regression, the architecture would remain largely the same, except for the final layer. Instead of a classification layer (often a softmax activation function for multi-class outputs), there is a single neuron with a linear activation function for single-valued regression, or multiple neurons for multi-valued regression. Training a vision transformer for regression would also involve using a regression-appropriate loss function, like mean squared error (MSE) or mean absolute error (MAE), instead of a classification loss function like cross-entropy.
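One possible way to do this, sketched here with torchvision's standard ViT as an assumed backbone (grey-value, two-channel MRI inputs would additionally require adapting the patch embedding), is to replace the classification head by a linear regression head:

```python
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)                # standard vision transformer backbone
in_features = model.heads.head.in_features
model.heads.head = nn.Linear(in_features, 2)  # two outputs: multi-valued regression
loss_fn = nn.MSELoss()                        # regression loss instead of cross-entropy
```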
- a possible architecture of a vision transformer for performing a regression is disclosed, for example, in: K. Parmar, J. Parker, D.
- Fig. 1 shows, by way of example, a first MRI image I1 and a second MRI image I2 of a data set. The first MRI image I1 represents an examination region of an examination object in a first state; the second MRI image I2 represents the examination region of the examination object in a second state (the state is not visible in Fig. 1).
- the first MRI image I1 is characterized by a first intensity value distribution; the second MRI image I2 is characterized by a second intensity value distribution (the intensity value distributions are not visible in Fig. 1).
- a transformed second MRI image I2T is generated on the basis of the second MRI image I2.
- the transformed second MRI image I2T is generated using a transformation operation T(a, b).
- the transformation operation T(a, b) comprises two transformation parameters a and b. It is possible that the transformation operation comprises more or fewer transformation parameters. Values va and vb are selected (sampled) for the transformation parameters a and b.
- the selection can be made randomly or according to a specified probability distribution and/or within defined limits.
- the first MRI image I1 and the transformed second MRI image I2T are fed to the machine learning model MLM as input data.
- the machine learning model MLM is configured and trained to predict the selected values of the transformation parameters a and b on the basis of the first MRI image I1, the transformed second MRI image I2T and model parameters.
- the machine learning model MLM is configured and trained to determine predicted values v̂a and v̂b of the transformation parameters a and b based on the first MRI image I1, the transformed second MRI image I2T and model parameters.
- the predicted values v̂a and v̂b are output data from the machine learning model MLM.
- the training of the machine learning model can also be carried out using the intensity value distributions of the MRI images.
- a transformed second intensity value distribution can be generated based on the second intensity value distribution using the reversible transformation operator with one or more selected values of the one or more transformation parameters.
- the first intensity value distribution and the transformed second intensity value distribution can be fed to the machine learning model.
- the machine learning model outputs one or more predicted values of the one or more transformation parameters.
- a loss function LF is used to determine (quantify) deviations between the selected values va and vb and the predicted values v̂a and v̂b.
- in an optimization process (e.g. a gradient-based optimization method), model parameters of the machine learning model MLM can be modified to reduce the deviations.
- the process shown in Fig. 1 is repeated for a large number of data sets.
- One or more data sets can also be used several times in training, e.g. with different values va and vb of the transformation parameters a and b.
- the training of the machine learning model can be ended when a stop criterion is met.
- Such a stop criterion can be for example: a predefined maximum number of training steps/cycles/epochs has been performed, deviations between output data and target data can no longer be reduced by modifying the model parameters, a predefined minimum of the loss function is reached, and/or an extreme value (e.g., maximum or minimum) of another performance value is reached.
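Putting the pieces together, a training loop consistent with Fig. 1 might look as follows. This is a sketch only: the optimizer, learning rate and stop criteria are illustrative choices, and `loader` is assumed to yield the (input, target) pairs generated above.

```python
import torch

def train(model, loader, max_epochs: int = 100, min_mean_loss: float = 1e-4):
    """Compare the predicted values (v̂a, v̂b) with the selected values (va, vb)
    via a loss function and modify model parameters to reduce the deviations,
    until a stop criterion is met."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(max_epochs):                  # stop criterion: max number of epochs
        total = 0.0
        for (first, transformed_second), target in loader:
            predicted = model(first, transformed_second)
            loss = loss_fn(predicted, target)        # deviation from the selected values
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                         # modify the model parameters
            total += loss.item()
        if total / len(loader) < min_mean_loss:      # stop criterion: loss minimum reached
            break
    return model
```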
- the trained machine learning model and/or the modified model parameters of the trained machine learning model may be output, stored and/or transmitted to a separate computer system. As shown in Fig. 1, the machine learning model can be trained using pairs (e.g., a first MRI image and a transformed second MRI image). If the data sets on which the training data is based comprise more than two MRI images, e.g. a third MRI image and a fourth MRI image, training can also be performed using pairs.
- a transformed third MRI image may be generated, and the first MRI image and the transformed third MRI image may be provided as input data to the machine learning model.
- a transformed fourth MRI image may be generated, and the first MRI image and the transformed fourth MRI image may be provided to the machine learning model as input data.
- a transformed second MRI image can be generated from the second MRI image
- a transformed third MRI image can be generated from the third MRI image
- a transformed fourth MRI image can be generated from the fourth MRI image.
- the first MRI image, the transformed second MRI image, the transformed third MRI image and the transformed fourth MRI image can be fed together to the machine learning model.
- the machine learning model is then trained to predict the transformation parameters for one or more reversible transformation operations used to generate the transformed second MRI image, the transformed third MRI image, and/or the transformed fourth MRI image.
- the machine learning model can also be trained using the intensity value distributions (instead of or in addition to the MRI images).
- the trained machine learning model can be used for correcting an intensity value distribution of a new MRI image.
- a new data set is received.
- the term “receiving” includes both retrieving data sets and receiving data sets that are transmitted, for example, to the computer system of the present disclosure.
- the new data set may be received from a magnetic resonance imaging scanner or from a separate computer system.
- the new data set may be read from one or more data storage devices.
- the new data set is the result of an MRI examination of an examination region of an examination object.
- the new data set comprises a first MRI image of the examination region of the examination object and a second MRI image of the examination region of the examination object.
- the first MRI image of the new data set is also referred to as “new first MRI image” in this disclosure
- the second MRI image of the new data set is also referred to as “new second MRI image” in this disclosure.
- the term “new” is used in this disclosure to distinguish the training phase from the inference phase.
- the examination object can be an examination object from which data has already been used to train the machine learning model, or a new examination object, i.e. an examination object from which no data was used to train the machine learning model.
- the examination region is part of the examination object.
- the first MRI image represents the examination region of the examination object in the first state
- the second MRI image represents the examination region of the examination object in the second state.
- the first state and the second state correspond to the states that are represented in the data sets of the training data for training the machine learning model.
- the first MRI image and the second MRI image and/or their intensity value distributions are inputted into the trained machine learning model as input data, depending on the data used to train the machine learning model.
- the trained machine learning model is configured and was trained to determine at least one predicted value of at least one transformation parameter based on the input data.
- the at least one predicted value of at least one transformation parameter is outputted by the trained machine learning model.
- the at least one predicted value of the at least one transformation parameter is used to generate a corrected second MRI image.
- An inverse transformation operation is applied to the second MRI image using the at least one predicted value of the at least one transformation parameter.
- the inverse transformation operation is the inverse transformation to the reversible transformation operation used in training the machine learning model.
- the inverse transformation operation reverses the process performed by the reversible transformation operation.
- if, for example, the transformation operation multiplied the intensity values by a value a and added a value b, the value b is subtracted, and the result is divided by a.
- the result of the inverse transformation operation is a corrected second MRI image.
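Under the illustrative affine transformation assumed earlier, the correction step reduces to the following sketch, where `a_pred` and `b_pred` stand for the predicted values output by the trained machine learning model:

```python
import numpy as np

def corrected_second_image(second: np.ndarray, a_pred: float, b_pred: float) -> np.ndarray:
    """Inverse transformation operation: subtract the predicted value of b,
    then divide the result by the predicted value of a."""
    return (second - b_pred) / a_pred
```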
- the corrected second MRI image can be output (e.g. displayed on a monitor and/or printed out with a printer) and/or stored in a data memory and/or transmitted to a separate computer system.
- the new data set includes further MRI images of the examination region of the examination object, for example a new third MRI image, or a new third and a new fourth MRI image, or a new third and a new fourth and a new fifth MRI image, or more than five new MRI images.
- Each new MRI image can represent the examination region of the examination object in a different state.
- a new third MRI image may represent the examination region of the examination object in a third state
- a new fourth MRI image may represent the examination region of the examination object in a fourth state
- two or more MRI images are fed to the trained machine learning model in order to predict one or more transformation parameters.
- Fig. 2 shows a schematic example of the generation of a corrected second MRI image using the trained machine learning model.
- Fig. 2 shows a new data set DS* comprising a new first MRI image I1* and a new second MRI image I2*.
- Generation of the third MRI image can comprise: - subtracting the new first MRI image from the corrected second MRI image, and - subjecting the result of the subtraction to noise suppression.
- Subjecting the result of the subtraction to noise suppression may comprise: - multiplying the result of the subtraction by a frequency-dependent weighting function in frequency space.
- at least the result of the subtraction of the new first MRI image from the corrected second MRI image is available in frequency space (it is also possible that the subtraction is performed in frequency space; in this case, the result of the subtraction is already a frequency-space representation).
- This frequency-space representation can be multiplied by a frequency-dependent weighting function.
- the multiplication thus results in the amplitude and/or phase values of the frequency-space representation being multiplied by a weighting factor which is dependent on the respective frequency.
- the weighting factors decrease with increasing frequency.
- preferably low frequencies are multiplied by a higher weighting factor than high frequencies.
- the lower the frequency, the greater the respective weighting factor.
- Contrast information is represented in a frequency-space representation by low frequencies, while the higher frequencies represent information about fine structures.
- Such weighting thus means that a higher weighting will be given to frequencies making a higher contribution to contrast than to those making a smaller contribution.
- Image noise is typically evenly distributed in the frequency-space representation.
- the frequency-dependent weighting function has the effect of a filter.
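One illustrative choice of such a weighting function, applied to the difference image in frequency space, is a Gaussian that decreases with increasing radial spatial frequency; the disclosure does not fix a functional form, so this is an assumption of this sketch.

```python
import numpy as np

def weighted_difference_spectrum(difference: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply the frequency-space representation of the difference image by
    a frequency-dependent weighting function; low frequencies (contrast
    information) receive weights near 1, high frequencies (fine structures
    and evenly distributed noise) are suppressed."""
    spectrum = np.fft.fftshift(np.fft.fft2(difference))
    fy = np.fft.fftshift(np.fft.fftfreq(difference.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(difference.shape[1]))[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)                  # radial spatial frequency
    weight = np.exp(-(radius ** 2) / (2.0 * sigma ** 2)) # decreases with frequency
    return weight * spectrum
```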
- a fourth MRI image can be generated on the basis of the (optionally noise-suppressed) third MRI image. Generating the fourth MRI image can comprise: multiplying the (optionally noise-suppressed) third MRI image by a gain factor and adding the result of the multiplication to the new first MRI image and/or the corrected second MRI image.
- the (optionally noise-suppressed) third MRI image is multiplied by the gain factor and added to the new first MRI image. These steps can be performed in frequency space or in real space.
- the result is a fourth MRI image of the examination region of the examination object, in which the signal amplification caused by the contrast agent is increased or decreased compared to the corrected second MRI image, depending on the magnitude of the gain factor.
- the fourth MRI image is a synthetic MRI image.
- the gain factor is a positive or negative real number.
- the gain factor may be selected by a user, i.e. it may be variable or predefined, i.e. predetermined.
- the gain factor may also be determined automatically, for example based on the intensity value distribution of the new first MRI image and/or of the corrected second MRI image and/or of the difference of the new first MRI image from the corrected second MRI image. Varying the gain factor thus allows the contrast between regions with contrast agent and regions without contrast agent to be varied.
- the gain factor is greater than 1, preferably greater than 2.
- the gain factor is usually not greater than 10.
- the gain factor is greater than zero and less than 1.
- the gain factor is less than zero.
- the gain factor is usually not less than -10. If the fourth MRI image is a frequency space representation, it can be converted into a fourth MRI image in real space by an inverse Fourier transformation.
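Combining these steps, a sketch of the generation of the fourth (synthetic) MRI image, reusing `weighted_difference_spectrum` from the sketch above and continuing the same assumptions:

```python
import numpy as np

def fourth_image(first: np.ndarray, corrected_second: np.ndarray,
                 gain: float = 3.0, sigma: float = 0.1) -> np.ndarray:
    """Subtract the first image from the corrected second image, suppress
    noise by frequency-dependent weighting, convert back to real space by an
    inverse Fourier transformation, multiply by the gain factor and add the
    result to the new first MRI image."""
    spectrum = weighted_difference_spectrum(corrected_second - first, sigma)
    third = np.fft.ifft2(np.fft.ifftshift(spectrum)).real  # noise-suppressed third image
    return first + gain * third
```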
- the fourth MRI image can be output (e.g. displayed on a monitor and/or printed out with a printer) and/or stored in a data memory and/or transmitted to a separate computer system.
- Fig. 5 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of the examination region of the examination object.
- a new first MRI image I1* and a corrected second MRI image I2*c are provided.
- the new first MRI image I1* represents the examination region of an examination object without contrast agent.
- the corrected second MRI image I2*c represents the examination region of the examination object after application of an amount of a contrast agent.
- the corrected second MRI image I2*c was generated based on the new first MRI image I1* and a new second MRI image using a trained machine learning model.
- the corrected second MRI image I2*c may have been generated as described in relation to Fig. 2.
- the trained machine learning model may have been trained as described in relation to Fig. 1.
- a third MRI image I3* is generated.
- a fourth MRI image I4* is generated.
- Fig. 6 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of the examination region of the examination object in the form of a flow chart.
- This method (300) comprises the steps:
  (310) providing a new first MRI image and a corrected second MRI image, wherein the new first MRI image represents the examination region of the examination object without contrast agent, and wherein the corrected second MRI image represents the examination region of the examination object after application of an amount of a contrast agent,
  (320) generating a third MRI image based on the new first MRI image and the corrected second MRI image, wherein generating the third MRI image comprises:
    (321) subtracting the new first MRI image from the corrected second MRI image,
    (322) optionally multiplying the result of the subtraction by a frequency-dependent weighting function in frequency space,
  (330) generating a fourth MRI image based on the third MRI image and the new first MRI image and/or the corrected second MRI image, wherein generating the fourth MRI image comprises:
    (331) multiplying the third MRI image by a gain factor and adding the result of the multiplication to the new first MRI image and/or the corrected second MRI image, wherein the gain factor is a positive or negative real number,
  (340) outputting and/or storing the fourth MRI image and/or transmitting the fourth MRI image to a separate computer system.
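- Putting the steps of method (300) together, the following sketch shows one possible end-to-end implementation; it reuses the illustrative lowpass_weighted_difference helper defined above, and all names are assumptions for this example:

```python
import numpy as np

def method_300(i1_new: np.ndarray, i2_corrected: np.ndarray,
               gain: float, apply_weighting: bool = True) -> np.ndarray:
    """Illustrative end-to-end sketch of method (300)."""
    # (321) subtract the new first MRI image from the corrected second MRI image
    i3 = i2_corrected - i1_new
    # (322) optionally weight the difference in frequency space (noise suppression)
    if apply_weighting:
        i3 = lowpass_weighted_difference(i3)  # illustrative helper from above
    # (331) multiply by the gain factor and add to the new first MRI image
    i4 = i1_new + gain * i3
    # (340) the caller can then display, store or transmit i4
    return i4
```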
- a “computer system” is a system for electronic data processing that processes data by means of programmable calculation rules. Such a system usually comprises a “computer” (the unit that contains a processor for carrying out logical operations) and peripherals.
- peripherals refer to all devices which are connected to the computer and serve for the control of the computer and/or as input and output devices. Examples thereof are monitor (screen), printer, scanner, mouse, keyboard, drives, camera, microphone, loudspeaker, etc.
- “Non-transitory”, as used herein, excludes transitory, propagating signals or waves, but otherwise includes any volatile or non-volatile computer memory technology suitable to the application.
- the term “computer” should be broadly construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, personal computers, servers, embedded cores, computing systems, communication devices, processors (e.g., digital signal processors (DSP), microcontrollers, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), etc.) and other electronic computing devices.
- a computer system of exemplary implementations of the present disclosure may be referred to as a computer and may comprise, include, or be embodied in one or more fixed or portable electronic devices.
- the computer may include one or more of each of a number of components such as, for example, a processing unit (20) connected to a memory (50) (e.g., storage device).
- the processing unit (20) may be composed of one or more processors alone or in combination with one or more memories.
- the processing unit (20) is generally any piece of computer hardware that is capable of processing information such as, for example, data, computer programs and/or other suitable electronic information.
- the memory may be referred to as a computer-readable storage medium or data memory.
- the computer-readable storage medium is a non-transitory device capable of storing information, and is distinguishable from computer-readable transmission media such as electronic transitory signals capable of carrying information from one location to another.
- Computer-readable medium as described herein may generally refer to a computer- readable storage medium or computer-readable transmission medium.
- the processing unit (20) may also be connected to one or more interfaces for displaying, transmitting and/or receiving information.
- the interfaces may include one or more communications interfaces and/or one or more user interfaces.
- the communications interface(s) may be configured to transmit and/or receive information, such as to and/or from other computer(s), network(s), database(s) or the like.
- the communications interface may be configured to transmit and/or receive information by physical (wired) and/or wireless communications links.
- the communications interface(s) may include interface(s) (41) to connect to a network, such as using technologies such as cellular telephone, Wi-Fi, satellite, cable, digital subscriber line (DSL), fiber optics and the like.
- the communications interface(s) may include one or more short-range communications interfaces (42) configured to connect devices using short-range communications technologies such as NFC, RFID, Bluetooth, Bluetooth LE, ZigBee, infrared (e.g., IrDA) or the like.
- a computer system (1) may include a processing unit (20) and a computer-readable storage medium or memory (50) coupled to the processing unit (20), where the processing unit (20) is configured to execute computer-readable program code instructions (60) stored in the memory (50).
- the computer system of the present disclosure may be in the form of a laptop, notebook, netbook, and/or tablet PC; it may also be a component of an MRI scanner.
- the present disclosure provides a computer program product.
- Such a computer program product comprises a non-volatile data carrier, such as a CD, a DVD, a USB stick or other medium for storing data.
- a computer program is stored on the data carrier.
- the computer program can be loaded into a working memory of a computer system (in particular, into a working memory of a computer system of the present disclosure), where it can cause the computer system to perform the computer-implemented methods disclosed herein.
- the computer program may also be marketed in combination with a contrast agent. Such a combination is also referred to as a kit.
- a kit includes the contrast agent and the computer program. It is also possible that such a kit includes the contrast agent and means for allowing a purchaser to obtain the computer program, e.g., download it from an Internet site.
- These means may include a link, i.e., an address of the Internet site from which the computer program may be obtained, e.g., from which the computer program may be downloaded to a computer system connected to the Internet.
- Such means may include a code (e.g., an alphanumeric string or a QR code, or a DataMatrix code or a barcode or other optically and/or electronically readable code) by which the purchaser can access the computer program.
- a link and/or code may, for example, be printed on a package of the contrast agent and/or printed on a package insert for the contrast agent.
- a kit is thus a combination product comprising a contrast agent and a computer program (e.g., in the form of access to the computer program or in the form of executable program code on a data carrier) that is offered for sale together.
Abstract
Systems, methods, and computer programs disclosed herein relate to training a machine learning model and using the trained machine learning model for correcting an intensity value distribution of a magnetic resonance imaging (MRI) image.
Description
Correction of an MRI image

FIELD OF THE DISCLOSURE

Systems, methods, and computer programs disclosed herein relate to training a machine learning model and using the trained machine learning model for correcting an intensity value distribution of a magnetic resonance imaging (MRI) image.

BACKGROUND

The European patent application EP4332601A1 proposes to generate a synthetic contrast-enhanced MRI image of an examination region of an examination object on the basis of a first MRI image and a second MRI image. The first MRI image is a native MRI image, i.e. it represents the examination region of the examination object without contrast agent. The second MRI image represents the examination region of the examination object after application of an amount of a contrast agent. In a first step, the first MRI image (the native MRI image) is subtracted from the second MRI image. In a second step, the result of the subtraction is subjected to noise suppression. In a third step, the noise-suppressed result of the subtraction is added α-fold to the first MRI image. The result is a synthetic MRI image of the examination region of the examination object in which the contrast enhancement caused by the contrast agent is either reduced or increased compared to the second MRI image, depending on the size of the factor α.

The method disclosed in EP4332601A1 is based on the assumption that a sub-region of the examination region into which no contrast agent penetrates is characterized by the same intensity values (e.g. grey or colour values) in the first MRI image and in the second MRI image. In other words, the method assumes that an image element in the first MRI image has the same grey or colour value as the corresponding image element of the second MRI image, where the corresponding image elements represent the same sub-region of the examination region and no contrast agent enters that sub-region. In short, the method assumes that sub-regions into which no contrast agent enters are always displayed in the same way in a dynamic contrast-enhanced MRI examination.

Unfortunately, this assumption is not always justified. There are scanners and/or measurement sequences and/or image reconstruction tools that display sub-regions of the examination region differently in different MRI images of a dynamic contrast-enhanced MRI examination, even if no contrast medium reaches these sub-regions (see, e.g., H. Kim: Variability in Quantitative DCE-MRI: Sources and Solution, J Nat Sci. 2018, 4(1): e484). Quantitative evaluations of MRI images from dynamic contrast-enhanced MRI examinations and/or the generation of synthetic MRI images using a method such as that described in EP4332601A1 are thus made more difficult.

SUMMARY

This problem and further problems are addressed by the subject matter of the independent claims of the present disclosure. Preferred embodiments are defined in the dependent claims, the description, and the drawings.

In a first aspect, the present disclosure relates to a computer-implemented method for training a machine learning model, the method comprising:
- providing a plurality of data sets, wherein each data set comprises at least two magnetic resonance imaging (MRI) images, a first MRI image and a second MRI image,
  wherein the first MRI image represents an examination region of an examination object in a first state, wherein the first MRI image is characterized by a first intensity value distribution,
  wherein the second MRI image represents the examination region of the examination object in a second state, wherein the second MRI image is characterized by a second intensity value distribution,
- providing a reversible transformation operation, the transformation operation comprising at least one transformation parameter, wherein the transformation operation performs a transformation of the second intensity value distribution of the second MRI image when applied to the second intensity value distribution and/or to the second MRI image,
- generating training data based on the plurality of data sets, wherein generating training data comprises, for each data set of the plurality of data sets:
  sampling of at least one value for the at least one transformation parameter of the transformation operation,
  generating a transformed second MRI image and/or a transformed second intensity value distribution by applying the transformation operation with the at least one sampled value of the at least one transformation parameter to the second MRI image and/or the second intensity value distribution,
- providing a machine learning model, wherein the machine learning model is configured to determine at least one predicted value for the at least one transformation parameter based on: model parameters, a first MRI image, and a transformed second MRI image and/or their intensity value distributions,
- training the machine learning model on the training data, wherein the training comprises, for each data set of the plurality of data sets:
  inputting the first MRI image and the transformed second MRI image and/or their intensity value distributions into the machine learning model,
  receiving at least one predicted value for the at least one transformation parameter,
  determining a deviation between the at least one value of the at least one transformation parameter and the at least one predicted value of the at least one transformation parameter,
  reducing the deviation by modifying model parameters of the machine learning model,
- outputting and/or storing the trained machine learning model and/or transmitting the trained machine learning model to a separate computer system and/or using the trained machine learning model for correcting an intensity value distribution of a new MRI image.

In another aspect, the present disclosure relates to a computer-implemented method for correcting an intensity value distribution of a new MRI image using the trained machine learning model, the method comprising:
- providing the trained machine learning model,
- receiving a new data set, wherein the new data set comprises at least two new MRI images, a new first MRI image and a new second MRI image,
  wherein the new first MRI image represents the examination region of a new examination object in the first state, wherein the new first MRI image is characterized by a first intensity value distribution,
  wherein the new second MRI image represents the examination region of the new examination object in the second state, wherein the new second MRI image is characterized by a second intensity value distribution,
- inputting the new first MRI image and the new second MRI image and/or their intensity value distributions into the trained machine learning model,
- receiving at least one predicted value as an output from the trained machine learning model,
- providing an inverse transformation operation, wherein the inverse transformation operation comprises the at least one transformation parameter, wherein the inverse transformation operation reverses the reversible transformation operation used in training the machine learning model,
- generating a corrected second MRI image based on the new second MRI image and the at least one predicted value, wherein generating the corrected second MRI image comprises: applying the inverse transformation operation with the at least one predicted value of the at least one transformation parameter to the new second MRI image,
- outputting and/or storing the corrected second MRI image and/or transmitting the corrected second MRI image to a separate computer system.

In another aspect, the present disclosure relates to a computer-implemented method for generating a synthetic MRI image of an examination region of an examination object, the method comprising:
- providing a new first MRI image and a new second MRI image,
  wherein the new first MRI image represents the examination region of a new examination object without a contrast agent, wherein the new first MRI image is characterized by a first intensity value distribution,
  wherein the new second MRI image represents the examination region of the new examination object after application of the amount of the contrast agent, wherein the new second MRI image is characterized by a second intensity value distribution,
- generating a corrected second MRI image based on the new second MRI image using the trained machine learning model and the inverse transformation operation,
- generating a third MRI image based on the new first MRI image and the corrected second MRI image, wherein generating the third MRI image comprises: subtracting the new first MRI image from the corrected second MRI image in real space or in frequency space, optionally multiplying the result of the subtraction by a frequency-dependent weighting function in frequency space,
- generating a fourth MRI image based on the third MRI image and the new first MRI image and/or the corrected second MRI image, wherein generating the fourth MRI image comprises: multiplying the third MRI image by a gain factor and adding the result of the multiplication to the new first MRI image and/or the corrected second MRI image, wherein the gain factor is a positive or negative real number,
- outputting and/or storing the fourth MRI image and/or transmitting the fourth MRI image to a separate computer system.

In another aspect, the present disclosure provides a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processing unit of a computer system, causes the computer system to perform one or more of the computer-implemented methods disclosed above.

In another aspect, the present disclosure provides a computer system comprising:
a processing unit; and a memory storing a computer program configured to perform, when executed by the processing unit, one or more of the computer-implemented methods disclosed above.

In another aspect, the present disclosure relates to a use of a contrast agent in a magnetic resonance imaging examination of an examination region of an examination object, the magnetic resonance imaging examination comprising:
- providing a trained machine learning model, wherein the trained machine learning model is configured and was trained to determine at least one predicted value for at least one transformation parameter based on: model parameters, a first MRI image, and a second MRI image and/or their intensity value distributions, wherein training of the machine learning model comprised:
  providing a plurality of data sets, wherein each data set comprised at least two MRI images, a first MRI image and a second MRI image,
  o wherein the first MRI image represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image is characterized by a first intensity value distribution,
  o wherein the second MRI image represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image is characterized by a second intensity value distribution,
  providing a reversible transformation operation, the reversible transformation operation comprising the at least one transformation parameter, wherein the reversible transformation operation performs a transformation of the intensity value distribution of a second MRI image when applied to the second MRI image and/or the second intensity value distribution,
  generating training data based on the plurality of data sets, wherein generating training data comprised, for each data set of the plurality of data sets:
  o sampling of at least one value for the at least one transformation parameter of the reversible transformation operation,
  o generating a transformed second MRI image and/or a transformed second intensity value distribution by applying the reversible transformation operation with the at least one sampled value of the at least one transformation parameter to the second MRI image and/or the second intensity value distribution,
  training the machine learning model on the training data, wherein the training comprised, for each data set of the plurality of data sets:
  o inputting the first MRI image and the transformed second MRI image and/or their intensity value distributions into the machine learning model,
  o receiving at least one predicted value for the at least one transformation parameter,
  o determining a deviation between the at least one value of the at least one transformation parameter and the at least one predicted value of the at least one transformation parameter,
  o reducing the deviation by modifying model parameters of the machine learning model,
- receiving a new data set, wherein the new data set comprises at least two new MRI images, a new first MRI image and a new second MRI image,
  wherein the new first MRI image represents an examination region of a new examination object at the first point in time before or after application of the amount of the contrast agent, wherein the new first MRI image is characterized by a first intensity value distribution,
  wherein the new second MRI image represents the examination region of the new examination object at the second point in time before or after application of the amount of the contrast agent, wherein the new second MRI image is characterized by a second intensity value distribution,
- inputting the new first MRI image and the new second MRI image and/or their intensity value distributions into the trained machine learning model,
- receiving at least one predicted value as an output from the trained machine learning model,
- providing an inverse transformation operation, wherein the inverse transformation operation comprises the at least one transformation parameter, wherein the inverse transformation operation reverses the reversible transformation operation used in training the machine learning model,
- generating a corrected second MRI image based on the new second MRI image and the at least one predicted value, wherein generating the corrected second MRI image comprises: applying the inverse transformation operation with the at least one predicted value of the at least one transformation parameter to the new second MRI image,
- outputting and/or storing the corrected second MRI image and/or transmitting the corrected second MRI image to a separate computer system.

In another aspect, the present disclosure provides a contrast agent for use in a magnetic resonance imaging examination of an examination region of an examination object, the magnetic resonance imaging examination comprising:
- providing a trained machine learning model, wherein the trained machine learning model is configured and was trained to determine at least one predicted value for at least one transformation parameter based on: model parameters, a first MRI image, and a second MRI image and/or their intensity value distributions, wherein training of the trained machine learning model comprised:
  providing a plurality of data sets, wherein each data set comprised at least two MRI images, a first MRI image and a second MRI image,
  o wherein the first MRI image represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image is characterized by a first intensity value distribution,
  o wherein the second MRI image represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image is characterized by a second intensity value distribution,
  providing a reversible transformation operation, the reversible transformation operation comprising at least one transformation parameter, wherein the reversible transformation operation performs a transformation of the intensity value distribution of a second MRI image when applied to the second MRI image and/or the second intensity value distribution,
  generating training data based on the plurality of data sets, wherein generating training data comprised, for each data set of the plurality of data sets:
  o sampling of at least one value for the at least one transformation parameter of the reversible transformation operation,
  o generating a transformed second MRI image and/or a transformed second intensity value distribution by applying the reversible transformation operation with the at least one sampled value of the at least one transformation parameter to the second MRI image and/or the second intensity value distribution,
  training the machine learning model on the training data, wherein the training comprised, for each data set of the plurality of data sets:
  o inputting the first MRI image and the transformed second MRI image and/or their intensity value distributions into the machine learning model,
  o receiving at least one predicted value for the at least one transformation parameter,
  o determining a deviation between the at least one value of the at least one transformation parameter and the at least one predicted value of the at least one transformation parameter,
  o reducing the deviation by modifying model parameters of the machine learning model,
- receiving a new data set, wherein the new data set comprises at least two new MRI images, a new first MRI image and a new second MRI image,
  wherein the new first MRI image represents the examination region of a new examination object at the first point in time before or after application of the amount of the contrast agent, wherein the new first MRI image is characterized by a first intensity value distribution,
  wherein the new second MRI image represents the examination region of the new examination object at the second point in time before or after application of the amount of the contrast agent, wherein the new second MRI image is characterized by a second intensity value distribution,
- inputting the new first MRI image and the new second MRI image and/or their intensity value distributions into the trained machine learning model,
- receiving at least one predicted value as an output from the trained machine learning model,
- providing an inverse transformation operation, wherein the inverse transformation operation comprises the at least one transformation parameter, wherein the inverse transformation operation reverses the reversible transformation operation used in training the machine learning model,
- generating a corrected second MRI image based on the new second MRI image and the at least one predicted value, wherein generating the corrected second MRI image comprises: applying the inverse transformation operation with the at least one predicted value of the at least one transformation parameter to the new second MRI image,
- outputting and/or storing the corrected second MRI image and/or transmitting the corrected second MRI image to a separate computer system.

In another aspect, the present disclosure provides a kit comprising a contrast agent and a computer program that, when executed by a processing unit of a computer system, causes the computer system to execute the following steps:
- providing a trained machine learning model, wherein the trained machine learning model is configured and was trained to determine at least one predicted value for at least one transformation parameter based on: model parameters, a first MRI image, and a second MRI image and/or their intensity value distributions, wherein training of the trained machine learning model comprised:
  providing a plurality of data sets, wherein each data set comprised at least two MRI images, a first MRI image and a second MRI image,
  o wherein the first MRI image represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image is characterized by a first intensity value distribution,
  o wherein the second MRI image represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image is characterized by a second intensity value distribution,
  providing a reversible transformation operation, the reversible transformation operation comprising at least one transformation parameter, wherein the reversible transformation operation performs a transformation of the intensity value distribution of a second MRI image when applied to the second MRI image and/or the second intensity value distribution,
  generating training data based on the plurality of data sets, wherein generating training data comprised, for each data set of the plurality of data sets:
  o sampling of at least one value for the at least one transformation parameter of the reversible transformation operation,
  o generating a transformed second MRI image and/or a transformed second intensity value distribution by applying the reversible transformation operation with the at least one sampled value of the at least one transformation parameter to the second MRI image and/or the second intensity value distribution,
  training the machine learning model on the training data, wherein the training comprised, for each data set of the plurality of data sets:
  o inputting the first MRI image and the transformed second MRI image and/or their intensity value distributions into the machine learning model,
  o receiving at least one predicted value for the at least one transformation parameter,
  o determining a deviation between the at least one value of the at least one transformation parameter and the at least one predicted value of the at least one transformation parameter,
  o reducing the deviation by modifying model parameters of the machine learning model,
- receiving a new data set, wherein the new data set comprises at least two new MRI images, a new first MRI image and a new second MRI image,
  wherein the new first MRI image represents the examination region of a new examination object at the first point in time before or after application of the amount of the contrast agent, wherein the new first MRI image is characterized by a first intensity value distribution,
  wherein the new second MRI image represents the examination region of the new examination object at the second point in time before or after application of the amount of the contrast agent, wherein the new second MRI image is characterized by a second intensity value distribution,
- inputting the new first MRI image and the new second MRI image and/or their intensity value distributions into the trained machine learning model,
- receiving at least one predicted value as an output from the trained machine learning model,
- providing an inverse transformation operation, wherein the inverse transformation operation comprises the at least one transformation parameter, wherein the inverse transformation
operation reverses the reversible transformation operation used in training the machine learning model,
- generating a corrected second MRI image based on the new second MRI image and the at least one predicted value, wherein generating the corrected second MRI image comprises: applying the inverse transformation operation with the at least one predicted value of the at least one transformation parameter to the new second MRI image,
- outputting and/or storing the corrected second MRI image and/or transmitting the corrected second MRI image to a separate computer system.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows a schematic example of the generation of training data and the training of a machine learning model.
Fig. 2 shows a schematic example of the generation of a corrected second MRI image using the trained machine learning model.
Fig. 3 shows an embodiment of the computer-implemented method for training the machine learning model in the form of a flow chart.
Fig. 4 shows an embodiment of the computer-implemented method for correcting an intensity value distribution of an MRI image using the trained machine learning model in the form of a flow chart.
Fig. 5 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of an examination region of an examination object.
Fig. 6 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of an examination region of an examination object in the form of a flow chart.
Fig. 7 illustrates a computer system according to some example implementations of the present disclosure in more detail.
Fig. 8 shows a first MRI image and a second MRI image together with their intensity value distributions.

DETAILED DESCRIPTION

The aspects of the present disclosure will be more particularly elucidated below without distinguishing between the aspects of the present disclosure (method, computer system, computer-readable storage medium, use, contrast agent for use, kit). On the contrary, the following elucidations are intended to apply analogously to all the aspects of the disclosure, irrespective of the context (method, computer system, computer-readable storage medium, use, contrast agent for use, kit) in which they occur.

If steps are stated in an order in the present description or in the claims, this does not necessarily mean that the disclosure is restricted to the stated order. On the contrary, the steps can also be executed in a different order or in parallel to one another, unless one step builds upon another step, which requires that the building step be executed subsequently (this being, however, clear in the individual case). The stated orders are thus exemplary embodiments of the present disclosure.

As used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more” and “at least one.” As used in the specification and the claims, the singular forms of “a”, “an”, and “the” include plural referents, unless the context clearly dictates otherwise. Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has”, “have”, “having”, or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based at least partially on” unless explicitly stated otherwise.
Some implementations of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, implementations of the disclosure are shown. Indeed, various implementations of the disclosure may be embodied in many different forms and should not be construed as limited to the implementations set forth herein; rather, these example implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. The terms used in this disclosure have the meaning that these terms have in the prior art, in particular in the prior art cited in this disclosure, unless otherwise indicated.

The present disclosure provides means for correcting intensity value distributions of MRI images derived, for example, from dynamic contrast-enhanced MRI examinations. The correction is made with the help of a trained machine learning model.

The term “machine learning model”, as used herein, may be understood as a computer-implemented data processing architecture. The machine learning model can receive input data and provide output data based on that input data and on parameters of the machine learning model (model parameters). The machine learning model can learn a relation between input data and output data through training. In training, parameters of the machine learning model may be adjusted in order to provide a desired output for a given input.

The process of training a machine learning model involves providing a machine learning algorithm (that is, the learning algorithm) with training data to learn from. The term “trained machine learning model” refers to the model artifact that is created by the training process. The training data usually contains the correct answer, which is referred to as the target. The learning algorithm finds patterns in the training data that map input data to the target, and it outputs a trained machine learning model that captures these patterns.

In the training process, training data are inputted into the machine learning model and the machine learning model generates an output. The output is compared with the (known) target. Parameters of the machine learning model are modified in order to reduce the deviations between the output and the (known) target to a (defined) minimum.

In general, a loss function can be used for training, where the loss function quantifies the deviations between the output and the target. The loss function may be chosen in such a way that it rewards a wanted relation between output and target and/or penalizes an unwanted relation between an output and a target. Such a relation can be, e.g., a similarity, a dissimilarity, or another relation. A loss function can be used to calculate a loss for a given pair of output and target. The aim of the training process can be to modify (adjust) parameters of the machine learning model in order to reduce the loss to a (defined) minimum.

For example, if the output data and the target data are numerical values, the loss function can be the absolute difference between these values. In this case, a high absolute loss value can mean that one or more model parameters need to be changed to a substantial degree. For output data in the form of vectors, difference metrics between vectors such as the mean square error, a cosine distance, a norm of the difference vector such as a Euclidean distance, a Chebyshev distance, an Lp norm of a difference vector, a weighted norm or another type of difference metric of two vectors can be chosen as the loss function.
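As one concrete (and deliberately simple) possibility among the metrics listed above, a mean-squared-error loss between the sampled and the predicted transformation parameters could look as follows; the function name is chosen here for illustration:

```python
import numpy as np

def parameter_loss(predicted: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error between the predicted transformation parameters
    and the sampled (target) transformation parameters."""
    return float(np.mean((np.asarray(predicted) - np.asarray(target)) ** 2))
```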
In the case of higher-dimensional outputs, such as two-dimensional, three-dimensional or higher-dimensional outputs, an element-by-element difference metric can be used, for example. Alternatively or in addition, the output data may be transformed, for example into a one-dimensional vector, before calculation of a loss value.

In a first step, training data for training the machine learning model is generated. The training data is generated based on a plurality of data sets. The term “plurality” means more than 10, or even more than 100.
Each data set of the plurality of data sets is usually the result of a magnetic resonance imaging (MRI) examination of an examination region of an examination object. Each data set comprises at least two MRI images, a first MRI image and a second MRI image. The first MRI image represents an examination region of an examination object in a first state. The second MRI image represents the examination region of the examination object in a second state.

It is possible that data sets contain more than two MRI images, e.g. three, four, five or more than five. A third MRI image may represent the examination region of the examination object in a third state, a fourth MRI image may represent the examination region of the examination object in a fourth state, a fifth MRI image may represent the examination region of the examination object in a fifth state, and so forth. The states may indicate the amounts of a contrast agent administered to the examination object and/or different time points in a dynamic MRI examination and/or different contrast agents and/or different measurement parameters used to generate the respective MRI image.

In an embodiment of the present disclosure, the first MRI image represents an examination region of an examination object without contrast agent and the second MRI image represents the examination region of the examination object after application of an amount of a contrast agent.

In another embodiment of the present disclosure, the first MRI image represents an examination region of an examination object after application of a first amount of a contrast agent and the second MRI image represents the examination region of the examination object after application of a second amount of a contrast agent.

In another embodiment of the present disclosure, the first MRI image represents an examination region of an examination object at a first point in time before or after application of a contrast agent, and the second MRI image represents the examination region of the examination object at a second point in time before or after application of the contrast agent.

In another embodiment of the present disclosure, the first MRI image represents an examination region of an examination object at a first point in time after application of a first amount of a contrast agent, and the second MRI image represents the examination region of the examination object at a second point in time after application of a second amount of the contrast agent.

In another embodiment of the present disclosure, the first MRI image represents an examination region of an examination object after application of an amount of a first contrast agent, and the second MRI image represents the examination region of the examination object after application of an amount of a second contrast agent.

In another embodiment of the present disclosure, the first MRI image represents an examination region of an examination object at a first point in time of a dynamic MRI examination and the second MRI image represents the examination region of the examination object at a second point in time of the dynamic MRI examination.

In another embodiment of the present disclosure, the first MRI image represents an examination region of an examination object as a result of an examination with first measurement parameters and the second MRI image represents the examination region of the examination object as a result of an examination with second measurement parameters.

The “examination object” is normally a living being, e.g. a mammal, e.g. a human. In an embodiment of the present disclosure, the examination object is a human. The examination object is usually different for each data set, but it does not have to be; there may be one or more data sets in which the examination object is the same.

The “examination region” is a part of the examination object, for example an organ or part of an organ or a plurality of organs or another part of the examination object. The examination region can be the same or different for all data sets.
For example, the examination region may be a liver, kidney, heart, lung, brain, stomach, bladder, prostate, intestine, pancreas, thyroid, breast, uterus or a part of said parts or another part of the body of a mammal (for example a human).

In an embodiment, the examination region includes a liver or part of a liver or the examination region is a liver or part of a liver of a mammal, e.g. a human. In a further embodiment, the examination region includes a brain or part of a brain or the examination region is a brain or part of a brain of a mammal, e.g. a human. In a further embodiment, the examination region includes a heart or part of a heart or the examination region is a heart or part of a heart of a mammal, e.g. a human. In a further embodiment, the examination region includes a thorax or part of a thorax or the examination region is a thorax or part of a thorax of a mammal, e.g. a human. In a further embodiment, the examination region includes a stomach or part of a stomach or the examination region is a stomach or part of a stomach of a mammal, e.g. a human. In a further embodiment, the examination region includes a pancreas or part of a pancreas or the examination region is a pancreas or part of a pancreas of a mammal, e.g. a human. In a further embodiment, the examination region includes a kidney or part of a kidney or the examination region is a kidney or part of a kidney of a mammal, e.g. a human. In a further embodiment, the examination region includes one or both lungs or part of a lung of a mammal, e.g. a human. In a further embodiment, the examination region includes a breast or part of a breast or the examination region is a breast or part of a breast of a female mammal, e.g. a female human. In a further embodiment, the examination region includes a prostate or part of a prostate or the examination region is a prostate or part of a prostate of a male mammal, e.g. a male human.

The examination region, also referred to as the field of view (FOV), is in particular a volume that is imaged in MRI images. The examination region is typically defined by a radiologist, for example on an overview image. It is of course also possible for the examination region to be alternatively or additionally defined in an automated manner, for example on the basis of a selected protocol. The examination region is usually the same for all data sets; however, the examination region can also be different.

The first MRI image and the second MRI image (and any further MRI image) can be two-dimensional (2D) or three-dimensional (3D) or four-dimensional (4D) MRI images, for example.

Each MRI image usually comprises a multitude of image elements (e.g., pixels or voxels or doxels). Each image element usually represents a sub-region of the examination region. Each image element has an intensity value. This intensity value can, for example, be a grey value or a colour value that correlates with the intensity of a physical signal. However, the intensity value can also be the intensity value of the physical signal itself. Since the intensity values of the image elements in an image usually differ at least partially, each MRI image is characterized by an intensity value distribution. The intensity value distribution indicates how the intensity values are distributed across the image elements.
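For illustration, an intensity value distribution of the kind described here can be computed with a few lines of NumPy; the bin count of 256 matches the 0-255 grey-value example discussed below but is otherwise an assumption:

```python
import numpy as np

def intensity_value_distribution(image: np.ndarray, n_bins: int = 256):
    """Histogram of an MRI image: absolute and relative frequencies of the
    intensity values, optionally grouped into n_bins bins (groups)."""
    counts, bin_edges = np.histogram(image, bins=n_bins)
    relative = counts / image.size  # proportion of image elements per bin
    return counts, relative, bin_edges
```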
The intensity value distribution indicates how many image elements (in absolute terms or in relation to the total number of image elements) have a defined intensity value or have an intensity value in a defined range of intensity values. An intensity value distribution can be displayed graphically in the form of a histogram, for example.

Fig. 8 shows a first MRI image I1 and a second MRI image I2 together with their intensity value distributions IVD1 and IVD2. The intensity value distributions IVD1 and IVD2 are shown graphically in the form of a histogram. The intensity values I are plotted on the x-axis (abscissa). In the example shown in Fig. 8, the intensity values I are grey values ranging from 0 to 255. A frequency F is plotted on the y-axis (ordinate). The frequency F indicates how often an intensity value I occurs in the respective MRI image. The frequency F can be an absolute frequency, i.e. indicate, for example, how many image elements of the MRI image have a certain intensity value, or a relative frequency, i.e. indicate what proportion (e.g. percentage) of the image elements have that intensity value.

It should be noted that the intensity value distributions IVD1 and IVD2 are shown as curves in Fig. 8; however, the values on which the intensity value distributions are based are discrete values. The intensity value distributions (histograms) can therefore also be displayed as a bar chart. It should also be noted that the intensity values can also be grouped. For example, the intensity values 0 to 9 can be combined into a first group, the intensity values 10 to 19 into a second group, and so on. A histogram can then be used to display the frequencies of the image elements that belong to the respective groups.

So, the first MRI image is characterized by a first intensity value distribution, and the second MRI image is characterized by a second intensity value distribution. If there is a third MRI image, it may be characterized by a third intensity value distribution. If there is a fourth MRI image, it may be characterized by a fourth intensity value distribution. If there is a fifth MRI image, it may be characterized by a fifth intensity value distribution. And so forth.

Each data set of the plurality of data sets originates from an MRI examination. The data sets used for training the machine learning model should be data sets in which the problem described in the introduction of this disclosure, or an analogous problem, does not occur. Such data sets can be determined and selected by analysing the first MRI image and the second MRI image (and any further MRI image, if available). This is explained below using an example and applies mutatis mutandis to all other embodiments (and to any number of MRI images available).

In the case that the first MRI image represents the examination region without contrast agent and the second MRI image represents the examination region after application of an amount of contrast agent, image elements of sub-regions of the examination region into which no contrast agent entered have equal intensity values in the first MRI image and the second MRI image, or the intensity values of the corresponding image elements of the first MRI image and the second MRI image lie within a defined range. In other words: for each data set of the plurality of data sets, the second MRI image shows sub-regions of the examination region into which no contrast agent has penetrated in the same or at least a similar way as in the first MRI image. Put differently, for each pair of images, the deviations between the first MRI image and the second MRI image with respect to the intensity values of the image elements of sub-regions that no contrast agent has reached or entered are smaller than a predetermined threshold value.

The deviations can be determined, for example, by subtracting the intensity values of the corresponding image elements (or a selection thereof) of the first MRI image from those of the second MRI image. The absolute values of the individual differences can be added together and compared with a predefined threshold value. A mean value (e.g. the arithmetically averaged absolute differences) can also be determined and compared with a predefined threshold value.
If the deviations are smaller than the predefined threshold value, the MRI images of the data set can be used to generate the training data; if the deviations are greater than or equal to the predefined threshold value, the MRI images of the data set can be discarded. Threshold values can be defined by a radiologist, for example. Usually, the lower the threshold value, the more suitable the MRI images of the data set are for generating training data.

As described, the first MRI image and/or the second MRI image may represent the examination region of the examination object after the application of an amount of a contrast agent.

A “contrast agent” is a substance that is administered to an examination object to enhance the visibility of specific tissues and/or blood vessels within the examination region. These agents usually have paramagnetic or superparamagnetic properties, which alter the magnetic resonance properties of tissues, leading to improved contrast in the resulting images. By highlighting certain structures, such as blood vessels or abnormal tissues, contrast agents help radiologists and physicians to better visualize and analyze specific areas of interest within the examination object, aiding in the diagnosis and treatment of various medical conditions.
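A sketch of such a suitability check, assuming a boolean mask marking the sub-regions that no contrast agent reaches (how that mask is obtained is left open here), with the mean-absolute-deviation variant described above:

```python
import numpy as np

def suitable_for_training(i1: np.ndarray, i2: np.ndarray,
                          no_agent_mask: np.ndarray, threshold: float) -> bool:
    """Keep a data set only if, within sub-regions free of contrast agent,
    the mean absolute intensity deviation between the first and the second
    MRI image stays below the predefined threshold."""
    deviation = np.mean(np.abs(i1[no_agent_mask] - i2[no_agent_mask]))
    return bool(deviation < threshold)
```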
An example of a superparamagnetic contrast agent is iron oxide nanoparticles (SPIO, superparamagnetic iron oxide). Examples of paramagnetic contrast agents are gadolinium chelates such as gadopentetate dimeglumine (trade name: Magnevist® and others), gadoteric acid (Dotarem®, Dotagita®, Cyclolux®), gadodiamide (Omniscan®), gadoteridol (ProHance®), gadobutrol (Gadovist®), gadopiclenol (Elucirem, Vueway) and gadoxetic acid (Primovist®/Eovist®).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid (also referred to as gadolinium-DOTA or gadoteric acid).

In a further embodiment, the contrast agent is an agent that includes gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid (Gd-EOB-DTPA); preferably, the contrast agent includes the disodium salt of gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid (also referred to as gadoxetic acid).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate (also referred to as gadopiclenol) (see for example WO2007/042504 and WO2020/030618 and/or WO2022/013454).

In an embodiment of the present disclosure, the contrast agent is an agent that includes dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-) (also referred to as gadobenic acid).

In an embodiment of the present disclosure, the contrast agent is an agent that includes tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate (also referred to as gadoquatrane) (see for example J. Lohrke et al.: Preclinical Profile of Gadoquatrane: A Novel Tetrameric, Macrocyclic High Relaxivity Gadolinium-Based Contrast Agent. Invest Radiol. 2022, 57(10): 629-638; WO2016193190).

In an embodiment of the present disclosure, the contrast agent is an agent that includes a Gd3+ complex of a compound of the formula (I)
Ar is a group selected from [structural formulas shown in the drawings of the original application; not reproduced in this text], where # is the linkage to X,
X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue,
R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3,
R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-,
R5 is a hydrogen atom, and
R6 is a hydrogen atom,
or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof.

In an embodiment of the present disclosure, the contrast agent is an agent that includes a Gd3+ complex of a compound of the formula (II)
Ar is a group selected from [structural formulas shown in the drawings of the original application; not reproduced in this text], where # is the linkage to X,
X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue,
R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3;
R8 is a group selected from C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-;
R9 and R10 each independently represent a hydrogen atom;
or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof.
The term “C1-C3 alkyl” denotes a linear or branched, saturated monovalent hydrocarbon group having 1, 2 or 3 carbon atoms, for example methyl, ethyl, n-propyl or isopropyl. The term “C2-C4 alkyl” denotes a linear or branched, saturated monovalent hydrocarbon group having 2, 3 or 4 carbon atoms. The term “C2-C4 alkoxy” denotes a linear or branched, saturated monovalent group of the formula (C2-C4 alkyl)-O-, in which the term “C2-C4 alkyl” is as defined above, for example a methoxy, ethoxy, n-propoxy or isopropoxy group.

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2',2''-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate (see for example WO2022/194777, example 1).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2',2''-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 2).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2',2''-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 4).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium (2S,2'S,2''S)-2,2',2''-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate) (see for example WO2022/194777, example 15).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2',2''-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate (see for example WO2022/194777, example 31).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2',2''-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate.

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium 2,2',2''-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate.

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate (also referred to as gadodiamide).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate (also referred to as gadoteridol).

In an embodiment of the present disclosure, the contrast agent is an agent that includes gadolinium(III) 2,2',2''-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate (also referred to as gadobutrol or Gd-DO3A-butrol).

The amount of contrast agent administered to the examination object can be the standard amount; however, it can also be an amount that is smaller or larger than the standard amount. The standard amount is usually the amount recommended by the manufacturer and/or distributor of the contrast agent and/or approved by a regulatory authority and/or the amount specified in a package leaflet for the contrast agent.
For example, the standard amount of Primovist® is 0.025 mmol Gd-EOB-DTPA disodium/kg body weight. The amount(s) of contrast agent can be the same or different for all data sets.
The time between the administration of the contrast agent and the generation of the second MRI image can be the same or different for all data sets.

To generate the training data, each second MRI image and/or its intensity value distribution (i.e., the respective second intensity value distribution) is subjected to a reversible transformation operation. If there are more than two MRI images, the other MRI images (except for the first MRI image) are also subjected to the reversible transformation operation. The first MRI image usually serves as the reference; it is usually not transformed and/or corrected.

The term “reversible” means that the transformation operation can be reversed, i.e. undone. In other words, the reversible transformation operation converts a second MRI image into a transformed second MRI image and/or converts a second intensity value distribution into a transformed second intensity value distribution, and there is an inverse transformation operation that converts the transformed second MRI image back into the (original) second MRI image and/or converts the transformed second intensity value distribution back into the (original) second intensity value distribution.

The transformation operation affects the intensity values of the second MRI image and/or the second intensity value distribution. It affects the intensity values regardless of where they occur in the second MRI image. In this context, one can also speak of a global transformation operation (in contrast to a local operation, in which the coordinates of the image elements can have an influence on the operation). The intensity value distribution of the second MRI image (i.e., the second intensity value distribution) is changed by the transformation operation. The transformation operation can, for example, result in the intensity value distribution being stretched or compressed and/or shifted.

The transformation operation is intended to reproduce the effect observed with some MRI scanners when a dynamic contrast-enhanced MRI examination produces a first MRI image without contrast agent and a second MRI image representing the examination region after the application of a contrast agent: sub-regions of the examination region are displayed differently in the first MRI image and in the second MRI image, even if no contrast agent has entered those sub-regions. In other words, the reversible transformation operation is intended to create the frequently observed undesired effect so that it can later be reversed with an inverse transformation operation.

The transformation operation can be a linear transformation or a non-linear transformation. It is characterized by one or more transformation parameters. The at least one transformation parameter can, for example, specify how much the intensity value distribution is shifted and/or stretched or compressed. In the case of a linear transformation, for example, the transformation operation can be described by the following mathematical equation (1):

IT = a ∙ I + b     (1)

wherein I can be an intensity value of an image element of the second MRI image, IT can be the intensity value of the corresponding image element of the transformed second MRI image, and a and b can be transformation parameters. In this example, each intensity value of each image element of the second MRI image is multiplied by the factor a, and the offset b is added to the result. The result is a transformed second MRI image.
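Purely by way of illustration (and not as a limitation of the claimed subject-matter), a minimal Python/NumPy sketch of such a global linear transformation according to equation (1) could look as follows; the function name, the array size and the parameter values are hypothetical:

import numpy as np

def linear_transform(image, a, b):
    # Equation (1): IT = a * I + b, applied to every image element.
    # The operation is global: it depends only on the intensity value of an
    # image element, not on its coordinates.
    return a * image + b

# Hypothetical example: stretch the intensity value distribution by a factor
# of 1.2 and shift it by an offset of 5.
second_mri_image = np.random.rand(256, 256).astype(np.float32)  # stand-in for I2
transformed_second_mri_image = linear_transform(second_mri_image, a=1.2, b=5.0)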
Similarly, a transformed third MRI image, a transformed fourth MRI image, a transformed fifth MRI image and so on can be generated, if applicable. It should be noted that the reversible transformation operation and/or the transformation parameters may be the same or different for all MRI images. For example, the same reversible transformation operation and/or the same transformation parameters may be used for generating the transformed third MRI image as for generating the transformed second MRI image, or a different reversible transformation operation and/or different transformation parameters may be used.
In the example, the transformation parameters a and b do not depend on the intensity value I. It is, however, possible to select a transformation operation in which the transformation parameter a and/or the transformation parameter b depend on the magnitude of the intensity value I. In such a case, the transformation operation is non-linear.

The transformation operation can be defined by an expert (e.g. a radiologist or an MRI scanner manufacturer). It is also possible to derive the transformation operation from the analysis of a large number of MRI images, e.g., with and without contrast agent.

As described, the reversible transformation operation can be applied to the second MRI image or to the intensity value distribution of the second MRI image (i.e. the second intensity value distribution). To generate the training data, the second MRI image and/or the second intensity value distribution of each data set is subjected to the reversible transformation operation as described.

At least one value of at least one transformation parameter is selected. The selection can be random (e.g. within a predefined range). For example, an upper limit value and a lower limit value can be defined for the at least one transformation parameter, and random numbers can be generated within the defined range; each of these random numbers can serve as a value for the at least one transformation parameter. If the transformation operation has more than one transformation parameter, a value range can be defined for each transformation parameter, within which values can be selected. Value limits and ranges can be defined by a radiologist, for example. Within a predefined range, the selection can also be made according to a predefined probability distribution, so that not all values are selected with the same probability.

The training data comprise, for each data set of the plurality of data sets, (i) the first MRI image and the transformed second MRI image and/or their intensity value distributions (i.e., the first intensity value distribution and/or the transformed second intensity value distribution) as input data for the machine learning model and (ii) the at least one selected value of the at least one transformation parameter as target data. The training data may also include, for each data set of the plurality of data sets, further MRI images, e.g. a transformed third MRI image, a transformed fourth MRI image, a transformed fifth MRI image and so on, together with the respective selected values of the at least one transformation parameter.

In addition to the training data, a machine learning model is provided. The machine learning model of the present disclosure can be or comprise a convolutional neural network. A convolutional neural network (CNN) is a type of artificial neural network that is primarily designed to process structured grid data, such as images, and is especially well suited to analysing visual imagery. A CNN comprises an input layer with input neurons, an output layer with output neurons, and multiple hidden layers between the input layer and the output layer. The nodes in the CNN input layer are usually organized into a set of “filters” (feature detectors), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter.
Convolution is a specialized kind of mathematical operation performed on two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first argument of the convolution can be referred to as the input, while the second can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various colour/greyscale components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.
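Purely as an illustration of the convolution operation itself (input, kernel, feature map), a brief Python sketch is given below, assuming NumPy and SciPy are available; the image and the kernel are hypothetical stand-ins:

import numpy as np
from scipy.signal import convolve2d

# Hypothetical greyscale input image (the "input" of the convolution).
image = np.random.rand(64, 64)

# Sobel-type kernel (the "convolution kernel"); in a CNN, these parameters
# would be learned during training rather than fixed by hand.
kernel = np.array([[1.0, 0.0, -1.0],
                   [2.0, 0.0, -2.0],
                   [1.0, 0.0, -1.0]])

# The output is the "feature map"; this particular kernel responds to
# vertical edges in the input image.
feature_map = convolve2d(image, kernel, mode="same", boundary="symm")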
The objective of the convolution operation is to extract features (such as, e.g., edges) from an input image. Conventionally, the first convolutional layer is responsible for capturing low-level features such as edges, colour and gradient orientation. With added layers, the architecture adapts to high-level features as well, yielding a network with a holistic understanding of the images in the dataset. Similar to the convolutional layer, the pooling layer is responsible for reducing the spatial size of the feature maps. It is useful for extracting dominant features with some degree of rotational and positional invariance, thus helping the model to be trained effectively. Adding a fully connected layer is a way of learning non-linear combinations of the high-level features represented by the output of the convolutional part.

The CNN can be set up as a regression model. In other words, it may be configured to receive a first MRI image and a transformed second MRI image of an examination region of an examination object and/or their intensity value distributions and to output one or more values for the one or more transformation parameters.

Such a CNN may have an input layer that receives the first MRI image and the transformed second MRI image and/or their intensity value distributions.

The CNN may have a number of convolutional layers. These layers apply a series of filters to the input data and create feature maps that represent the presence of those features in the input data. These layers may be designed to automatically and adaptively learn spatial hierarchies of features.

The CNN may have a number of pooling layers. These layers are often inserted between successive convolutional layers in a CNN. They perform a down-sampling operation along the spatial dimensions (e.g., width, height), resulting in volume reduction and computation reduction, and also allowing the machine learning model to become somewhat invariant to shifts and distortions.

The CNN may have fully connected layers. After several convolutional and pooling layers, the high-level reasoning in the neural network occurs via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer.

In a regression CNN, the output layer usually has a single neuron (for single-valued regression) or multiple neurons (for multi-valued regression) with a linear or sometimes no activation function. This produces a continuous output, allowing the network to be used for regression tasks.

Possible architectures and specifications for the training of machine learning models can be found, for example, in: L. Alzubaidi et al.: Review of deep learning: concepts, CNN architectures, challenges, applications, future directions, J Big Data 8, 53 (2021).
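Purely by way of illustration (and not as a limitation of the architectures that can be used), a minimal PyTorch sketch of such a regression CNN is given below: two input channels (the first MRI image and the transformed second MRI image stacked), two output neurons (for a and b) and no activation function on the output layer, so that the output is continuous. The layer counts, channel numbers and all identifiers are assumptions made for this example:

import torch
import torch.nn as nn

class TransformParamRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional and pooling layers extract spatial feature hierarchies.
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Regression head: two output neurons, no activation (continuous output).
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 2),  # predicted values for a and b
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TransformParamRegressor()
pair = torch.randn(1, 2, 256, 256)  # a batch with one (I1, I2T) pair
predicted_a_b = model(pair)         # shape (1, 2)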
However, it should be noted that the machine learning model of the present disclosure does not have to be or comprise a CNN; it can also be or comprise another machine learning model that can perform regression based on images and/or intensity value distributions. One example is a vision transformer. Vision transformers are a recent advancement in the field of computer vision and have been shown to perform on par with, or even better than, CNNs on several benchmark datasets. They are based on the transformer architecture, originally designed for natural language processing tasks, and have been adapted for image processing tasks. Vision transformers are often used for classification.

To use a vision transformer for regression, the architecture remains largely the same, except for the final layer. Instead of a classification layer (often with a softmax activation function for multi-class outputs), there is a single neuron with a linear activation function for single-valued regression, or multiple neurons for multi-valued regression. Training a vision transformer for regression also involves using a regression-appropriate loss function, such as the mean squared error (MSE) or the mean absolute error (MAE), instead of a classification loss function such as cross-entropy. A possible architecture of a vision transformer for performing a regression is disclosed, for example, in: K. Parmar, J. Parker, D. Guzzetti: Applications of Regression Vision Transformers for Autonomous Spacecraft Optical Navigation in Simulated Orbital Environments, AAS/AIAA conference, 2023.
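Illustratively, and assuming a recent torchvision implementation of the ViT-B/16 architecture, swapping the classification head for a regression head might look as sketched below. For MRI image pairs, the three-channel input expected by this particular implementation would additionally have to be adapted; all names and sizes are assumptions:

import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)  # standard vision transformer (classification)

# Replace the classification head by a two-neuron linear layer (no softmax),
# so that the network outputs continuous values, e.g. for the parameters a and b.
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

x = torch.randn(1, 3, 224, 224)  # ViT-B/16 expects 3-channel 224x224 input
predicted_a_b = model(x)         # shape (1, 2)
loss_fn = nn.MSELoss()           # regression-appropriate loss function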
Fig. 1 shows a schematic example of the generation of training data and the training of a machine learning model. Fig. 1 shows the generation of training data and the training of the machine learning model using the example of one data set DS; there is usually a plurality of such data sets.

In this example, the data set DS comprises a first MRI image I1 and a second MRI image I2. The first MRI image I1 represents an examination region of an examination object in a first state (the state is not visible in Fig. 1). The second MRI image I2 represents the examination region of the examination object in a second state (the state is not visible in Fig. 1). The first MRI image I1 is characterized by a first intensity value distribution; the second MRI image I2 is characterized by a second intensity value distribution (the intensity value distributions are not visible in Fig. 1).

A transformed second MRI image I2T is generated on the basis of the second MRI image I2, using a transformation operation T(a, b). In this example, the transformation operation T(a, b) comprises two transformation parameters a and b. It is possible that the transformation operation comprises more or fewer transformation parameters. Values va and vb are selected (sampled) for the transformation parameters a and b. As described, the selection can be made randomly or according to a specified probability distribution and/or within defined limits.

The first MRI image I1 and the transformed second MRI image I2T are fed to the machine learning model MLM as input data. The machine learning model MLM is configured and trained to predict the selected values of the transformation parameters a and b on the basis of the first MRI image I1, the transformed second MRI image I2T and model parameters. In other words, the machine learning model MLM is configured and trained to determine predicted values v^a and v^b of the transformation parameters a and b based on the first MRI image I1, the transformed second MRI image I2T and model parameters. The predicted values v^a and v^b are the output data of the machine learning model MLM.

The training of the machine learning model can also be carried out using the intensity value distributions of the MRI images. For example, a transformed second intensity value distribution can be generated based on the second intensity value distribution using the reversible transformation operation with one or more selected values of the one or more transformation parameters. The first intensity value distribution and the transformed second intensity value distribution can be fed to the machine learning model, which then outputs one or more predicted values of the one or more transformation parameters.

In the example shown in Fig. 1, a loss function LF is used to determine (quantify) deviations between the selected values va and vb and the predicted values v^a and v^b. In an optimization process (e.g. a gradient descent procedure), model parameters of the machine learning model MLM can be modified to reduce the deviations. The process shown in Fig. 1 is repeated for a large number of data sets. One or more data sets can also be used several times in training, e.g. with different values va and vb of the transformation parameters a and b.
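Purely as an illustration of one such training step (sampling va and vb, generating I2T according to equation (1), predicting v^a and v^b, and reducing the deviation quantified by an MSE loss through gradient descent), a hedged PyTorch sketch follows; the stand-in model, the sampling ranges and all names are assumptions made for this example:

import torch
import torch.nn as nn

# Minimal stand-in for the machine learning model MLM (see the CNN sketch above).
model = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # outputs the predicted values v^a and v^b
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # plays the role of the loss function LF

def training_step(i1, i2):
    # Sample values va and vb (hypothetical ranges) and generate I2T, eq. (1).
    v_a = torch.empty(1).uniform_(0.8, 1.25)
    v_b = torch.empty(1).uniform_(-10.0, 10.0)
    i2t = v_a * i2 + v_b

    # Predict v^a and v^b from the pair (I1, I2T).
    predicted = model(torch.stack([i1, i2t]).unsqueeze(0))

    # Quantify the deviation and modify the model parameters to reduce it.
    loss = loss_fn(predicted, torch.cat([v_a, v_b]).unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One training step on a hypothetical data set DS = (I1, I2).
i1, i2 = torch.randn(256, 256), torch.randn(256, 256)
print(training_step(i1, i2))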
The training of the machine learning model can be ended when a stop criterion is met. Such a stop criterion can be, for example (see also the sketch following this passage):
- a predefined maximum number of training steps/cycles/epochs has been performed,
- deviations between output data and target data can no longer be reduced by modifying the model parameters,
- a predefined minimum of the loss function has been reached, and/or
- an extreme value (e.g., a maximum or minimum) of another performance value has been reached.

The trained machine learning model and/or the modified model parameters of the trained machine learning model may be output, stored and/or transmitted to a separate computer system.
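A hedged sketch of how two of the stop criteria named above (a maximum number of epochs, and deviations that can no longer be reduced) could be combined in code is given below; the helper function and all thresholds are hypothetical:

import random

def run_one_training_epoch():
    # Hypothetical stand-in for a full pass over the training data;
    # returns the epoch's loss value.
    return random.random()

MAX_EPOCHS = 100  # stop criterion 1: maximum number of epochs
PATIENCE = 10     # stop criterion 2: epochs without improvement

best_loss = float("inf")
epochs_without_improvement = 0

for epoch in range(MAX_EPOCHS):
    epoch_loss = run_one_training_epoch()
    if epoch_loss < best_loss:
        best_loss = epoch_loss
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= PATIENCE:
        # Deviations can no longer be reduced by modifying the model parameters.
        break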
As shown in Fig. 1, the machine learning model can be trained using pairs (e.g., a first MRI image and a transformed second MRI image). If the data sets on which the training data are based comprise more than two MRI images, e.g. a third MRI image and a fourth MRI image, training can also be performed using pairs. For example, a transformed third MRI image may be generated, and the first MRI image and the transformed third MRI image may be provided as input data to the machine learning model. Similarly, a transformed fourth MRI image may be generated, and the first MRI image and the transformed fourth MRI image may be provided to the machine learning model as input data.

However, it is also possible to train the machine learning model using all available MRI images (see the sketch after this passage). If four MRI images are available in a data set (a first MRI image, a second MRI image, a third MRI image and a fourth MRI image), a transformed second MRI image can be generated from the second MRI image, a transformed third MRI image from the third MRI image and a transformed fourth MRI image from the fourth MRI image. The first MRI image, the transformed second MRI image, the transformed third MRI image and the transformed fourth MRI image can be fed together to the machine learning model. The machine learning model is then trained to predict the transformation parameters of the one or more reversible transformation operations used to generate the transformed second MRI image, the transformed third MRI image and/or the transformed fourth MRI image. As described, the machine learning model can also be trained using the intensity value distributions (instead of or in addition to the MRI images).

The trained machine learning model can be used for correcting an intensity value distribution of a new MRI image. In a first step, a new data set is received. The term “receiving” includes both retrieving data sets and receiving data sets that are transmitted, for example, to the computer system of the present disclosure. The new data set may be received from a magnetic resonance imaging scanner or from a separate computer system. The new data set may also be read from one or more data storage devices.

The new data set is the result of an MRI examination of an examination region of an examination object. It comprises a first MRI image of the examination region of the examination object and a second MRI image of the examination region of the examination object. The first MRI image of the new data set is also referred to as the “new first MRI image” in this disclosure, and the second MRI image of the new data set is also referred to as the “new second MRI image” in this disclosure. The term “new” is used in this disclosure to distinguish the training phase from the inference phase.

The examination object can be an examination object from which data has already been used to train the machine learning model, or a new examination object, i.e. an examination object from which no data was used to train the machine learning model. The examination region is part of the examination object. The first MRI image represents the examination region of the examination object in the first state, and the second MRI image represents the examination region of the examination object in the second state. The first state and the second state correspond to the states that are represented in the data sets of the training data used for training the machine learning model.
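The sketch below illustrates, under the same hypothetical PyTorch setup as above, how four MRI images of a data set could be fed together to the model by stacking them along the channel dimension; the channel counts and the output size are assumptions:

import torch
import torch.nn as nn

# Hypothetical data set with four images: I1 plus transformed I2T, I3T and I4T.
i1, i2t, i3t, i4t = (torch.randn(256, 256) for _ in range(4))

# Stack along the channel dimension: a single input of shape (1, 4, H, W).
model_input = torch.stack([i1, i2t, i3t, i4t]).unsqueeze(0)

# The first layer then needs four input channels; six output neurons could
# cover the parameters (a, b) of each of the three transformed images.
model = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 6),
)
predicted_params = model(model_input)  # shape (1, 6)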
The first MRI image and the second MRI image and/or their intensity value distributions are inputted into the trained machine learning model as input data, depending on the data used to train the machine learning model. The trained machine learning model is configured, and was trained, to determine at least one predicted value of at least one transformation parameter based on the input data. The at least one predicted value of the at least one transformation parameter is outputted by the trained machine learning model and is used to generate a corrected second MRI image.
An inverse transformation operation is applied to the second MRI image using the at least one predicted value of the at least one transformation parameter. The inverse transformation operation is the inverse of the reversible transformation operation used in training the machine learning model. In other words, the inverse transformation operation reverses the process performed by the reversible transformation operation. For example, if the reversible transformation operation performs a transformation according to equation (1) (see above), then the inverse transformation operation performs a transformation according to the following equation (2):

Ic = (I − b) / a     (2)

wherein Ic is an intensity value of an image element of the corrected second MRI image, I is the intensity value of the corresponding image element of the second MRI image, and a and b are the transformation parameters. In this example, the value b is subtracted from each intensity value of each image element of the second MRI image, and the result is divided by a. The result of the inverse transformation operation is a corrected second MRI image.

The corrected second MRI image can be output (e.g. displayed on a monitor and/or printed out with a printer) and/or stored in a data memory and/or transmitted to a separate computer system.

It is possible that the new data set includes further MRI images of the examination region of the examination object, for example a new third MRI image, or a new third and a new fourth MRI image, or a new third and a new fourth and a new fifth MRI image, or more than five new MRI images. Each new MRI image can represent the examination region of the examination object in a different state. A new third MRI image may represent the examination region of the examination object in a third state, a new fourth MRI image may represent the examination region of the examination object in a fourth state, and so on. Depending on how the machine learning model was trained (with pairs or with more than two MRI images), two or more MRI images are fed to the trained machine learning model in order to predict one or more transformation parameters.

Fig. 2 shows a schematic example of the generation of a corrected second MRI image using the trained machine learning model. Fig. 2 shows a new data set DS* comprising a new first MRI image I1* and a new second MRI image I2*. The new first MRI image I1* represents an examination region of an examination object in the first state; the new second MRI image I2* represents the examination region of the examination object in the second state.

The new first MRI image I1* and the new second MRI image I2* are inputted into the trained machine learning model MLMt as input data. The trained machine learning model MLMt is configured, and was trained, to determine predicted values v^a and v^b of the transformation parameters a and b based on the input data. The predicted values v^a and v^b of the transformation parameters a and b are outputted by the trained machine learning model MLMt.

The predicted values v^a and v^b of the transformation parameters a and b are used to generate a corrected second MRI image I2*c. An inverse transformation operation T-1(a, b) is applied to the new second MRI image I2* using the predicted values v^a and v^b of the transformation parameters a and b. The inverse transformation operation T-1(a, b) is the inverse of the reversible transformation operation used in training the machine learning model.
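Purely by way of illustration, a minimal Python/NumPy sketch of the correction step according to equation (2) follows; the predicted values and the array size are hypothetical:

import numpy as np

def inverse_transform(image, a, b):
    # Equation (2): Ic = (I - b) / a, applied to every image element.
    return (image - b) / a

# Hypothetical predicted values v^a and v^b output by the trained model.
predicted_a, predicted_b = 1.13, 3.7

# Stand-in for the new second MRI image I2*.
new_second_mri_image = np.random.rand(256, 256).astype(np.float32)

corrected_second_mri_image = inverse_transform(
    new_second_mri_image, predicted_a, predicted_b
)

# Round-trip check: the inverse transformation undoes equation (1).
assert np.allclose(
    inverse_transform(predicted_a * new_second_mri_image + predicted_b,
                      predicted_a, predicted_b),
    new_second_mri_image,
    atol=1e-5,
)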
The corrected second MRI image I2*c can be output (e.g. displayed on a monitor and/or printed out with a printer) and/or stored in a data memory and/or transmitted to a separate computer system.
Fig. 3 shows an embodiment of the computer-implemented method for training the machine learning model in the form of a flow chart. The training method (100) comprises the steps:

(110) providing a plurality of data sets, wherein each data set comprises at least two MRI images, a first MRI image and a second MRI image, wherein the first MRI image represents an examination region of an examination object in a first state, wherein the first MRI image is characterized by a first intensity value distribution, wherein the second MRI image represents the examination region of the examination object in a second state, wherein the second MRI image is characterized by a second intensity value distribution,

(120) providing a reversible transformation operation, the reversible transformation operation comprising at least one transformation parameter, wherein the reversible transformation operation performs a transformation of a second intensity value distribution of a second MRI image when applied to the second MRI image,

(130) generating training data based on the plurality of data sets, wherein generating training data comprises, for each data set of the plurality of data sets: sampling at least one value for the at least one transformation parameter of the reversible transformation operation, and generating a transformed second MRI image by applying the reversible transformation operation with the at least one sampled value of the at least one transformation parameter to the second MRI image,

(140) providing a machine learning model, wherein the machine learning model is configured to determine at least one predicted value for the at least one transformation parameter based on: model parameters, a first MRI image, and a transformed second MRI image and/or their intensity value distributions,

(150) training the machine learning model on the training data, wherein the training comprises, for each data set of the plurality of data sets:
(151) inputting the first MRI image and the transformed second MRI image into the machine learning model,
(152) receiving at least one predicted value for the at least one transformation parameter,
(153) determining a deviation between the at least one value of the at least one transformation parameter and the at least one predicted value of the at least one transformation parameter,
(154) reducing the deviation by modifying model parameters of the machine learning model,

(160) outputting and/or storing the trained machine learning model and/or transmitting the trained machine learning model to a separate computer system.

Fig. 4 shows an embodiment of the computer-implemented method for correcting an intensity value distribution of an MRI image using the trained machine learning model in the form of a flow chart. This method (200) comprises the steps:

(210) providing the trained machine learning model,

(220) receiving a new data set, wherein the new data set comprises at least two MRI images, a new first MRI image and a new second MRI image,
wherein the new first MRI image represents the examination region of the examination object in the first state, wherein the new first MRI image is characterized by a first intensity value distribution, wherein the new second MRI image represents the examination region of the examination object in the second state, wherein the new second MRI image is characterized by a second intensity value distribution,

(230) inputting the new first MRI image and the new second MRI image and/or their intensity value distributions into the trained machine learning model,

(240) receiving at least one predicted value as an output from the machine learning model,

(250) providing an inverse transformation operation, wherein the inverse transformation operation comprises the at least one transformation parameter, wherein the inverse transformation operation reverses the reversible transformation operation used in training the machine learning model,

(260) generating a corrected second MRI image based on the new second MRI image and the at least one predicted value, wherein generating the corrected second MRI image comprises: applying the inverse transformation operation with the at least one predicted value of the at least one transformation parameter to the new second MRI image,

(270) outputting and/or storing the corrected second MRI image and/or transmitting the corrected second MRI image to a separate computer system and/or using the corrected second MRI image for generating a synthetic MRI image of the examination region of the examination object.

Based on the new first MRI image and the corrected second MRI image, a synthetic MRI image can be generated as described in EP4332601A1, the content of which is incorporated herein by reference in its entirety. The term “synthetic” means that the synthetic MRI image is not the (direct) result of a physical measurement on an actual examination object, but that it has been generated (calculated). A synonym for the term “synthetic” is the term “artificial”. A synthetic MRI image may, however, be based on measured MRI images.

The new first MRI image represents the examination region of an examination object without contrast agent. The corrected second MRI image represents the examination region of the examination object after application of an amount of a contrast agent. Based on the new first MRI image and the corrected second MRI image, a third MRI image can be generated. Generation of the third MRI image can comprise: subtracting the new first MRI image from the corrected second MRI image.

The subtraction can be performed in real space or in frequency space (as sketched below). In order to perform a subtraction in real space, the new first MRI image and the corrected second MRI image must be real-space representations of the examination region of the examination object. In order to perform a subtraction in frequency space, the new first MRI image and the corrected second MRI image must be frequency-space representations of the examination region of the examination object.

The “real space” is the usual three-dimensional Euclidean space, which corresponds to the space that we humans experience with our senses and in which we move around. A representation in real space is therefore the representation familiar to humans.
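Purely as an illustration of the fact that the subtraction can be carried out in either domain (the Fourier transform being linear), a short Python/NumPy sketch with hypothetical stand-in images:

import numpy as np

i1 = np.random.rand(128, 128)   # stand-in for the new first MRI image (real space)
i2c = np.random.rand(128, 128)  # stand-in for the corrected second MRI image

# Subtraction in real space.
diff_real = i2c - i1

# Subtraction in frequency space, followed by an inverse Fourier transform.
diff_freq = np.fft.ifft2(np.fft.fft2(i2c) - np.fft.fft2(i1)).real

# Because the Fourier transform is linear, both routes agree
# (up to floating-point error).
assert np.allclose(diff_real, diff_freq)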
In a representation in real space, also referred to in this description as a real-space representation, the examination region is normally represented by a large number of image elements (for example pixels or voxels or doxels), which may for example be in a raster arrangement in which each image element represents a part of the examination region, wherein each image element may be assigned a colour value or grey value. The colour value or grey value represents a physical signal intensity. A format widely used in radiology for storing and processing representations in real space is the DICOM format. DICOM (Digital Imaging and Communications in Medicine) is an open standard for storing and exchanging information in medical image data management.

In a representation in frequency space, also referred to in this description as a frequency-space representation, the examination region is represented by a superposition of fundamental vibrations. For example, the examination region may be represented by a sum of sine and cosine functions having different amplitudes, frequencies and phases. The amplitudes and phases may be plotted as a function of the frequencies, for example, in a two- or three-dimensional plot. Normally, the lowest frequency (the origin) is placed in the centre; the further away from this centre, the higher the frequencies. Each frequency can be assigned an amplitude representing that frequency in the frequency-space representation, and a phase indicating the extent to which the respective vibration is shifted towards a sine or cosine vibration.

A representation in real space can, for example, be converted (transformed) by a Fourier transform into a representation in frequency space. Conversely, a representation in frequency space can be converted (transformed) by an inverse Fourier transform into a representation in real space. Details about real-space representations and frequency-space representations and their interconversion are described in numerous publications; see for example https://see.stanford.edu/materials/lsoftaee261/book-fall-07.pdf.

Generation of the third MRI image can comprise:
- subtracting the new first MRI image from the corrected second MRI image, and
- subjecting the result of the subtraction to noise suppression.

Subjecting the result of the subtraction to noise suppression may comprise:
- multiplying the result of the subtraction by a frequency-dependent weighting function in frequency space.

In this embodiment, therefore, at least the result of the subtraction of the new first MRI image from the corrected second MRI image is available in frequency space (it is also possible that the subtraction is performed in frequency space; in this case, the result of the subtraction is already a frequency-space representation). This frequency-space representation can be multiplied by a frequency-dependent weighting function. The multiplication results in the amplitude and/or phase values of the frequency-space representation being multiplied by a weighting factor that depends on the respective frequency.

Preferably, the weighting factors decrease with increasing frequency; in other words, low frequencies are preferably multiplied by a higher weighting factor than high frequencies. Preferably, the lower the frequency, the greater the respective weighting factor. Contrast information is represented in a frequency-space representation by low frequencies, while the higher frequencies represent information about fine structures. Such weighting thus means that a higher weighting is given to frequencies making a higher contribution to contrast than to those making a smaller contribution. Image noise is typically evenly distributed over the frequency-space representation. The frequency-dependent weighting function therefore has the effect of a filter: it increases the signal-to-noise ratio by reducing the spectral noise density at high frequencies.
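Purely by way of illustration, a Python/NumPy sketch of such a frequency-dependent weighting is given below, assuming a radial Hann-type profile (the Hann window is one of the preferred weighting functions named in the following paragraph); the profile and the stand-in image are hypothetical:

import numpy as np

diff = np.random.rand(128, 128)  # stand-in for the subtraction result (real space)

# Centred frequency-space representation: lowest frequency in the middle.
spectrum = np.fft.fftshift(np.fft.fft2(diff))

# Normalised radial frequency of each element of the spectrum (0 = centre).
h, w = spectrum.shape
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2)
r = r / r.max()

# Hann-type radial weighting: factor 1 at low frequencies (contrast
# information), falling towards 0 at high frequencies (fine structure, noise).
weights = 0.5 * (1.0 + np.cos(np.pi * r))

# Multiply the frequency-space representation by the weighting function and
# transform back to real space.
noise_suppressed = np.fft.ifft2(np.fft.ifftshift(spectrum * weights)).real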
Preferred weighting functions are the Hann function (also referred to as the Hann window) and the Poisson function (Poisson window). Examples of other weighting functions can be found, for example, at https://de.wikipedia.org/wiki/Fensterfunktion#Beispiele_von_Fensterfunktionen; in F. J. Harris et al.: On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform, Proceedings of the IEEE, vol. 66, no. 1, 1978; at https://docs.scipy.org/doc/scipy/reference/signal.windows.html; and in K. M. M. Prabhu: Window Functions and Their Applications in Signal Processing, CRC Press, 2014, ISBN 978-1-4665-1583-3. Examples of weighting functions can also be found in EP4332601A1 (see in particular Fig. 3 therein), the content of which is incorporated herein by reference in its entirety.

A fourth MRI image can be generated on the basis of the (optionally noise-suppressed) third MRI image. Generating the fourth MRI image can comprise: multiplying the (optionally noise-suppressed) third MRI image by a gain factor and adding the result of the multiplication to the new first MRI image and/or the corrected second MRI image. In an embodiment of the present disclosure, the (optionally noise-suppressed) third MRI image is multiplied by the gain factor and added to the new first MRI image. These steps can be performed in frequency space or in real space. The result is a fourth MRI image of the examination region of the examination object, in which the signal amplification caused by the contrast agent is increased or decreased compared to the corrected second MRI image, depending on the magnitude of the gain factor. The fourth MRI image is a synthetic MRI image.

The gain factor is a positive or negative real number. The gain factor may be selected by a user, i.e. it may be variable, or it may be predefined, i.e. predetermined. The gain factor may also be determined automatically, for example based on the intensity value distribution of the new first MRI image and/or of the corrected second MRI image and/or of the difference between the new first MRI image and the corrected second MRI image. Varying the gain factor thus allows the contrast between regions with contrast agent and regions without contrast agent to be varied.

In an embodiment of the present disclosure, the gain factor is greater than 1, preferably greater than 2; the gain factor is usually not greater than 10. In another embodiment of the present disclosure, the gain factor is greater than zero and less than 1. In another embodiment of the present disclosure, the gain factor is less than zero; the gain factor is usually not less than -10.

If the fourth MRI image is a frequency-space representation, it can be converted into a fourth MRI image in real space by an inverse Fourier transformation. The fourth MRI image can be output (e.g. displayed on a monitor and/or printed out with a printer) and/or stored in a data memory and/or transmitted to a separate computer system.

Fig. 5 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of the examination region of the examination object. In a first step, a new first MRI image I1* and a corrected second MRI image I2*c are provided. The new first MRI image I1* represents the examination region of an examination object without contrast agent. The corrected second MRI image I2*c represents the examination region of the examination object after application of an amount of a contrast agent. The corrected second MRI image I2*c was generated based on the new first MRI image I1* and a new second MRI image using a trained machine learning model; it may have been generated as described in relation to Fig. 2. The trained machine learning model may have been trained as described in relation to Fig. 1. Based on the new first MRI image I1* and the corrected second MRI image I2*c, a third MRI image I3* is generated.
Generation of the third MRI image I3* comprises: subtracting the new first MRI image I1* from the corrected second MRI image I2*c: I3* = I2*c – I1*.
Generation of the third MRI image I3* optionally comprises: multiplying the result of the subtraction by a frequency-dependent weighting function WF: I3* = (I2*c − I1*)F ∙ WF. This optional step, when executed, is performed in frequency space based on a frequency-space representation of the subtraction result (indicated by the subscript letter F).

Based on the third MRI image I3* and the new first MRI image I1*, a fourth MRI image I4* is generated. Generation of the fourth MRI image I4* comprises: multiplying the third MRI image I3* by a gain factor α and adding the result of the multiplication to the new first MRI image I1*: I4* = α ∙ I3* + I1*. It should be noted that the fourth MRI image I4* can also be generated based on the third MRI image I3* and the corrected second MRI image I2*c, by multiplying the third MRI image I3* by the gain factor α and adding the result of the multiplication to the corrected second MRI image I2*c: I4* = α ∙ I3* + I2*c.

Fig. 6 shows an embodiment of the computer-implemented method for generating a synthetic MRI image of the examination region of the examination object in the form of a flow chart. This method (300) comprises the steps:

(310) providing a new first MRI image and a corrected second MRI image, wherein the new first MRI image represents the examination region of the examination object without contrast agent, wherein the corrected second MRI image represents the examination region of the examination object after application of an amount of a contrast agent,

(320) generating a third MRI image based on the new first MRI image and the corrected second MRI image, wherein generating the third MRI image comprises:
(321) subtracting the new first MRI image from the corrected second MRI image,
(322) optionally multiplying the result of the subtraction by a frequency-dependent weighting function in frequency space,

(330) generating a fourth MRI image based on the third MRI image and the new first MRI image and/or the corrected second MRI image, wherein generating the fourth MRI image comprises:
(331) multiplying the third MRI image by a gain factor and adding the result of the multiplication to the new first MRI image and/or the corrected second MRI image, wherein the gain factor is a positive or negative real number,

(340) in the case that the fourth MRI image is a representation in frequency space: converting the fourth MRI image into a fourth MRI image in real space,

(350) outputting and/or storing the fourth MRI image and/or transmitting the fourth MRI image to a separate computer system.

The operations in accordance with the teachings herein may be performed by at least one computer system specially constructed for the desired purposes, or by a general-purpose computer system specially configured for the desired purpose by at least one computer program stored in a typically non-transitory computer-readable storage medium.

A “computer system” is a system for electronic data processing that processes data by means of programmable calculation rules. Such a system usually comprises a “computer”, i.e. the unit that comprises a processor for carrying out logical operations, together with peripherals. In computer technology, “peripherals” are all devices that are connected to the computer and serve for the control of the computer and/or as input and output devices. Examples are the monitor (screen), printer, scanner, mouse, keyboard, drives, camera, microphone, loudspeaker, etc. Internal ports and expansion cards are also considered peripherals in computer technology.
Computer systems of today are frequently divided into desktop PCs, portable PCs, laptops, notebooks, netbooks and tablet PCs, and so-called handhelds (e.g. smartphones); all of these systems can be utilized for carrying out the invention.
The term “non-transitory” is used herein to exclude transitory, propagating signals or waves, but to otherwise include any volatile or non-volatile computer memory technology suitable to the application.

The term “computer” should be broadly construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, personal computers, servers, embedded cores, computing systems, communication devices, processors (e.g., digital signal processors (DSPs), microcontrollers, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.) and other electronic computing devices.

The term “process” as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g., electronic, phenomena which may occur or reside, e.g., within registers and/or memories of at least one computer or processor. The term “processor” includes a single processing unit or a plurality of distributed or remote such units.

Fig. 7 illustrates a computer system (1) according to some example implementations of the present disclosure in more detail. Generally, a computer system of exemplary implementations of the present disclosure may be referred to as a computer and may comprise, include, or be embodied in one or more fixed or portable electronic devices. The computer may include one or more of each of a number of components, such as, for example, a processing unit (20) connected to a memory (50) (e.g., a storage device).

The processing unit (20) may be composed of one or more processors alone or in combination with one or more memories. The processing unit (20) is generally any piece of computer hardware that is capable of processing information such as, for example, data, computer programs and/or other suitable electronic information. The processing unit (20) is composed of a collection of electronic circuits, some of which may be packaged as an integrated circuit or as multiple interconnected integrated circuits (an integrated circuit at times more commonly referred to as a “chip”). The processing unit (20) may be configured to execute computer programs, which may be stored onboard the processing unit (20) or otherwise stored in the memory (50) of the same or another computer.

The processing unit (20) may be a number of processors, a multi-core processor or some other type of processor, depending on the particular implementation. For example, it may be a central processing unit (CPU), a field programmable gate array (FPGA), a graphics processing unit (GPU) and/or a tensor processing unit (TPU). Further, the processing unit (20) may be implemented using a number of heterogeneous processor systems in which a main processor is present with one or more secondary processors on a single chip. As another illustrative example, the processing unit (20) may be a symmetric multi-processor system containing multiple processors of the same type. In yet another example, the processing unit (20) may be embodied as or otherwise include one or more ASICs, FPGAs or the like. Thus, although the processing unit (20) may be capable of executing a computer program to perform one or more functions, the processing unit (20) of various examples may be capable of performing one or more functions without the aid of a computer program. In either instance, the processing unit (20) may be appropriately programmed to perform functions or operations according to example implementations of the present disclosure.
The memory (50) is generally any piece of computer hardware that is capable of storing information such as, for example, data, computer programs (e.g., computer-readable program code (60)) and/or other suitable information either on a temporary basis and/or a permanent basis. The memory (50) may include volatile and/or non-volatile memory, and may be fixed or removable. Examples of suitable memory include random access memory (RAM), read-only memory (ROM), a hard drive, a flash memory, a thumb drive, a removable computer diskette, an optical disk, a magnetic tape or some combination of the above. Optical disks may include compact disk – read only memory (CD-ROM), compact disk – read/write (CD-R/W), DVD, Blu-ray disk or the like. In various instances, the memory may be referred to as a computer-readable storage medium or data memory. The computer-readable storage medium is a non-transitory device capable of storing information, and is distinguishable from computer-readable transmission media such as electronic transitory signals capable of carrying information from one
location to another. Computer-readable medium as described herein may generally refer to a computer-readable storage medium or a computer-readable transmission medium.

In addition to the memory (50), the processing unit (20) may also be connected to one or more interfaces for displaying, transmitting and/or receiving information. The interfaces may include one or more communications interfaces and/or one or more user interfaces. The communications interface(s) may be configured to transmit and/or receive information, such as to and/or from other computer(s), network(s), database(s) or the like. The communications interface may be configured to transmit and/or receive information by physical (wired) and/or wireless communications links. The communications interface(s) may include interface(s) (41) to connect to a network, using technologies such as cellular telephone, Wi-Fi, satellite, cable, digital subscriber line (DSL), fiber optics and the like. In some examples, the communications interface(s) may include one or more short-range communications interfaces (42) configured to connect devices using short-range communications technologies such as NFC, RFID, Bluetooth, Bluetooth LE, ZigBee, infrared (e.g., IrDA) or the like.

The user interfaces may include a display (30). The display (screen) may be configured to present or otherwise display information to a user, suitable examples of which include a liquid crystal display (LCD), a light-emitting diode display (LED), a plasma display panel (PDP) or the like. The user input interface(s) (11) may be wired or wireless and may be configured to receive information from a user into the computer system (1), such as for processing, storage and/or display. Suitable examples of user input interfaces include a microphone, an image or video capture device, a keyboard or keypad, a joystick, a touch-sensitive surface (separate from or integrated into a touchscreen) or the like. In some examples, the user interfaces may include automatic identification and data capture (AIDC) technology (12) for machine-readable information. This may include barcodes, radio frequency identification (RFID), magnetic stripes, optical character recognition (OCR), integrated circuit cards (ICC), and the like. The user interfaces may further include one or more interfaces for communicating with peripherals such as printers and the like.

As indicated above, program code instructions (60) may be stored in the memory (50) and executed by the processing unit (20), which is thereby programmed to implement functions of the systems, subsystems, tools and their respective elements described herein. As will be appreciated, any suitable program code instructions (60) may be loaded onto a computer or other programmable apparatus from a computer-readable storage medium to produce a particular machine, such that the particular machine becomes a means for implementing the functions specified herein. These program code instructions (60) may also be stored in a computer-readable storage medium that can direct a computer, a processing unit or another programmable apparatus to function in a particular manner, thereby generating a particular machine or particular article of manufacture. The instructions stored in the computer-readable storage medium may produce an article of manufacture, where the article of manufacture becomes a means for implementing functions described herein.
The program code instructions (60) may be retrieved from a computer-readable storage medium and loaded into a computer, processing unit or other programmable apparatus to configure the computer, processing unit or other programmable apparatus to execute operations to be performed on or by the computer, processing unit or other programmable apparatus. Retrieval, loading and execution of the program code instructions (60) may be performed sequentially, such that one instruction is retrieved, loaded and executed at a time. In some example implementations, retrieval, loading and/or execution may be performed in parallel, such that multiple instructions are retrieved, loaded and/or executed together. Execution of the program code instructions (60) may produce a computer-implemented process, such that the instructions executed by the computer, processing circuitry or other programmable apparatus provide operations for implementing functions described herein. Execution of instructions by the processing unit, or storage of instructions in a computer-readable storage medium, supports combinations of operations for performing the specified functions. In this manner, a computer system (1) may include a processing unit (20) and a computer-readable storage medium or memory (50) coupled to the processing circuitry, where the processing circuitry is configured to execute
computer-readable program code instructions (60) stored in the memory (50). It will also be understood that one or more functions, and combinations of functions, may be implemented by special purpose hardware-based computer systems and/or processing circuitry which perform the specified functions, or by combinations of special purpose hardware and program code instructions.

The computer system of the present disclosure may be in the form of a laptop, notebook, netbook and/or tablet PC; it may also be a component of an MRI scanner.

In another aspect, the present disclosure provides a computer program product. Such a computer program product comprises a non-volatile data carrier, such as a CD, a DVD, a USB stick or another medium for storing data. A computer program is stored on the data carrier. The computer program can be loaded into a working memory of a computer system (in particular, into a working memory of a computer system of the present disclosure), where it can cause the computer system to perform the computer-implemented methods disclosed herein.

The computer program may also be marketed in combination with a contrast agent. Such a combination is also referred to as a kit. Such a kit includes the contrast agent and the computer program. It is also possible that such a kit includes the contrast agent and means for allowing a purchaser to obtain the computer program, e.g., download it from an Internet site. These means may include a link, i.e., an address of the Internet site from which the computer program may be obtained, e.g., from which the computer program may be downloaded to a computer system connected to the Internet. Such means may include a code (e.g., an alphanumeric string or a QR code, or a DataMatrix code or a barcode or other optically and/or electronically readable code) by which the purchaser can access the computer program. Such a link and/or code may, for example, be printed on a package of the contrast agent and/or printed on a package insert for the contrast agent. A kit is thus a combination product comprising a contrast agent and a computer program (e.g., in the form of access to the computer program or in the form of executable program code on a data carrier) that is offered for sale together.
Claims
1. A computer-implemented method comprising:
- providing a plurality of data sets, wherein each data set (DS) comprises at least two MRI images, a first MRI image (I1) and a second MRI image (I2), wherein the first MRI image (I1) represents an examination region of an examination object in a first state, wherein the first MRI image (I1) is characterized by a first intensity value distribution (IVD1), wherein the second MRI image (I2) represents the examination region of the examination object in a second state, wherein the second MRI image (I2) is characterized by a second intensity value distribution (IVD2),
- providing a reversible transformation operation (T(a, b)), the reversible transformation operation (T(a, b)) comprising at least one transformation parameter (a, b), wherein the reversible transformation operation (T(a, b)) performs a transformation of the second intensity value distribution (IVD2) when applied to the second intensity value distribution (IVD2) and/or to the second MRI image (I2),
- generating training data based on the plurality of data sets, wherein generating training data comprises, for each data set (DS) of the plurality of data sets: sampling of at least one value (va, vb) for the at least one transformation parameter (a, b) of the reversible transformation operation (T(a, b)), generating a transformed second MRI image (I2T) by applying the reversible transformation operation (T(a, b)) with the at least one sampled value (va, vb) of the at least one transformation parameter (a, b) to the second MRI image (I2) and/or the second intensity value distribution (IVD2),
- providing a machine learning model (MLM), wherein the machine learning model (MLM) is configured to determine at least one predicted value for the at least one transformation parameter (a, b) based on: model parameters, a first MRI image, a transformed second MRI image and/or their intensity value distributions,
- training the machine learning model (MLM) on the training data, wherein the training comprises, for each data set (DS) of the plurality of data sets: inputting the first MRI image (I1) and the transformed second MRI image (I2T) and/or their intensity value distributions into the machine learning model (MLM), receiving at least one predicted value (v^a, v^b) for the at least one transformation parameter (a, b), determining a deviation between the at least one value (va, vb) of the at least one transformation parameter (a, b) and the at least one predicted value (v^a, v^b) of the at least one transformation parameter (a, b), reducing the deviation by modifying model parameters of the machine learning model (MLM),
- outputting and/or storing the trained machine learning model (MLMt) and/or transmitting the trained machine learning model (MLMt) to a separate computer system and/or using the trained machine learning model (MLMt) for correcting an intensity value distribution of a new MRI image.
2. The computer-implemented method of claim 1, further comprising:
- receiving a new data set (DS*), wherein the new data set (DS*) comprises at least two MRI images, a new first MRI image (I1*) and a new second MRI image (I2*),
  wherein the new first MRI image (I1*) represents the examination region of a new examination object in the first state, wherein the new first MRI image (I1*) is characterized by a first intensity value distribution,
  wherein the new second MRI image (I2*) represents the examination region of the new examination object in the second state, wherein the new second MRI image (I2*) is characterized by a second intensity value distribution,
- inputting the new first MRI image (I1*) and the new second MRI image (I2*) and/or their intensity value distributions into the trained machine learning model (MLMt),
- receiving at least one predicted value (v^a*, v^b*) as an output from the trained machine learning model (MLMt),
- providing an inverse transformation operation (T-1(a, b)), wherein the inverse transformation operation (T-1(a, b)) comprises the at least one transformation parameter (a, b), wherein the inverse transformation operation (T-1(a, b)) reverses the reversible transformation operation (T(a, b)),
- generating a corrected second MRI image (I2*c) based on the new second MRI image (I2*) and the at least one predicted value (v^a*, v^b*), wherein generating the corrected second MRI image (I2*c) comprises: applying the inverse transformation operation (T-1(a, b)) with the at least one predicted value (v^a*, v^b*) of the at least one transformation parameter (a, b) to the new second MRI image (I2*),
- outputting and/or storing the corrected second MRI image (I2*c) and/or transmitting the corrected second MRI image (I2*c) to a separate computer system and/or using the corrected second MRI image (I2*c) for generating a synthetic MRI image of the examination region of the new examination object.
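Again for illustration only, applying the trained model at inference time (claim 2) could look as follows; `mlm_t` is the trained model from the sketch above, and the inverse of the assumed affine transformation is used:

```python
# Illustrative sketch of claim 2: predict (v^a*, v^b*) for a new data set
# (I1*, I2*) and apply the inverse transformation T^-1(a, b) to I2*.
import torch

def inverse_transform(i2_star, va_hat, vb_hat):
    """Inverse transformation: Ic = (I - b) / a (inverse of I_T = a * I + b)."""
    return (i2_star - vb_hat) / va_hat

@torch.no_grad()
def correct(mlm_t, i1_star, i2_star):
    x = torch.stack([i1_star, i2_star]).unsqueeze(0)
    va_hat, vb_hat = mlm_t(x).squeeze(0)               # predicted values (v^a*, v^b*)
    return inverse_transform(i2_star, va_hat, vb_hat)  # corrected image I2*c
```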
3. The computer-implemented method of claim 2, further comprising:
- generating a third MRI image (I3*) based on the new first MRI image (I1*) and the corrected second MRI image (I2*c), wherein the new first MRI image (I1*) represents the examination region of the new examination object without contrast agent, wherein the corrected second MRI image (I2*c) represents the examination region of the new examination object after application of an amount of a contrast agent, wherein generating the third MRI image (I3*) comprises: subtracting the new first MRI image (I1*) from the corrected second MRI image (I2*c), optionally multiplying the result of the subtraction by a frequency-dependent weighting function (WF) in frequency space,
- generating a fourth MRI image (I4*) based on the third MRI image (I3*) and the new first MRI image (I1*) and/or the corrected second MRI image (I2*c), wherein generating the fourth MRI image (I4*) comprises: multiplying the third MRI image (I3*) by a gain factor and adding the result of the multiplication to the new first MRI image (I1*) and/or the corrected second MRI image (I2*c), wherein the gain factor is a positive or negative real number,
- in the case that the fourth MRI image (I4*) is a representation in frequency space: converting the fourth MRI image (I4*) into a fourth MRI image (I4*) in real space,
- outputting and/or storing the fourth MRI image (I4*) and/or transmitting the fourth MRI image (I4*) to a separate computer system.
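The synthesis step of claim 3 amounts to contrast amplification of the (optionally frequency-weighted) difference image. A minimal NumPy sketch follows, assuming an FFT-based weighting with a hypothetical Gaussian low-pass as the weighting function WF and an arbitrary gain factor; neither choice is mandated by the claim:

```python
# Illustrative sketch of claim 3 (assumptions: Gaussian low-pass as WF,
# gain factor 4.0; both are arbitrary choices of this sketch).
import numpy as np

def synthesize(i1_star, i2_star_c, gain=4.0, sigma=0.15):
    # Third MRI image I3*: subtract I1* from I2*c, weight in frequency space.
    diff_k = np.fft.fftshift(np.fft.fft2(i2_star_c - i1_star))
    ny, nx = diff_k.shape
    ky, kx = np.indices(diff_k.shape)
    r2 = ((ky - ny / 2) / ny) ** 2 + ((kx - nx / 2) / nx) ** 2
    wf = np.exp(-r2 / (2.0 * sigma**2))               # assumed weighting function WF
    i3_star = np.real(np.fft.ifft2(np.fft.ifftshift(wf * diff_k)))
    # Fourth MRI image I4*: amplify I3* and add it back to I1* (in real space).
    return i1_star + gain * i3_star
```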
4. The computer-implemented method of any one of claims 1 to 3, wherein the reversible transformation operation (T(a, b)) results in the second intensity value distribution (IVD2) of the second MRI image (I2) being stretched or compressed and/or shifted.
5. The computer-implemented method of any one of claims 1 to 4, wherein the reversible transformation operation (T(a, b)) has the following mathematical equation (1):

IT = a ∙ I + b     (1)

wherein I is an intensity value of an image element of the second MRI image (I2), IT is the intensity value of the image element of the transformed second MRI image (I2T), and a and b are the transformation parameters (a, b).
6. The computer-implemented method of claim 5, wherein the inverse transformation operation (T-1(a, b)) has the following mathematical equation (2):

Ic = (I − b) / a     (2)
wherein Ic is an intensity value of an image element of the corrected second MRI image (I2*c), I is the intensity value of the image element of the new second MRI image (I2*), and a and b are the transformation parameters (a, b).
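A short worked example (values chosen arbitrarily) confirms that equation (2) exactly reverses equation (1):

```python
# Worked example for equations (1) and (2).
a, b = 1.5, 0.1
I = 0.8
I_T = a * I + b         # equation (1): I_T = 1.3
I_c = (I_T - b) / a     # equation (2): I_c = 0.8, the original value
assert abs(I_c - I) < 1e-12
```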
7. The computer-implemented method of any one of claims 1 to 6, wherein the machine learning model (MLM, MLMt) is or comprises a convolutional neural network.
8. The computer-implemented method of any one of claims 1 to 7, wherein the machine learning model (MLM, MLMt) is or comprises a vision transformer.
9. The computer-implemented method of any one of claims 1 to 8, wherein the machine learning model (MLM, MLMt) is or comprises a regression model.
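For illustration, a machine learning model in the sense of claims 7 and 9 could be a small convolutional network with a regression head; the architecture below is an arbitrary assumption of this sketch, not a configuration taken from the disclosure:

```python
# Illustrative convolutional regression model (claims 7 and 9): two input
# channels (I1, I2T), two scalar outputs (the predicted values v^a, v^b).
import torch.nn as nn

class ParameterRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> input-size independent
        )
        self.head = nn.Linear(32, 2)   # regression head for (v^a, v^b)

    def forward(self, x):              # x: (batch, 2, H, W)
        return self.head(self.features(x).flatten(1))
```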
10. The computer-implemented method of any one of claims 1 to 9, wherein the first state and the second state indicate
- amounts of a contrast agent administered to the examination object and/or
- time points in a dynamic MRI examination and/or
- contrast agents used to generate the first MRI image (I1), the second MRI image (I2), the new first MRI image (I1*) and the new second MRI image (I2*) and/or
- measurement parameters used to generate the first MRI image (I1), the second MRI image (I2), the new first MRI image (I1*) and the new second MRI image (I2*).
11. The computer-implemented method of any one of claims 1 to 10, wherein
- the first MRI image (I1) represents the examination region of the examination object without contrast agent, the second MRI image (I2) represents the examination region of the examination object after application of an amount of a contrast agent, the new first MRI image (I1*) represents the examination region of the new examination object without contrast agent, and the new second MRI image (I2*) represents the examination region of the new examination object after application of the amount of the contrast agent, or
- the first MRI image (I1) represents the examination region of the examination object after application of a first amount of a contrast agent, the second MRI image (I2) represents the examination region of the examination object after application of a second amount of the contrast agent, the new first MRI image (I1*) represents the examination region of the new examination object after application of the first amount of the contrast agent, and the new second MRI image (I2*) represents the examination region of the new examination object after application of the second amount of the contrast agent, or
- the first MRI image (I1) represents the examination region of the examination object at a first point in time before or after application of a contrast agent, the second MRI image (I2) represents the examination region of the examination object at a second point in time before or after application of the contrast agent, the new first MRI image (I1*) represents the examination region of the new examination object at the first point in time before or after application of the contrast agent, and the new second MRI image (I2*) represents the examination region of the new examination object at the second point in time before or after application of the contrast agent, or
- the first MRI image (I1) represents the examination region of the examination object at a first point in time after application of a first amount of a contrast agent, the second MRI image (I2) represents the examination region of the examination object at a second point in time after application of a second amount of the contrast agent, the new first MRI image (I1*) represents the examination region of the new examination object at the first point in time after application of the first amount of the contrast agent, and the new second MRI image (I2*) represents the examination region of the new examination object at the second point in time after application of the second amount of the contrast agent, or
- the first MRI image (I1) represents the examination region of the examination object after application of an amount of a first contrast agent, the second MRI image (I2) represents the examination region of the examination object after application of an amount of a second contrast agent, the new first MRI image (I1*) represents the examination region of the new examination object after application of the amount of the first contrast agent, and the new second MRI image (I2*) represents the examination region of the new examination object after application of the amount of the second contrast agent, or
- the first MRI image (I1) represents the examination region of the examination object at a first point in time of a dynamic MRI examination, the second MRI image (I2) represents the examination region of the examination object at a second point in time of the dynamic MRI examination, the new first MRI image (I1*) represents the examination region of the new examination object at the first point in time of the dynamic MRI examination, and the new second MRI image (I2*) represents the examination region of the new examination object at the second point in time of the dynamic MRI examination, or
- the first MRI image (I1) represents the examination region of the examination object as a result of an examination with first measurement parameters, the second MRI image (I2) represents the examination region of the examination object as a result of an examination with second measurement parameters, the new first MRI image (I1*) represents the examination region of the new examination object as a result of an examination with the first measurement parameters, and the new second MRI image (I2*) represents the examination region of the new examination object as a result of an examination with the second measurement parameters.
12. A non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processing unit (20) of a computer system (1), causes the computer system (1) to perform the computer-implemented method of any one of claims 1 to 11.
13. A computer system (1) comprising:
a processing unit (20); and a memory (50) storing a computer program (60) configured to perform, when executed by the processing unit (20), the computer-implemented method of any one of claims 1 to 11.
14. Use of a contrast agent in a magnetic resonance imaging examination of an examination region of an examination object, the magnetic resonance imaging examination comprising:
- providing a trained machine learning model (MLMt), wherein the trained machine learning model (MLMt) is configured and was trained to determine at least one predicted value for at least one transformation parameter (a, b) based on: model parameters, a first MRI image, and a second MRI image or their intensity value distributions, wherein training of the machine learning model (MLM) comprised:
  providing a plurality of data sets, wherein each data set (DS) comprised at least two MRI images, a first MRI image (I1) and a second MRI image (I2),
  o wherein the first MRI image (I1) represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image (I1) is characterized by a first intensity value distribution (IVD1),
  o wherein the second MRI image (I2) represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image (I2) is characterized by a second intensity value distribution (IVD2),
  providing a reversible transformation operation, the reversible transformation operation comprising the at least one transformation parameter (a, b), wherein the reversible transformation operation performs a transformation of the second intensity value distribution (IVD2) of the second MRI image (I2) when applied to the second MRI image (I2) and/or to the second intensity value distribution (IVD2),
  generating training data based on the plurality of data sets, wherein generating training data comprised, for each data set (DS) of the plurality of data sets:
  o sampling of at least one value (va, vb) for the at least one transformation parameter (a, b) of the reversible transformation operation,
  o generating a transformed second MRI image (I2T) and/or a transformed second intensity value distribution by applying the reversible transformation operation with the at least one sampled value (va, vb) of the at least one transformation parameter (a, b) to the second MRI image (I2) and/or to the second intensity value distribution (IVD2),
  training the machine learning model (MLM) on the training data, wherein the training comprised, for each data set (DS) of the plurality of data sets:
  o inputting the first MRI image (I1) and the transformed second MRI image (I2T) and/or their intensity value distributions into the machine learning model (MLM),
  o receiving at least one predicted value (v^a, v^b) for the at least one transformation parameter (a, b),
  o determining a deviation between the at least one value (va, vb) of the at least one transformation parameter (a, b) and the at least one predicted value (v^a, v^b) of the at least one transformation parameter (a, b),
  o reducing the deviation by modifying model parameters of the machine learning model (MLM),
- receiving a new data set (DS*), wherein the new data set (DS*) comprises at least two new MRI images, a new first MRI image (I1*) and a new second MRI image (I2*),
  wherein the new first MRI image (I1*) represents the examination region of a new examination object at the first point in time before or after application of the amount of the contrast agent, wherein the new first MRI image (I1*) is characterized by a first intensity value distribution,
  wherein the new second MRI image (I2*) represents the examination region of the new examination object at the second point in time before or after application of the amount of the contrast agent, wherein the new second MRI image (I2*) is characterized by a second intensity value distribution,
- inputting the new first MRI image (I1*) and the new second MRI image (I2*) and/or their intensity value distributions into the trained machine learning model (MLMt),
- receiving at least one predicted value (v^a*, v^b*) as an output from the trained machine learning model (MLMt),
- providing an inverse transformation operation, wherein the inverse transformation operation comprises the at least one transformation parameter (a, b), wherein the inverse transformation operation reverses the reversible transformation operation used in training the machine learning model (MLM),
- generating a corrected second MRI image (I2*c) based on the new second MRI image (I2*) and the at least one predicted value (v^a*, v^b*), wherein generating the corrected second MRI image (I2*c) comprises: applying the inverse transformation operation with the at least one predicted value (v^a*, v^b*) of the at least one transformation parameter (a, b) to the new second MRI image (I2*),
- outputting and/or storing the corrected second MRI image (I2*c) and/or transmitting the corrected second MRI image (I2*c) to a separate computer system.
15. Use according to claim 14, wherein the contrast agent comprises:
- a Gd3+ complex of a compound of the formula (I)
  [structural formula (I) not reproduced here],
  where Ar is a group selected from [aryl groups whose structural formulas are not reproduced here], where # is the linkage to X,
X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue,
R1, R2 and R3 are each independently a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3,
R4 is a group selected from C2-C4 alkoxy, (H3C-CH2)-O-(CH2)2-O-, (H3C-CH2)-O-(CH2)2-O-(CH2)2-O- and (H3C-CH2)-O-(CH2)2-O-(CH2)2-O-(CH2)2-O-,
R5 is a hydrogen atom, and
R6 is a hydrogen atom,
or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof, or
- a Gd3+ complex of a compound of the formula (II)
  [structural formula (II) not reproduced here],
  where Ar is a group selected from [aryl groups whose structural formulas are not reproduced here], where # is the linkage to X,
X is a group selected from CH2, (CH2)2, (CH2)3, (CH2)4 and *-(CH2)2-O-CH2-#, where * is the linkage to Ar and # is the linkage to the acetic acid residue,
R7 is a hydrogen atom or a group selected from C1-C3 alkyl, -CH2OH, -(CH2)2OH and -CH2OCH3;
R8 is a group selected from C2-C4 alkoxy, (H3C-CH2O)-(CH2)2-O-, (H3C-CH2O)-(CH2)2-O-(CH2)2-O- and (H3C-CH2O)-(CH2)2-O-(CH2)2-O-(CH2)2-O-;
R9 and R10 independently represent a hydrogen atom;
or a stereoisomer, tautomer, hydrate, solvate or salt thereof, or a mixture thereof,
or the contrast agent comprises one of the following substances:
- gadolinium(III) 2-[4,7,10-tris(carboxymethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetic acid,
- gadolinium(III) ethoxybenzyldiethylenetriaminepentaacetic acid,
- gadolinium(III) 2-[3,9-bis[1-carboxylato-4-(2,3-dihydroxypropylamino)-4-oxobutyl]-3,6,9,15-tetrazabicyclo[9.3.1]pentadeca-1(15),11,13-trien-6-yl]-5-(2,3-dihydroxypropylamino)-5-oxopentanoate,
- dihydrogen [(±)-4-carboxy-5,8,11-tris(carboxymethyl)-1-phenyl-2-oxa-5,8,11-triazatridecan-13-oato(5-)]gadolinate(2-),
- tetragadolinium [4,10-bis(carboxylatomethyl)-7-{3,6,12,15-tetraoxo-16-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]-9,9-bis({[({2-[4,7,10-tris(carboxylatomethyl)-1,4,7,10-tetraazacyclododecan-1-yl]propanoyl}amino)acetyl]amino}methyl)-4,7,11,14-tetraazaheptadecan-2-yl}-1,4,7,10-tetraazacyclododecan-1-yl]acetate,
- 2,2',2''-(10-{1-carboxy-2-[2-(4-ethoxyphenyl)ethoxy]ethyl}-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
- gadolinium 2,2',2''-{10-[1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
- gadolinium 2,2',2''-{10-[(1R)-1-carboxy-2-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}ethyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
- gadolinium (2S,2'S,2''S)-2,2',2''-{10-[(1S)-1-carboxy-4-{4-[2-(2-ethoxyethoxy)ethoxy]phenyl}butyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}tris(3-hydroxypropanoate),
- gadolinium 2,2',2''-{10-[(1S)-4-(4-butoxyphenyl)-1-carboxybutyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
- gadolinium(III) 5,8-bis(carboxylatomethyl)-2-[2-(methylamino)-2-oxoethyl]-10-oxo-2,5,8,11-tetraazadodecane-1-carboxylate hydrate,
- gadolinium(III) 2-[4-(2-hydroxypropyl)-7,10-bis(2-oxido-2-oxoethyl)-1,4,7,10-tetrazacyclododec-1-yl]acetate,
- gadolinium(III) 2,2',2''-(10-((2R,3S)-1,3,4-trihydroxybutan-2-yl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl)triacetate,
- gadolinium-2,2',2''-{(2S)-10-(carboxymethyl)-2-[4-(2-ethoxyethoxy)benzyl]-1,4,7,10-tetraazacyclododecane-1,4,7-triyl}triacetate,
- gadolinium-2,2',2''-[10-(carboxymethyl)-2-(4-ethoxybenzyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triyl]triacetate.
16. A contrast agent for use in a magnetic resonance imaging examination of an examination region of an examination object, the magnetic resonance imaging examination comprising:
- providing a trained machine learning model (MLMt), wherein the trained machine learning model (MLMt) is configured and was trained to determine at least one predicted value for at least one transformation parameter (a, b) based on: model parameters, a first MRI image, and a second MRI image or their intensity value distributions, wherein training of the machine learning model (MLM) comprised:
  providing a plurality of data sets, wherein each data set (DS) comprised at least two MRI images, a first MRI image (I1) and a second MRI image (I2),
  o wherein the first MRI image (I1) represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image (I1) is characterized by a first intensity value distribution (IVD1),
  o wherein the second MRI image (I2) represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image (I2) is characterized by a second intensity value distribution (IVD2),
  providing a reversible transformation operation (T(a, b)), the reversible transformation operation (T(a, b)) comprising the at least one transformation parameter (a, b), wherein the reversible transformation operation (T(a, b)) performs a transformation of the second intensity value distribution (IVD2) of the second MRI image (I2) when applied to the second MRI image (I2) and/or to the second intensity value distribution (IVD2),
  generating training data based on the plurality of data sets, wherein generating training data comprised, for each data set (DS) of the plurality of data sets:
  o sampling of at least one value (va, vb) for the at least one transformation parameter (a, b) of the reversible transformation operation (T(a, b)),
  o generating a transformed second MRI image (I2T) and/or a transformed second intensity value distribution by applying the reversible transformation operation (T(a, b)) with the at least one sampled value (va, vb) of the at least one transformation parameter (a, b) to the second MRI image (I2) and/or to the second intensity value distribution (IVD2),
  training the machine learning model (MLM) on the training data, wherein the training comprised, for each data set (DS) of the plurality of data sets:
  o inputting the first MRI image (I1) and the transformed second MRI image (I2T) and/or their intensity value distributions into the machine learning model (MLM),
  o receiving at least one predicted value (v^a, v^b) for the at least one transformation parameter (a, b),
  o determining a deviation between the at least one value (va, vb) of the at least one transformation parameter (a, b) and the at least one predicted value (v^a, v^b) of the at least one transformation parameter (a, b),
  o reducing the deviation by modifying model parameters of the machine learning model (MLM),
- receiving a new data set (DS*), wherein the new data set (DS*) comprises at least two new MRI images, a new first MRI image (I1*) and a new second MRI image (I2*),
  wherein the new first MRI image (I1*) represents the examination region of a new examination object at the first point in time before or after application of the amount of the contrast agent, wherein the new first MRI image (I1*) is characterized by a first intensity value distribution,
  wherein the new second MRI image (I2*) represents the examination region of the new examination object at the second point in time before or after application of the amount of the contrast agent, wherein the new second MRI image (I2*) is characterized by a second intensity value distribution,
- inputting the new first MRI image (I1*) and the new second MRI image (I2*) and/or their intensity value distributions into the trained machine learning model (MLMt),
- receiving at least one predicted value (v^a*, v^b*) as an output from the trained machine learning model (MLMt),
- providing an inverse transformation operation (T-1(a, b)), wherein the inverse transformation operation (T-1(a, b)) comprises the at least one transformation parameter (a, b), wherein the inverse transformation operation (T-1(a, b)) reverses the reversible transformation operation (T(a, b)) used in training the machine learning model (MLM),
- generating a corrected second MRI image (I2*c) based on the new second MRI image (I2*) and the at least one predicted value (v^a*, v^b*), wherein generating the corrected second MRI image (I2*c) comprises: applying the inverse transformation operation (T-1(a, b)) with the at least one predicted value (v^a*, v^b*) of the at least one transformation parameter (a, b) to the new second MRI image (I2*),
- outputting and/or storing the corrected second MRI image (I2*c) and/or transmitting the corrected second MRI image (I2*c) to a separate computer system.
17. A kit comprising a contrast agent and a computer program that, when executed by a processing unit of a computer system, causes the computer system to execute the following steps:
- providing a trained machine learning model (MLMt), wherein the trained machine learning model (MLMt) is configured and was trained to determine at least one predicted value for at least one transformation parameter (a, b) based on: model parameters, a first MRI image, and a second MRI image or their intensity value distributions, wherein training of the machine learning model (MLM) comprised:
  providing a plurality of data sets, wherein each data set (DS) comprised at least two MRI images, a first MRI image (I1) and a second MRI image (I2),
  o wherein the first MRI image (I1) represents an examination region of an examination object at a first point in time before or after application of an amount of the contrast agent, wherein the first MRI image (I1) is characterized by a first intensity value distribution (IVD1),
  o wherein the second MRI image (I2) represents the examination region of the examination object at a second point in time before or after application of the amount of the contrast agent, wherein the second MRI image (I2) is characterized by a second intensity value distribution (IVD2),
  providing a reversible transformation operation (T(a, b)), the reversible transformation operation (T(a, b)) comprising the at least one transformation parameter (a, b), wherein the reversible transformation operation (T(a, b)) performs a transformation of the second intensity value distribution (IVD2) of the second MRI image (I2) when applied to the second MRI image (I2) and/or to the second intensity value distribution (IVD2),
  generating training data based on the plurality of data sets, wherein generating training data comprised, for each data set (DS) of the plurality of data sets:
  o sampling of at least one value (va, vb) for the at least one transformation parameter (a, b) of the reversible transformation operation (T(a, b)),
  o generating a transformed second MRI image (I2T) and/or a transformed second intensity value distribution by applying the reversible transformation operation (T(a, b)) with the at least one sampled value (va, vb) of the at least one transformation parameter (a, b) to the second MRI image (I2) and/or the second intensity value distribution (IVD2),
  training the machine learning model (MLM) on the training data, wherein the training comprised, for each data set (DS) of the plurality of data sets:
  o inputting the first MRI image (I1) and the transformed second MRI image (I2T) and/or their intensity value distributions into the machine learning model (MLM),
  o receiving at least one predicted value (v^a, v^b) for the at least one transformation parameter (a, b),
  o determining a deviation between the at least one value (va, vb) of the at least one transformation parameter (a, b) and the at least one predicted value (v^a, v^b) of the at least one transformation parameter (a, b),
  o reducing the deviation by modifying model parameters of the machine learning model (MLM),
- receiving a new data set (DS*), wherein the new data set (DS*) comprises at least two new MRI images, a new first MRI image (I1*) and a new second MRI image (I2*),
  wherein the new first MRI image (I1*) represents the examination region of a new examination object at the first point in time before or after application of the amount of the contrast agent, wherein the new first MRI image (I1*) is characterized by a first intensity value distribution,
  wherein the new second MRI image (I2*) represents the examination region of the new examination object at the second point in time before or after application of the amount of the contrast agent, wherein the new second MRI image (I2*) is characterized by a second intensity value distribution,
- inputting the new first MRI image (I1*) and the new second MRI image (I2*) and/or their intensity value distributions into the trained machine learning model (MLMt),
- receiving at least one predicted value (v^a*, v^b*) as an output from the trained machine learning model (MLMt),
- providing an inverse transformation operation (T-1(a, b)), wherein the inverse transformation operation (T-1(a, b)) comprises the at least one transformation parameter (a, b), wherein the inverse transformation operation (T-1(a, b)) reverses the reversible transformation operation (T(a, b)) used in training the machine learning model (MLM),
- generating a corrected second MRI image (I2*c) based on the new second MRI image (I2*) and the at least one predicted value (v^a*, v^b*), wherein generating the corrected second MRI image (I2*c) comprises: applying the inverse transformation operation (T-1(a, b)) with the at least one predicted value (v^a*, v^b*) of the at least one transformation parameter (a, b) to the new second MRI image (I2*),
- outputting and/or storing the corrected second MRI image (I2*c) and/or transmitting the corrected second MRI image (I2*c) to a separate computer system.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP24163958 | 2024-03-15 | | |
| EP24163958.2 | 2024-03-15 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025190828A1 (en) | 2025-09-18 |
Family
ID=90366098
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/056377 (WO2025190828A1, pending) | Correction of an mri image | 2024-03-15 | 2025-03-10 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025190828A1 (en) |
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2007042504A2 (en) | 2005-10-07 | 2007-04-19 | Guerbet | Compounds comprising a biological target recognizing part, coupled to a signal part capable of complexing gallium |
| WO2016193190A1 (en) | 2015-06-04 | 2016-12-08 | Bayer Pharma Aktiengesellschaft | New gadolinium chelate compounds for use in magnetic resonance imaging |
| WO2020030618A1 (en) | 2018-08-06 | 2020-02-13 | Bracco Imaging Spa | Gadolinium bearing pcta-based contrast agents |
| WO2022013454A1 (en) | 2020-07-17 | 2022-01-20 | Guerbet | Method for preparing a chelating ligand derived from pcta |
| EP4044120A1 (en) * | 2021-02-15 | 2022-08-17 | Koninklijke Philips N.V. | Training data synthesizer for contrast enhancing machine learning systems |
| WO2022194777A1 (en) | 2021-03-15 | 2022-09-22 | Bayer Aktiengesellschaft | New contrast agent for use in magnetic resonance imaging |
| EP4332601A1 (en) | 2022-09-05 | 2024-03-06 | Bayer AG | Generation of artificial contrast agent-enhanced radiological recordings |
| WO2024052156A1 (en) * | 2022-09-05 | 2024-03-14 | Bayer Aktiengesellschaft | Generation of artificial contrast-enhanced radiological images |
Non-Patent Citations (9)
| Title |
|---|
| F.J. HARRIS: "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform", PROCEEDINGS OF THE IEEE, vol. 66, no. 1, 1978, pages 51 - 83, Retrieved from the Internet <URL:https://docs.scipy.org/doc/scipy/reference/signal.windows.html> |
| GYUTAEK OH ET AL: "Unpaired Deep Learning for Pharmacokinetic Parameter Estimation from Dynamic Contrast-Enhanced MRI", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 7 June 2023 (2023-06-07), XP091532572 * |
| H. KIM: "Variability in Quantitative DCE MRI: Sources and Solution", J NAT SCI., vol. 4, no. 1, 2018, pages e484 |
| HOSSBACH JULIAN ET AL: "Deep learning-based motion quantification from k-space for fast model-based magnetic resonance imaging motion correction", MEDICAL PHYSICS., vol. 50, no. 4, 13 December 2022 (2022-12-13), US, pages 2148 - 2161, XP093194127, ISSN: 0094-2405, DOI: 10.1002/mp.16119 * |
| J. LOHRKE ET AL.: "Preclinical Profile of Gadoquatrane: A Novel Tetrameric, Macrocyclic High Relaxivity Gadolinium-Based Contrast Agent", INVEST RADIOL., vol. 57, no. 10, 2022, pages 629 - 638 |
| JULIAN HOSSBACH ET AL: "Think outside the box: Exploiting the imaging workflow for Deep Learning based motion estimation and correction", PROCEEDINGS OF THE JOINT ANNUAL MEETING ISMRM-ESMRMB 2022 & ISMRT ANNUAL MEETING, LONDON, UK, 07-12 MAY 2022, ISMRM, 2030 ADDISON STREET, 7TH FLOOR, BERKELEY, CA 94704 USA, no. 1952, 22 April 2022 (2022-04-22), XP040728500 * |
| K. M. M. PRABHU: "Window Functions and Their Applications in Signal Processing", 2014, CRC PRESS |
| K. PARMAR, J. PARKER, D. GUZZETTI: "Applications of Regression Vision Transformers for Autonomous Spacecraft Optical Navigation in Simulated Orbital Environments", AAS/AIAA conference, 20 September 2023 (2023-09-20) |
| L. ALZUBAIDI ET AL.: "Review of deep learning: concepts, CNN architectures, challenges, applications, future directions", J BIG DATA, vol. 8, 2021, pages 53 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Pérez-García et al. | | TorchIO: a Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning |
| Gibson et al. | | NiftyNet: a deep-learning platform for medical imaging |
| Usman Akbar et al. | | Beware of diffusion models for synthesizing medical images—a comparison with GANs in terms of memorizing brain MRI and chest x-ray images |
| US12175730B2 | | Image learning method, apparatus, program, and recording medium using generative adversarial network |
| Aghabiglou et al. | | Projection-Based cascaded U-Net model for MR image reconstruction |
| US20240303973A1 | | Actor-critic approach for generating synthetic images |
| Bhadra et al. | | Medical image reconstruction with image-adaptive priors learned by use of generative adversarial networks |
| Göçeri et al. | | Fully automated liver segmentation from SPIR image series |
| Spiclin et al. | | Groupwise registration of multimodal images by an efficient joint entropy minimization scheme |
| Hu et al. | | Aorta-aware GAN for non-contrast to artery contrasted CT translation and its application to abdominal aortic aneurysm detection |
| Zhou et al. | | Learning stochastic object models from medical imaging measurements by use of advanced ambient generative adversarial networks |
| Chang et al. | | Brain MR image restoration using an automatic trilateral filter with GPU-based acceleration |
| US9390549B2 | | Shape data generation method and apparatus |
| Saeed et al. | | GGLA-NeXtE2NET: a dual-branch ensemble network with gated global-local attention for enhanced brain tumor recognition |
| Zhong et al. | | Deep action learning enables robust 3D segmentation of body organs in various CT and MRI images |
| Fourcade et al. | | Deformable image registration with deep network priors: a study on longitudinal PET images |
| Ifty et al. | | Implementation of liver segmentation from computed tomography (CT) images using deep learning |
| EP4567715A1 | | Generating synthetic representations |
| WO2025190828A1 | | Correction of an mri image |
| Freiman et al. | | A curvelet-based patient-specific prior for accurate multi-modal brain image rigid registration |
| Orłowski et al. | | Efficient computation of Hessian-based enhancement filters for tubular structures in 3D images |
| CN119784669A | | A high-precision detection method based on automatic processing of multi-module adenoid images |
| Klein et al. | | Multimodal image registration by edge attraction and regularization using a B-spline grid |
| van Opbroek et al. | | Feature-space transformation improves supervised segmentation across scanners |
| Tummala et al. | | Machine learning framework for fully automatic quality checking of rigid and affine registrations in big data brain MRI |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25711461; Country of ref document: EP; Kind code of ref document: A1 |