WO2011037063A1 - Projection image generation method and magnetic resonance imaging apparatus - Google Patents
Projection image generation method and magnetic resonance imaging apparatus
- Publication number
- WO2011037063A1 PCT/JP2010/066045 JP2010066045W WO 2011037063 A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- template
- projection image
- image data
- image generation
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R33/00—Arrangements or instruments for measuring magnetic variables
- G01R33/20—Arrangements or instruments for measuring magnetic variables involving magnetic resonance
- G01R33/44—Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
- G01R33/48—NMR imaging systems
- G01R33/54—Signal processing systems, e.g. using pulse sequences ; Generation or control of pulse sequences; Operator console
- G01R33/56—Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
- G01R33/5608—Data processing and visualization specially adapted for MR, e.g. for feature analysis and pattern recognition on the basis of measured MR data, segmentation of measured MR data, edge contour detection on the basis of measured MR data, for enhancing measured MR data in terms of signal-to-noise ratio by means of noise filtering or apodization, for enhancing measured MR data in terms of resolution by means for deblurring, windowing, zero filling, or generation of gray-scaled images, colour-coded images or images displaying vectors instead of pixels
Definitions
- the present invention relates to a technique for generating a projection image from three-dimensional image data acquired by a medical image acquisition apparatus such as a magnetic resonance imaging apparatus (hereinafter referred to as an MRI apparatus), and in particular to a technique for creating a MIP (Maximum Intensity Projection) image from data obtained by the magnetic resonance angiography (MRA) method.
- a projection image generated by MIP is referred to as a MIP image.
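The MIP operation itself is simple: for each ray through the volume, only the maximum voxel value is kept. A minimal sketch, assuming NumPy and a toy volume (the function name and array shapes are illustrative, not from the patent):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: keep the brightest voxel along one axis."""
    return volume.max(axis=axis)

# Toy 3D volume: one bright voxel (e.g. flowing blood in MRA) dominates the ray.
vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 5.0
axial_mip = mip(vol, axis=0)   # project along z -> a 3x3 axial MIP image
```

Because the maximum is taken per ray, any high-signal tissue along the ray (such as fat) masks the vessel signal, which is exactly the visibility problem the clipping processing below addresses.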
- when a tissue with high signal intensity, such as fat, is present in the imaging region, signals from such tissue are mixed into the MIP image, and the visibility of the target blood vessels and blood flow is reduced.
- when the imaging region is the head, fat such as subcutaneous fat outside the skull and fat in the orbits is distributed throughout the head, so an image in which fat is mixed is obtained regardless of the projection direction.
- clipping processing that removes unnecessary tissue other than the target tissue from the 3D image data is essential.
- conventionally, this clipping process is performed manually, with the operator confirming the position of the unnecessary tissue (here, fat) on the projection image (here, the MIP image) (see, for example, Non-Patent Document 1).
- the operator refers to a plurality of projection images projected in different directions and removes the unnecessary tissue through repeated trial and error. Such an operation is complicated and time-consuming, and the quality of the resulting images depends on the operator's proficiency.
- the present invention has been made in view of the above circumstances, and its object is to provide a technique that reduces the operator's workload when generating a projection image from three-dimensional image data and obtains a high-quality projection image regardless of the operator's skill.
- in the present invention, an unnecessary tissue region is specified by a predetermined procedure using a template prepared for each imaging region and each unnecessary tissue, and the specified region is removed.
- specifically, there is provided a projection image generation method for generating a projection image from three-dimensional image data acquired by a magnetic resonance imaging apparatus, wherein an unnecessary tissue region is specified on the three-dimensional image data and removed, and a projection image is generated from the removed data.
- there is also provided a magnetic resonance imaging apparatus comprising: imaging means for acquiring three-dimensional image data; unnecessary tissue region specifying means for specifying an unnecessary tissue region on the three-dimensional image data; removal means for removing the data of the specified unnecessary tissue region from the three-dimensional image data; and projection image generation means for generating a projection image from the removed three-dimensional image data.
- FIG. 1 is a functional block diagram of an example of a magnetic resonance imaging (MRI) apparatus 100 of the present embodiment.
- the MRI apparatus 100 of the present embodiment comprises a magnet 101 that generates a static magnetic field, a bed 103 on which a subject 102 such as a patient is placed, an RF coil 104 that irradiates the subject 102 with a high-frequency magnetic field (hereinafter referred to as RF) and detects echo signals, gradient magnetic field generating coils 105, 106, and 107 that generate the slice selection, phase encoding, and frequency encoding gradient magnetic fields in the X, Y, and Z directions, respectively, an RF power source 108 that supplies power to the RF coil 104, gradient magnetic field power sources 109, 110, and 111 that supply power to the gradient magnetic field generating coils 105, 106, and 107, a synthesizer 112, a modulator 113, an amplifier 114, an echo receiver 115, a sequencer 116, and a computer 120.
- the computer 120 includes a CPU 121, a memory (not shown), a storage device 123, a display device 124, and an input device 125. Further, an external storage device may be provided.
- the sequencer 116 transmits commands to the gradient magnetic field power sources 109, 110, and 111 according to the imaging conditions set by the operator via the input device 125 of the computer 120 and the pulse sequence stored in the storage device 123 of the computer 120, and gradient magnetic fields in the respective directions are generated by the gradient coils 105, 106, and 107.
- an RF waveform is generated by the synthesizer 112 and the modulator 113, amplified by the RF power source 108 to produce an RF pulse, and irradiated to the subject by the RF coil 104.
- the echo signal generated from the subject 102 is received by the RF coil 104, amplified by the amplifier 114, and A / D converted and detected by the receiver 115.
- the center frequency used as a reference for detection is stored in the storage device 123, and the sequencer 116 sets it in the echo receiver 115.
- the detected echo signal is sent to the computer 120, subjected to image reconstruction processing, and the result is displayed on the display device 124.
- the RF coil 104 is used for both transmission and reception, but the present invention is not limited to this.
- a transmission coil that irradiates RF and a reception coil that detects an echo signal may be provided separately.
- as the receiving coil, a plurality of receiving coils may be arranged and used in parallel.
- a three-dimensional region of the subject 102 is imaged by the MRA method or the like, a projection image such as an MIP image is generated from the reconstructed three-dimensional image data, and displayed on the display device 124.
- a clipping process for removing unnecessary tissue is automatically performed.
- in the following description, an image obtained by the MRA method (MRA image) will be used as an example of the three-dimensional image data, and a MIP image as an example of the projection image.
- FIG. 2 is a functional block diagram of the computer 120 of this embodiment.
- the computer 120 of the present embodiment includes an imaging unit 280 that realizes imaging by operating each unit according to the pulse sequence stored in advance in the storage device 123 and the imaging conditions input by the operator, an image reconstruction unit 290 that obtains an MRA image from the acquired echo signals, and a display image generation unit 200 that performs display image generation processing, including clipping processing, on the MRA image to generate a MIP image.
- the display image generation unit 200 includes a projection image generation unit 210 that generates a projection image from the three-dimensional image data, an unnecessary region specifying unit 220 that performs map processing for specifying a predetermined unnecessary tissue region in the MRA image, and an unnecessary region removing unit 230 that performs removal processing for removing the region specified by the unnecessary region specifying unit 220 from the MRA image.
- the display image generation unit 200 performs the display image generation processing by operating the projection image generation unit 210, the unnecessary region specifying unit 220, and the unnecessary region removing unit 230 in a predetermined procedure, using a predetermined template for each imaging target region and each unnecessary tissue. To this end, the computer 120 of this embodiment includes a storage unit 300 that holds the various data necessary for the display image generation processing.
- the storage unit 300 includes a template database (template DB) 310 that stores the templates usable for each imaging target region and unnecessary tissue, an algorithm database (algorithm DB) 320 that stores the clipping processing procedures (algorithms) for each imaging target region and unnecessary tissue, and a display image database (display image DB) 330 that holds the display image data generated by the display image generation processing.
- each function realized by the computer 120 is implemented by the CPU 121 loading a program stored in the storage device 123 into the memory and executing it. The storage unit 300 is constructed on the storage device 123.
- upon receiving an instruction to start imaging from the operator, the imaging unit 280 issues instructions to the sequencer 116 according to the imaging parameters input by the operator and stored in the storage device 123 and the pulse sequence stored in advance in the storage device 123, executes measurement by the MRA method, and collects echo signals (step S1001). The image reconstruction unit 290 reconstructs an MRA image from the collected echo signals (step S1002).
- the display image generation unit 200 performs display image generation processing, including clipping processing, on the MRA image, generates a MIP image, stores the clipped MRA image in the storage device 123, and displays the MIP image on the display device 124 (step S1003).
- FIG. 4 is a processing flow of display image generation processing of the present embodiment.
- the display image generation unit 200 starts processing.
- the display image generation unit 200 holds the MRA image as display image data in the display image database 330 (step S1101).
- the unnecessary area specifying unit 220 performs map processing (step S1102)
- the unnecessary area removing unit 230 performs removal processing (step S1103).
- the display image generation unit 200 causes the projection image generation unit 210 to generate a MIP image from the MRA image after removal of the unnecessary regions (post-removal MRA image) (step S1104), and displays the generated MIP image on the display device 124 (step S1105). Further, the display image held in the display image DB 330 is updated with the post-removal MRA image (step S1106). Note that step S1106 may be performed at any time after the removal processing, and step S1101 may be omitted.
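Steps S1101 to S1106 can be sketched as a small pipeline. The callback names and array shapes below are hypothetical stand-ins for the map processing and removal processing of the embodiment, assuming NumPy:

```python
import numpy as np

def display_image_generation(mra, specify_unnecessary, remove_region):
    """Sketch of steps S1101-S1106: hold, map, remove, project, update."""
    display_db = {"image": mra.copy()}                    # S1101: hold MRA as display data
    region = specify_unnecessary(display_db["image"])     # S1102: map processing
    removed = remove_region(display_db["image"], region)  # S1103: removal processing
    mip_img = removed.max(axis=0)                         # S1104: MIP from post-removal MRA
    display_db["image"] = removed                         # S1106: update display data
    return mip_img, display_db

# Trivial stand-ins: mark voxels above a threshold as "unnecessary" and zero them.
vol = np.ones((2, 4, 4))
vol[0, 0, 0] = 99.0                         # a bright "fat" voxel to be clipped
specify = lambda img: img > 10.0
remove = lambda img, mask: np.where(mask, 0.0, img)
mip_img, db = display_image_generation(vol, specify, remove)
```

The real map and removal steps, described below, use a per-region template rather than a simple threshold.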
- FIG. 5 is a processing flow of map processing and removal processing
- FIG. 6 is a diagram for explaining map processing.
- here, the case where the imaging target region is the head, the unnecessary tissue is the fat around the skull, and the axial image among the MIP images is used will be described as an example of the map processing and removal processing.
- in the MRA image, two orthogonal directions in the axial plane are defined as the x-axis and y-axis, and the direction orthogonal to the axial plane as the z-axis.
- the unnecessary area specifying unit 220 causes the projection image generation unit 210 to generate a MIP image from the MRA image to be processed (the MRA image stored in the display image DB 330) (step S1201).
- the unnecessary area specifying unit 220 extracts a template or the like used for map processing from the template DB 310 (step S1202).
- a function model template is used for map processing.
- this template is stored in advance in the template DB 310 in association with each imaging target region and unnecessary tissue.
- information for specifying an unnecessary region is similarly stored for each imaging target region and unnecessary tissue.
- an ellipse shown in FIG. 6A is stored as a template 410 for removing fat around the skull as unnecessary tissue.
- information indicating the region between 80% and 100% of each of the short and long radii of the ellipse is stored as the information for specifying the unnecessary region. The unnecessary region specifying unit 220 therefore extracts the template 410 and this information based on the imaging target region and unnecessary tissue instructed in advance.
- in step S1203, shape conversion processing is performed to match the template 410 to the head image in the axial image 420 of the generated MIP image.
- the unnecessary area specifying unit 220 first matches the center (Xc, Yc) of the ellipse 411 with the center (Xh, Yh) of the head 421.
- the center (Xh, Yh) of the head 421 is, as shown in FIG. 6(b), the intersection of the midpoint between lines X1 and X2 in the left-right direction and the midpoint between lines Y1 and Y2 in the front-rear direction.
- lines X1 and X2 are the lines at which, scanning the axial image 420 from the left end and the right end respectively, the signal first exceeds a predetermined threshold value.
- lines Y1 and Y2 are the lines at which, scanning the axial image 420 from the front end and the rear end respectively, the signal first exceeds the predetermined threshold value.
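The scan for lines X1, X2, Y1, Y2 and the resulting center can be sketched as follows (NumPy assumed; the function name, synthetic image, and threshold are illustrative):

```python
import numpy as np

def head_center(axial_img, threshold):
    """Find the head center as in FIG. 6(b): scan from each image edge for the
    first row/column whose maximum exceeds `threshold`, then take midpoints."""
    cols = axial_img.max(axis=0)   # per-column maxima (scanned along x)
    rows = axial_img.max(axis=1)   # per-row maxima (scanned along y)
    x1 = int(np.argmax(cols > threshold))                        # first column from left  (X1)
    x2 = len(cols) - 1 - int(np.argmax(cols[::-1] > threshold))  # first column from right (X2)
    y1 = int(np.argmax(rows > threshold))                        # first row from front    (Y1)
    y2 = len(rows) - 1 - int(np.argmax(rows[::-1] > threshold))  # first row from rear     (Y2)
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

img = np.zeros((10, 10))
img[3:8, 2:9] = 100.0              # synthetic "head" blob
xc, yc = head_center(img, threshold=50.0)
```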
- after matching the two centers, the unnecessary region specifying unit 220 performs fitting processing that deforms the ellipse 411 by affine transformation to fit the contour of the head 421. The contour of the head 421 is extracted by ordinary edge-detection image processing. Through this shape conversion processing, the ellipse 411 of the template 410 matches the shape of the head image 421 of the subject 102 on the axial image 420.
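A simplified version of the fitting step, reduced to scaling and translating the template ellipse onto the contour's bounding extents (the embodiment's affine fitting may be more general; all names and values here are illustrative):

```python
import numpy as np

def fit_ellipse_to_head(contour_xy, ellipse_center, ellipse_axes):
    """Sketch of the fitting step: a scale + translate transform mapping the
    template ellipse onto the head contour's bounding extents.
    `contour_xy` is an (N, 2) array of edge points from contour extraction;
    rotation/shear terms of a full affine fit are omitted for brevity."""
    xmin, ymin = contour_xy.min(axis=0)
    xmax, ymax = contour_xy.max(axis=0)
    new_center = ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)
    new_axes = ((xmax - xmin) / 2.0, (ymax - ymin) / 2.0)
    sx = new_axes[0] / ellipse_axes[0]   # x scale applied to the template
    sy = new_axes[1] / ellipse_axes[1]   # y scale applied to the template
    return new_center, new_axes, (sx, sy)

contour = np.array([[2.0, 3.0], [8.0, 3.0], [2.0, 7.0], [8.0, 7.0]])
center, axes, scales = fit_ellipse_to_head(contour, (0.0, 0.0), (1.0, 1.0))
```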
- the unnecessary region specifying unit 220 specifies, as the removal target region, the region on the head image 421 associated with the region specified as unnecessary on the template 410 (step S1204). That is, through the fitting processing, the region of the head 421 associated with the predetermined unnecessary region on the template 410 (in FIG. 6(c), the region between the ellipse whose short axis is x1 and long axis is y1 and the ellipse whose short axis is x2 and long axis is y2) is specified as the removal target region.
- the unnecessary area removal unit 230 performs an unnecessary area removal process (step S1205).
- a mask image 430 is generated in which the pixel value (signal value) of the pixels corresponding to the removal target region is set to 0, and the pixel value of the pixels corresponding to the region inside the ellipse whose short axis is x1 and long axis is y1 is set to 1. An example of the mask image is shown in FIG. 6. This mask image is applied over the entire range of the MRA image in the z-axis direction, removing the unnecessary tissue region (here, the fat region around the skull).
- as a result, a post-removal MRA image is obtained in which the pixels in the double elliptic cylindrical region, whose base is the area between the ellipse with short axis x1 and long axis y1 and the ellipse with short axis x2 and long axis y2, have a value of 0.
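The elliptic-annulus mask and its application over the whole z range can be sketched as follows (NumPy assumed; the center, radii, and 80% inner ratio stand in for the fitted template values):

```python
import numpy as np

def elliptic_annulus_mask(shape, center, ax_outer, ay_outer, inner_ratio=0.8):
    """Mask for steps S1204-S1205: value 0 between the inner ellipse
    (inner_ratio, e.g. 80%, of each fitted radius) and the outer fitted
    ellipse, value 1 elsewhere."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    xc, yc = center
    r_out = ((x - xc) / ax_outer) ** 2 + ((y - yc) / ay_outer) ** 2
    r_in = ((x - xc) / (inner_ratio * ax_outer)) ** 2 \
         + ((y - yc) / (inner_ratio * ay_outer)) ** 2
    mask = np.ones(shape)
    mask[(r_in > 1.0) & (r_out <= 1.0)] = 0.0   # the fat "ring" to remove
    return mask

def remove_ring(volume, mask):
    """Apply the 2D mask over the entire z range (elliptic-cylinder removal)."""
    return volume * mask[np.newaxis, :, :]

mask = elliptic_annulus_mask((21, 21), center=(10.0, 10.0),
                             ax_outer=10.0, ay_outer=10.0)
vol = np.ones((3, 21, 21))
clipped = remove_ring(vol, mask)     # zeros inside the annulus, all slices
```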
- An example of the MIP image 440 generated from the post-removal MRA image is shown in FIG. 6 (e).
- the display image generation unit 200 performs the above processing using the template 410 or the like stored in the template DB 310 according to the algorithm stored in advance in the algorithm DB 320 for each imaging target region and unnecessary tissue.
- FIG. 7 shows the basic processing flow in this case. The processing up to step S1104 is the same as the display image generation processing without the determination processing shown in FIG. 4.
- in this case, the display image generation unit 200 displays a reception screen for accepting approval or rejection of the result together with the MIP image (step S1105).
- the screen data for generating the reception screen is also stored in advance in the storage unit 300.
- upon receiving an approval instruction from the operator via the reception screen (step S1106), the display image generation unit 200 keeps displaying the MIP image generated from the post-removal MRA image as it is, updates the display image data with the post-removal MRA image (step S1107), and ends the processing.
- upon receiving a rejection instruction, the display image generation unit 200 causes the projection image generation unit 210 to generate a MIP image from the MRA image held at that time in the display image DB 330 as display image data (step S1108), and displays the generated MIP image on the display device 124 (step S1109).
- Figure 8 shows an example of the reception screen.
- the reception screen 910 includes an image display area 913 that displays the MIP image, an approval button 911 that accepts an approval instruction, and a reject button 912 that accepts a rejection instruction.
- the display image generation unit 200 generates the reception screen 910 from the image data held in the display image DB 330 of the storage unit 300.
- FIG. 9 is a processing flow when the display image generation processing of this embodiment is applied to the head.
- the operator inputs in advance an instruction to set the imaging target region as the head and the unnecessary tissue as fat.
- the display image generation unit 200 accesses the storage unit 300 and extracts the template and the processing algorithm stored in association with the instructed imaging target region and unnecessary tissue from the template DB 310 and the algorithm DB 320, respectively (step S1301).
- in this processing, the fat region around the skull is first specified on the axial image of the MIP image and removed from the MRA image over the entire range orthogonal to the axial image (z-axis direction), and a determination is made as to whether or not the removal result is acceptable. If it is acceptable, a MIP image is generated from the post-removal MRA image, and the fat region around the orbits is specified on the sagittal image and removed over the entire range in the direction orthogonal to the sagittal image. This result is likewise judged, and if acceptable, a MIP image is generated from the post-removal MRA image, the fat region around the orbits is specified on the coronal image, and it is removed over the entire range in the direction orthogonal to the coronal image. If any determination is negative, the processing ends at that point.
- as the templates 410 corresponding to head fat region removal, one specifying the fat region around the skull on the axial image, one specifying the fat region around the orbits on the sagittal image, and one specifying the fat region around the orbits on the coronal image are registered.
- first, the display image generation unit 200 holds the MRA image as display image data in the display image DB 330 (step S1302). Then, the unnecessary region specifying unit 220 performs map processing (step S1303) and the unnecessary region removing unit 230 performs removal processing (step S1304), yielding a post-removal MRA image. Here, the MIP image used is generated by the projection image generation unit 210, and the processing is performed on its axial image.
- the display image generation unit 200 causes the projection image generation unit 210 to generate a MIP image from the MRA image after removal (step S1305), generates a reception screen 910, and displays it on the display device 124 (step S1306).
- the display image generation unit 200 waits for an approval or rejection instruction.
- when a rejection instruction is received, the display image generation unit 200 causes the projection image generation unit 210 to generate a MIP image from the display image data currently held in the display image DB 330 (step S1331), displays it on the display device 124 (step S1332), and ends the processing.
- the display image generation unit 200 stores the MRA image after removal in the display image DB 330 as display image data (step S1308).
- next, the display image generation unit 200 performs map processing on the display image data (MRA image) held in the display image DB 330 (step S1309). Here, the projection image generation unit 210 generates a MIP image and its sagittal image is used. The unnecessary region removing unit 230 then performs removal processing (step S1310).
- the projection image generation unit 210 generates a MIP image from the MRA image after removal (step S1311), generates an acceptance screen 910, and displays it on the display device 124 (step S1312).
- the display image generation unit 200 waits for an approval or rejection instruction.
- when a rejection instruction is received from the operator (step S1313), the display image generation unit 200 proceeds to step S1331.
- the display image generation unit 200 stores the MRA image after removal in the display image DB 330 as display image data (step S1314).
- the display image generation unit 200 then causes the unnecessary region specifying unit 220 to perform map processing (step S1315). Here, the projection image generation unit 210 generates a MIP image from the MRA image held in the display image DB 330, and the unnecessary region specifying unit 220 performs the map processing on its coronal image. The unnecessary region removing unit 230 then performs removal processing (step S1316).
- the projection image generation unit 210 generates a MIP image from the MRA image after removal (step S1317), generates the reception screen 910, and displays it on the display device 124 (step S1318).
- the display image generation unit 200 waits for an approval or rejection instruction.
- when a rejection instruction is received from the operator (step S1319), the display image generation unit 200 proceeds to step S1331.
- the display image generation unit 200 stores the removed MRA image as display image data in the display image DB 330 (step S1320), and ends the process.
- in the above, the processing on the sagittal image is performed before the processing on the coronal image, but the coronal image may be processed first.
- in the above, the processing result is judged after the removal processing on each of the axial, sagittal, and coronal images, but this acceptability determination is not always necessary. For example, the processing may proceed to removal on the sagittal or coronal image without making the determination.
- with the above procedure, clipping processing can be performed automatically and efficiently even when the imaging target is a region with a complicated shape, such as the head, in which the unnecessary fat tissue is distributed in a complicated manner.
- the case where the imaging target region is the head and the unnecessary tissue is fat has been described as an example, but the same basically applies to other imaging target regions and unnecessary tissues.
- in the above, the map processing and removal processing are performed on all of the axial, sagittal, and coronal MIP images, but they may be performed on only one of them.
- for example, when the imaging target is the lower limbs, the subcutaneous fat region is specified on the axial image of the MIP image and removed from the MRA image over the entire range in the z-axis direction as described above. Then, after the acceptability of the removal result is determined, a MIP image is generated from the post-removal MRA image, the remaining subcutaneous fat region is specified on the sagittal image, and it is removed over the entire range in the direction orthogonal to the sagittal image.
- identification of the subcutaneous fat is performed, for example, by creating the profile shown in FIG. 14 and determining the boundaries between the background, fat, and muscle. A coronal image may be used instead of the sagittal image.
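One hedged reading of the boundary determination on such a profile, assuming fat is brighter than muscle, which is brighter than background (the thresholds and the synthetic profile are illustrative assumptions, not values from the patent):

```python
import numpy as np

def profile_boundaries(profile, bg_thresh, fat_thresh):
    """Walk a 1D intensity profile from the image edge inward and return the
    (background->fat, fat->muscle) boundary indices."""
    i = 0
    while i < len(profile) and profile[i] <= bg_thresh:   # skip dark background
        i += 1
    fat_start = i
    while i < len(profile) and profile[i] > fat_thresh:   # bright fat layer
        i += 1
    return fat_start, i   # subcutaneous fat is profile[fat_start:i]

# Synthetic profile: background | bright fat | darker muscle
prof = np.array([0, 0, 90, 95, 40, 42, 41], dtype=float)
fat_start, fat_end = profile_boundaries(prof, bg_thresh=10, fat_thresh=60)
```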
- as the templates 410 corresponding to removal of the lower-limb fat region, one specifying the subcutaneous fat region on the axial image and one specifying the subcutaneous fat region on the coronal image (or sagittal image) are registered.
- an example of a template 410 for specifying the fat region on the axial image is shown in FIG. 10.
- as shown in FIG. 10(a), a template 410 having two ellipses is registered. Each ellipse holds information specifying a predetermined range in its minor-axis and major-axis directions as the fat region. The shape conversion processing for matching the center and contour is performed independently for each of the two ellipses, and the unnecessary region is specified.
- FIG. 10(b) is an example of the mask image 430 obtained by integrating the template 410 with the axial image, and FIG. 10(c) is a MIP image (axial image) 440 generated from the post-removal MRA image that has undergone the display image generation processing of this embodiment.
- as described above, according to this embodiment, clipping processing is performed automatically according to the imaging target region and unnecessary tissue. A projection image can therefore be obtained from an MRA image from which unnecessary tissue has been removed, without complicated operations. The operator's workload in generating a MIP image from an MRA image is reduced, and a high-quality projection image with unnecessary tissue appropriately removed can be obtained regardless of the operator's skill.
- Clipping can be performed manually as before.
- in the above, the case where the unnecessary tissue region is specified on the axial image of the MIP image and removed as an elliptic cylinder over the entire range in the direction orthogonal to the axial image has been described as an example, but the present invention is not limited to this.
- a map process may be performed for each slice at a predetermined interval, and a removal process may be performed for the slice thickness.
- the template may be a three-dimensional function model.
- in this case, shape conversion is performed so that the template of the three-dimensional function model matches the MRA image, and fitting is performed directly on the MRA image, which is three-dimensional data, without generating a MIP image. The specified removal target region is then removed.
- the MRI apparatus of this embodiment basically has the same configuration as that of the first embodiment.
- in the first embodiment, a function model template stored in advance is used as the template for map processing. In this embodiment, by contrast, the template is generated from an image obtained by another type of imaging performed prior to the acquisition of the three-dimensional image data.
- in this embodiment as well, an image obtained by the MRA method (MRA image) as the three-dimensional image data and a MIP image as the projection image will be described as examples.
- FIG. 11 is a functional block diagram of the computer 120 of this embodiment.
- the computer 120 of this embodiment basically has the same function as the computer 120 of the first embodiment. However, in order to generate the template 410 as described above, the computer 120 of this embodiment further includes a template creation unit 270. This function is also realized by loading the program stored in the storage device 123 into the memory and executing it by the CPU 121 of the computer 120, as with other functions.
- the algorithm DB 320 of the storage unit 300 further stores an imaging sequence, imaging parameters, and a template creation procedure to be executed for generating a template for each imaging target region and unnecessary tissue.
- as the imaging parameters for generating the template, the same parameters relating to the imaging area as those used in the imaging for acquiring the MRA image are used.
- here, the imaging target region is the head and the removal target tissue is fat. Fat shows a high signal in both T1-weighted and T2-weighted images. Therefore, the algorithm DB 320 stores a T1-weighted image acquisition sequence and a T2-weighted image acquisition sequence as the imaging to be executed for generating the template 410.
- the template generation unit 270 generates a template 410 for specifying the fat region by setting a region showing a high signal at or above a predetermined threshold in both the T1-weighted image and the T2-weighted image as the fat region. Specifically, MIP images in three directions (axial, sagittal, and coronal) are generated from each of the T1-weighted image and the T2-weighted image. In each MIP image, a region showing a high signal at or above the threshold is set as the fat region (removal target region), and the signal value (pixel value) of the corresponding pixels is set to 0.
- a region excluding the background portion and the fat region is set as a non-removal region, and the signal value (pixel value) of the corresponding pixels is set to 1. The two are then integrated to generate templates 410 in the three directions (axial, sagittal, and coronal).
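The thresholding and integration steps above can be sketched as follows, with 0 marking the removal region and 1 the non-removal region as described; the function names and the single shared threshold are illustrative assumptions:

```python
import numpy as np

def make_fat_template(t1_vol, t2_vol, threshold, axis=0):
    """Binary removal template built from T1- and T2-weighted volumes.

    A pixel is treated as fat (removal target, value 0) when it is
    bright in BOTH the T1-weighted and the T2-weighted MIPs; every
    other pixel is kept (value 1).
    """
    t1_mip = t1_vol.max(axis=axis)   # maximum intensity projection
    t2_mip = t2_vol.max(axis=axis)
    fat = (t1_mip >= threshold) & (t2_mip >= threshold)
    return np.where(fat, 0, 1).astype(np.uint8)

def apply_template(mip, template):
    """Multiplying by the template zeroes out the removal-target pixels."""
    return mip * template
```

Repeating `make_fat_template` with `axis=0, 1, 2` yields the three directional templates (axial, sagittal, coronal) in the manner described above.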
- FIG. 12 shows the overall flow of the imaging process of this embodiment.
- upon receiving an instruction to start imaging from the operator, the imaging unit 280 instructs the sequencer 116 according to the imaging parameters input by the operator and the pulse sequence stored in advance in the storage device 123, executes a measurement capable of generating T1-weighted image data, and collects echo signals; the image reconstruction unit 290 then reconstructs a T1-weighted image from the obtained echo signals (step S1401).
- the imaging unit 280 similarly performs a measurement capable of generating T2-weighted image data and collects echo signals, and the image reconstruction unit 290 reconstructs a T2-weighted image from the obtained echo signals (step S1402). Either step S1401 or step S1402 may be performed first.
- the imaging unit 280 then instructs the sequencer 116 according to the imaging parameters input by the operator and the pulse sequence stored in advance in the storage device 123, executes a measurement capable of generating an MRA image, and collects echo signals; the image reconstruction unit 290 reconstructs the MRA image from the collected echo signals (step S1403).
- the template generation unit 270 generates a template 410 from the obtained T1-weighted image and T2-weighted image and stores it in the template DB 310 (step S1404).
- the display image generation unit 200 performs the display image generation process on the 3D image acquired in step S1403 using the template 410 stored in step S1404, stores the MRA image after the clipping process in the storage device 123, and displays it on the display device 124 (step S1405).
- the display image generation process is the same as that in the first embodiment.
- the T1-weighted image and the T2-weighted image from which the template 410 is created are images of the same imaging region of the same subject 102 as the MRA image to be clipped. Therefore, the shape and size of the template 410 and of the MIP image generated from the MRA image to be clipped substantially match, and the shape conversion process can basically be omitted in the map process. If the two sizes differ, the shape conversion process is performed to match them as in the first embodiment, after which the unnecessary area is specified and the unnecessary area removal process is performed.
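When the two sizes differ, one simple way to bring the template to the MIP size before matching is a nearest-neighbour resize; the patent does not specify the shape conversion method, so the following is only an assumed stand-in:

```python
import numpy as np

def match_template_size(template, target_shape):
    """Nearest-neighbour resize of a 2-D 0/1 template to the MIP size.

    A minimal stand-in for the shape conversion process when the
    template and the MIP image sizes differ (the actual method is not
    specified in the text; function and parameter names are ours).
    """
    ty, tx = template.shape
    gy, gx = target_shape
    # Map each target pixel back to its nearest source pixel
    ys = np.arange(gy) * ty // gy
    xs = np.arange(gx) * tx // gx
    return template[np.ix_(ys, xs)]
```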
- the display image generation process can use any of the methods described in the first embodiment.
- as described above, in this embodiment a template generated from another image, obtained by imaging the same target as the MRA image, is used for the map processing. Therefore, as in the first embodiment, unnecessary areas can be identified efficiently from the MRA image, and because the template is generated from data obtained from the same subject, it can be matched with high accuracy.
- the imaging for generating the template is not limited to the above. When the unnecessary tissue is fat, as described above, water-fat separation imaging may be performed to obtain a fat image, and the template 410 may be generated based on the fat image.
- the generated template may also be a three-dimensional template. In this case, it is created directly from the acquired T1-weighted image and T2-weighted image, which are three-dimensional data. When the template is a three-dimensional model, the map process and the removal process are the same as in the first embodiment.
- in this embodiment, a template is generated from the 3D image data itself that is to be processed.
- the MRI apparatus of this embodiment is basically the same as any one of the above embodiments.
- the present embodiment will be described focusing on the configuration different from the first embodiment.
- in the following, an image obtained by the MRA method (MRA image) is used as an example of the three-dimensional image, and an MIP image as an example of the projection image.
- the computer 120 includes the template generation unit 270 as in the second embodiment.
- the template generation unit 270 of the present embodiment generates the template 410 from the MRA image.
- FIG. 13 is a processing flow of the template generation process performed by the template generation unit 270 of this embodiment.
- FIG. 14 is a diagram for explaining the template generation processing of the present embodiment.
- FIG. 14 shows a case where a virtual straight line 511a in the front-rear direction and a virtual straight line 511b in the left-right direction are set as the virtual straight lines 511. FIG. 14(a) shows the central cross-sectional image 510, FIG. 14(b) shows the signal profile 512a on the virtual straight line 511a in the front-rear direction, and FIG. 14(c) shows the signal profile 512b on the virtual straight line 511b in the left-right direction.
- in each signal profile, the horizontal axis indicates the position along the virtual straight line and the vertical axis indicates the signal intensity.
- first, the central slice in the head-foot direction is identified from among the plurality of slices having a cross section perpendicular to the head-foot direction (body axis direction) of the MRA image (step S1501). The image of the identified slice (central cross-sectional image) 510 is drawn, and, as shown in FIG. 14(a), the center P1 of the central cross-sectional image 510 is detected and used as a reference point (step S1502).
- the center P1 is obtained on the central cross-sectional image 510 by a method similar to that used to obtain the center of the head 421 in the first embodiment. Alternatively, the centroid calculated from the pixel values of the central cross-sectional image may be set as the center P1.
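The centroid option can be written in a few lines; a minimal sketch (the function name is ours):

```python
import numpy as np

def intensity_centroid(img):
    """Center point P1 as the intensity-weighted centroid of the
    central cross-sectional image (the alternative mentioned above)."""
    ys, xs = np.indices(img.shape)
    total = float(img.sum())
    # Weight each coordinate by the pixel value and normalise
    return (float((ys * img).sum()) / total,
            float((xs * img).sum()) / total)
```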
- next, a plurality of virtual straight lines 511 passing through the center P1 are set on the central cross-sectional image 510 (step S1503), and the signal profile 512 on each of the set virtual straight lines 511 is derived (step S1504). Then, the unnecessary areas predetermined for each imaging region and unnecessary tissue are detected, and a template 410 specifying the unnecessary areas is generated (step S1505).
- the unnecessary area is detected by specifying the outer boundary OB and the inner boundary IB that form the two sides of the removal target area. Here, for the head, the outer boundary OB coincides with the boundary between the skull and the background, and the inner boundary IB coincides with the boundary between the skull and the brain. Both boundaries are therefore detected, closed curves are formed, and the area enclosed between them is defined in the template 410 as the unnecessary tissue region.
- first, the outer boundary OB is detected. On the signal profile 512, the outer boundary OB has the following characteristics: 1) the spatial derivative is large; 2) when the edge of the head image lies outside the imaging range, the signal intensity at the corresponding end of the signal profile 512 takes a relatively high value. Based on these characteristics, it is first determined whether the signal intensity at each end of the profile exceeds a predetermined threshold. If it does, the edge of the head image is judged to be outside the imaging range and that end is set as the outer boundary OB. For an end at or below the threshold, the signal profile 512 is scanned toward the center P1, and the first point where the spatial derivative exceeds a predetermined threshold is determined as the outer boundary OB.
- the outer boundary OB is detected on each signal profile 512 by the above procedure, and a closed curve is generated by connecting the nearest outer boundaries OB between adjacent signal profiles 512.
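The two-step rule for one end of a profile can be sketched as follows (the threshold values and function name are assumptions; only the left half is handled, the right half being symmetric):

```python
import numpy as np

def outer_boundary_left(profile, edge_thresh, grad_thresh):
    """Outer boundary OB on the left half of a 1-D signal profile.

    If the profile end is already bright, the head edge lies outside
    the imaging range and the end itself is the OB; otherwise scan
    toward the center P1 and take the first index where the spatial
    derivative exceeds grad_thresh.
    """
    p = np.asarray(profile, dtype=float)
    if p[0] >= edge_thresh:
        return 0                     # head edge outside imaging range
    grad = np.abs(np.diff(p))        # discrete spatial derivative
    for i in range(len(p) // 2):     # scan from the end toward P1
        if grad[i] > grad_thresh:
            return i
    return None                      # no edge found on this side
```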
- next, the inner boundary IB is detected. On the signal profile 512, the inner boundary IB has the following characteristics: 1) the spatial derivative is large; 2) at some locations the spatial derivative is small even at the inner boundary IB, which may make it indistinct. Based on these characteristics, the signal profile 512 is scanned from each detected outer boundary OB toward the center P1, and the first point where the spatial derivative exceeds a predetermined threshold is determined as the inner boundary IB. If no such point is detected before reaching the center P1, the distance d1 between the outer boundary OB and the inner boundary IB on the nearest adjacent virtual straight line 511 on which an inner boundary IB was detected is calculated, and the position at the distance d1 from the outer boundary OB toward the center P1 is set as the inner boundary IB on the current virtual straight line 511. As with the outer boundary OB, a closed curve is generated by connecting the nearest inner boundaries IB between adjacent virtual straight lines 511. The two locations indicated by the arrows in FIG. 14(c) correspond to the inner boundary IB.
- alternatively, the width d2 of the removal target tissue between the outer boundary OB and the inner boundary IB may be determined in advance for each target site and removal target tissue and held in the apparatus. In this case, only the outer boundary OB is detected, and the position at the distance d2 from the outer boundary OB toward the center P1 on each virtual straight line 511 is determined as the inner boundary IB.
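The inner-boundary scan with the d1 fallback can be sketched as follows (names are ours; only the left side of the profile is handled for brevity):

```python
import numpy as np

def inner_boundary(profile, outer_idx, center_idx, grad_thresh, d1=None):
    """Inner boundary IB on one signal profile, scanning from the outer
    boundary OB toward the center P1 (assumes outer_idx < center_idx).

    When the skull/brain edge is too indistinct to detect before the
    center, fall back to the OB-IB distance d1 measured on the nearest
    adjacent virtual straight line, as described above.
    """
    grad = np.abs(np.diff(np.asarray(profile, dtype=float)))
    for i in range(outer_idx + 1, center_idx):
        if grad[i - 1] > grad_thresh:
            return i
    # Edge not found: use the neighbouring profile's OB-IB distance d1
    return outer_idx + d1 if d1 is not None else None
```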
- FIG. 15 shows the flow of the entire imaging process of the present embodiment.
- upon receiving an instruction to start imaging from the operator, the imaging unit 280 instructs the sequencer 116 according to the imaging parameters input by the operator and the pulse sequence stored in advance in the storage device 123, executes the measurement by the MRA method, and collects echo signals (step S1601). The image reconstruction unit 290 reconstructs an MRA image from the collected echo signals (step S1602).
- the template generation unit 270 performs the template generation process described above and generates a template 410 from the MRA image (step S1603). Thereafter, the display image generation unit 200 performs the display image generation process that automatically applies the clipping process to the MRA image using the template 410 generated in step S1603 and generates a projection image (here, an MIP image). The MRA image after the clipping process is stored in the storage device 123 and displayed on the display device 124 (step S1604). In this embodiment, the display image generation process is the same as that in the first embodiment.
- according to this embodiment, a template is generated from the MRA image itself and unnecessary regions are removed, so the shape conversion process is unnecessary. Further, unlike the second embodiment, separate imaging for creating a template is not necessary. For this reason, in addition to the effects obtained in the above embodiments, highly accurate removal can be realized in a short time.
- in the above, the template is generated from the central cross-sectional image, but the present invention is not limited to this. The same processing as described above may be performed over all slices, a template generated for each, and unnecessary areas identified and removed slice by slice.
- to shorten the processing time, the spatial resolution of the MRA image may be reduced before generating the template. For example, when the MRA image is composed of 512 × 512 × 64 pixels, the average value of neighboring pixels is taken as a new pixel value, the data are reconstructed into image data composed of 256 × 256 × 32 pixels, and the template creation process is then performed. In this case, the unnecessary area specifying process is performed after enlarging the template back to the original resolution.
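The neighbour-averaging reduction and the later enlargement of the template can be sketched as follows, assuming even dimensions and a factor of two as in the 512 × 512 × 64 → 256 × 256 × 32 example:

```python
import numpy as np

def downsample_by_two(vol):
    """Halve each dimension of a 3-D volume by averaging 2x2x2
    neighbours, e.g. 512x512x64 -> 256x256x32 (even sizes assumed)."""
    z, y, x = vol.shape
    return vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

def enlarge_template_by_two(tpl):
    """Nearest-neighbour enlargement of a 2-D binary template back to
    the full-resolution grid before the removal step."""
    return np.repeat(np.repeat(tpl, 2, axis=0), 2, axis=1)
```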
- alternatively, a template may be generated by the above procedure for each slice at a predetermined interval, and unnecessary areas removed in a cylindrical shape over each interval.
- further, the generated template may be three-dimensional. In this case, the centroid of the three-dimensional image data is determined, and the outer boundary OB and the inner boundary IB are determined by the above method from signal profiles on at least three virtual straight lines that pass through this centroid and lie on different planes. Two closed surfaces are then determined in the three-dimensional space, and a template that can specify the removal region is generated.
- the map process and the removal process are the same as in the first embodiment.
- in the above, the embodiment using a function template, the embodiment using a template generated from another image, and the embodiment using a template generated from the MRA image itself have been described as independent embodiments. However, the computer 120 may be configured to have all of the functions described in the above embodiments so that the operator can select which template is used for removal.
- An example of the selection screen 920 in this case is shown in FIG.
- Image data for generating the selection screen 920 is stored in the storage unit 300 in advance.
- when the operator inputs an instruction to perform the display image generation process including the clipping process, the computer 120 generates the selection screen 920 from the image data and displays it on the display device 124. The computer 120 then performs imaging and image reconstruction according to the accepted technique and performs the display image generation process.
- the selection screen 920 includes an imaging part designation area 921 that accepts designation of an imaging part, a specific technique selection area 922 that accepts an instruction of a technique for identifying an unnecessary area, and a detailed condition display input area 923.
- in the specific technique selection area 922, the types of templates usable by the display image generation unit 200 are displayed so that the operator can select one. In this example, three techniques are presented: the geometric method using the function template stored in advance in the storage device 123, shown in the first embodiment; the fat map method generating a template from the result of a different imaging performed prior to acquisition of the MRA image, shown in the second embodiment; and the self-reference method generating a template from the MRA image itself, shown in the third embodiment. The usable templates are thus displayed, and an instruction as to which template to use is accepted from the operator.
- in the detailed condition display input area 923, initial values of the various threshold values used in the processing are displayed, and changes from the operator are accepted.
- the imaging region designation area 921 need not be provided. The imaging region information may be taken from the imaging parameters input for the imaging, or the receiving coil used for the imaging may be detected to specify the imaging region.
- the selection screen 920 may be displayed at the timing when the operator inputs the other imaging parameters, and the input from the operator accepted then. The information input by the operator from the selection screen 920 may be stored in the storage device 123.
- as described above, according to the above embodiments, the clipping process is performed automatically, the unnecessary area is specified and removed, and display image data can be obtained. The clipping process may be executed whenever imaging for acquiring a three-dimensional image is performed.
- the methods described in the embodiments may also be used in combination. For example, a template generated from the MRA image itself is used to remove the fat around the skull, while other unnecessary tissue is removed using a template generated from the T1-weighted image and the T2-weighted image. It may also be configured to determine an approximate position by referring to a template held in advance.
- the computer 120 is configured to include an algorithm generation unit.
- This algorithm generation unit identifies a template to be used for each MIP image, and completes a template for display image generation processing.
- in this case, the imaging unit 280 acquires the T1-weighted image and the T2-weighted image prior to acquiring the MRA image. The template generation unit 270 generates a first template from these and stores it in the template DB 310. The imaging unit 280 then acquires the MRA image, and the template generation unit 270 generates a second template from the MRA image and stores it in the template DB 310. Thereafter, the display image generation unit 200 removes the unnecessary areas from each MIP image according to the procedure stored in the algorithm DB 320 and the templates to be used, and generates the display image data.
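The combined use of several per-tissue templates amounts to applying each 0/1 mask in turn; a hypothetical sketch of that final removal step (the per-image template selection held in the algorithm DB 320 is not modelled here):

```python
import numpy as np

def apply_templates(mip, templates):
    """Multiply the MIP image by each 0/1 template in turn, so that the
    removal region of every template (e.g. one from the MRA image, one
    from the T1/T2-weighted images) is zeroed out."""
    out = np.asarray(mip).copy()
    for tpl in templates:
        out = out * tpl
    return out
```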
- in this way, the optimum clipping process can be performed automatically for each MIP image of the imaging target region to be processed. It is therefore possible to obtain an MRA image from which unnecessary tissue has been removed with high accuracy, and an MIP image generated from it. That is, a high-quality image with high visibility of the target tissue can be obtained without complicated operations.
- in each of the above embodiments, the case where the MIP image is generated from the MRA image has been described, but the present invention is not limited to this. Regardless of the type of the 3D image data and the type of the projection image, the invention can be applied to the clipping process when generating a projection image from 3D image data.
- in each of the above embodiments, the display image generation process is performed continuously after the three-dimensional image data are obtained by imaging, but the present invention is not limited to this. The data necessary for the display image generation process, such as the 3D image data, may be temporarily stored in the storage device 123 and the display image generation process performed when an instruction from the operator is received.
- in each of the above embodiments, the display image generation unit 200 is described as being realized on the computer 120 included in the MRI apparatus 100, but the present invention is not limited to this; it may be realized on an external information processing apparatus capable of transmitting and receiving data to and from the computer 120 included in the MRI apparatus 100.
Abstract
The invention makes it possible to obtain high-quality projection images regardless of operator skill, while simplifying operations when generating projection images from three-dimensional image data. To this end, templates are prepared for each body part and each unnecessary tissue; the prepared templates are applied according to a predetermined procedure; a matching process that identifies the unnecessary tissues is performed; and a removal process that erases the regions identified by the matching process is performed. In addition, the apparatus is provided with a function that retains the three-dimensional image data from before the processing and makes it possible to restore the pre-processing three-dimensional images when the matching and removal processes do not meet expectations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011532978A JP5738193B2 (ja) | 2009-09-24 | 2010-09-16 | 投影像生成方法および磁気共鳴イメージング装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009219416 | 2009-09-24 | ||
JP2009-219416 | 2009-09-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011037063A1 true WO2011037063A1 (fr) | 2011-03-31 |
Family
ID=43795813
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/066045 WO2011037063A1 (fr) | 2009-09-24 | 2010-09-16 | Procédé de génération d'images de projection et dispositif d'imagerie par résonance magnétique |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP5738193B2 (fr) |
WO (1) | WO2011037063A1 (fr) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017140132A (ja) * | 2016-02-09 | 2017-08-17 | 東芝メディカルシステムズ株式会社 | 画像処理装置およびmri装置 |
JP2020039507A (ja) * | 2018-09-07 | 2020-03-19 | 株式会社日立製作所 | 磁気共鳴撮像装置、画像処理装置、及び、画像処理方法 |
JP2020120734A (ja) * | 2019-01-29 | 2020-08-13 | キヤノンメディカルシステムズ株式会社 | 医用画像処理装置及び医用画像処理方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001149333A (ja) * | 1999-11-25 | 2001-06-05 | Hitachi Medical Corp | 画像処理装置 |
JP2006116299A (ja) * | 2004-09-22 | 2006-05-11 | Toshiba Corp | 磁気共鳴イメージング装置および磁気共鳴イメージング装置のデータ処理方法 |
JP2008054738A (ja) * | 2006-08-29 | 2008-03-13 | Hitachi Medical Corp | 磁気共鳴イメージング装置 |
JP2008220861A (ja) * | 2007-03-15 | 2008-09-25 | Ge Medical Systems Global Technology Co Llc | 磁気共鳴イメージング装置および磁気共鳴イメージング方法 |
JP2009005839A (ja) * | 2007-06-27 | 2009-01-15 | Hitachi Medical Corp | 医用画像処理装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2544638Y2 (ja) * | 1992-10-06 | 1997-08-20 | ジーイー横河メディカルシステム株式会社 | 画像処理装置 |
JP3244347B2 (ja) * | 1993-07-01 | 2002-01-07 | ジーイー横河メディカルシステム株式会社 | 画像処理方法及び画像処理装置 |
JPH0855210A (ja) * | 1994-08-12 | 1996-02-27 | Ge Yokogawa Medical Syst Ltd | 画像処理方法及び画像処理装置 |
EP2316341B1 (fr) * | 2009-03-31 | 2013-03-06 | FUJIFILM Corporation | Appareil, procédé et programme de traitement d'image |
- 2010-09-16 JP JP2011532978A patent/JP5738193B2/ja active Active
- 2010-09-16 WO PCT/JP2010/066045 patent/WO2011037063A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP5738193B2 (ja) | 2015-06-17 |
JPWO2011037063A1 (ja) | 2013-02-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10818738 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011532978 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 10818738 Country of ref document: EP Kind code of ref document: A1 |