
JPH01135339A - Three-dimensional image diagnostic apparatus - Google Patents

Three-dimensional image diagnostic apparatus

Info

Publication number
JPH01135339A
JPH01135339A (application JP62293251A)
Authority
JP
Japan
Prior art keywords
dimensional
image
region
image data
dimensional image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP62293251A
Other languages
Japanese (ja)
Inventor
Yasuzo Shudo
安造 周藤
Norifumi Ko
黄 徳文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Canon Medical Systems Corp
Original Assignee
Toshiba Corp
Toshiba Medical Systems Engineering Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp, Toshiba Medical Systems Engineering Co Ltd filed Critical Toshiba Corp
Priority to JP62293251A priority Critical patent/JPH01135339A/en
Publication of JPH01135339A publication Critical patent/JPH01135339A/en
Pending legal-status Critical Current

Links

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

PURPOSE: To perform transparent display and enhance diagnostic capability by forming shading value image data from three-dimensional image data and composing a three-dimensional perspective image on the basis of the shading value image data and the positional relationship between an outer region and an inner region.

CONSTITUTION: A three-dimensional image forming means 2 loads multi-slice image data of a subject and forms three-dimensional image data of the outer region and the inner region of the target part of the subject. A shading value image forming means 3 loads each set of three-dimensional image data and forms shading value image data. A position discrimination means 4 discriminates the positional relationship between the outer region and the inner region, and a perspective image composing means 5 composes a three-dimensional perspective image from the discrimination result and the shading value image data and displays it on a display means.

Description

[Detailed Description of the Invention] [Object of the Invention] (Industrial Field of Application) The present invention relates to a three-dimensional image diagnostic apparatus, and more particularly to a three-dimensional image diagnostic apparatus capable of transparent three-dimensional display of a target region of a subject.

(Prior Art) In general, it is difficult to grasp from a two-dimensional image of a target region of a subject, obtained with a CT (Computed Tomography) apparatus, a magnetic resonance imaging (MRI) apparatus, or the like, the three-dimensional relationships of, for example, the organs forming the target region.

For this reason, techniques for three-dimensional display of diagnostic images using computer graphics have been attempted.

Two such three-dimensional display techniques are the patch method and the voxel method; a comparison of the two is shown in FIG. 5.

Among the various characteristics of the two methods, consider transparent or semi-transparent display of the target region. With the patch method, an inner organ of the target region, for example a cerebral ventricle, can be displayed by the surface method while the brain surface lying outside it is displayed by the wire-frame method, so that the inner ventricle can easily be observed through the outer brain surface.

The voxel method, on the other hand, offers higher display accuracy than the patch method and is well suited to high-speed processing, but it requires a special algorithm to achieve the kind of transparent display the patch method provides. Consequently, when the voxel method is applied, for example, to surgical planning in neurosurgery, the ventricles cannot be seen through the outer brain surface, which is extremely inconvenient.

The reason transparent display of three-dimensional images is impossible with the voxel method is that the voxel method applies threshold processing to the multi-slice (serial tomographic) images of the target region and extracts and displays only the outer boundary surface; the pixel components inside the boundary surface are not displayed (only the boundary surface visible from the outside is processed).
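The thresholding-plus-boundary-extraction step described above can be sketched as follows; the array shapes and the 6-neighbour definition of "boundary" are our own assumptions for illustration, not details given in the patent:

```python
import numpy as np

def boundary_voxels(slices, threshold):
    """Threshold multi-slice data and keep only the boundary voxels.

    slices: (Z, Y, X) array of intensities from CT/MRI.
    A voxel is on the boundary if it is inside the thresholded region
    but has at least one 6-neighbour outside it -- this shell is the
    only part the plain voxel method displays.
    Note: np.roll wraps at the array border, which is harmless as long
    as the data is padded with background, as here.
    """
    inside = slices >= threshold
    eroded = inside.copy()
    for axis in range(3):                       # 6-neighbour erosion
        eroded &= np.roll(inside, 1, axis) & np.roll(inside, -1, axis)
    return inside & ~eroded                     # shell only

# Toy volume: a 3x3x3 block of tissue inside a 5x5x5 scan.
vol = np.zeros((5, 5, 5))
vol[1:4, 1:4, 1:4] = 100.0
shell = boundary_voxels(vol, 50.0)              # 26 shell voxels, hollow centre
```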

(Problems to be Solved by the Invention) As described above, the conventional three-dimensional image diagnostic apparatus using the voxel method has difficulty displaying the target region of the subject transparently, and diagnostic performance suffers as a result.

It is therefore an object of the present invention to provide a three-dimensional image diagnostic apparatus that, while employing the voxel method, is capable of transparent display of the target region and thereby contributes to improved diagnostic performance.

[Structure of the Invention] (Means for Solving the Problems) The three-dimensional image diagnostic apparatus of the present invention comprises: three-dimensional image creation means for taking in multi-slice image information of a subject and creating, by the voxel method, three-dimensional image information of an outer region and an inner region of the target region; shading value image creation means for creating, for each set of three-dimensional image information of the outer region and the inner region, shading value image information reflecting the direction of light; position determination means for determining the positional relationship between the two regions; and perspective image composition means for taking in the shading value image information and, on the basis of the determination result of the position determination means, composing a three-dimensional perspective image having a light transmittance that allows the inner region to be seen through the outer region.

(Operation) Next, the operation of the apparatus configured as described above will be explained.

The three-dimensional image creation means of this apparatus takes in multi-slice image information of the subject and creates three-dimensional image information of the outer region and the inner region of the target region.

The shading value image creation means takes in the three-dimensional image information of the two regions and creates, for each, shading value image information reflecting the direction of light.

Meanwhile, the position determination means determines the positional relationship between the two regions and sends the determination result to the perspective image composition means.

The perspective image composition means takes in the shading value image information of both regions and, on the basis of the determination result of the position determination means, composes a three-dimensional perspective image having a light transmittance that allows the inner region to be seen through the outer region, and supplies it for display.

(Embodiment) An embodiment of the present invention will now be described in detail.

The three-dimensional image diagnostic apparatus 1 shown in FIG. 1 comprises: three-dimensional image creation means 2 that takes in multi-slice image information, obtained with a CT apparatus or magnetic resonance imaging apparatus and containing the target region of the subject, and creates by the voxel method three-dimensional image information of an outer region P2 of the target region (for example, the brain surface) and an inner region P1 (for example, a ventricle containing a brain tumor); shading value image creation means 3 that takes in the three-dimensional image information of the two regions P1, P2 created by the three-dimensional image creation means 2 and creates shading value image information corresponding to the two regions P1, P2, each reflecting the direction of light; position determination means 4 that determines, from the two sets of three-dimensional image information, the positional relationship of the two regions P1, P2, that is, whether the outer region P2 lies in front of the inner region P1 or vice versa; perspective image composition means 5 that takes in the shading value image information and, on the basis of the determination result of the position determination means 4, composes a three-dimensional perspective image of the target region having a light transmittance that allows the inner region P1 to be seen through the outer region P2; display means 6 such as a CRT that displays the three-dimensional perspective image composed by the perspective image composition means 5; a program memory 7 that stores an operation program for controlling the operation of this apparatus; a CPU 8 that controls the entire apparatus on the basis of the operation program; and a storage section 9 that stores the results of the various information processing executed in the above-described elements of the apparatus.

The three-dimensional image creation means 2 comprises a first contour extraction section 10 that extracts, by the voxel method, the contour of the outer region P2 of the target region from the multi-slice image information, and a second contour extraction section 11 that similarly extracts, by the voxel method, the contour of the inner region P1 of the target region.

FIG. 2 shows the three-dimensional images corresponding to the three-dimensional image information of the two regions P1, P2 created by this three-dimensional image creation means 2. That is, when x, y, and z coordinate axes are defined in three-dimensional space as shown in the figure, the three-dimensional image of the outer region P2 appears like an eggshell, and the three-dimensional image of the inner region P1 like the yolk of an egg.

The position determination means 4 comprises a depth image creation section 12 that, from the two sets of three-dimensional image information corresponding to the three-dimensional images shown in FIG. 2 created by the three-dimensional image creation means 2, creates depth image information corresponding to a two-dimensional depth image representing the positional relationship between the outer region P2 and the inner region P1, and a depth value comparison section 13 that compares the degrees of depth of the two regions P1, P2 on the basis of the depth image information created by the depth image creation section 12.

That is, if, as shown in FIG. 2, the depth image is defined as the image projected onto a two-dimensional image display plane Q when the three-dimensional image is viewed from a certain direction, then each pixel value of the depth image displayed on the image display plane Q represents the degree of depth, that is, the depth value, from the image display plane Q to the outer region P2 and the inner region P1. In this case, the depth value of a portion close to the image display plane Q is small, and the depth value of a more distant portion is large.

Here, the pixel values of the inner region P1 in the depth image are given by equation (1) below, and those of the outer region P2 by equation (2) below.

Z^(1) = (Z_1^(1), Z_2^(1), ..., Z_i^(1), ..., Z_N^(1))  ... (1)

Z^(2) = (Z_1^(2), Z_2^(2), ..., Z_i^(2), ..., Z_N^(2))  ... (2)

The depth value comparison section 13 compares the pixel values given by equations (1) and (2) above and determines the positional relationship between the two regions P1, P2, that is, whether the inner region P1 or the outer region P2 is nearer to the image display plane Q.
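A minimal sketch of such depth images, assuming binary voxel volumes and depth measured as the slice index of the first occupied voxel along the viewing direction (our own simplifications, not details from the patent):

```python
import numpy as np

def depth_image(voxels, big=np.inf):
    """Depth of the first occupied voxel along z for each (y, x) ray.

    voxels: boolean array of shape (Z, Y, X); True marks the region.
    Returns a (Y, X) float array; rays that miss the region get
    `big` (treated as infinitely far from the display plane Q).
    """
    hit = voxels.any(axis=0)                   # does the ray hit at all?
    first = np.argmax(voxels, axis=0)          # index of first True along z
    return np.where(hit, first.astype(float), big)

# Toy volumes: an outer shell (P2) in front of an inner blob (P1).
vol = np.zeros((8, 4, 4), dtype=bool)
outer = vol.copy(); outer[1] = True            # surface at depth 1
inner = vol.copy(); inner[4, 1:3, 1:3] = True  # blob at depth 4

z1 = depth_image(inner)   # depth image of inner region P1
z2 = depth_image(outer)   # depth image of outer region P2

# Where the outer surface lies in front (smaller depth) of the inner
# region, the pixel needs the transmittance blend of equation (6).
needs_blend = (z2 < z1) & np.isfinite(z1)
```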

The shading value image creation means 3 defines a predetermined light source direction for the depth images corresponding to the depth image information created by the depth image creation section 12 and, from the relationship with the light from that source, assigns to the pixel values Z_i^(1), Z_i^(2) of the depth images of the two regions the shading values C^(1), C^(2) given by equations (3) and (4) below.

C^(1) = (C_1^(1), C_2^(1), ..., C_i^(1), ..., C_N^(1))  ... (3)

C^(2) = (C_1^(2), C_2^(2), ..., C_i^(2), ..., C_N^(2))  ... (4)

The perspective image composition means 5 comprises a transmittance calculation section 14 that, when the outer region P2 lies in front of the inner region P1, performs the calculation of equation (5) described later to obtain the light transmittance t_i that allows the inner region P1 to be seen through the outer region P2, and a perspective image composition section 15 that takes in the transmittance t_i obtained by the transmittance calculation section 14 together with the shading values C^(1), C^(2) given by equations (3) and (4) and performs the calculation of equation (6) below to compose a three-dimensional perspective image in which the inner region P1 can be seen through the outer region P2.

t+ =to + (tl−to )(1−cosθ)
P・・・(5) C;=t+C’:’+(1−t+)CT   ・(6]
ここに、 CT :内側部位の陰影値。
t+ =to + (tl-to)(1-cosθ)
P...(5) C;=t+C':'+(1-t+)CT ・(6]
Here, CT: shading value of the medial site.

CT”:外側部位の陰影値。CT”: Shading value of the outer region.

C1:透視画像の画素値。C1: Pixel value of perspective image.

to :第3図に示す角度θ=0°の場合の透過率。to: Transmittance when the angle θ=0° shown in FIG.

tl :第3図に示す角度θ−90”の場合の透過率。tl: Transmittance at the angle θ-90'' shown in FIG.

P :透過強度パラメータ (例えば、1,2.3.4・・・) である。P: Transmission intensity parameter (For example, 1, 2, 3, 4...) It is.
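Equations (5) and (6) can be sketched directly; the function and parameter names below, and the sample values of t0, t1, and P, are our own illustrations rather than values given in the patent:

```python
import math

def transmittance(theta, t0=0.2, t1=0.9, p=2):
    """Equation (5): t_i = t0 + (t1 - t0) * (1 - cos(theta))**p.

    theta is the angle shown in FIG. 3 (in radians here); t0 and t1
    are the transmittances at 0 and 90 degrees, and p is the
    transmission intensity parameter.
    """
    return t0 + (t1 - t0) * (1.0 - math.cos(theta)) ** p

def compose_pixel(c_inner, c_outer, t):
    """Equation (6): C_i = t_i * C_i^(1) + (1 - t_i) * C_i^(2)."""
    return t * c_inner + (1.0 - t) * c_outer

# At theta = 0 the transmittance is t0; at theta = 90 degrees it is t1.
t_front = transmittance(0.0)
t_grazing = transmittance(math.pi / 2)
```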

Next, the operation of the embodiment apparatus configured as above will be described, referring also to the flowchart of FIG. 4, which shows the operation of the apparatus.

The first contour extraction section 10 and the second contour extraction section 11 of the three-dimensional image creation means 2 each take in the multi-slice image information, extract the contours of the outer region P2 and the inner region P1, and construct the two sets of three-dimensional image information corresponding to the two three-dimensional images shown in FIG. 2 (ST1).

Next, the depth image creation section 12 of the position determination means 4 creates, from the three-dimensional image information of the inner region P1 and the outer region P2, the depth image information Z^(1) of equation (1) corresponding to the inner region P1 and the depth image information Z^(2) of equation (2) corresponding to the outer region P2 (ST2).

The shading value image creation means 3 takes in the two sets of depth image information Z^(1), Z^(2) and assigns to each pixel the shading values C^(1), C^(2) of equations (3) and (4), thereby creating the shading value image information of the inner region P1 and the outer region P2 (ST3, ST4, ST5).
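The patent does not give a formula for the shading values C_i themselves; as a stand-in, a common choice is Lambertian shading of the surface normal estimated from the depth-image gradient, sketched here under that assumption:

```python
import numpy as np

def shading_values(z, light=(0.0, 0.0, 1.0)):
    """Assign a shading value per pixel from a depth image z (Y, X).

    The surface normal at each pixel is estimated from the gradient of
    the depth image, and the shading value is the cosine of the angle
    between that normal and the light direction, clipped to [0, 1].
    The specific shading model is our own illustrative choice.
    """
    dzdy, dzdx = np.gradient(z)                      # row, column derivatives
    n = np.dstack([-dzdx, -dzdy, np.ones_like(z)])   # unnormalized normals
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)                  # Lambertian term

# A plane tilted along x: constant slope 1, so every normal makes a
# 45-degree angle with a head-on light and shades to cos(45 deg).
z = np.tile(np.arange(4.0), (4, 1))
c = shading_values(z)
```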

Next, the depth value comparison section 13 of the position determination means 4 takes in the depth image information Z^(1), Z^(2) and compares the depth values Z_i^(1), Z_i^(2) of corresponding pixels (ST6) to determine the positional relationship between the inner region P1 and the outer region P2.

If it is determined that depth value Z_i^(2) ≥ depth value Z_i^(1), that is, that the inner region P1 lies in front of the outer region P2 with respect to the display plane Q, the perspective image composition section 15 composes the three-dimensional image using the shading value image information carrying the shading values C^(1) (ST7, ST8, ST9) and sends it to the display means 6 for image display.

On the other hand, when the depth value comparison section 13 determines that depth value Z_i^(2) < depth value Z_i^(1), the transmittance calculation section 14 executes the calculation of equation (5) to obtain the transmittance t_i for viewing the inner region P1 through the outer region P2 (ST10), and the perspective image composition section 15 executes the calculation of equation (6) for every pixel on the basis of the transmittance t_i thus obtained and the shading value image information of the inner region P1 and the outer region P2 carrying the shading values C^(1), C^(2) (ST11, ST8, ST9).

As a result, a three-dimensional perspective image is composed in which the inner region P1 can be seen through the outer region P2; this three-dimensional perspective image is displayed by the display means 6 and used for diagnosis.
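Putting the pieces together, the per-pixel branch of ST6 through ST11 can be sketched as follows (the array names are our own; depth is measured from the display plane Q, smaller = nearer):

```python
import numpy as np

def compose_perspective(c1, c2, z1, z2, t):
    """Per-pixel composition following ST6-ST11 (illustrative sketch).

    c1, c2: shading value images of inner region P1 / outer region P2
    z1, z2: depth images of P1 / P2
    t:      transmittance image from equation (5)
    Where P1 is at or in front of P2, the inner shading is shown
    directly; where P2 occludes P1, equation (6) blends the two.
    """
    inner_in_front = z2 >= z1
    blended = t * c1 + (1.0 - t) * c2          # equation (6)
    return np.where(inner_in_front, c1, blended)

# Two pixels: in the first, P1 is in front; in the second, P2 occludes P1.
c1 = np.array([[0.8, 0.8]]); c2 = np.array([[0.3, 0.3]])
z1 = np.array([[1.0, 5.0]]); z2 = np.array([[2.0, 2.0]])
t = np.full((1, 2), 0.5)

out = compose_perspective(c1, c2, z1, z2, t)
# pixel 0 shows the inner shading directly; pixel 1 is the 50/50 blend
```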

According to the apparatus of this embodiment described in detail above, the inconvenience of the conventional example, in which it is difficult to display images of organs and the like lying inside the surface of the subject, is eliminated; in neurosurgery, for example, surgical planning with three-dimensional image display becomes easy.

That is, it becomes easy to grasp, through the brain surface, the three-dimensional positional relationships of internal organs such as the ventricles involved with a brain tumor or the like, which brings great advantages to surgical planning.

The present invention is not limited to the embodiment described above, and various modifications are possible within the scope of its gist.

[Effects of the Invention] According to the present invention described above, there is provided a three-dimensional image diagnostic apparatus that eliminates the drawbacks of the conventional voxel method, is capable of transparent three-dimensional display of the target region, and improves diagnostic performance.

[Brief Description of the Drawings] FIG. 1 is a block diagram of an apparatus according to an embodiment of the present invention; FIG. 2 is an explanatory diagram showing the positional relationship between the three-dimensional images and the depth image in the apparatus; FIG. 3 is an explanatory diagram for obtaining the light transmittance for perspective display in the apparatus; FIG. 4 is a flowchart showing the operation of the apparatus; and FIG. 5 is an explanatory diagram comparing the patch method and the voxel method. 1: three-dimensional image diagnostic apparatus; 2: three-dimensional image creation means; 3: shading value image creation means; 4: position determination means; 5: perspective image composition means.

Claims (1)

[Claims] A three-dimensional image diagnostic apparatus characterized by comprising: three-dimensional image creation means for taking in multi-slice image information of a subject and creating, by the voxel method, three-dimensional image information of an outer region and an inner region of a target region; shading value image creation means for creating, for the three-dimensional image information of the outer region and the inner region, shading value image information reflecting the direction of light; position determination means for determining the positional relationship between the two regions; and perspective image composition means for taking in the shading value image information and, on the basis of the determination result of the position determination means, composing a three-dimensional perspective image having a light transmittance that allows the inner region to be seen through the outer region.
JP62293251A 1987-11-19 1987-11-19 Three-dimensional image diagnostic apparatus Pending JPH01135339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP62293251A JPH01135339A (en) 1987-11-19 1987-11-19 Three-dimensional image diagnostic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP62293251A JPH01135339A (en) 1987-11-19 1987-11-19 Three-dimensional image diagnostic apparatus

Publications (1)

Publication Number Publication Date
JPH01135339A true JPH01135339A (en) 1989-05-29

Family

ID=17792402

Family Applications (1)

Application Number Title Priority Date Filing Date
JP62293251A Pending JPH01135339A (en) 1987-11-19 1987-11-19 Three-dimensional image diagnostic apparatus

Country Status (1)

Country Link
JP (1) JPH01135339A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH057554A (en) * 1991-03-01 1993-01-19 General Electric Co <Ge> Surgical simulation device using display list surface data
JP2001291090A (en) * 2000-04-06 2001-10-19 Terarikon Inc Three-dimensional image display device
JP2010022602A (en) * 2008-07-18 2010-02-04 Fujifilm Ri Pharma Co Ltd Display device and method for organ surface image
KR20160123750A (en) * 2015-04-17 2016-10-26 한국전자통신연구원 Apparatus and method for generating thickness model for 3d printing

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62219075A (en) * 1986-03-20 1987-09-26 Hitachi Medical Corp Translucent display method for three-dimensional picture


Similar Documents

Publication Publication Date Title
Stytz et al. Three-dimensional medical imaging: algorithms and computer systems
EP2203894B1 (en) Visualization of voxel data
JP4421016B2 (en) Medical image processing device
Khan et al. A methodological review of 3D reconstruction techniques in tomographic imaging
US7890155B2 (en) Feature emphasis and contextual cutaways for image visualization
RU2711140C2 (en) Editing medical images
JP5415068B2 (en) Visualization of cut surfaces of curved and elongated structures
JP6918528B2 (en) Medical image processing equipment and medical image processing program
WO2010081094A2 (en) A system for registration and information overlay on deformable surfaces from video data
JP2004057411A (en) Method for preparing visible image for medical use
Englmeier et al. Hybrid rendering of multidimensional image data
JP7003635B2 (en) Computer program, image processing device and image processing method
Birkeland et al. The ultrasound visualization pipeline
JPH01135339A (en) Three-dimensional image diagnostic apparatus
JP4010034B2 (en) Image creation device
CN116051553B (en) Method and device for marking inside three-dimensional medical model
Jung et al. Occlusion and slice-based volume rendering augmentation for PET-CT
KR20180131211A (en) Apparatus and method of producing three dimensional image of orbital prosthesis
JP6436258B1 (en) Computer program, image processing apparatus, and image processing method
JP2022138098A (en) Medical image processing apparatus and method
Mori et al. Method for detecting unobserved regions in virtual endoscopy system
JP7476403B2 (en) COMPUTER-IMPLEMENTED METHOD FOR RENDERING MEDICAL VOLUME DATA - Patent application
EP4428827A1 (en) Apparatus for generating a visualization of a medical image volume
EP4231246A1 (en) Technique for optical guidance during a surgical procedure
JP2025509706A (en) Method for displaying a 3D model of a patient - Patent application