JPH0273471A - Estimating method for three-dimensional form - Google Patents
Estimating method for three-dimensional form
Info
- Publication number
- JPH0273471A (application numbers JP63225929A, JP22592988A)
- Authority
- JP
- Japan
- Prior art keywords
- shape
- dimensional
- image
- model
- feature point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
Description
[Detailed Description of the Invention]
"Industrial Application Field"
This invention relates to a method for obtaining the three-dimensional shape of an arbitrary object, as required in CAD (computer aided design), CAM (computer aided manufacture), computer graphics, and the like. In particular, it relates to a method for obtaining the three-dimensional shape of a specific object belonging to a set of objects whose structures and three-dimensional shapes are similar, that is, a set of objects for which a typical structure and three-dimensional shape exist.
In the following, a person is used as the example of such a set of objects, but it goes without saying that the present invention itself is widely applicable to various objects other than people.
"Prior Art"
Attempts are being made to simulate the movement and deformation of people by computer and to apply this to various fields, such as video production using computer graphics in broadcasting and film (for example, [Reference 1] Komatsu: "A Curved-Surface Model of the Face for Character Animation," IPSJ SIG Graphics and CAD, Material 31-5, 1988) and analysis-synthesis and recognition coding in image communication (for example, [Reference 2] Aizawa et al.: "Model Construction and Facial Expression Synthesis in Analysis-Synthesis Coding," IEICE Technical Report on Image Engineering, IE87-2, 1987). For this purpose, the three-dimensional shape of the specific person is required.
Conventional methods for obtaining the shape of a three-dimensional object include (1) constructing the three-dimensional shape by projecting slit light or a laser onto the actual object to obtain the three-dimensional coordinates of each point on the object surface, or by using moiré patterns to obtain the depth to each point on the surface; (2) constructing the three-dimensional shape from a large number of CT slice images; and (3) obtaining the three-dimensional shape by having a human operator, while watching a graphic display, gradually modify base three-dimensional shape data until it resembles the specific person.
Methods (1) and (2) above therefore require special measurement equipment, the person being measured must remain still inside that equipment until the measurement is finished, and accuracy drops when the measurement time is shortened. Method (3) has the drawback that a person accustomed to operating equipment such as a graphic display must spend a long time on the work to obtain the three-dimensional shape.
There have also been attempts to obtain a three-dimensional shape automatically by photographing a person with a TV camera or the like and applying various kinds of image processing. However, without any clue about the shape of the target object it is difficult to obtain the three-dimensional shape automatically from the images of the person alone, and this has not yet been achieved.
The object of the present invention is to eliminate these drawbacks and to obtain the three-dimensional shape of a person automatically, by machine and without human intervention, from only a few images of the person taken with a TV camera or the like.
"Means for Solving the Problem"
In the present invention, a three-dimensional model of a typical human being is prepared in advance. Using this model, the positions and shapes of characteristic parts (for the head, for example, the ears, eyes, nose, and mouth) are automatically detected from images of a person taken with a TV camera or the like, and the shape of the three-dimensional model is automatically deformed so that it agrees with the detected position and shape of each part, thereby estimating the three-dimensional shape of the specific person. The invention is described in detail below with reference to the drawings.
"Embodiment"
Fig. 1 shows an embodiment of the present invention configured to estimate a three-dimensional shape from a front image and a side image of a person. In the figure,
1a and 1b are cameras that photograph the front image and the side image, respectively;
2a and 2b are image memories that store the front image and the side image, respectively;
3 is a storage unit for a three-dimensional model representing the three-dimensional shape and structure of a typical human being;
4a and 4b are shape transformation units that scale, translate, and rotate the shape of the three-dimensional model 3 so that it matches the posture and size of the person at the time the front image and the side image, respectively, were taken;
5a and 5b are posture/size detection units that detect the posture and size of the person in the photographed front image and side image, respectively;
6a and 6b are range calculation units that project the three-dimensional model 3, as transformed by 4a and 4b, into two dimensions under the same conditions as those under which the front image and the side image were taken, and from the result compute the ranges, within each image, in which the points characterizing the object shape (hereafter called feature points), such as the corners of the eyes, the tip of the nose, and the corners of the mouth, and the contours of the chin and head can exist;
7a and 7b are contour detection units that detect contour shapes such as those of the head and chin from the front image and the side image, using the ranges computed by the range calculation units 6a and 6b;
8a and 8b are feature point position detection units that detect the positions of the feature points from the front image and the side image, using the ranges computed by the range calculation units 6a and 6b;
9 is a shape deformation unit that deforms the shape of the transformed three-dimensional model so that the contour shapes and feature point positions in the three-dimensional model agree with the front and side contour shapes and feature point positions obtained from the contour detection units 7a and 7b and the feature point position detection units 8a and 8b.
Next, the operation of the embodiment of Fig. 1 is described concretely. To simplify the explanation, it is assumed that the processing target is limited to a human head and that its three-dimensional shape is to be estimated.
The three-dimensional model is assumed to consist of the three-dimensional shape of a head as shown in Fig. 2, data on the vertices in that shape corresponding to feature points such as the corners of the eyes, the tip of the nose, the corners of the mouth, and the ear holes, and data on the vertices lying on the contours of the chin and head as seen from the front and from the side.
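By way of illustration, such a model could be held in a structure like the following minimal Python sketch; the field names and the use of NumPy arrays are assumptions made here for clarity and are not prescribed by the patent.

```python
# Minimal sketch of the head model of Fig. 2: a triangular mesh plus the
# indices of the vertices that play a special role in the method.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HeadModel:
    vertices: np.ndarray          # (N, 3) float array of x, y, z coordinates
    triangles: np.ndarray         # (M, 3) int array of vertex indices (triangular patches)
    feature_idx: dict = field(default_factory=dict)        # e.g. {"nose_tip": 412, "left_eye_corner": 87}
    front_contour_idx: list = field(default_factory=list)  # chin/head outline vertices seen from the front
    side_contour_idx: list = field(default_factory=list)   # chin/head outline vertices seen from the side
```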
First, to facilitate the processing in the posture/size detection units 5a and 5b, the person is placed at a predetermined position and made to assume a predetermined posture. The person is then photographed with cameras 1a and 1b, and the front image and the side image are stored in image memories 2a and 2b, respectively.
The posture/size detection units 5a and 5b separate the head image from the background by threshold processing or the like, measure the vertical and horizontal size of the head image, and compute the position of the center of the head.
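A minimal sketch of this thresholding step follows, assuming a plain background, a head that is brighter than the background, and an arbitrary threshold value; none of these choices come from the patent text.

```python
import numpy as np

def detect_head_size_and_center(gray, threshold=80):
    """Separate the head from the background by thresholding and return
    (height, width) of the head region and its center (row, col)."""
    mask = gray > threshold                  # assumes foreground brighter than background
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]      # first and last rows containing head pixels
    c0, c1 = np.where(cols)[0][[0, -1]]      # first and last columns containing head pixels
    height, width = r1 - r0 + 1, c1 - c0 + 1
    center = ((r0 + r1) / 2.0, (c0 + c1) / 2.0)
    return (height, width), center
```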
The shape transformation units 4a and 4b scale, translate, and rotate the shape of the three-dimensional model 3 so that it agrees with the size and position measured by the posture/size detection units.
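The operation of units 4a and 4b amounts to a similarity transform of the model vertices. The sketch below assumes an upright head (identity rotation by default) and matches only the measured height and center; the function name and parameters are illustrative, not taken from the patent.

```python
import numpy as np

def fit_model_to_view(vertices, head_size, head_center, rotation=np.eye(3)):
    """Scale, rotate, and translate model vertices (N, 3) so that the model's
    height matches the measured head height and its centroid maps to the
    measured head center (row, col)."""
    centered = vertices - vertices.mean(axis=0)
    model_height = centered[:, 1].max() - centered[:, 1].min()
    scale = head_size[0] / model_height           # match the measured head height
    transformed = scale * (centered @ rotation.T)
    transformed[:, 0] += head_center[1]           # x follows image columns
    transformed[:, 1] += head_center[0]           # y follows image rows
    return transformed
```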
The range calculation unit 6a projects the shape of the transformed three-dimensional model into two dimensions under the same conditions as the imaging conditions of camera 1a, producing a two-dimensional figure. Next, by superimposing this two-dimensional figure on the front image, it determines the ranges in which the feature points and contours can exist in the front image, using the feature point and contour data defined in the three-dimensional model. In the same way as 6a, the range calculation unit 6b determines the ranges in which the feature points and contours can exist in the side image.
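Assuming, for illustration, an orthographic front camera in which x maps to image columns and y to image rows, the range computation of unit 6a might be sketched as follows; the fixed rectangular margin is an arbitrary choice, not part of the patent.

```python
import numpy as np

def project_front(vertices):
    """Orthographic projection for the front view: keep (x, y), drop depth z."""
    return vertices[:, :2]

def feature_search_ranges(transformed_vertices, feature_idx, margin=10):
    """For each named feature vertex, return a rectangular search window
    (col_min, row_min, col_max, row_max) around its projected position."""
    pts = project_front(transformed_vertices)
    ranges = {}
    for name, idx in feature_idx.items():
        x, y = pts[idx]
        ranges[name] = (int(x) - margin, int(y) - margin,
                        int(x) + margin, int(y) + margin)
    return ranges
```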
The contour detection units 7a and 7b detect the contours using the contour existence ranges computed by the range calculation units 6a and 6b. For example, the two-dimensional figure is superimposed on the front image and on the side image, a straight line is drawn from the center of the head image to each vertex on the contour of the two-dimensional figure, the position where this line crosses the contour of the actual head image is detected, and the position of that contour vertex is thereby obtained.
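The ray-casting example above can be sketched as follows, assuming a binary head mask such as the one produced by the earlier thresholding step; the sampling step along the ray and the small overshoot are illustrative choices.

```python
import numpy as np

def contour_point_along_ray(mask, center, model_contour_xy):
    """Walk from the head center toward a projected model contour vertex and
    return the last point that is still inside the binary head mask."""
    cy, cx = center
    tx, ty = model_contour_xy                        # projected model vertex (col, row)
    direction = np.array([ty - cy, tx - cx], dtype=float)
    length = np.linalg.norm(direction)
    direction /= max(length, 1e-9)
    last_inside = (int(cy), int(cx))
    for step in np.arange(0.0, 1.5 * length, 0.5):   # overshoot a little past the model contour
        r = int(round(cy + direction[0] * step))
        c = int(round(cx + direction[1] * step))
        if r < 0 or c < 0 or r >= mask.shape[0] or c >= mask.shape[1]:
            break
        if mask[r, c]:
            last_inside = (r, c)
    return last_inside
```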
The feature point position detection units 8a and 8b detect the actual position of each feature point in the front and side images held in image memories 2a and 2b, based on the feature point existence ranges supplied by the range calculation units 6a and 6b.
For example, a pattern of the brightness variation around each feature point is prepared in advance; within the given existence range, this pattern is shifted little by little and matched against the image, and the best-matching position is taken as the position of the feature point.
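The brightness-pattern matching described here is ordinary template matching. A minimal sum-of-squared-differences version in plain NumPy is sketched below, assuming the search window lies fully inside the image; normalized cross-correlation or a library routine would serve equally well.

```python
import numpy as np

def match_feature(gray, template, search_range):
    """Slide 'template' over the window (col_min, row_min, col_max, row_max)
    and return the (row, col) of the lowest-SSD match of its top-left corner."""
    c0, r0, c1, r1 = search_range
    th, tw = template.shape
    best, best_pos = np.inf, (r0, c0)
    for r in range(r0, r1 - th + 1):
        for c in range(c0, c1 - tw + 1):
            patch = gray[r:r + th, c:c + tw].astype(float)
            ssd = np.sum((patch - template) ** 2)    # sum of squared differences
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```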
The shape deformation unit 9 deforms the entire shape of the scaled, translated, and rotated three-dimensional model so that the positions of the feature points and of the vertices on the contours in the model agree with the positions obtained by the contour detection units 7a and 7b and the feature point position detection units 8a and 8b.
For example, using the positions of the feature points and of the contour vertices in the front image obtained by the contour detection unit 7a and the feature point position detection unit 8a, the shape shown in Fig. 2 is deformed in its (x, y) coordinates; similarly, it is deformed in its (y, z) coordinates using the feature point positions and contour vertex positions obtained from the side image; combining the two results gives the three-dimensional shape of the head.
As the deformation method, for example, among the vertices of the three-dimensional model shape, the vertices corresponding to feature points and the vertices on the contours are moved to the measured positions, and the remaining vertices are moved by interpolating the displacements of the moved vertices.
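One possible realization of this interpolation is inverse-distance weighting of the control-vertex displacements, sketched below; the patent does not prescribe a particular interpolation scheme, so the weighting used here is an assumption.

```python
import numpy as np

def deform_by_interpolation(vertices, control_idx, control_targets, power=2.0):
    """Move the control vertices to their measured target positions and move
    every other vertex by an inverse-distance-weighted average of the
    control-vertex displacements."""
    vertices = vertices.astype(float).copy()
    controls = vertices[control_idx]
    displacements = np.asarray(control_targets, dtype=float) - controls
    for i in range(len(vertices)):
        if i in control_idx:
            continue
        d = np.linalg.norm(controls - vertices[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power       # closer controls pull harder
        vertices[i] += (w[:, None] * displacements).sum(axis=0) / w.sum()
    vertices[control_idx] = control_targets          # controls land exactly on the measurements
    return vertices
```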
In general, it is difficult for a computer or similar machine to automatically detect the positions and shapes of the above-mentioned parts from an image, because it is not known where in the image they lie, what color or brightness they have, or how large they are. However, because people share the same basic structure and nearly the same shape even though shapes differ between individuals, automatic detection becomes possible if a three-dimensional model having that structure and a typical shape is used to determine the existence range and contour of each part in the image before the detection process. That is, when trying to segment a certain part by, for example, binarizing a grayscale image, if the existence range and rough shape of the part are known, it suffices to search within the existence range for a figure close to the given shape and, if none is found, to change the binarization threshold and repeat the search.
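The search-and-retry strategy of this paragraph can be written as a small loop. The sketch below uses the foreground area inside the predicted range as a crude stand-in for "a figure close to the given shape"; the threshold schedule and tolerance are illustrative assumptions.

```python
import numpy as np

def find_part(gray, search_range, expected_area, tolerance=0.3,
              thresholds=range(40, 220, 10)):
    """Binarize the window at successive thresholds until the foreground area
    inside the predicted range is close to the expected area of the part."""
    c0, r0, c1, r1 = search_range
    window = gray[r0:r1, c0:c1]
    for t in thresholds:
        mask = window > t
        area = int(mask.sum())
        if abs(area - expected_area) <= tolerance * expected_area:
            return t, mask          # threshold that isolates the part, plus its mask
    return None, None               # no threshold produced a region of the expected size
```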
In the explanation above, the person is photographed from two directions, front and side, and the three-dimensional shape is estimated from the two images, but it is also easy to extend the method to photograph from three or more directions and use three or more images of the person. In that case, the accuracy of the shape estimation can be increased.
"Effects of the Invention"
As explained above, according to the present invention, the three-dimensional shape of a person or of various other objects can be estimated automatically, without human intervention, simply by photographing the object from one or more directions. The invention is therefore extremely effective in many applications that require the three-dimensional shape of a specific object, for example the generation of human figures by computer graphics, personal identification based on a person's three-dimensional shape, the generation of three-dimensional images of industrial products, and shape identification, inspection, and automatic sorting of industrial products, agricultural products, and the like.
Fig. 1 is a block diagram showing an embodiment of the present invention, and Fig. 2 is a diagram showing an example of a three-dimensional model of a face shape represented by triangular patches.
Patent applicant: Nippon Telegraph and Telephone Corporation
Claims (1)
(1) A three-dimensional shape estimation method, being a processing method for obtaining the three-dimensional shape of an object as numerical data from images of the object, characterized in that: a three-dimensional model representing the general structure and three-dimensional shape of the objects to be processed is prepared in advance; the object whose shape is to be estimated is photographed from one direction or from a plurality of directions; the shape of the three-dimensional model is scaled, translated, and rotated so as to match the size of the object and its posture at the time of photographing; the scaled, translated, and rotated three-dimensional model is projected onto the photographed image under the same conditions as those under which the image was taken, thereby obtaining the ranges in which the contour and the characteristic parts of the object can exist in the image; the actual positions of the contour and the characteristic parts of the object in the image are detected using these existence ranges as clues; and the three-dimensional shape of the object is obtained by deforming the shape of the scaled, translated, and rotated three-dimensional model so that it agrees with the detected contour and the actual positions of the parts.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP63225929A JPH0273471A (en) | 1988-09-09 | 1988-09-09 | Estimating method for three-dimensional form |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP63225929A JPH0273471A (en) | 1988-09-09 | 1988-09-09 | Estimating method for three-dimensional form |
Publications (1)
Publication Number | Publication Date |
---|---|
JPH0273471A true JPH0273471A (en) | 1990-03-13 |
Family
ID=16837106
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP63225929A Pending JPH0273471A (en) | 1988-09-09 | 1988-09-09 | Estimating method for three-dimensional form |
Country Status (1)
Country | Link |
---|---|
JP (1) | JPH0273471A (en) |
- 1988-09-09: JP JP63225929A patent/JPH0273471A/en active Pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5920320A (en) * | 1996-06-13 | 1999-07-06 | Fujitsu Ltd. | Three-dimensional shape data transforming apparatus |
KR100360487B1 (en) * | 2000-02-28 | 2002-11-13 | 삼성전자 주식회사 | Texture mapping method and apparatus for 2D facial image to 3D facial model |
JP2003115042A (en) * | 2001-10-05 | 2003-04-18 | Minolta Co Ltd | Evaluation method, generation method and apparatus of three-dimensional shape model |
JP2006527434A (en) * | 2003-06-10 | 2006-11-30 | バイオスペース インスツルメンツ | Radiation imaging method for three-dimensional reconstruction and computer program and apparatus for implementing the method |
JP2006337355A (en) * | 2005-06-01 | 2006-12-14 | Inus Technology Inc | Real time inspection guide system and method using three-dimensional scanner |
JP2007241579A (en) * | 2006-03-07 | 2007-09-20 | Toshiba Corp | Feature point detector and its method |
JP4585471B2 (en) * | 2006-03-07 | 2010-11-24 | 株式会社東芝 | Feature point detection apparatus and method |
US7848547B2 (en) | 2006-03-07 | 2010-12-07 | Kabushiki Kaisha Toshiba | Apparatus for detecting feature point and method of detecting feature point |
JP2020060945A (en) * | 2018-10-10 | 2020-04-16 | 株式会社ネイン | Information processing system, information processing method and computer program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6072903A (en) | Image processing apparatus and image processing method | |
JP6426968B2 (en) | INFORMATION PROCESSING APPARATUS AND METHOD THEREOF | |
CA2274977C (en) | Apparatus and method for 3-dimensional surface geometry reconstruction | |
KR101007276B1 (en) | 3D face recognition | |
US9234749B2 (en) | Enhanced object reconstruction | |
CN104574432B (en) | Three-dimensional face reconstruction method and three-dimensional face reconstruction system for automatic multi-view-angle face auto-shooting image | |
WO2018075053A1 (en) | Object pose based on matching 2.5d depth information to 3d information | |
WO2012096747A1 (en) | Forming range maps using periodic illumination patterns | |
CN106155299B (en) | A kind of pair of smart machine carries out the method and device of gesture control | |
JP4284664B2 (en) | Three-dimensional shape estimation system and image generation system | |
JP4761670B2 (en) | Moving stereo model generation apparatus and method | |
KR20160088814A (en) | Conversion Method For A 2D Image to 3D Graphic Models | |
Wong et al. | Fast acquisition of dense depth data by a new structured light scheme | |
US9558406B2 (en) | Image processing apparatus including an object setting section, image processing method, and program using the same | |
Ye et al. | Facial micro-expression analysis via a high speed structured light sensing system | |
CN113939852A (en) | Object recognition device and object recognition method | |
JPH0273471A (en) | Estimating method for three-dimensional form | |
KR101673144B1 (en) | Stereoscopic image registration method based on a partial linear method | |
Aliakbarpour et al. | Multi-sensor 3D volumetric reconstruction using CUDA | |
JP3253328B2 (en) | Distance video input processing method | |
JP2787612B2 (en) | Face image model generation device | |
Nguyen et al. | 3D model reconstruction system development based on laser-vision technology | |
JP4623320B2 (en) | Three-dimensional shape estimation system and image generation system | |
JP2006215743A (en) | Image processing apparatus and image processing method | |
JP2011149952A (en) | Model input device and model generation system |