
JPS6278684A - Object search method - Google Patents

Object search method

Info

Publication number
JPS6278684A
JPS6278684A JP60219437A JP21943785A
Authority
JP
Japan
Prior art keywords
dictionary
dimensional
input
projected image
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP60219437A
Other languages
Japanese (ja)
Other versions
JPH0644282B2 (en)
Inventor
Eiichiro Yamamoto
山本 栄一郎
Tomomitsu Murano
朋光 村野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP60219437A priority Critical patent/JPH0644282B2/en
Publication of JPS6278684A publication Critical patent/JPS6278684A/en
Publication of JPH0644282B2 publication Critical patent/JPH0644282B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Landscapes

  • Image Processing (AREA)
  • Character Discrimination (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

(57) [Abstract] Because this publication contains application data filed before electronic filing, no abstract data is recorded.

Description

[Detailed Description of the Invention]

[Summary]

In a retrieval method for an unknown input object, the invention focuses on the fact that the two-dimensional projected image of a polyhedral object has only a finite number of appearances. By providing a view dictionary for polyhedral objects and first matching the two-dimensional projected image of the unknown input object against this view dictionary, the number of references to the three-dimensional coordinates is reduced.

[Field of Industrial Application]

The present invention relates to an object retrieval method, and more particularly to an object retrieval method that can retrieve an unknown input object at high speed by providing a view dictionary for polyhedral objects.

Recent advances in robot technology have led to the appearance of robots capable of performing sophisticated tasks.

Meanwhile, extreme tasks that are too dangerous for humans to perform, such as in-vessel inspection of nuclear reactors and seabed exploration, are becoming more common, and the need to have robots capable of such sophisticated work perform them in our place is growing.

In such cases, the robot needs accurate, high-speed robot vision, that is, robot eyes directed at the object to be worked on.

[Prior Art and Problems to Be Solved by the Invention]

Fig. 4 illustrates a conventional object retrieval method: (a) is a schematic diagram and (b) is a block diagram.

First, in the observation device 1, an unknown object 1a is photographed with a television camera 2a or the like to produce a two-dimensional projected image 3a, which is sent to the line-drawing extraction circuit 2.

The line-drawing extraction circuit 2 extracts a line drawing from the two-dimensional projected image and stores it in the image memory 3.

The line-drawing information stored in the image memory 3 was then matched, directly in the matching circuit 4, against the two-dimensional projected image 6a of the object information 5a read out from the three-dimensional dictionary 6, which is expressed in three-dimensional coordinates.

Therefore, in the conventional method, as shown in Fig. 4(b), it was necessary either (1) to predict the orientation (viewpoint position) of the input object 1a in advance by some means, supply it to the projection conversion circuit 5 separately, rotate the object described in the dictionary 6 in the same direction, project it into two dimensions (6a), and match this against the input two-dimensional projected image 3a, or (2) to generate two-dimensional projected images 6a while rotating the object described in the dictionary 6 through every solid angle and match them against the input two-dimensional projected image 3a.

Accordingly, method (1) has the problem that the viewpoint position of the unknown object 1a must be known in advance, and method (2) has the problem that an enormous amount of matching time is required.

In view of these drawbacks of the prior art, it is an object of the present invention to provide a method that, by providing a view dictionary, reduces the number of matches against the three-dimensional dictionary and enables high-speed retrieval even when the orientation of the input object is unknown.

[Means for Solving the Problems]

Fig. 1 is a diagram for explaining the present invention.

The input object 1a is photographed by a television camera 2a or the like, producing a two-dimensional projected image 3a.

The view dictionary 4a describes how an object appears when viewed from each viewpoint, that is, the shapes of the visible polygons and their connection relationships.

The view dictionary is disclosed in the applicant's earlier Japanese Patent Application No. Sho 60-105355, "Solid-object appearance image creation apparatus," so its details are omitted here; summarized with the conceptual diagram of the view dictionary in Fig. 2, it is as follows.

For example, if a quadrangular pyramid is taken as the polyhedron, its appearances when viewed from a given direction (the shapes of the faces and their connection relationships) are limited to the seven types shown in that figure.

Therefore, when the two-dimensional projected image of the unknown input object is matched against these appearances, and the unknown input object is in fact a quadrangular pyramid, it will match one of these seven types.

In the view dictionary 4a, for each polyhedron, each appearance is stored as a pair together with the viewpoint information corresponding to that appearance.
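The entry structure just described can be pictured as a small lookup table. The following Python sketch is only an illustration of such a view dictionary; the field names and the quadrangular-pyramid entries are hypothetical and are not taken from the patent drawings.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ViewEntry:
    """One appearance of a polyhedron, paired with its viewpoint information."""
    object_name: str                    # e.g. "quadrangular pyramid"
    face_sides: List[int]               # number of sides of each visible polygon
    adjacency: List[Tuple[int, int]]    # which visible polygons share an edge
    viewpoint: Tuple[float, float]      # (azimuth, elevation) of the viewing direction, degrees

# Hypothetical entries for a quadrangular pyramid: from directly below only the
# square base is visible; from the side two triangular faces are visible; etc.
VIEW_DICTIONARY = [
    ViewEntry("quadrangular pyramid", [4], [], (0.0, -90.0)),
    ViewEntry("quadrangular pyramid", [3, 3], [(0, 1)], (0.0, 0.0)),
    ViewEntry("quadrangular pyramid", [3, 3, 4], [(0, 1), (0, 2), (1, 2)], (0.0, -30.0)),
    # ... the remaining entries would cover the other appearances
]
```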

When such a view dictionary 4a is matched against the two-dimensional projected image 3a of the unknown input object, it is detected, in the example of Fig. 1, that the image matches the second appearance in the view dictionary 4a.

By retrieving the viewpoint region associated with this appearance, the direction from which the object is being viewed can be recognized.

The three-dimensional coordinates in the three-dimensional dictionary 5a are then projected in this direction to create a two-dimensional projected image 6a of the three-dimensional dictionary 5a.
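As a concrete illustration of this projection step, the sketch below orthographically projects a set of 3D vertices along a given viewing direction. The patent does not specify the projection model, so the orthographic choice, the function name, and the example pyramid are assumptions.

```python
import numpy as np

def project_along_direction(points_3d: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Orthographically project Nx3 points onto the plane perpendicular to view_dir."""
    d = view_dir / np.linalg.norm(view_dir)
    # Build two unit vectors spanning the image plane (any vector not parallel to d works).
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return np.stack([points_3d @ u, points_3d @ v], axis=1)

# Example: vertices of a quadrangular pyramid from a 3D dictionary, projected
# along the viewing direction recovered from the view dictionary.
pyramid = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0], [0, 0, 1.5]], dtype=float)
print(project_along_direction(pyramid, np.array([0.0, 0.0, -1.0])))
```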

The input two-dimensional projected image 3a is then matched against the two-dimensional projected image 6a of the three-dimensional dictionary 5a; if they match, the category of this dictionary entry is the category to be retrieved.

The system is therefore configured so that the information associated with this category, for example "pentagonal pyramid," is output as the retrieval result 7a.

[Operation]

That is, according to the present invention, a retrieval method for an unknown input object focuses on the fact that the two-dimensional projected image of a polyhedral object has only a finite number of appearances; a view dictionary for polyhedral objects is provided, and the two-dimensional projected image of the unknown input object is first matched against this view dictionary, so that the number of references to the three-dimensional coordinates is reduced. This reduces the number of matches against the three-dimensional dictionary and has the effect of speeding up object retrieval.

[Embodiment]

An embodiment of the present invention is described in detail below with reference to the drawings, referring also to Figs. 1 and 2.

Fig. 3 is a block diagram of one embodiment of the present invention; the view dictionary 8 and the matching circuit I 7 are the functional blocks required to implement the present invention.

Note that the same reference numerals denote the same objects throughout the drawings.

In this figure, 1 is an observation device that projects a three-dimensional object into two dimensions; 2 is a line-drawing extraction circuit that extracts a line drawing from the projected gray-scale image; 3 is an image memory that stores the extracted line drawing; 8 is the aforementioned view dictionary; 7 is the matching circuit I, which matches the line drawing against the view dictionary; 6 is a three-dimensional dictionary describing the three-dimensional coordinates of objects; 5 is a projection conversion circuit that projects the three-dimensional coordinates from the three-dimensional dictionary 6 onto a two-dimensional plane; and 4 is the matching circuit II, which matches the input two-dimensional projected image against the two-dimensional projected image of the three-dimensional dictionary 6.

The given unknown object 1a is projected onto two-dimensional coordinates by the observation device 1. From this two-dimensional image, the line-drawing extraction circuit 2 extracts the line segments corresponding to the edges of the object, and the resulting line drawing is stored in the image memory 3.
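The patent does not say how the edge line segments are extracted. As one possible illustration, a present-day implementation might use an edge detector followed by a line-segment transform, roughly as sketched below; the use of OpenCV and the threshold values are assumptions, not part of the original disclosure.

```python
import cv2
import numpy as np

def extract_line_drawing(gray_image: np.ndarray) -> np.ndarray:
    """Return an Nx4 array of line segments (x1, y1, x2, y2) approximating object edges."""
    edges = cv2.Canny(gray_image, 50, 150)          # binary edge map of the gray-scale image
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=20, maxLineGap=5)
    return segments.reshape(-1, 4) if segments is not None else np.empty((0, 4), dtype=int)
```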

The matching circuit I 7 matches the line drawing read out from the image memory 3 against the many line drawings stored in the view dictionary 8, and selects as appearance candidates those whose polygon shapes and connection relationships are the same.
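One simple way to picture this comparison is to check that two line drawings give the same multiset of polygon side counts and the same adjacency pattern. The sketch below is a rough stand-in for the matching circuit I with hypothetical data structures; a real matcher would need a proper graph-isomorphism test.

```python
from typing import List, Tuple

def same_appearance(faces_a: List[int], adj_a: List[Tuple[int, int]],
                    faces_b: List[int], adj_b: List[Tuple[int, int]]) -> bool:
    """Crude appearance test: same polygon side counts and same neighbour-count pattern."""
    if sorted(faces_a) != sorted(faces_b):
        return False

    def profile(faces, adj):
        degree = [0] * len(faces)
        for i, j in adj:
            degree[i] += 1
            degree[j] += 1
        # pair each face's side count with how many neighbouring faces it touches
        return sorted(zip(faces, degree))

    return profile(faces_a, adj_a) == profile(faces_b, adj_b)

# Two triangles sharing an edge match each other, but not a triangle-square pair.
print(same_appearance([3, 3], [(0, 1)], [3, 3], [(0, 1)]))   # True
print(same_appearance([3, 3], [(0, 1)], [3, 4], [(0, 1)]))   # False
```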

The projection conversion circuit 5 reads out from the view dictionary 8 the viewpoint region corresponding to each appearance candidate, and projects the three-dimensional data stored in the three-dimensional dictionary onto the two-dimensional plane corresponding to that region.

The two-dimensional projected image of the input object read out from the image memory 3 and the two-dimensional projected image of the three-dimensional dictionary 6 output from the projection conversion circuit 5 are matched in the matching circuit II 4; if they match, the category of this three-dimensional dictionary entry is taken to be the category to be retrieved, and its associated information is output.

If no match is obtained, this process is repeated for the remaining appearance candidates until all of them have been checked.
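Putting the steps of this embodiment together, the retrieval loop can be sketched as follows. The helper callables (match_appearance, project_model, images_match) and the dictionary layouts are hypothetical placeholders for the matching circuit I, the projection conversion circuit, and the matching circuit II of Fig. 3.

```python
def retrieve_object(input_projection, view_dictionary, solid_dictionary,
                    match_appearance, project_model, images_match):
    """Two-stage retrieval: view-dictionary screening, then 3D-dictionary verification.

    view_dictionary  : list of (category, appearance, viewpoint) entries
    solid_dictionary : mapping from category to its 3D model (vertices, edges)
    """
    # Stage 1: narrow the candidates using only the finite set of appearances.
    candidates = [(category, viewpoint)
                  for category, appearance, viewpoint in view_dictionary
                  if match_appearance(input_projection, appearance)]

    # Stage 2: verify each surviving candidate against the 3D dictionary, projecting
    # its model only along the viewpoint recovered in stage 1.
    for category, viewpoint in candidates:
        dictionary_projection = project_model(solid_dictionary[category], viewpoint)
        if images_match(input_projection, dictionary_projection):
            return category        # the associated information of the matched entry
    return None                    # no candidate survived detailed matching
```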

Thus, the present invention is characterized in that it exploits the fact that the two-dimensional projected image of a polyhedral object has only a finite number of appearances and, by first matching on these appearances, reduces the number of references to the three-dimensional dictionary in which the three-dimensional coordinates are stored.

[Effects of the Invention]

As described above in detail, the object retrieval method of the present invention is a retrieval method for an unknown input object that focuses on the fact that the two-dimensional projected image of a polyhedral object has only a finite number of appearances; a view dictionary for polyhedral objects is provided, and the two-dimensional projected image of the unknown input object is first matched against this view dictionary, thereby reducing the number of references to the three-dimensional coordinates. As a result, the number of matches against the three-dimensional dictionary can be reduced, and object retrieval can be performed at higher speed.

[Brief Description of the Drawings]

Fig. 1 is a diagram for explaining the present invention. Fig. 2 is a conceptual diagram of the view dictionary. Fig. 3 is a block diagram showing one embodiment of the present invention. Fig. 4 is a diagram explaining a conventional object retrieval method.

In the drawings: 1 is an observation device; 2 is a line-drawing extraction circuit; 3 is an image memory; 4 is the matching circuit II; 7 is the matching circuit I; 8 is a view dictionary; 6 is a three-dimensional dictionary; 5 is a projection conversion circuit; 1a is an input object; 2a is a television camera; 3a is a two-dimensional projected image of the input object; 4a is the contents of the view dictionary; 5a is the contents of the three-dimensional dictionary; 6a is a two-dimensional projected image of an object stored in the three-dimensional dictionary; and 7a is the retrieval result.

Claims (1)

[Claims]

An object retrieval method comprising: a view dictionary (8) that stores, for a polyhedral object viewed from a given direction, which polygons are visible and in what connection relationships; a three-dimensional dictionary (6) that stores the three-dimensional coordinates of three-dimensional objects; and means (7) for matching an input two-dimensional projected image of an unknown object against the view dictionary (8); wherein the candidate categories of the input object are first narrowed down by the matching means (7), after which detailed matching against the three-dimensional dictionary (6) is performed, so that the unknown input object can be retrieved.
JP60219437A 1985-10-02 1985-10-02 Object search method Expired - Lifetime JPH0644282B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP60219437A JPH0644282B2 (en) 1985-10-02 1985-10-02 Object search method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP60219437A JPH0644282B2 (en) 1985-10-02 1985-10-02 Object search method

Publications (2)

Publication Number Publication Date
JPS6278684A true JPS6278684A (en) 1987-04-10
JPH0644282B2 JPH0644282B2 (en) 1994-06-08

Family

ID=16735387

Family Applications (1)

Application Number Title Priority Date Filing Date
JP60219437A Expired - Lifetime JPH0644282B2 (en) 1985-10-02 1985-10-02 Object search method

Country Status (1)

Country Link
JP (1) JPH0644282B2 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6423379A (en) * 1987-07-20 1989-01-26 Agency Ind Science Techn Object recognizing device
JPS6431188A (en) * 1987-07-28 1989-02-01 Agency Ind Science Techn Image recognition equipment for mobile robot
JPH06309457A (en) * 1993-04-26 1994-11-04 Fuji Photo Film Co Ltd Method for judging picture
JP2002133413A (en) * 2000-10-26 2002-05-10 Kawasaki Heavy Ind Ltd Method and apparatus for identifying a three-dimensional object using image processing
JP2004503017A (en) * 2000-07-07 2004-01-29 ミツビシ・エレクトリック・インフォメイション・テクノロジー・センター・ヨーロッパ・ビーヴィ Method and apparatus for representing and searching for objects in an image
EP1424721A2 (en) * 2002-11-27 2004-06-02 Hitachi High-Technologies Corporation Sample observation method and transmission electron microscope
JP2007502473A (en) * 2003-08-15 2007-02-08 スカーペ アクティーゼルスカブ Computer vision system for classification and spatial localization of bounded 3D objects
US7545973B2 (en) 2002-07-10 2009-06-09 Nec Corporation Image matching system using 3-dimensional object model, image matching method, and image matching program
JP2012141962A (en) * 2010-12-14 2012-07-26 Canon Inc Position and orientation measurement device and position and orientation measurement method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6423379A (en) * 1987-07-20 1989-01-26 Agency Ind Science Techn Object recognizing device
JPS6431188A (en) * 1987-07-28 1989-02-01 Agency Ind Science Techn Image recognition equipment for mobile robot
JPH06309457A (en) * 1993-04-26 1994-11-04 Fuji Photo Film Co Ltd Method for judging picture
JP2004503017A (en) * 2000-07-07 2004-01-29 ミツビシ・エレクトリック・インフォメイション・テクノロジー・センター・ヨーロッパ・ビーヴィ Method and apparatus for representing and searching for objects in an image
JP4632627B2 (en) * 2000-07-07 2011-02-16 ミツビシ・エレクトリック・アールアンドディー・センター・ヨーロッパ・ビーヴィ Method and apparatus for representing and searching for objects in an image
JP2002133413A (en) * 2000-10-26 2002-05-10 Kawasaki Heavy Ind Ltd Method and apparatus for identifying a three-dimensional object using image processing
US7873208B2 (en) 2002-07-10 2011-01-18 Nec Corporation Image matching system using three-dimensional object model, image matching method, and image matching program
US7545973B2 (en) 2002-07-10 2009-06-09 Nec Corporation Image matching system using 3-dimensional object model, image matching method, and image matching program
US7214938B2 (en) 2002-11-27 2007-05-08 Hitachi Science Systems, Ltd. Sample observation method and transmission electron microscope
EP1424721A2 (en) * 2002-11-27 2004-06-02 Hitachi High-Technologies Corporation Sample observation method and transmission electron microscope
EP1424721A3 (en) * 2002-11-27 2011-11-16 Hitachi High-Technologies Corporation Sample observation method and transmission electron microscope
EP2565901A1 (en) * 2002-11-27 2013-03-06 Hitachi High-Technologies Corporation Sample observation method and transmission electron microscope
JP2007502473A (en) * 2003-08-15 2007-02-08 スカーペ アクティーゼルスカブ Computer vision system for classification and spatial localization of bounded 3D objects
JP4865557B2 (en) * 2003-08-15 2012-02-01 スカーペ テクノロジーズ アクティーゼルスカブ Computer vision system for classification and spatial localization of bounded 3D objects
JP2012141962A (en) * 2010-12-14 2012-07-26 Canon Inc Position and orientation measurement device and position and orientation measurement method

Also Published As

Publication number Publication date
JPH0644282B2 (en) 1994-06-08

Similar Documents

Publication Publication Date Title
Horaud New methods for matching 3-D objects with single perspective views
Ikeuchi et al. Determining grasp configurations using photometric stereo and the prism binocular stereo system
Brady Robotics science
Hauck et al. Visual determination of 3D grasping points on unknown objects with a binocular camera system
Tanase et al. Polygon decomposition based on the straight line skeleton
JPS6278684A (en) Object search method
JP2520397B2 (en) Visual system for distinguishing contact parts
Raviv et al. A unified approach to camera fixation and vision-based road following
EP1178436A2 (en) Image measurement method, image measurement apparatus and image measurement program storage medium
Shneier et al. Prediction-based vision for robot control
KR100442817B1 (en) 3D object recognition method based on one 2D image and modelbase generation method
Wang Machine visualization, understanding and interpretation of polyhedral line-drawings in document analysis
Gingins et al. Model-based 3D object recognition by a hybrid hypothesis generation and verification approach
Stevens Obtaining 3 D silhouettes and sampled surfaces from solid models for use in computer vision
Gvozdjak et al. From nomad to explorer: Active object recognition on mobile robots
Seales et al. An Occlusion-Based Representation of Shape for Viewpoint Recovery.
Wang Perception and visualization of line images
Stein Structural indexing for object recognition
JPS6125190B2 (en)
Okubo et al. Selective reconstruction of a 3-D scene with an active stereo vision system
Bao-Zong Artificial vision for mobile robots: Stereo vision and multisensory perception: by NICHOLAS AYACHE, translated by PETER T. SANDER. The MIT Press, Cambridge, MA (1991). 345pp.,£ 40.50, ISBN 0-262-01124-7.
JPH0410667B2 (en)
Gemmerle et al. Construction of 3D views from stereoscopic triplets of images
Adan et al. Objects layout graph for 3D complex scenes
Roh et al. 3-D object recognition using projective invariant relationship by single-view