CN101807244B - Machine Recognition and Reconstruction Methods - Google Patents

Info

Publication number
CN101807244B
CN101807244B · CN2009100780799A · CN200910078079A
Authority
CN
China
Prior art keywords
model
characteristic
image
object model
knowledge base
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009100780799A
Other languages
Chinese (zh)
Other versions
CN101807244A (en)
Inventor
王晨升 (Wang Chensheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN2009100780799A
Publication of CN101807244A
Application granted
Publication of CN101807244B
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a machine recognition and reconstruction method for recognizing and reconstructing an object. The method comprises the following steps: acquiring an image of the object to be recognized; extracting image features; retrieving an object model from an object model knowledge base and extracting the features of the model; comparing the model features with the image features; if they match, recognizing the object and retrieving the relevant information in the object model knowledge base to reconstruct the object to be recognized; if they do not match, retrieving the next model and repeating the above operations until a matching model is found. With this method, a robot can readily recognize and reconstruct objects.

Description

Machine Recognition and Reconstruction Methods

Technical Field

The present invention relates to the field of machine vision. More specifically, the present invention relates to a machine recognition and reconstruction method.

Background Art

Object recognition and reconstruction are critical technical problems in the fields of computer vision and robot perception. For example, in industrial manufacturing, a robot at an operation site needs to recognize the objects in the scene, reconstruct the recognized objects on the basis of that recognition, and then perform the corresponding actions on the different objects according to the recognition results. How to achieve correct recognition and reconstruction of objects is a technical problem that has long hindered the development of machine vision.

In the prior art there are mainly two recognition approaches: one is based on a structural description of the object; the other is image based. The structural-description approach mainly applies Marr's computational theory of vision and treats object recognition as a multi-level process: simple local features are recognized first, and complex three-dimensional objects are then recognized step by step. Because recognition starts from local features, this approach performs poorly on the overall features of complex three-dimensional objects, and the result often differs greatly from the real structure. The image-based approach mainly follows the human cognitive strategy of perceiving the overall shape of an object, and is used to correct the shortcomings of recognition based on the structural-description approach.

Reconstructing an object on the basis of recognition has long been a research hotspot in this field. Existing object reconstruction methods are mainly based on projection images or on geometric projection features. Owing to the complexity of an object's three-dimensional structure and topology, no object reconstruction method of practical quality has yet been proposed in the prior art.

Summary of the Invention

Therefore, the object of the present invention is to provide a machine recognition method for the recognition and reconstruction of objects, so that a robot can recognize an object and reconstruct it on the basis of that recognition.

To this end, the present invention provides a machine recognition method for the recognition and/or reconstruction of an object, comprising:

acquiring an image of the object to be recognized;

performing feature extraction on the acquired object image, i.e. extracting image features;

providing an object model knowledge base, the object model knowledge base comprising N object models, where N ≥ 1; retrieving a first object model from the object model knowledge base;

performing feature extraction on the retrieved object model, i.e. extracting model features;

comparing the image features with the model features;

if the matching rate between the image features and the model features is not less than a set threshold, recording the retrieved object model as a candidate model;

if the matching rate between the image features and the model features is less than the set threshold, retrieving from the object model knowledge base a second object model different from the first object model, repeating the steps of extracting model features and comparing features, and traversing the third, fourth, ..., Nth object models in the object model knowledge base until an object model whose extracted model features match the acquired image features is found.

Compared with the prior art, because an object model knowledge base has been established, machine vision according to the invention only requires extracting features from the image to be recognized and comparing the extracted features with the features of the models in the object model knowledge base. If the matching rate of the two reaches a certain threshold, the object to be recognized matches the retrieved object model, i.e. the object is recognized; by also retrieving the information about that model stored in the object model knowledge base, the object to be recognized is reconstructed. This recognition method can be applied in various technical fields, such as factory automation, space exploration and modern medicine. In factory automation, for example, a robot often needs to recognize a limited set of tools or workpieces on site, such as selecting a particular tool from a tool rack and selecting the appropriate workpiece for the corresponding operation. According to the method of the present invention, if models of this limited set of objects are established in advance and stored in the object model knowledge base, the robot can use the recognition and reconstruction method of the present invention to recognize the objects in the scene and act on them accordingly.

These and other features and advantages of the present invention will become apparent from the following description of specific embodiments.

Brief Description of the Drawings

The present invention is described below with reference to the accompanying drawings, which show preferred embodiments of the invention by way of example only. In the drawings:

Fig. 1 is a flowchart of an exemplary embodiment of the machine recognition method according to the present invention;

Fig. 2 is a flowchart of another exemplary embodiment of the machine recognition method according to the present invention.

Detailed Description

The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. The invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art.

Referring to Fig. 1, a flowchart of an exemplary embodiment of the machine recognition method according to the present invention is shown. As shown in the figure, the machine recognition method for object recognition and reconstruction comprises:

S101, acquiring an image of the object to be recognized. In this step, various image capture devices (such as still cameras and video cameras) may be used to acquire images of the objects in the scene. In one embodiment, for example in the field of remote control, a remote robot captures photographs of the scene with a camera and sends them to a console over a wireless network, and an operator selects, through a graphical interface, the image of the object the robot is to manipulate. In another embodiment, for example in factory automation, a machining robot selects the image of the object to be recognized from the captured scene under the program control of the manufacturing process.

S102, preprocessing the image. In this step, filtering, denoising, distortion correction and similar operations are performed on the selected object image to remove noise and make feature extraction easier. In one embodiment, this step may be omitted.
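
By way of illustration only, the sketch below shows one possible preprocessing pass for this step using OpenCV; the Gaussian kernel size and the optional camera calibration data are assumptions, not values prescribed by the invention.

```python
import cv2

def preprocess(image, camera_matrix=None, dist_coeffs=None):
    """Denoise and optionally undistort an acquired object image (illustrative sketch)."""
    # Smooth with a small Gaussian kernel to suppress sensor noise.
    denoised = cv2.GaussianBlur(image, (5, 5), 0)
    # Correct lens distortion only if calibration data for the camera is available.
    if camera_matrix is not None and dist_coeffs is not None:
        denoised = cv2.undistort(denoised, camera_matrix, dist_coeffs)
    return denoised
```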

S103, performing feature extraction on the acquired object image, i.e. extracting image features. In this step, feature extraction uses methods commonly employed in the prior art, for example the Canny algorithm and its variants for edge features, or SIFT and related algorithms for structural features. For brevity, these are not described in detail.
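
As a minimal sketch of this step, the snippet below uses the OpenCV implementations of Canny and SIFT (the latter requires a reasonably recent OpenCV build); the hysteresis thresholds are illustrative assumptions rather than parameters fixed by the method.

```python
import cv2

def extract_image_features(gray_image):
    """Extract edge and structural features from a grayscale image (illustrative sketch)."""
    # Edge features via the Canny algorithm; the two hysteresis thresholds are assumed values.
    edges = cv2.Canny(gray_image, 100, 200)
    # Structural features via SIFT keypoints and descriptors.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return edges, keypoints, descriptors
```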

S104, providing an object model knowledge base comprising N object models, where N ≥ 1. In this step, a plurality of object models may be preset in the object model knowledge base. In one embodiment, for example in factory automation, the tools or objects that a robot needs to touch or manipulate at the operation site are relatively limited, so models of this limited set of tools or objects can be built and stored in the object model knowledge base. Object models may also be added on demand during the recognition process.

S105, retrieving the first object model from the object model knowledge base.

S106, performing feature extraction on the retrieved object model, i.e. extracting model features. The model features are extracted using prior-art methods, for example the Canny algorithm. The extracted model features may include structural features, shape features, projection features, boundary features, and so on; for example, the methods mentioned in the background art may be used to extract the model features.

S107, comparing the image features with the model features. The image features are compared with the model features to judge how similar they are. For convenience of description, a matching rate is used to express this: the matching rate describes the degree of similarity between two sets of features. The higher the matching rate, the more similar the two are; a matching rate of 100% means they are identical. In machine vision, a threshold for the matching rate can be set, for example 70%, 80%, 90%, 95% or 99%. This speeds up the matching decision, since a correct conclusion can be reached without matching every feature exactly, saving time and improving efficiency.
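
One way to turn the feature comparison into a single matching rate, sketched here under the assumption that both feature sets are SIFT descriptors, is to count the proportion of image descriptors that survive a nearest-neighbour ratio test; neither the brute-force matcher nor the 0.75 ratio is mandated by the invention.

```python
import cv2

def matching_rate(image_descriptors, model_descriptors, ratio=0.75):
    """Return the fraction of image descriptors with a good match in the model (0.0 - 1.0)."""
    if image_descriptors is None or model_descriptors is None or len(image_descriptors) == 0:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # For each image descriptor, find its two nearest model descriptors.
    pairs = matcher.knnMatch(image_descriptors, model_descriptors, k=2)
    # Keep a match only if it is clearly better than the second-best alternative.
    good = [p for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / len(image_descriptors)
```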

The comparison result is then judged, and different steps are executed depending on the outcome:

S108, if the matching rate between the image features and the model features is not less than the set threshold, the object to be recognized is identified as the retrieved object model, or the model is recorded as a candidate model.

S109, if the matching rate between the image features and the model features is less than the set threshold, a second object model different from the first object model is retrieved from the object model knowledge base. Before retrieving the second object model, it is first judged whether the first object model is the last model in the object model knowledge base; if not, step S110 is executed to retrieve the next model, and steps S106 (extracting model features) and S107 (comparing features) are repeated, traversing the third, fourth, ..., Nth object models in the knowledge base until an object model whose extracted model features match the acquired image features is found.
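
Steps S105 to S110 together form a traversal loop over the knowledge base. A minimal sketch of that loop is given below; `knowledge_base` is assumed to be an iterable of model entries, `extract_model_features` is a hypothetical helper standing in for step S106, and `matching_rate` is the comparison of step S107 (for example as sketched above).

```python
def recognize(image_features, knowledge_base, threshold=0.9):
    """Traverse the object model knowledge base and return the best-matching model, if any (sketch)."""
    candidates = []
    for model in knowledge_base:                                 # S105 / S110: retrieve the next model
        model_features = extract_model_features(model)           # S106: extract model features
        rate = matching_rate(image_features, model_features)     # S107: compare features
        if rate >= threshold:                                    # S108: record as a candidate model
            candidates.append((rate, model))
    if not candidates:                                           # traversal ended without a match:
        return None                                              # the object is treated as a new object
    # When several candidates pass the threshold, keep the one with the highest matching rate.
    return max(candidates, key=lambda item: item[0])[1]
```

Returning the highest-scoring candidate also covers the case, described below, where a relatively low threshold lets several models through.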

In one embodiment, the threshold is adjustable, for example through a software setting.

In another embodiment, when the threshold is set relatively low, for example to 85%, several models matching the image features of the object to be recognized may be found in the model knowledge base; in that case, the retrieved object model with the highest matching rate is selected as the recognized object.

In another embodiment, when the threshold is set very high, for example to 99.999%, it may happen that no object model is found in the knowledge base; in that case the threshold can be lowered, for example to 80%, and the above method executed again.

Referring to Fig. 2, Fig. 2 shows a flowchart of another exemplary embodiment of the machine recognition method of the present invention.

The embodiment shown in Fig. 2 differs from the embodiment of Fig. 1 mainly in the feature extraction of the object model: projection features are extracted, and the following steps are performed:

Step S206, projecting the model in a selected direction; S207, extracting the projection features, for example using the methods described above; S208, comparing the extracted model features with the image features, i.e. judging whether the features match.

Then, based on the result of the feature comparison, a judgment is made:

if the matching rate between the projected features and the features of the object image reaches the set threshold, the object to be recognized is identified as the object model;

if the matching rate between the projected features and the features of the object image is less than the set threshold, step S211 is executed to change the viewing direction of the model to a second direction.

The above process is then repeated: if the matching rate between the projected features and the features of the object image reaches the set threshold, the change of direction stops; otherwise the viewing direction is changed to a third, fourth, ..., Mth direction, where M is a value set according to a chosen search strategy, until the matching rate is not less than the set threshold; otherwise the above process is continued with the next object model.

In a spatial coordinate system, for example with the observation target (i.e. the retrieved object model) at the coordinate origin, different projections are observed from different viewing angles. To improve the efficiency of the projection search, the present invention also provides a preferred search method.

First, a spatial coordinate system is established.

Then the viewing direction of the model is changed incrementally, for example in angular increments; the increment may be any value between 1° and 10°, more preferably 3° to 5°, or some other angle.

While changing the viewing direction of the model, the matching rate between the features of the projection in a given direction and the features of the object image is compared with the matching rate for the projection in the next incremental direction. If the matching rate increases, the viewing direction is advanced to the next incremental direction; if it decreases, the viewing direction is moved by one increment in the opposite direction and the matching rates are compared again. This process is repeated until a direction with the larger matching rate is found. If the matching rate of the projected features in that direction is not less than the set threshold, the object to be recognized matches the model and the loop ends; otherwise the viewing direction is turned toward another coordinate direction of the spatial coordinate system and the above process is executed again, until a viewing direction is found whose matching rate is both a local maximum and not less than the set threshold. If no such direction is found after traversing all directions, it is judged that the model does not match the object to be recognized, and the loop ends.
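
The incremental search just described is essentially a hill climb over viewing angles. A simplified single-axis sketch is given below; `project_and_match(model, angle)` is a hypothetical helper that renders the model from the given viewing angle, extracts the projection features and returns their matching rate against the object image, and the 5° step is only one of the increments suggested above.

```python
def search_direction(model, project_and_match, threshold=0.9, step_deg=5.0):
    """Hill-climb over one viewing angle until the matching rate stops improving (sketch)."""
    angle = 0.0
    best_rate = project_and_match(model, angle)
    direction = +1.0                       # start by increasing the angle
    for _ in range(int(360 / step_deg)):   # bound the search to one full revolution
        next_angle = angle + direction * step_deg
        next_rate = project_and_match(model, next_angle)
        if next_rate > best_rate:          # matching rate grew: keep moving this way
            angle, best_rate = next_angle, next_rate
        elif direction > 0:                # matching rate shrank: try the opposite direction once
            direction = -1.0
        else:                              # no improvement either way: local maximum reached
            break
    # The model matches only if the best local matching rate reaches the threshold.
    return (angle, best_rate) if best_rate >= threshold else None
```

In the full method this search would be repeated for the other coordinate directions, and for the next model in the knowledge base if no direction reaches the threshold.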

It should be understood that the projections can be obtained in many ways; the above method is merely exemplary, and the present invention is not limited to it. For example, an arbitrary point in space may be chosen, and, taking the line through that point to the model (i.e. the viewing direction) as the axis of movement, the direction may be swept through a given spatial plane in steps of any angle, continuously obtaining projections and comparing the features of each projection with the features of the object to be recognized. After sweeping 360° in that plane, the viewing direction is rotated by a certain angle and swept through another coordinate plane, until the entire coordinate space has been traversed; the maximum matching rate is then found and compared with the set threshold to make the judgment. Many other search algorithms can also be used to improve the search efficiency.

In one embodiment, the projection features of each model in the object model knowledge base in different directions are extracted in advance and stored in the object model knowledge base. For example, the model can be projected in the spatial coordinate system with the viewing direction changed in fixed angular increments, the features of these projections extracted, and the features stored in the object model knowledge base. During feature matching, the features in the object model knowledge base are then compared in turn and a judgment is made. The advantage of this approach is that projection and extraction time is saved, at the cost of greater storage space. In another embodiment, projection and feature extraction can be performed in real time before the comparison. The present invention does not specifically limit this.
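
A minimal sketch of this pre-extraction variant is shown below, assuming hypothetical `render_projection(model, azimuth, elevation)` and `extract_features(image)` helpers; the 5° grid is an assumption, and in practice the stored descriptors would sit in the knowledge base alongside the model itself.

```python
import numpy as np

def precompute_projection_features(model, render_projection, extract_features, step_deg=5.0):
    """Build a lookup of (azimuth, elevation) -> projection features for one model (sketch)."""
    features_by_direction = {}
    for azimuth in np.arange(0.0, 360.0, step_deg):
        for elevation in np.arange(-90.0, 90.0 + step_deg, step_deg):
            projection = render_projection(model, azimuth, elevation)
            # Store the features, trading extra memory for faster matching at recognition time.
            features_by_direction[(azimuth, elevation)] = extract_features(projection)
    return features_by_direction
```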

When the object to be recognized is identified as the retrieved object model, the method may further comprise retrieving the model information in the object model knowledge base so as to reconstruct the object to be recognized. The model information includes geometry, color, material, physical and chemical properties, composition properties and motion information.
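
To make concrete the kind of record the knowledge base would hold for reconstruction, the sketch below defines a hypothetical entry; the field names mirror the model information listed above, but the concrete types and the `reconstruct` helper are assumptions, not part of the patented method.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class ObjectModelEntry:
    """One entry of the object model knowledge base (illustrative sketch)."""
    name: str
    geometry: Any                  # e.g. a mesh or CAD reference used for reconstruction
    color: Any
    material: str
    physicochemical: Dict[str, Any] = field(default_factory=dict)
    composition: Dict[str, Any] = field(default_factory=dict)
    motion_info: Dict[str, Any] = field(default_factory=dict)

def reconstruct(entry: ObjectModelEntry) -> Dict[str, Any]:
    """Reconstruct the recognized object from the stored model information (sketch)."""
    # In a real system this would instantiate the geometry and attach the stored attributes.
    return {"geometry": entry.geometry, "color": entry.color, "material": entry.material}
```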

When all models in the object model knowledge base have been traversed and no object model matching the acquired image features is found, the object to be recognized is judged to be a new object, and the user adds the model information corresponding to this new object to the object model knowledge base.

The present invention has been described above by way of example; it should be understood that various modifications and changes can be made to the present invention without departing from its spirit and scope.

Claims (8)

1. A machine recognition method for the recognition and/or reconstruction of an object, characterized by comprising:
acquiring an image of the object to be recognized;
performing feature extraction on the acquired object image, i.e. extracting image features;
providing an object model knowledge base, the object model knowledge base comprising N object models, where N ≥ 1;
retrieving a first object model from the object model knowledge base;
performing feature extraction on the retrieved object model, i.e. extracting model features;
comparing the image features with the model features;
if the matching rate between the image features and the model features is not less than a set threshold, recording the retrieved object model as a candidate model;
if the matching rate between the image features and the model features is less than the set threshold, retrieving from the object model knowledge base a second object model different from the first object model, repeating said steps of extracting model features and comparing features, and traversing the third, fourth, ..., Nth object models in the object model knowledge base until an object model whose extracted model features have a matching rate with the acquired image features not less than the threshold, i.e. an object model matching the object to be recognized, is found;
wherein, in the step of performing feature extraction on the retrieved object model, the model is projected in different directions by changing the viewing direction of the model, and feature extraction is performed separately on the projection in each direction;
the step of changing the viewing direction of the model comprises:
establishing a spatial coordinate system;
changing the viewing direction of the model in angular increments, specifically:
comparing the matching rate between the features of the projection in a given direction and the features of the object image with the matching rate between the features of the projection in the next angular-increment direction and the features of the object image; if the matching rate increases, changing the viewing direction to the next angular-increment direction; if the matching rate decreases, moving the viewing direction by one angular increment in the opposite direction and comparing the matching rates; repeating this process until a direction is found in which the matching rate between the features of the projection and the features of the object image is not less than the set threshold, at which point the object to be recognized matches the model; otherwise turning the viewing direction toward other coordinate directions of the established spatial coordinate system and executing said process, and, if no such direction is found after traversing all directions, judging that the model does not match the object to be recognized.
2. The machine recognition method according to claim 1, wherein said threshold is adjustable.
3. The machine recognition method according to claim 2, wherein, when said threshold is set relatively low and a plurality of candidate models matching the image features of the object to be recognized are found in the model knowledge base, the object model with the highest matching rate is selected as the recognized object.
4. The machine recognition method according to claim 1, wherein the step of performing feature extraction on the retrieved object model further comprises: comparing the features of the projection of the model in each direction with the features of the object image; if the matching rate between the features of the projection in a given direction and the features of the object image is not less than the set threshold, identifying the object to be recognized as the retrieved object model; if the matching rate between the features of the projection in that direction and the features of the object image is less than the set threshold, changing the viewing direction of the retrieved model and repeating the steps of projection, feature extraction and feature comparison until the matching rate between the features of the projection in the current direction and the features of the object image is not less than the set threshold or an Mth direction has been reached, M being a value set according to a chosen search strategy.
5. The machine recognition method according to claim 4, wherein the features of the projections of each model in the object model knowledge base in different directions are extracted in advance and stored in the object model knowledge base, or the features of the projections of each model in different directions are extracted in real time during machine recognition.
6. The machine recognition method according to claim 1, wherein, if the object to be recognized is identified as the retrieved object model, the method further comprises retrieving the model information in the object model knowledge base.
7. The machine recognition method according to claim 1, wherein, if all models in the object model knowledge base have been traversed and no object model matching the acquired object image features is found, the object to be recognized is judged to be a new object, and the user adds the model information corresponding to this new object to the object model knowledge base.
8. The machine recognition method according to claim 6, wherein said model information comprises one or more of geometry, color, physical and chemical properties, and motion information.
CN2009100780799A 2009-02-13 2009-02-13 Machine Recognition and Reconstruction Methods Expired - Fee Related CN101807244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100780799A CN101807244B (en) 2009-02-13 2009-02-13 Machine Recognition and Reconstruction Methods

Publications (2)

Publication Number Publication Date
CN101807244A CN101807244A (en) 2010-08-18
CN101807244B true CN101807244B (en) 2012-02-08

Family

ID=42609032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100780799A Expired - Fee Related CN101807244B (en) 2009-02-13 2009-02-13 Machine Recognition and Reconstruction Methods

Country Status (1)

Country Link
CN (1) CN101807244B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855493A (en) * 2012-08-02 2013-01-02 成都众合云盛科技有限公司 Object recognition system
CN103903297B (en) * 2012-12-27 2016-12-28 同方威视技术股份有限公司 3D data processing and recognition method
CN111399456B (en) * 2020-03-26 2021-03-19 深圳市鑫疆基业科技有限责任公司 Intelligent warehouse control method and system, intelligent warehouse and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6795567B1 (en) * 1999-09-16 2004-09-21 Hewlett-Packard Development Company, L.P. Method for efficiently tracking object models in video sequences via dynamic ordering of features
CN1698067A (en) * 2003-04-28 2005-11-16 索尼株式会社 Image recognition device and method, and robot device
CN101271469A (en) * 2008-05-10 2008-09-24 深圳先进技术研究院 A 2D Image Recognition and Object Reconstruction Method Based on 3D Model Library

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JP H10-312463 A 1998.11.24

Similar Documents

Publication Publication Date Title
CN114571153B (en) A method of weld seam identification and robot weld seam tracking based on 3D point cloud
CN110609037B (en) Product defect detection system and method
CN109465809B (en) Intelligent garbage classification robot based on binocular stereoscopic vision positioning identification
CN111199556B (en) Camera-based indoor pedestrian detection and tracking method
CN105354866B (en) A kind of polygonal profile similarity detection method
CN110315525A (en) A kind of robot workpiece grabbing method of view-based access control model guidance
CN102456225B (en) Video monitoring system and moving target detecting and tracking method thereof
CN104156965B (en) A kind of automatic quick joining method of Mine Monitoring image
CN112669385A (en) Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics
JP5468332B2 (en) Image feature point extraction method
CN111684462B (en) Image matching method and vision system
CN112752028B (en) Pose determination method, device, device and storage medium for mobile platform
CN101807244B (en) Machine Recognition and Reconstruction Methods
CN113927606A (en) Robot 3D vision grabbing method, deviation rectifying method and system
JP2013114547A5 (en)
CN108470165B (en) Fruit visual collaborative search method for picking robot
CN115797332B (en) Object grabbing method and device based on instance segmentation
CN117670816A (en) Casting cleaning robot three-dimensional point cloud processing method based on deep learning
CN114972473B (en) Plant three-dimensional form self-adaptive measurement method and system
CN110046626B (en) PICO algorithm-based image intelligent learning dynamic tracking system and method
CN110197123A (en) A kind of human posture recognition method based on Mask R-CNN
CN119359801A (en) A method for generating flexible object manipulation strategies based on multimodal fusion
CN113034526A (en) Grabbing method, grabbing device and robot
JP2017091202A (en) Object recognition method and object recognition apparatus
CN108961285B (en) Method and device for extracting welding seam edge of container hinge

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120208

Termination date: 20130213