CN104552292A - Control system of robot, robot, program and control method of robot - Google Patents
- Publication number
- CN104552292A CN104552292A CN201410531769.6A CN201410531769A CN104552292A CN 104552292 A CN104552292 A CN 104552292A CN 201410531769 A CN201410531769 A CN 201410531769A CN 104552292 A CN104552292 A CN 104552292A
- Authority
- CN
- China
- Prior art keywords
- assembled
- robot
- image
- control unit
- assembly
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23P—METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
- B23P19/00—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
- B23P19/001—Article feeders for assembling machines
- B23P19/007—Picking-up and placing mechanisms
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Manipulator (AREA)
Abstract
Description
Technical Field
The present invention relates to a robot control system, a robot, a program, a robot control method, and the like.
Background Art
In recent years, industrial robots have been widely introduced at production sites in order to mechanize and automate work that has been performed by humans. However, precise calibration is a prerequisite for positioning a robot, and this has been an obstacle to robot introduction.
Here, visual servoing is one means of positioning a robot. Conventional visual servoing is a technique that performs feedback control of a robot based on the difference between a reference image (also called a goal image or target image) and a captured image (the current image). Visual servoing of this kind is useful in that it does not require precise calibration, and it is attracting attention as a technique for lowering the barriers to robot introduction.
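To make the feedback law concrete, the following is a minimal sketch of one step of image-based visual servoing in Python. It is an illustration under assumptions, not the implementation from this document: the pseudo-inverse of the interaction (image Jacobian) matrix `L_pinv` is assumed to be known, and the commented-out loop uses hypothetical helper functions (`capture_image`, `extract_features`, `send_velocity_command`).

```python
import numpy as np

def visual_servo_step(goal_features, current_features, L_pinv, gain=0.5):
    """One iteration of image-based visual servoing: the command is
    proportional to the image-space error between the feature vector of
    the current image and that of the reference (goal) image, mapped to
    a motion command by the Jacobian pseudo-inverse."""
    error = np.asarray(current_features) - np.asarray(goal_features)
    return -gain * (L_pinv @ error)   # drive the image error toward zero

# Hypothetical outer loop (placeholder helpers, not from the document):
# f_cur = extract_features(capture_image())
# while np.linalg.norm(f_goal - f_cur) > tol:
#     send_velocity_command(visual_servo_step(f_goal, f_cur, L_pinv))
#     f_cur = extract_features(capture_image())
```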
As a technique related to such visual servoing, there is, for example, the prior art described in Patent Document 1.
Patent Document 1: Japanese Patent Laid-Open No. 2011-143494
When a robot performs, by visual servoing, an assembly operation of assembling an assembly object onto an object to be assembled, the position and posture of the object to be assembled change every time the assembly operation is performed. When the position and posture of the object to be assembled change, the position and posture of the assembly object in the assembled state also change.
In this case, if visual servoing is performed using the same reference image every time, accurate assembly work cannot be achieved. This is because the assembly object is moved to the position and posture of the assembly object shown in the reference image, regardless of whether the position and posture it should take in the assembled state have changed.
In theory, the assembly work could be performed by visual servoing using reference images if a different reference image were used every time the position of the actual object to be assembled changes; in that case, however, a large number of reference images would have to be prepared, which is unrealistic.
Summary of the Invention
One aspect of the present invention relates to a robot control system including: a captured-image acquisition unit that acquires a captured image; and a control unit that controls a robot based on the captured image. The captured-image acquisition unit acquires a captured image showing at least the object to be assembled, of the assembly object and the object to be assembled in an assembly operation, and the control unit performs feature quantity detection processing of the object to be assembled based on the captured image and moves the assembly object based on the feature quantity of the object to be assembled.
In this aspect of the present invention, the assembly object is moved based on the feature quantity of the object to be assembled detected from the captured image.
As a result, the assembly work can be performed accurately even when the position and posture of the object to be assembled change.
In one aspect of the present invention, the control unit may perform the feature quantity detection processing of the assembly object and the object to be assembled based on one or more captured images showing both objects, and may move the assembly object, based on the feature quantity of the assembly object and the feature quantity of the object to be assembled, so that the relative position and posture relationship between the two objects becomes a target relative position and posture relationship.
This makes it possible to perform the assembly work and the like based on the feature quantities of the assembly object and of the object to be assembled detected from the captured images.
In one aspect of the present invention, the control unit may move the assembly object so that the relative position and posture relationship becomes the target relative position and posture relationship, based on a feature quantity set as a target feature quantity among the feature quantities of the object to be assembled and a feature quantity set as a feature quantity of interest among the feature quantities of the assembly object.
This makes it possible to move the assembly object and the like so that the relative position and posture relationship between the set assembly portion of the assembly object and the set receiving portion of the object to be assembled becomes the target relative position and posture relationship.
In one aspect of the present invention, the control unit may move the assembly object so that a feature point of interest of the assembly object coincides with or approaches a target feature point of the object to be assembled.
This makes it possible to assemble the assembly portion of the assembly object onto the receiving portion of the object to be assembled, and so on.
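As an illustration of the "coincide with or approach" behavior, the sketch below converts the image-space offset between the feature point of interest and the target feature point into a small corrective displacement. The pixel-to-meter scale and the gain are illustrative assumptions, not values from the document.

```python
import numpy as np

def approach_target(attention_pt, target_pt, pixels_to_meters=1e-3, gain=0.3):
    """Corrective in-plane displacement that moves the feature point of
    interest on the assembly object toward the target feature point
    detected on the object to be assembled.

    attention_pt, target_pt: 2D image coordinates in pixels.
    Returns a small displacement in meters in the camera's image plane.
    """
    error_px = np.asarray(target_pt, dtype=float) - np.asarray(attention_pt, dtype=float)
    return gain * pixels_to_meters * error_px

# If the target point is 40 px to the right of the point of interest,
# the returned step nudges the assembly object to the right:
step = approach_target(attention_pt=(120, 80), target_pt=(160, 80))
```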
In one aspect of the present invention, the robot control system may include a reference image storage unit that stores a reference image showing the assembly object in a target position and posture. The control unit may move the assembly object toward the target position and posture based on the reference image and a first captured image showing the assembly object, and, after moving the assembly object, may perform the feature quantity detection processing of the object to be assembled based on a second captured image showing at least the object to be assembled and move the assembly object based on the feature quantity of the object to be assembled.
Thus, when the same assembly work is performed repeatedly, the same reference image can be used to move the assembly object to the vicinity of the object to be assembled, after which the assembly work can be performed while checking against the detailed position and posture of the actual object to be assembled.
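The coarse-to-fine sequencing just described might look like the following sketch. All of the helper functions (`image_difference`, `servo_command_from_images`, `detect_target_feature`, `detect_attention_feature`, `servo_command_from_features`) are hypothetical placeholders used for illustration.

```python
import numpy as np

def assemble(robot, camera, reference_image, tol_coarse=20.0, tol_fine=2.0):
    """Coarse-to-fine assembly: the same stored reference image is
    reused every cycle for the approach, while the fine stage adapts to
    the actual pose of the object to be assembled in this cycle."""
    # Stage 1: visual servo toward the pose shown in the reference image.
    while image_difference(camera.capture(), reference_image) > tol_coarse:
        cmd = servo_command_from_images(camera.capture(), reference_image)
        robot.move(cmd)

    # Stage 2: servo on the detected feature quantities, which reflect
    # the cycle-to-cycle variation of the object to be assembled.
    while True:
        image = camera.capture()
        target = detect_target_feature(image)        # on the object to be assembled
        attention = detect_attention_feature(image)  # on the assembly object
        if np.linalg.norm(np.asarray(target) - np.asarray(attention)) < tol_fine:
            break
        robot.move(servo_command_from_features(attention, target))
```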
In one aspect of the present invention, the control unit may perform the feature quantity detection processing of a first object to be assembled based on a first captured image showing the first object to be assembled in the assembly operation, and may move the assembly object based on the feature quantity of the first object to be assembled; after moving the assembly object, it may perform the feature quantity detection processing of a second object to be assembled based on a second captured image showing at least the second object to be assembled, and may move the assembly object and the first object to be assembled based on the feature quantity of the second object to be assembled.
Thus, the assembly work for the assembly object, the first object to be assembled, and the second object to be assembled can be performed even if the positions of the first and second objects to be assembled shift every time the assembly operation is performed.
In one aspect of the present invention, the control unit may perform the feature quantity detection processing of the assembly object and the first object to be assembled based on one or more first captured images showing both of them in the assembly operation, and may move the assembly object, based on the feature quantity of the assembly object and the feature quantity of the first object to be assembled, so that the relative position and posture relationship between the assembly object and the first object to be assembled becomes a first target relative position and posture relationship; it may further perform the feature quantity detection processing of the second object to be assembled based on a second captured image showing the second object to be assembled, and may move the assembly object and the first object to be assembled, based on the feature quantities of the first and second objects to be assembled, so that the relative position and posture relationship between the first and second objects to be assembled becomes a second target relative position and posture relationship.
This makes it possible to perform visual servoing and the like such that the feature point of interest of the assembly object approaches the target feature point of the first object to be assembled while the feature point of interest of the first object to be assembled approaches the target feature point of the second object to be assembled.
In one aspect of the present invention, the control unit may perform the feature quantity detection processing of the assembly object, the first object to be assembled, and the second object to be assembled based on one or more captured images showing all three in the assembly operation; it may move the assembly object, based on the feature quantities of the assembly object and the first object to be assembled, so that the relative position and posture relationship between those two becomes a first target relative position and posture relationship, and may move the first object to be assembled, based on the feature quantities of the first and second objects to be assembled, so that the relative position and posture relationship between those two becomes a second target relative position and posture relationship.
This makes it possible to assemble three workpieces simultaneously, among other operations.
In one aspect of the present invention, the control unit may perform the feature quantity detection processing of the second object to be assembled based on a first captured image showing the second object to be assembled in the assembly operation, and may move the first object to be assembled based on the feature quantity of the second object to be assembled; it may then perform the feature quantity detection processing of the first object to be assembled based on a second captured image showing the moved first object to be assembled, and may move the assembly object based on the feature quantity of the first object to be assembled.
This eliminates the need to move the assembly object and the first object to be assembled simultaneously, making control of the robot easier.
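For this sequential case, a minimal sketch (with hypothetical helpers `detect_features` and `servo_until_aligned`, assumed for illustration) shows how the two motions are performed one after the other rather than coordinated simultaneously:

```python
def assemble_three(robot, camera):
    """Sequential assembly of three workpieces: the first object to be
    assembled is servoed onto the second, and only then is the assembly
    object servoed onto the (already moved) first object."""
    # Stage 1: move the first object to be assembled, guided by the
    # feature quantity detected on the second object to be assembled.
    f2 = detect_features(camera.capture(), which="second_object")
    servo_until_aligned(robot, camera, moving="first_object", target_features=f2)

    # Stage 2: move the assembly object, guided by the feature quantity
    # of the first object to be assembled at its new position.
    f1 = detect_features(camera.capture(), which="first_object")
    servo_until_aligned(robot, camera, moving="assembly_object", target_features=f1)
```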
In one aspect of the present invention, the control unit may control the robot by performing visual servoing based on the captured image.
This makes it possible to perform feedback control and the like on the robot according to the current work situation.
Another aspect of the present invention relates to a robot including: a captured-image acquisition unit that acquires a captured image; and a control unit that controls the robot based on the captured image. The captured-image acquisition unit acquires a captured image showing at least the object to be assembled, of the assembly object and the object to be assembled in an assembly operation, and the control unit performs feature quantity detection processing of the object to be assembled based on the captured image and moves the assembly object based on the feature quantity of the object to be assembled.
Another aspect of the present invention relates to a program that causes a computer to function as each of the units described above.
Another aspect of the present invention relates to a robot control method including: a step of acquiring a captured image showing at least the object to be assembled, of the assembly object and the object to be assembled in an assembly operation; a step of performing feature quantity detection processing of the object to be assembled based on the captured image; and a step of moving the assembly object based on the feature quantity of the object to be assembled.
According to several aspects of the present invention, it is possible to provide a robot control system, a robot, a program, a robot control method, and the like that can perform assembly work accurately even when the position and posture of the object to be assembled change.
Another aspect is a robot control device including: a first control unit that generates command values so that an end point of a robot arm moves toward a target position along a path formed based on one or more set guide positions; an image acquisition unit that acquires a target image, which is an image including the end point when the end point is at the target position, and a current image, which is an image including the end point when the end point is at its current position; a second control unit that generates command values so that the end point moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit.
According to this aspect, command values are generated so that the end point of the robot arm moves toward the target position along a path formed based on one or more set guide positions, and command values are also generated so that the end point moves from the current position to the target position based on the current image and the target image. The arm is then moved using these command values. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control device including: a control unit that generates a trajectory of the end point of a robot arm so that the end point approaches a target position; and an image acquisition unit that acquires a current image, which is an image including the end point when the end point is at its current position, and a target image, which is an image including the end point when the end point is at the target position. The control unit moves the arm based on a path formed from one or more set guide positions, the current image, and the target image. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Here, the drive control unit may move the arm using a signal obtained by superimposing the command value generated by the first control unit and the command value generated by the second control unit, each weighted by a predetermined component. This makes it possible to shift the trajectory of the end point toward a desired trajectory; for example, the trajectory can be shaped so that, even if it is not the ideal one, the object remains within the field of view of the hand-eye camera.
Here, the drive control unit may determine the predetermined component based on the difference between the current position and the target position. Since the component can then be changed continuously according to the distance, the control can be switched smoothly.
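A minimal sketch of this distance-dependent blending follows. The specific weighting function (linear in the remaining distance, saturating at 1) is an illustrative assumption, not taken from the document:

```python
import numpy as np

def blended_command(u_position, u_visual, current_pos, target_pos, d_switch=0.10):
    """Superimpose the position-control command u_position and the
    visual-servo command u_visual with a distance-dependent component.

    Far from the target (distance >= d_switch) the fast position control
    dominates; as the end point nears the target the weight shifts
    continuously toward visual servoing, so no abrupt switch occurs.
    """
    d = np.linalg.norm(np.asarray(target_pos) - np.asarray(current_pos))
    alpha = min(d / d_switch, 1.0)   # 1.0 far from the target, -> 0.0 at it
    return alpha * np.asarray(u_position) + (1.0 - alpha) * np.asarray(u_visual)
```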
Here, an input unit for entering the predetermined component may be provided. This allows the arm to be controlled along a trajectory desired by the user.
Here, a storage unit that stores the predetermined component may be provided. This makes it possible to use a component initialized in advance.
Here, the drive control unit may drive the arm using only the command value based on the trajectory generated by the first control unit when the current position satisfies a predetermined condition, and may drive the arm using both the command value based on the trajectory generated by the first control unit and the command value based on the trajectory generated by the second control unit when the current position does not satisfy the condition. This enables faster processing.
Here, the device may further include: a force detection unit that detects the force applied to the end point; and a third control unit that generates a trajectory of the end point so that the end point moves from the current position to the target position based on the value detected by the force detection unit. The drive control unit moves the arm using the command values based on the trajectories generated by the first, second, and third control units, or using the command values based on the trajectories generated by the first and third control units. Thus, even when the target position moves or the target position cannot be confirmed, the work can be performed safely while maintaining the high speed of position control.
Another aspect is a robot system including: a robot having an arm; a first control unit that generates command values so that the end point of the arm moves toward a target position along a path formed based on one or more set guide positions; an imaging unit that captures a target image, which is an image including the end point when the end point is at the target position, and a current image, which is an image including the end point when the end point is at its current position; a second control unit that generates command values so that the end point moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot system including: a robot having an arm; a control unit that generates a trajectory of the end point of the arm so that the end point approaches a target position; and an imaging unit that captures a current image, which is an image including the end point when the end point is at its current position, and a target image, which is an image including the end point when the end point is at the target position. The control unit moves the arm based on a path formed from one or more set guide positions, the current image, and the target image. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot including: an arm; a first control unit that generates command values so that the end point of the arm moves toward a target position along a path formed based on one or more set guide positions; an image acquisition unit that acquires a target image, which is an image including the end point when the end point is at the target position, and a current image, which is an image including the end point when the end point is at its current position; a second control unit that generates command values so that the end point moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot including: an arm; a control unit that generates a trajectory of the end point of the arm so that the end point approaches a target position; and an image acquisition unit that acquires a current image, which is an image including the end point when the end point is at its current position, and a target image, which is an image including the end point when the end point is at the target position. The control unit moves the arm based on a path formed from one or more set guide positions, the current image, and the target image. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method including: a step of acquiring a target image, which is an image including the end point of a robot arm when the end point is at a target position; a step of acquiring a current image, which is an image including the end point when the end point is at its current position; and a step of generating command values so that the end point moves toward the target position along a path formed based on one or more set guide positions, generating command values so that the end point moves from the current position to the target position based on the current image and the target image, and moving the arm using these command values. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method for controlling the arm of a robot that has an arm and an image acquisition unit that acquires a current image, which is an image including the end point of the arm when the end point is at its current position, and a target image, which is an image including the end point when the end point is at a target position. The arm is controlled using command values for position control along a path formed based on one or more set guide positions together with command values for visual servoing based on the current image and the target image. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method for controlling the arm of such a robot in which position control along a path formed based on one or more set guide positions and visual servoing based on the current image and the target image are performed simultaneously. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control program that causes a computing device to execute: a step of acquiring a target image, which is an image including the end point of a robot arm when the end point is at a target position, and a current image, which is an image including the end point when the end point is at its current position; and a step of generating command values so that the end point moves toward the target position along a path formed based on one or more set guide positions, generating command values so that the end point moves from the current position to the target position based on the current image and the target image, and moving the arm using these command values. This makes it possible to maintain the high speed of position control while also coping with changes in the target position.
Another aspect relates to a robot control device including: a robot control unit that controls a robot based on image information; a change amount calculation unit that obtains an image feature quantity change amount from the image information; a change amount estimation unit that calculates an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, from change-amount estimation information that is information about the robot or an object and that is information other than the image information; and an abnormality determination unit that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.
In this aspect, abnormality determination for robot control using image information is performed based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information. This makes it possible to perform abnormality determination appropriately in robot control that uses image information, and in particular in approaches that use image feature quantities.
In another aspect, the change-amount estimation information may be joint angle information of the robot.
Thus, the joint angle information of the robot can be used as the change-amount estimation information.
In another aspect, the change amount estimation unit may calculate the estimated image feature quantity change amount by applying, to the change amount of the joint angle information, a Jacobian matrix that maps changes in the joint angle information to changes in the image feature quantity.
Thus, the estimated image feature quantity change amount and the like can be obtained from the change amount of the joint angle information and the Jacobian matrix.
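Written out, the estimate is simply Δf̂ = J Δq for a joint-angle change Δq. A minimal sketch, assuming the image Jacobian J has already been obtained (for example, by calibration); the numeric values are purely illustrative:

```python
import numpy as np

def estimated_feature_change(jacobian, q1, q2):
    """Estimated image feature quantity change for a motion from joint
    angles q1 to joint angles q2: delta_f_hat = J @ (q2 - q1)."""
    delta_q = np.asarray(q2) - np.asarray(q1)
    return jacobian @ delta_q

# Hypothetical 2x3 image Jacobian (2 feature coordinates, 3 joints):
J = np.array([[120.0, -40.0, 10.0],
              [ 15.0,  90.0, -5.0]])
df_hat = estimated_feature_change(J, q1=[0.00, 0.10, 0.20], q2=[0.01, 0.12, 0.20])
```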
In another aspect, the change-amount estimation information may be position and posture information of the end effector of the robot or of the object.
Thus, the position and posture information of the robot's end effector or of the object can be used as the change-amount estimation information.
In another aspect, the change amount estimation unit may calculate the estimated image feature quantity change amount by applying, to the change amount of the position and posture information, a Jacobian matrix that maps changes in the position and posture information to changes in the image feature quantity.
Thus, the estimated image feature quantity change amount and the like can be obtained from the change amount of the position and posture information and the Jacobian matrix.
In another aspect, when an image feature quantity f1 of first image information is acquired at an i-th time (i being a natural number) and an image feature quantity f2 of second image information is acquired at a j-th time (j being a natural number satisfying j ≠ i), the change amount calculation unit may obtain the difference between the image feature quantity f1 and the image feature quantity f2 as the image feature quantity change amount; and when change-amount estimation information p1 corresponding to the first image information is acquired at a k-th time (k being a natural number) and change-amount estimation information p2 corresponding to the second image information is acquired at an l-th time (l being a natural number), the change amount estimation unit may obtain the estimated image feature quantity change amount from the change-amount estimation information p1 and the change-amount estimation information p2.
Thus, the corresponding image feature quantity change amount, estimated image feature quantity change amount, and the like can be obtained with the acquisition times taken into account.
In another aspect, the k-th time may be the acquisition time of the first image information, and the l-th time may be the acquisition time of the second image information.
Thus, given that joint angle information can be acquired at high speed, processing that takes the acquisition times into account can be performed easily.
In another aspect, the abnormality determination unit may compare difference information between the image feature quantity change amount and the estimated image feature quantity change amount against a threshold, and may determine that an abnormality has occurred when the difference information is larger than the threshold.
Thus, abnormality determination and the like can be performed by threshold comparison.
In another aspect, the abnormality determination unit may set the threshold larger as the difference between the acquisition times of the two pieces of image information used by the change amount calculation unit to compute the image feature quantity change amount becomes larger.
Thus, the threshold and the like can be changed according to the situation.
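Combining the threshold test with the time-dependent threshold, a sketch of the abnormality determination follows. The linear growth of the threshold with the time gap is an illustrative assumption; the document only requires that a larger gap yield a larger threshold:

```python
import numpy as np

def is_abnormal(delta_f, delta_f_hat, t1, t2, base_threshold=5.0, per_second=2.0):
    """Abnormality determination: compare the observed image feature
    quantity change delta_f with the estimated change delta_f_hat.

    The threshold grows with the gap between the acquisition times t1
    and t2 of the two images, since more scene change is expected over
    a longer interval.
    """
    difference = np.linalg.norm(np.asarray(delta_f) - np.asarray(delta_f_hat))
    threshold = base_threshold + per_second * abs(t2 - t1)
    return difference > threshold
```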
In another aspect, when an abnormality is detected by the abnormality determination unit, the robot control unit may perform control to stop the robot.
Thus, safe robot control and the like can be achieved by stopping the robot when an abnormality is detected.
In another aspect, when an abnormality is detected by the abnormality determination unit, the robot control unit may skip the control based on the abnormality-determination image information, which is the image information acquired at the later time, in time series, of the two pieces of image information used by the change amount calculation unit to compute the image feature quantity change amount, and may instead perform control based on image information acquired earlier than the abnormality-determination image information.
Thus, when an abnormality is detected, control of the robot using the abnormality-determination image information can be skipped.
Another aspect relates to a robot control device including: a robot control unit that controls a robot based on image information; a change amount calculation unit that obtains a position and posture change amount representing the change in the position and posture information of the robot's end effector or of an object, or a joint angle change amount representing the change in the joint angle information of the robot; a change amount estimation unit that obtains an image feature quantity change amount from the image information and, from it, obtains an estimated position and posture change amount, which is an estimate of the position and posture change amount, or an estimated joint angle change amount, which is an estimate of the joint angle change amount; and an abnormality determination unit that performs abnormality determination by comparing the position and posture change amount with the estimated position and posture change amount, or by comparing the joint angle change amount with the estimated joint angle change amount.
In this aspect, the estimated position and posture change amount or the estimated joint angle change amount is obtained from the image feature quantity change amount, and abnormality determination is performed by comparing the position and posture change amount with its estimate, or the joint angle change amount with its estimate. Abnormality determination can thus also be performed appropriately, through comparison of position and posture information or of joint angle information, in robot control that uses image information, and in particular in approaches that use image feature quantities.
In another aspect, the change amount calculation unit may perform any one of the following processes: acquiring multiple pieces of the position and posture information and obtaining their difference as the position and posture change amount; acquiring multiple pieces of the position and posture information and obtaining the joint angle change amount from their difference; acquiring multiple pieces of the joint angle information and obtaining their difference as the joint angle change amount; and acquiring multiple pieces of the joint angle information and obtaining the position and posture change amount from their difference.
Thus, the position and posture change amount, the joint angle change amount, and the like can be obtained by various means.
Another aspect relates to a robot including: a robot control unit that controls the robot based on image information; a change amount calculation unit that obtains an image feature quantity change amount from the image information; a change amount estimation unit that calculates an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, from change-amount estimation information that is information about the robot or an object and that is information other than the image information; and an abnormality determination unit that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.
In this aspect, abnormality determination for robot control using image information is performed based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information. This makes it possible to perform abnormality determination appropriately in robot control that uses image information, and in particular in approaches that use image feature quantities.
Another aspect relates to a robot control method for controlling a robot based on image information, including: a step of performing change amount calculation processing for obtaining an image feature quantity change amount from the image information; a step of performing change amount estimation processing for calculating an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, from change-amount estimation information that is information about the robot or an object and that is information other than the image information; and a step of performing abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.
In this aspect as well, abnormality determination for robot control using image information is performed based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information, so abnormality determination can be performed appropriately in robot control that uses image information, and in particular in approaches that use image feature quantities.
Another aspect relates to a program that causes a computer to function as: a robot control unit that controls a robot based on image information; a change amount calculation unit that obtains an image feature quantity change amount from the image information; a change amount estimation unit that calculates an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, from change-amount estimation information that is information about the robot or an object and that is information other than the image information; and an abnormality determination unit that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.
In this aspect, the computer executes abnormality determination for robot control using image information based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information, so abnormality determination can be performed appropriately in robot control that uses image information, and in particular in approaches that use image feature quantities.
In this way, according to several aspects, it is possible to provide a robot control device, a robot, a robot control method, and the like that appropriately detect abnormalities in the control that uses image feature quantities within robot control based on image information.
另外,另一方式涉及机器人,其为使用由拍摄部拍摄的检查对象物的拍摄图像来进行检查上述检查对象物的检查处理的机器人,并且根据第一检查信息,生成包含上述检查处理的检查区域的第二检查信息,并根据上述第二检查信息,进行上述检查处理。In addition, another aspect relates to a robot that performs an inspection process of inspecting the inspection object using a captured image of the inspection object captured by an imaging unit, and generates an inspection area including the inspection process based on first inspection information. The second inspection information, and according to the second inspection information, perform the above inspection process.
另外,在另一方式中,根据第一检查信息,生成包含检查区域的第二检查信息。一般地,将检查(狭义而言外观检查)所使用的图像中的哪一区域用于处理取决于检查对象物的形状等信息、针对检查对象物进行的作业内容等,因此,在每次检查对象物、作业内容变化时,必须重新设定检查区域,而导致使用者的负担较大。在这一点上,通过根据第一检查信息而生成第二检查信息,能够容易地决定检查区域等。In addition, in another form, the second inspection information including the inspection area is generated based on the first inspection information. Generally, which area of the image used for inspection (appearance inspection in a narrow sense) is used for processing depends on information such as the shape of the object to be inspected, the content of work performed on the object to be inspected, and so on. When the object or work content changes, it is necessary to reset the inspection area, which causes a heavy burden on the user. In this regard, by generating the second inspection information based on the first inspection information, it is possible to easily determine the inspection area and the like.
In addition, in another aspect, the second inspection information may include a viewpoint information group containing a plurality of pieces of viewpoint information, and each piece of viewpoint information in the viewpoint information group may include the viewpoint position and line-of-sight direction of the imaging unit in the inspection processing.
In this way, a viewpoint information group and the like can be generated as the second inspection information.
In addition, in another aspect, a priority may be set for each piece of viewpoint information in the viewpoint information group, used when moving the imaging unit to the viewpoint position and line-of-sight direction corresponding to that viewpoint information.
Thereby, a priority and the like can be set for each piece of viewpoint information included in the viewpoint information group.
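As a minimal illustration of such a structure, the following sketch bundles a viewpoint position, a line-of-sight direction, and a priority; the field names are assumptions made for this example only.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewpointInfo:
    position: np.ndarray   # viewpoint position of the imaging unit (3-vector)
    direction: np.ndarray  # line-of-sight direction (unit 3-vector)
    priority: int = 0      # movement priority within the viewpoint information group

# A viewpoint information group is simply an ordered collection of these.
viewpoint_group = [
    ViewpointInfo(np.array([0.3, 0.0, 0.5]), np.array([0.0, 0.0, -1.0]), priority=1),
    ViewpointInfo(np.array([0.0, 0.3, 0.5]), np.array([0.0, 0.0, -1.0]), priority=2),
]
```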
In addition, in another aspect, the imaging unit may be moved to the viewpoint position and line-of-sight direction corresponding to each piece of viewpoint information in the viewpoint information group, following a movement order set based on the priorities.
In this way, the imaging unit can actually be controlled to perform inspection processing and the like using a plurality of pieces of viewpoint information for which priorities have been set.
In addition, in another aspect, when it is determined from movable range information that the imaging unit cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th piece of viewpoint information (i is a natural number) among the plurality of pieces of viewpoint information, the imaging unit may be moved based not on the i-th viewpoint information but on the j-th viewpoint information (j is a natural number satisfying i ≠ j) that follows the i-th viewpoint information in the movement order.
Thereby, control of the imaging unit that takes the movable range of the robot into account, and the like, can be realized.
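A sketch of this priority-ordered traversal with unreachable viewpoints skipped is shown below; the `is_reachable`, `move_camera`, and `inspect` helpers are hypothetical stand-ins for the movable-range check, the arm motion, and the inspection step.

```python
def run_inspection(viewpoint_group, is_reachable, move_camera, inspect):
    """Visit viewpoints in priority order; skip any viewpoint outside the
    robot's movable range and continue with the next one in the order."""
    ordered = sorted(viewpoint_group, key=lambda v: v.priority)
    for vp in ordered:
        # Movable-range check: if the i-th viewpoint is unreachable,
        # fall through to the next (j-th) viewpoint in the movement order.
        if not is_reachable(vp.position, vp.direction):
            continue
        move_camera(vp.position, vp.direction)
        inspect()
```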
In addition, in another aspect, the first inspection information may include an inspection processing target position defined relative to the inspection object, and an object coordinate system corresponding to the inspection object may be set with the inspection processing target position as a reference, so that the viewpoint information is generated using the object coordinate system.
In this way, viewpoint information and the like in the object coordinate system can be generated.
In addition, in another aspect, the first inspection information may include object position and posture information indicating the position and posture of the inspection object in a global coordinate system; the viewpoint information in the global coordinate system may be obtained from the relative relationship between the global coordinate system and the object coordinate system determined based on the object position and posture information; and whether the imaging unit can be moved to the viewpoint position and line-of-sight direction may be determined based on the movable range information in the global coordinate system and the viewpoint information in the global coordinate system.
In this way, it is possible to generate viewpoint information in the global coordinate system, and to control the movement of the imaging unit based on that viewpoint information and the robot's movable range information.
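A minimal sketch of this frame conversion follows, assuming the relative relationship between the two coordinate systems is expressed as a 4x4 homogeneous transform; the function and variable names are illustrative.

```python
import numpy as np

def viewpoint_to_global(T_global_object, position_obj, direction_obj):
    """Convert viewpoint information expressed in the object coordinate
    system into the global coordinate system, given the homogeneous
    transform derived from the object position and posture information."""
    R = T_global_object[:3, :3]   # rotation part
    t = T_global_object[:3, 3]    # translation part
    position_global = R @ position_obj + t   # viewpoint position in the global frame
    direction_global = R @ direction_obj     # line-of-sight direction (rotation only)
    return position_global, direction_global
```

The reachability determination can then be made against the movable range information, since both are now expressed in the same global coordinate system.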
In addition, in another aspect, the inspection processing may be processing performed on the result of robot work, and the first inspection information may be information acquired during the robot work.
Thereby, the first inspection information and the like can be acquired during the robot work.
In addition, in another aspect, the first inspection information may include at least one of shape information of the inspection object, position and posture information of the inspection object, and an inspection processing target position defined relative to the inspection object.
Thereby, at least one of the shape information, the position and posture information, and the inspection processing target position can be acquired as the first inspection information.
In addition, in another aspect, the first inspection information may include three-dimensional model data of the inspection object.
Thereby, three-dimensional model data can be acquired as the first inspection information.
In addition, in another aspect, the inspection processing may be processing performed on the result of robot work, and the three-dimensional model data may include post-work three-dimensional model data obtained by performing the robot work, and pre-work three-dimensional model data, that is, the three-dimensional model data of the inspection object before the robot work.
Thereby, three-dimensional model data from before and after the work can be acquired as the first inspection information.
In addition, in another aspect, the second inspection information may include a pass image, the pass image being an image obtained by imaging the three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information.
Thereby, a pass image and the like can be acquired as the second inspection information from the three-dimensional model data and the viewpoint information.
In addition, in another aspect, the second inspection information may include a pass image and a pre-work image, the pass image being an image obtained by imaging the post-work three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and the pre-work image being an image obtained by imaging the pre-work three-dimensional model data with the virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information; the inspection region may then be obtained by comparing the pre-work image with the pass image.
In this way, the pass image and the pre-work image can be obtained from the pre-work and post-work three-dimensional model data and the viewpoint information, and the inspection region and the like can be obtained by comparing them.
In addition, in another aspect, in the comparison, a difference image, that is, the difference between the pre-work image and the pass image, may be obtained, and the inspection region may be the region of the difference image that contains the inspection object.
In this way, the inspection region and the like can be obtained using the difference image.
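A minimal OpenCV sketch of deriving an inspection region from the difference of the two rendered images is given below; the binarization threshold and the use of a bounding rectangle are assumptions made for illustration.

```python
import cv2
import numpy as np

def inspection_region(pre_work_img, pass_img, min_diff=30):
    """Compute the difference image between the pre-work image and the
    pass image, and return the bounding rectangle (x, y, w, h) of the
    changed pixels as the inspection region."""
    diff = cv2.absdiff(pre_work_img, pass_img)            # difference image
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, min_diff, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                                       # no difference found
    x, y = xs.min(), ys.min()
    return (x, y, xs.max() - x + 1, ys.max() - y + 1)
```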
In addition, in another aspect, the second inspection information may include a pass image and a pre-work image, the pass image being an image obtained by imaging the post-work three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and the pre-work image being an image obtained by imaging the pre-work three-dimensional model data with the virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information; a threshold used in the inspection processing performed on the basis of the captured image and the pass image may then be set based on the similarity between the pre-work image and the pass image.
Thereby, the threshold and the like used in the inspection processing can be set using the similarity between the pre-work image and the pass image.
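One plausible realization is sketched below; the normalized cross-correlation measure and the margin formula are assumptions, as the disclosure does not fix a particular similarity measure.

```python
import cv2

def set_inspection_threshold(pre_work_img, pass_img, margin=0.5):
    """Set the pass/fail threshold from the similarity between the
    pre-work image and the pass image: the more similar the two images,
    the stricter the threshold must be to distinguish them."""
    # Normalized cross-correlation of two same-size images: 1.0 means identical.
    sim = cv2.matchTemplate(pre_work_img, pass_img, cv2.TM_CCORR_NORMED)[0, 0]
    # Place the threshold between the pre-work similarity and a perfect match.
    return sim + (1.0 - sim) * margin
```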
In addition, in another aspect, the robot may include at least a first arm and a second arm, and the imaging unit may be a hand-eye camera provided on at least one of the first arm and the second arm.
Accordingly, inspection processing and the like can be performed using two or more arms and a hand-eye camera provided on at least one of those arms.
In addition, another aspect relates to a processing device that outputs information used in inspection processing to a device that performs the inspection processing of inspecting an inspection object using a captured image of the inspection object captured by an imaging unit, wherein the processing device generates, based on first inspection information, second inspection information including viewpoint information, which contains the viewpoint position and line-of-sight direction of the imaging unit for the inspection processing, and the inspection region of the inspection processing, and outputs the second inspection information to the device that performs the inspection processing.
In addition, in another aspect, the second inspection information including the inspection region is generated based on the first inspection information. In general, which region of an image used for inspection (appearance inspection in a narrow sense) is used for processing depends on information such as the shape of the inspection object and the content of the work performed on it; the inspection region must therefore be set again every time the inspection object or the work content changes, which places a heavy burden on the user. In this respect, by generating the second inspection information from the first inspection information, the inspection region can be determined easily and another device can be made to perform the inspection processing and the like.
In addition, another aspect relates to an inspection method for performing inspection processing of inspecting an inspection object using a captured image of the inspection object captured by an imaging unit, the inspection method including a step of generating, based on first inspection information, second inspection information including viewpoint information, which contains the viewpoint position and line-of-sight direction of the imaging unit for the inspection processing, and the inspection region of the inspection processing.
In addition, in another aspect, the second inspection information including the inspection region is generated based on the first inspection information. In general, which region of an image used for inspection (appearance inspection in a narrow sense) is used for processing depends on information such as the shape of the inspection object and the content of the work performed on it; the inspection region must therefore be set again every time the inspection object or the work content changes, which places a heavy burden on the user. In this respect, by generating the second inspection information from the first inspection information, the inspection region and the like can be determined easily.
In this manner, according to several aspects, it is possible to provide a robot, a processing device, an inspection method, and the like that, by generating from the first inspection information the second inspection information required for the inspection, can reduce the burden on the user and perform the inspection easily.
Description of the Drawings
FIG. 1 is an explanatory diagram of assembly work performed by visual servoing.
FIGS. 2A and 2B are explanatory diagrams of positional displacement of the object to be assembled.
FIG. 3 is a system configuration example of the present embodiment.
FIG. 4 is an explanatory diagram of assembly work performed by visual servoing based on feature quantities of the object to be assembled.
FIG. 5 is an example of a captured image used for visual servoing based on feature quantities of the object to be assembled.
FIG. 6 is an explanatory diagram of the assembled state.
FIG. 7 is a flowchart of visual servoing based on feature quantities of the object to be assembled.
FIG. 8 is another flowchart of visual servoing based on feature quantities of the object to be assembled.
FIG. 9 is an explanatory diagram of processing for moving the assembly object to a position directly above the object to be assembled.
FIG. 10 is an explanatory diagram of assembly work performed by two types of visual servoing.
FIG. 11 is a flowchart of processing when two types of visual servoing are performed in succession.
FIGS. 12(A) to 12(D) are explanatory diagrams of a reference image and captured images.
FIGS. 13A and 13B are explanatory diagrams of assembly work on three workpieces.
FIGS. 14(A) to 14(C) are explanatory diagrams of captured images used when performing assembly work on three workpieces.
FIG. 15 is a flowchart of processing when assembly work on three workpieces is performed.
FIGS. 16(A) to 16(C) are explanatory diagrams of captured images used when three workpieces are assembled simultaneously.
FIG. 17 is a flowchart of processing when three workpieces are assembled simultaneously.
FIGS. 18(A) to 18(C) are explanatory diagrams of captured images used when three workpieces are assembled in a different order.
FIGS. 19A and 19B are configuration examples of the robot.
FIG. 20 is a configuration example of a robot control system that controls robots via a network.
FIG. 21 is a diagram showing an example of the configuration of a robot system 1 according to a second embodiment.
FIG. 22 is a block diagram showing an example of the functional configuration of the robot system 1.
FIG. 23 is a data flow diagram of the robot system 1.
FIG. 24 is a diagram showing the hardware configuration of a control unit 20.
FIG. 25A is a diagram explaining the trajectory of the end point when an arm 11 is controlled by position control and visual servoing, and FIG. 25B is an example of a target image.
FIG. 26 is a diagram explaining a component α.
FIG. 27 is a flowchart showing the processing flow of a robot system 2 according to a third embodiment of the invention.
FIG. 28 is a diagram explaining the position of an object, the position of a switching point, and the trajectory of the end point.
FIG. 29 is a diagram showing an example of the configuration of a robot system 3 according to a fourth embodiment of the invention.
FIG. 30 is a block diagram showing an example of the functional configuration of the robot system 3.
FIG. 31 is a flowchart showing the processing flow of the robot system 3.
FIG. 32 is a diagram showing an assembly operation in which the robot system 3 inserts a workpiece into a hole H.
FIG. 33 is a flowchart showing the processing flow of a robot system 4 according to a fifth embodiment of the invention.
FIG. 34 is a diagram showing an assembly operation in which the robot system 4 inserts a workpiece into the hole H.
FIG. 35 is a configuration example of the robot control device of the present embodiment.
FIG. 36 is a detailed configuration example of the robot control device of the present embodiment.
FIG. 37 is an arrangement example of the imaging unit that acquires image information.
FIG. 38 is a configuration example of the robot of the present embodiment.
FIG. 39 is another example of the structure of the robot of the present embodiment.
FIG. 40 is a configuration example of a general visual servoing control system.
FIG. 41 is a diagram explaining the relationship between the image feature change amount, the change amount of position and posture information, the change amount of joint angle information, and the Jacobian matrices.
FIG. 42 is a diagram explaining visual servoing control.
FIGS. 43A and 43B are explanatory diagrams of the abnormality detection method of the present embodiment.
FIG. 44 is an explanatory diagram of a method of setting a threshold according to the difference between image acquisition times.
FIG. 45 is a diagram showing the relationship between the image acquisition times, the acquisition times of joint angle information, and the image feature acquisition times.
FIG. 46 is another diagram showing the relationship between the image acquisition times, the acquisition times of joint angle information, and the image feature acquisition times.
FIG. 47 is a diagram explaining, with mathematical formulas, the interrelationship of the image feature change amount, the change amount of position and posture information, and the change amount of joint angle information.
FIG. 48 is a flowchart explaining the processing of the present embodiment.
FIG. 49 is another detailed configuration example of the robot control device of the present embodiment.
FIG. 50 is a configuration example of the robot of the present embodiment.
FIGS. 51A and 51B are configuration examples of the processing device of the present embodiment.
FIG. 52 is a configuration example of the robot of the present embodiment.
FIG. 53 is another configuration example of the robot of the present embodiment.
FIG. 54 is a configuration example of an inspection device that uses the second inspection information.
FIG. 55 shows examples of the first inspection information and the second inspection information.
FIG. 56 is a flowchart explaining the flow of offline processing.
FIGS. 57A and 57B are examples of shape information (three-dimensional model data).
FIG. 58 shows an example of viewpoint candidate information used to generate viewpoint information.
FIG. 59 shows an example of coordinate values of viewpoint candidate information in the object coordinate system.
FIG. 60 is a setting example of the object coordinate system based on the inspection processing target position.
FIGS. 61A to 61G are examples of pre-work images and pass images corresponding to each piece of viewpoint information.
FIGS. 62A to 62D are explanatory diagrams of a method of setting the inspection region.
FIGS. 63A to 63D are explanatory diagrams of a method of setting the inspection region.
FIGS. 64A to 64D are explanatory diagrams of a method of setting the inspection region.
FIGS. 65A to 65D are explanatory diagrams of similarity calculation processing before and after work.
FIGS. 66A to 66D are explanatory diagrams of similarity calculation processing before and after work.
FIGS. 67A to 67E are explanatory diagrams of priorities of viewpoint information.
FIG. 68 is a flowchart explaining the flow of online processing.
FIGS. 69A and 69B are a comparative example of viewpoint information in the object coordinate system and viewpoint information in the robot coordinate system.
FIGS. 70A and 70B are explanatory diagrams of an image rotation angle.
Detailed Description
The present embodiment will be described below. Note that the embodiment described below does not unduly limit the content of the invention recited in the claims, and not all of the configurations described in this embodiment are necessarily essential constituent elements of the invention.
1. Method of the Present Embodiment
First Embodiment
As shown in FIG. 1, a case will be described here of an assembly operation in which an assembly object WK1 grasped by a hand HD of a robot is assembled to an object to be assembled WK2. The hand HD of the robot is provided at the tip of an arm AM of the robot.
First, as a comparative example to the present embodiment, consider performing the assembly work shown in FIG. 1 by visual servoing using the reference image described above. In that case, the robot is controlled based on a captured image taken by a camera (imaging unit) CM and a reference image prepared in advance. Specifically, the assembly object WK1 is moved, as shown by the arrow YJ, to the position of the assembly object WK1R appearing in the reference image, and is thereby assembled to the object to be assembled WK2.
Here, FIG. 2A shows the reference image RIM used at this time, and FIG. 2B shows the position in real space (three-dimensional space) of the object to be assembled WK2 appearing in the reference image RIM. The reference image RIM of FIG. 2A shows the object to be assembled WK2 and the assembly object WK1R (corresponding to WK1R in FIG. 1) in the assembled state (or in the state immediately before assembly). In visual servoing using this reference image RIM, the assembly object WK1 is moved so that the position and posture of the assembly object WK1 appearing in the captured image match the position and posture of the assembly object WK1R in the assembled state appearing in the reference image RIM.
However, as described above, when the assembly work is actually performed, the position and posture of the object to be assembled WK2 may change. For example, as shown in FIG. 2B, the center-of-gravity position of the object to be assembled WK2 appearing in the reference image RIM of FIG. 2A is GC1 in real space, whereas the actual object to be assembled WK2 is displaced and its center-of-gravity position is GC2. In this case, even if the actual assembly object WK1 is moved so as to match the position and posture of the assembly object WK1R appearing in the reference image RIM, the assembled state with the actual object to be assembled WK2 cannot be reached, and the assembly work therefore cannot be performed correctly. This is because, when the position and posture of the object to be assembled WK2 change, the position and posture that the assembly object WK1 must take to be in the assembled state with the object to be assembled WK2 also change.
Therefore, the robot control system 100 and the like of the present embodiment can perform the assembly work correctly even when the position and posture of the object to be assembled change.
Specifically, FIG. 3 shows a configuration example of the robot control system 100 of the present embodiment. The robot control system 100 of the present embodiment includes a captured image acquisition unit 110 that acquires a captured image from an imaging unit 200, and a control unit 120 that controls a robot 300 based on the captured image. The robot 300 has an end effector (hand) 310 and an arm 320. The configurations of the imaging unit 200 and the robot 300 will be described in detail later.
First, the captured image acquisition unit 110 acquires a captured image showing at least the object to be assembled, of the assembly object and the object to be assembled involved in the assembly work.
The control unit 120 then performs feature detection processing of the object to be assembled based on the captured image, and moves the assembly object based on the feature quantities of the object to be assembled. The processing for moving the assembly object also includes processing for outputting control information (control signals) for the robot 300, and so on. The functions of the control unit 120 can be realized by hardware such as various processors (a CPU or the like) or an ASIC (a gate array or the like), by a program, or the like.
Thus, whereas the visual servoing using the reference image (the comparative example) moves the assembly object based on the feature quantities of the assembly object in the reference image, the present embodiment moves the assembly object based on the feature quantities of the object to be assembled appearing in the captured image. For example, as shown in FIG. 4, the feature quantities of the workpiece WK2, which is the object to be assembled, are detected in the captured image taken by the camera CM, and the workpiece WK1, which is the assembly object, is moved as shown by the arrow YJ based on the detected feature quantities of the workpiece WK2.
Here, the captured image taken by the camera CM shows the object to be assembled WK2 at the current time (the time of imaging). The assembly object WK1 can therefore be moved to the position of the object to be assembled WK2 at the current time. This prevents the assembly object WK1 from being moved to a position where the assembled state cannot currently be achieved, as in the failure case of visual servoing using the reference image (the problem of the comparative example described with FIG. 1). In addition, since a new target position for visual servoing is set from the captured image each time the assembly work is performed, a correct target position can be set even when the position and posture of the object to be assembled WK2 change.
As described above, the assembly work can be performed correctly even when the position and posture of the object to be assembled change. Furthermore, in the present embodiment there is no need to prepare a reference image in advance, so the preparation cost of visual servoing can be reduced.
In this way, the control unit 120 controls the robot by performing visual servoing based on the captured image.
This makes it possible to perform feedback control and the like on the robot in accordance with the current work situation.
Note that the robot control system 100 is not limited to the configuration of FIG. 3, and various modifications are possible, such as omitting some of the above components or adding other components. Also, as shown in FIG. 19B described later, the robot control system 100 of the present embodiment may be included in the robot 300 and configured integrally with it. Furthermore, as shown in FIG. 20 described later, the functions of the robot control system 100 may be realized by a server 500 and a terminal device 330 provided in each robot 300.
Also, for example, when the robot control system 100 and the imaging unit 200 are connected via a network including at least one of a wired and a wireless connection, the captured image acquisition unit 110 may be a communication unit (interface unit) that communicates with the imaging unit 200. Furthermore, when the robot control system 100 includes the imaging unit 200, the captured image acquisition unit 110 may be the imaging unit 200 itself.
Here, a captured image refers to an image obtained by imaging with the imaging unit 200. The captured image may also be an image stored in an external storage unit or an image acquired via a network. The captured image is, for example, the image PIM11 shown in FIG. 5 described later.
An assembly operation refers to an operation of assembling a plurality of work objects, specifically, an operation of assembling an assembly object to an object to be assembled. The assembly operation is, for example, an operation of placing the workpiece WK1 on (or next to) the workpiece WK2, an operation of inserting (fitting) the workpiece WK1 into the workpiece WK2 (an insertion or fitting operation), or an operation of bonding, connecting, attaching, or fusing the workpiece WK1 and the workpiece WK2 (a bonding, connecting, attaching, or fusing operation).
The assembly object refers to the object that is assembled to the object to be assembled in the assembly work; in the example of FIG. 4, it is the workpiece WK1.
The object to be assembled, on the other hand, refers to the object to which the assembly object is assembled in the assembly work; in the example of FIG. 4, it is the workpiece WK2.
2. Details of Processing
Next, the processing of the present embodiment will be described in detail.
2.1. Visual Servoing Based on Feature Quantities of the Object to Be Assembled
The captured image acquisition unit 110 of the present embodiment acquires one or more captured images showing the assembly object and the object to be assembled. The control unit 120 then performs feature detection processing of the assembly object and the object to be assembled based on the acquired captured image or images. Further, based on the feature quantities of the assembly object and the feature quantities of the object to be assembled, the control unit 120 moves the assembly object so that the relative position and posture relationship between the assembly object and the object to be assembled becomes the target relative position and posture relationship.
Here, the target relative position and posture relationship refers to the relative position and posture relationship between the assembly object and the object to be assembled that is targeted when the assembly work is performed by visual servoing. For example, in the example of FIG. 4, the relative position and posture relationship when the workpiece WK1 is in contact with (adjacent to) the triangular hole HL of the workpiece WK2 is the target relative position and posture relationship.
This makes it possible to perform the assembly work and the like based on the feature quantities of the assembly object and of the object to be assembled detected from the captured images. The one or more captured images acquired by the captured image acquisition unit 110 will be described in detail later.
In many assembly operations, the portion of the assembly object that is assembled to the object to be assembled (the assembly portion) and the portion of the object to be assembled to which the assembly object is assembled (the portion to be assembled) are usually specified. For example, in the example of FIG. 4, the assembly portion of the assembly object is the bottom surface BA of the workpiece WK1, and the portion to be assembled of the object to be assembled is the triangular hole HL of the workpiece WK2. In the assembly work of FIG. 4, the assembly portion BA is fitted into the hole HL of the portion to be assembled; it would be meaningless, for example, to assemble the side surface SA of the workpiece WK1 to the hole HL. It is therefore preferable to set the assembly portion of the assembly object and the portion to be assembled of the object to be assembled in advance.
Therefore, the control unit 120 moves the assembly object so that the relative position and posture relationship between the assembly object and the object to be assembled becomes the target relative position and posture relationship, based on the feature quantity set as the target feature quantity among the feature quantities of the object to be assembled and the feature quantity set as the feature quantity of interest among the feature quantities of the assembly object.
Here, a feature quantity is, for example, a feature point of an image, or the contour of a detection target (the assembly object, the object to be assembled, or the like) appearing in the image. Feature detection processing refers to processing that detects feature quantities in an image, for example, feature point detection processing or contour detection processing.
In the following, the case where feature points are detected as the feature quantities will be described. A feature point is a point that can be observed distinctively in an image. For example, in the captured image PIM11 shown in FIG. 5, the feature points P1 to P10 are detected as feature points of the workpiece WK2, the object to be assembled, and the feature points Q1 to Q5 are detected as feature points of the workpiece WK1, the assembly object. In the example of FIG. 5, for convenience of illustration and explanation, only the feature points P1 to P10 and Q1 to Q5 are shown as detected, but more feature points than these are detected in an actual captured image. Even when more feature points are detected, the processing described below is unchanged.
In the present embodiment, a corner detection method or the like is used as the feature point detection method (feature point detection processing), but other general corner detection methods (eigenvalue-based methods, FAST feature detection) may be used, and local feature descriptors typified by SIFT (Scale Invariant Feature Transform), as well as SURF (Speeded Up Robust Features) and the like, may also be used.
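For illustration, a minimal OpenCV sketch of corner-based feature point detection of the kind described is shown below; the parameter values are arbitrary assumptions, and detectors such as SIFT could be substituted as noted above.

```python
import cv2

def detect_feature_points(image_path, max_corners=50):
    """Detect corner feature points in a captured image using
    Shi-Tomasi corner detection."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    corners = cv2.goodFeaturesToTrack(
        img, maxCorners=max_corners, qualityLevel=0.01, minDistance=10)
    # Returns an (N, 2) array of (x, y) feature point coordinates.
    return [] if corners is None else corners.reshape(-1, 2)
```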
In the present embodiment, visual servoing is then performed based on the feature quantity set as the target feature quantity among the feature quantities of the object to be assembled, and the feature quantity set as the feature quantity of interest among the feature quantities of the assembly object.
Specifically, in the example of FIG. 5, among the feature points P1 to P10 of the workpiece WK2, the target feature points P9 and P10 are set as the target feature quantities. On the other hand, among the feature points Q1 to Q5 of the workpiece WK1, the feature points of interest Q4 and Q5 are set as the feature quantities of interest.
The control unit 120 then moves the assembly object so that the feature points of interest of the assembly object coincide with or approach the target feature points of the object to be assembled.
That is, the assembly object WK1 is moved as shown by the arrow YJ so that the feature point of interest Q4 approaches the target feature point P9, and the feature point of interest Q5 approaches the target feature point P10.
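A standard image-based visual servoing law matches this description; the sketch below is not taken from this disclosure, and the image Jacobian (interaction matrix) and gain are assumptions.

```python
import numpy as np

def visual_servo_step(interest_pts, target_pts, image_jacobian, gain=0.5):
    """One image-based visual servoing step: compute a velocity command
    that drives the feature points of interest toward the target feature points.

    interest_pts, target_pts : (N, 2) arrays of image coordinates
    image_jacobian           : (2N, 6) interaction matrix
    """
    error = (interest_pts - target_pts).reshape(-1)  # stacked pixel error
    # Classic IBVS law: v = -gain * pinv(J) @ e
    velocity = -gain * np.linalg.pinv(image_jacobian) @ error
    return velocity  # 6-vector of translational and angular velocity
```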
Here, the target feature quantity refers to, among the feature quantities representing the object to be assembled, the feature quantity that serves as the target when the assembly object is moved by visual servoing. In other words, the target feature quantity is the feature quantity of the portion to be assembled of the object to be assembled. A target feature point is a feature point set as a target feature quantity when feature point detection processing is performed. As described above, in the example of FIG. 5, the feature points P9 and P10 corresponding to the triangular hole HL of the workpiece WK2 are set as the target feature points.
On the other hand, the feature quantity of interest refers to, among the feature quantities representing the assembly object or the object to be assembled, a feature quantity representing a point in real space (in the example of FIG. 5, the bottom surface of the workpiece WK1) that is moved toward the point in real space corresponding to the target feature quantity (in the example of FIG. 5, the triangular hole HL of the workpiece WK2). In other words, the feature quantity of interest is the feature quantity of the assembly portion of the assembly object. A feature point of interest is a feature point set as a feature quantity of interest when feature point detection processing is performed. As described above, in the example of FIG. 5, the feature points Q4 and Q5 corresponding to the bottom surface of the workpiece WK1 are set as the feature points of interest.
The target feature quantities (target feature points) and the feature quantities of interest (feature points of interest) may be set in advance by an instructor (user), or may be feature quantities (feature points) set according to a given algorithm. For example, the target feature points may be set based on the distribution of the detected feature points and their relative positional relationships. Specifically, in the example of FIG. 5, the feature points P9 and P10 located near the center of the distribution of the feature points P1 to P10 representing the workpiece WK2 in the captured image PIM11 may be set as the target feature points. Alternatively, points corresponding to the target feature points may be set in advance in CAD (Computer Aided Design) data representing the object to be assembled, CAD matching between the CAD data and the captured image may be performed, and the feature points set as the target feature points may then be identified (detected) from among the feature points of the object to be assembled based on the result of the CAD matching. The same applies to the feature quantities of interest (feature points of interest).
In the present embodiment, the control unit 120 moves the assembly object so that the feature points of interest of the assembly object coincide with or approach the target feature points of the object to be assembled; however, since the object to be assembled and the assembly object are tangible objects, the target feature point and the feature point of interest cannot actually be detected at the same point. That is, moving the assembly object so that the points coincide ultimately means moving the point at which the feature point of interest is detected toward the point at which the target feature point is detected.
This makes it possible to move the assembly object and the like so that the relative position and posture relationship between the set assembly portion of the assembly object and the set portion to be assembled of the object to be assembled becomes the target relative position and posture relationship.
The assembly portion of the assembly object can then be assembled to the portion to be assembled of the object to be assembled, and so on. For example, as shown in FIG. 6, the bottom surface BA, which is the assembly portion of the workpiece WK1, can be fitted into the hole HL, which is the portion to be assembled of the workpiece WK2.
Next, the processing flow of the present embodiment will be described using the flowchart of FIG. 7.
First, the captured image acquisition unit 110 acquires, for example, the captured image PIM11 shown in FIG. 5 (S101). Both the assembly object WK1 and the object to be assembled WK2 appear in this captured image PIM11.
Next, the control unit 120 performs feature detection processing based on the acquired captured image PIM11, thereby detecting the target feature quantity FB of the object to be assembled WK2 and the feature quantity of interest FA of the assembly object WK1 (S102, S103).
Then, as described above, the control unit 120 moves the assembly object WK1 based on the detected feature quantity of interest FA and target feature quantity FB (S104), and determines whether the relative position and posture relationship between the assembly object WK1 and the object to be assembled WK2 has become the target relative position and posture relationship (S105).
Finally, when it is determined that the relative position and posture relationship between the assembly object WK1 and the object to be assembled WK2 has become the target relative position and posture relationship as shown in FIG. 6, the processing ends; when it is determined that it has not, the processing returns to step S101 and is repeated. The above is the processing flow of the present embodiment.
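A compact sketch of this S101 to S105 loop follows; the helper functions are hypothetical stand-ins for the acquisition, detection, motion, and determination steps described above.

```python
def assembly_servo_loop(acquire_image, detect_features, move_toward, reached_target):
    """Repeat S101-S105: acquire an image, detect the target feature FB and
    the feature of interest FA, move WK1, and stop once the target relative
    position and posture relationship is reached."""
    while True:
        image = acquire_image()              # S101: captured image
        fa, fb = detect_features(image)      # S102, S103: FA and FB
        move_toward(fa, fb)                  # S104: move the assembly object
        if reached_target(fa, fb):           # S105: target relationship reached?
            break
```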
The captured image acquisition unit 110 may also acquire a plurality of captured images. In this case, the captured image acquisition unit 110 may acquire a plurality of captured images each showing both the assembly object and the object to be assembled, or may acquire a captured image showing only the assembly object and a captured image showing only the object to be assembled.
The flowchart of FIG. 8 shows the processing flow for the latter case, in which a plurality of captured images showing the assembly object and the object to be assembled respectively are acquired.
First, the captured image acquisition unit 110 acquires a captured image PIM11 showing at least the object to be assembled WK2 (S201). The assembly object WK1 may also appear in this captured image PIM11. The control unit 120 then detects the target feature quantity FB of the object to be assembled WK2 from the captured image PIM11 (S202).
Next, the captured image acquisition unit 110 acquires a captured image PIM12 showing at least the assembly object WK1 (S203). As in step S201, the object to be assembled WK2 may also appear in this captured image PIM12. The control unit 120 then detects the feature quantity of interest FA of the assembly object WK1 from the captured image PIM12 (S204).
Thereafter, in the same manner as in the processing flow described with FIG. 7, the control unit 120 moves the assembly object WK1 based on the detected feature quantity of interest FA and target feature quantity FB (S205), and determines whether the relative position and posture relationship between the assembly object WK1 and the object to be assembled WK2 has become the target relative position and posture relationship (S206).
Finally, when it is determined that the relative position and posture relationship between the assembly object WK1 and the object to be assembled WK2 has become the target relative position and posture relationship as shown in FIG. 6, the processing ends; when it is determined that it has not, the processing returns to step S203 and is repeated. The above is the processing flow for the case of acquiring a plurality of captured images showing the assembly object and the object to be assembled respectively.
In the above example, the assembly object is actually assembled to the object to be assembled by visual servoing, but the invention is not limited to this; a state immediately before assembling the assembly object to the object to be assembled may instead be created by visual servoing.
That is, the control unit 120 may specify, based on the feature quantities (feature points) of the object to be assembled, image regions in a given positional relationship with the object to be assembled, and move the assembly object so that the feature points of interest of the assembly object coincide with or approach the specified image regions. In other words, the control unit 120 may specify, based on the feature quantities of the object to be assembled, points in real space that are in a given positional relationship with the object to be assembled, and move the assembly object toward the specified points.
For example, in the captured image PIM shown in FIG. 9, the feature points P8 to P10 are detected as feature points representing the triangular hole HL, which is the portion to be assembled of the object to be assembled WK2. In this case, the image regions R1 to R3 in a given positional relationship with the feature points P8 to P10 are specified in the captured image PIM. The assembly object WK1 is then moved so that its feature point of interest Q4 coincides with (approaches) the image region R2, and its feature point of interest Q5 coincides with (approaches) the image region R3.
This makes it possible to create, for example, the state immediately before the assembly work.
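As an illustration, such image regions could be derived from the detected feature points by a fixed offset, as in the sketch below; the offset and region size are invented for this example and do not come from the disclosure.

```python
def regions_from_feature_points(feature_pts, offset=(0, -40), half_size=10):
    """Specify image regions in a given positional relationship with the
    detected feature points (here, a fixed pixel offset above each point).
    Returns one (x, y, w, h) rectangle per feature point."""
    regions = []
    for (x, y) in feature_pts:
        cx, cy = x + offset[0], y + offset[1]  # region center
        regions.append((cx - half_size, cy - half_size, 2 * half_size, 2 * half_size))
    return regions
```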
It is not always necessary to perform the processing of detecting the feature quantities of the assembly object as in the above example. For example, the feature quantities of the object to be assembled may be detected, the position and posture of the object to be assembled relative to the robot may be estimated from the detected feature quantities, and the robot may then be controlled so that the hand grasping the assembly object approaches the estimated position of the object to be assembled.
2.2. Assembly Work by Two Types of Visual Servoing
Next, processing will be described for the case where two types of visual servoing are performed in succession: visual servoing using a reference image (first visual servoing), and visual servoing that moves the assembly object using the feature quantities of the object to be assembled (second visual servoing).
For example, in FIG. 10, the movement of the assembly object WK1 from the position GC1 to the position GC2 (the movement shown by the arrow YJ1) is performed by the first visual servoing using the reference image, and the movement of the assembly object WK1 from the position GC2 to the position GC3 (the movement shown by the arrow YJ2) is performed by the second visual servoing using the feature quantities of the object to be assembled WK2. The positions GC1 to GC3 are center-of-gravity positions of the assembly object WK1.
When performing such processing, the robot control system 100 of the present embodiment further includes a reference image storage unit 130, as shown in FIG. 3. The reference image storage unit 130 stores a reference image showing the assembly object in the target position and posture. The reference image is, for example, the image RIM shown in FIG. 12(A) described later. The functions of the reference image storage unit 130 can be realized by a memory such as a RAM (Random Access Memory), an HDD (Hard Disk Drive), or the like.
Then, as the first visual servoing shown by the arrow YJ1 in FIG. 10 described above, the control unit 120 moves the assembly object to the target position and posture based on a first captured image showing at least the assembly object and on the reference image.
After the first visual servoing, the control unit 120 performs the second visual servoing shown by the arrow YJ2 in FIG. 10. That is, after moving the assembly object, the control unit 120 performs feature detection processing of the object to be assembled based on a second captured image showing at least the object to be assembled, and moves the assembly object based on the feature quantities of the object to be assembled.
Here, a more specific processing flow will be described using the flowchart of FIG. 11 and FIGS. 12(A) to 12(D).
First, in preparation for the first visual servoing, the hand HD of the robot grasps the assembly object WK1, the assembly object WK1 is moved to the target position and posture (S301), and the assembly object WK1 in the target position and posture is imaged by the imaging unit 200 (the camera CM in FIG. 10) to acquire a reference image (target image) RIM as shown in FIG. 12(A) (S302). Then, a feature quantity F0 of the assembly object WK1 is detected from the acquired reference image RIM (S303).
Here, the target position and posture refer to the position and posture of the assembly object WK1 that is targeted in the first visual servoing. For example, in FIG. 10, the position GC2 is the target position and posture, and the reference image RIM of FIG. 12(A) shows the assembly object WK1 located at this target position and posture GC2. The target position and posture are set by the instructor (user) when the reference image is generated.
The reference image is an image, such as the reference image RIM of FIG. 12(A), that shows the assembly object WK1, the moving target of the first visual servoing, in the target position and posture described above. Although the object to be assembled WK2 also appears in the reference image RIM of FIG. 12(A), the object to be assembled WK2 does not necessarily have to appear. The reference image may also be an image stored in an external storage unit, an image acquired via a network, an image generated from CAD model data, or the like.
Next, the first visual servoing is performed. First, the captured image acquisition unit 110 acquires the first captured image PIM101 shown in FIG. 12(B) (S304).
Here, the first captured image in this example, such as the captured image PIM101 of FIG. 12(B), is a captured image in which at least the assembly object WK1, of the assembly object WK1 and the object to be assembled WK2 involved in the assembly work, appears.
Then, the control unit 120 detects the feature quantity F1 of the assembly object WK1 from the first captured image PIM101 (S305), and moves the assembly object WK1 as indicated by arrow YJ1 in FIG. 10 based on the feature quantities F0 and F1 (S306).
Then, the control unit 120 determines whether the assembly object WK1 is in the target position and posture GC2 (S307). When it is determined that the assembly object WK1 is in the target position and posture GC2, processing moves to the second visual servoing. On the other hand, when it is determined that the assembly object WK1 is not in the target position and posture GC2, processing returns to step S304 and the first visual servoing is repeated.
In this way, in the first visual servoing, the robot is controlled while the feature quantities of the assembly object WK1 in the reference image RIM and in the first captured image PIM101 are compared with each other.
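The loop of steps S304 to S307 can be summarized in a short sketch. The following is a minimal illustration under stated assumptions: camera.capture(), detect_features(), and robot.move_endpoint() are hypothetical interfaces standing in for the imaging unit, the feature quantity detection processing, and the robot command path; none of these identifiers appear in the embodiment above, and the proportional step is only one possible control law.

```python
import numpy as np

def first_visual_servoing(camera, robot, f0, gain=0.1, tol=2.0):
    """Loop S304-S307: compare the feature quantity F1 of WK1 in each new
    captured image against the feature quantity F0 from the reference image
    RIM, and command a move until they agree. `f0` is an (N, 2) array of
    reference feature points; all interfaces here are assumptions."""
    while True:
        image = camera.capture()            # S304: acquire first captured image
        f1 = detect_features(image, obj="WK1")  # S305: detect feature quantity F1
        error = f0 - f1                     # image-space residual between F0 and F1
        if np.linalg.norm(error) < tol:     # S307: WK1 reached target pose GC2
            break                           # proceed to the second visual servoing
        # S306: move along arrow YJ1; a proportional step on the mean residual
        robot.move_endpoint(gain * error.mean(axis=0))
```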
Next, the second visual servoing is performed. In the second visual servoing, first, the captured image acquisition unit 110 acquires the second captured image PIM21 shown in FIG. 12(C) (S308). Here, the second captured image is the captured image used for the second visual servoing. In the second captured image PIM21 of this example, both the assembly object WK1 and the object to be assembled WK2 appear.
Then, the control unit 120 detects the target feature quantity FB of the object to be assembled WK2 from the second captured image PIM21 (S309). In this example, as shown in FIG. 12(C), the target feature points GP1 and GP2 are detected as the target feature quantity FB.
Similarly, the control unit 120 detects the feature quantity of interest FA of the assembly object WK1 from the second captured image PIM21 (S310). In this example, as shown in FIG. 12(C), the feature points of interest IP1 and IP2 are detected as the feature quantity of interest FA.
Next, the control unit 120 moves the assembly object WK1 based on the feature quantity of interest FA and the target feature quantity FB (S311). That is, as in the example described above with reference to FIG. 5, the assembly object WK1 is moved so that the feature point of interest IP1 approaches the target feature point GP1 and the feature point of interest IP2 approaches the target feature point GP2.
Then, the control unit 120 determines whether the assembly object WK1 and the object to be assembled WK2 are in the target relative position and posture relationship (S312). For example, in the captured image PIME shown in FIG. 12(D), the feature point of interest IP1 is adjacent to the target feature point GP1 and the feature point of interest IP2 is adjacent to the target feature point GP2, so it is determined that the assembly object WK1 and the object to be assembled WK2 are in the target relative position and posture relationship, and processing ends.
On the other hand, when it is determined that the assembly object WK1 and the object to be assembled WK2 are not in the target relative position and posture relationship, processing returns to step S308 and the second visual servoing is repeated.
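Under the same assumed interfaces as the previous sketch, the second servoing loop (S308 to S312) differs in that both point sets are detected from the current captured image: the feature points of interest of WK1 are driven toward the target feature points of WK2. Again, a hedged sketch rather than the embodiment's actual implementation:

```python
import numpy as np

def second_visual_servoing(camera, robot, gain=0.1, tol=2.0):
    """Loop S308-S312: move WK1 so that IP1 approaches GP1 and IP2 approaches
    GP2. detect_features(image, obj) -> (N, 2) point array is an assumed helper."""
    while True:
        image = camera.capture()                 # S308: acquire second captured image
        gp = detect_features(image, obj="WK2")   # S309: target feature quantity FB
        ip = detect_features(image, obj="WK1")   # S310: feature quantity of interest FA
        error = gp - ip                          # per-point residual IPk -> GPk
        if np.linalg.norm(error) < tol:          # S312: target relative pose reached
            break
        robot.move_endpoint(gain * error.mean(axis=0))  # S311: move along arrow YJ2
```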
In this way, every time the same assembly work is repeated, the same reference image can be used to move the assembly object to the vicinity of the object to be assembled, after which the assembly work can be performed in accordance with the actual, detailed position and posture of the object to be assembled. That is, even when the position and posture of the object to be assembled at the time the reference image was generated deviate from (differ from) its position and posture during the actual assembly work, the second visual servoing compensates for the positional deviation, so in the first visual servoing the same reference image can be used every time without preparing a different one. As a result, the cost of preparing reference images can be reduced.
In step S310 above, the feature quantity of interest FA of the assembly object WK1 is detected from the second captured image PIM21, but it does not necessarily have to be detected from PIM21. For example, when the assembly object WK1 does not appear in the second captured image PIM21, the feature quantity of the assembly object WK1 may be detected from another second captured image PIM22 in which the assembly object appears.
2.3. Assembly Work of Three Workpieces
Next, processing for the case where assembly work of three workpieces WK1 to WK3 is performed, as shown in FIGS. 13A and 13B, will be described.
In this assembly work, as shown in FIG. 13A, the assembly object WK1 (workpiece WK1, for example a driver (screwdriver)) gripped by the first hand HD1 of the robot is assembled to the first object to be assembled WK2 (workpiece WK2, for example a screw) gripped by the second hand HD2 of the robot, and the workpiece WK2, now assembled with the workpiece WK1, is assembled to the second object to be assembled WK3 (workpiece WK3, for example a screw hole) on the workbench. After the assembly work, the assembled state shown in FIG. 13B is reached.
Specifically, when performing such processing, as shown in FIG. 14(A), the control unit 120 performs feature quantity detection processing for the first object to be assembled WK2 based on a first captured image PIM31 in which at least the first object to be assembled WK2 involved in the assembly work appears. The first captured image in this example is the captured image used when performing the assembly work of the assembly object WK1 and the first object to be assembled WK2.
Then, the control unit 120 moves the assembly object WK1 as indicated by arrow YJ1 in FIG. 13A based on the feature quantity of the first object to be assembled WK2.
Next, as shown in FIG. 14(B), after moving the assembly object WK1, the control unit 120 performs feature quantity detection processing for the second object to be assembled WK3 based on a second captured image PIM41 in which at least the second object to be assembled WK3 appears. The second captured image in this example is the captured image used when performing the assembly work of the first object to be assembled WK2 and the second object to be assembled WK3.
Then, the control unit 120 moves the assembly object WK1 and the first object to be assembled WK2 as indicated by arrow YJ2 in FIG. 13A based on the feature quantity of the second object to be assembled WK3.
In this way, even when the positions of the first object to be assembled WK2 and the second object to be assembled WK3 deviate each time the assembly work is performed, the assembly work of the assembly object, the first object to be assembled, and the second object to be assembled can still be carried out.
Next, the processing flow in the assembly work of the three workpieces shown in FIGS. 13A and 13B will be described in detail using the flowchart of FIG. 15.
First, the captured image acquisition unit 110 acquires one or more first captured images in which at least the assembly object WK1 and the first object to be assembled WK2 involved in the assembly work appear. Then, the control unit 120 performs feature quantity detection processing for the assembly object WK1 and the first object to be assembled WK2 based on the first captured image(s).
In this example, first, the captured image acquisition unit 110 acquires the first captured image PIM31 in which the first object to be assembled WK2 appears (S401). Then, the control unit 120 performs feature quantity detection processing for the first object to be assembled WK2 based on the first captured image PIM31, thereby detecting the first target feature quantity FB1 (S402). Here, the target feature points GP1 and GP2 shown in FIG. 14(A) are detected as the first target feature quantity FB1.
Next, the captured image acquisition unit 110 acquires the first captured image PIM32 in which the assembly object WK1 appears (S403). Then, the control unit 120 performs feature quantity detection processing for the assembly object WK1 based on the first captured image PIM32, thereby detecting the first feature quantity of interest FA (S404). Here, the feature points of interest IP1 and IP2 are detected as the first feature quantity of interest FA.
In steps S401 to S404, an example was described in which a plurality of first captured images (PIM31 and PIM32), showing the assembly object WK1 and the first object to be assembled WK2 respectively, are acquired. However, as shown in FIG. 14(A), when the assembly object WK1 and the first object to be assembled WK2 appear in the same first captured image PIM31, the feature quantities of both may be detected from the first captured image PIM31.
Next, based on the feature quantity of the assembly object WK1 (the first feature quantity of interest FA) and the feature quantity of the first object to be assembled WK2 (the first target feature quantity FB1), the control unit 120 moves the assembly object WK1 so that the relative position and posture relationship between the assembly object WK1 and the first object to be assembled WK2 becomes the first target relative position and posture relationship (S405). Specifically, in the captured image, the assembly object WK1 is moved so that the feature point of interest IP1 approaches the target feature point GP1 and the feature point of interest IP2 approaches the target feature point GP2. This movement corresponds to the movement of arrow YJ1 in FIG. 13A.
Then, the control unit 120 determines whether the assembly object WK1 and the first object to be assembled WK2 are in the first target relative position and posture relationship (S406). When it is determined that they are not, processing returns to step S403 and is performed again.
On the other hand, when it is determined that the assembly object WK1 and the first object to be assembled WK2 are in the first target relative position and posture relationship, the second feature quantity of interest FB2 of the first object to be assembled WK2 is detected from the first captured image PIM32 (S407). Specifically, as shown in FIG. 14(B), described later, the control unit 120 detects the feature points of interest IP3 and IP4 as the second feature quantity of interest FB2.
Next, the captured image acquisition unit 110 acquires the second captured image PIM41, shown in FIG. 14(B), in which at least the second object to be assembled WK3 appears (S408).
Then, the control unit 120 performs feature quantity detection processing for the second object to be assembled WK3 based on the second captured image PIM41, thereby detecting the second target feature quantity FC (S409). Specifically, as shown in FIG. 14(B), the control unit 120 detects the target feature points GP3 and GP4 as the second target feature quantity FC.
Next, based on the feature quantity of the first object to be assembled WK2 (the second feature quantity of interest FB2) and the feature quantity of the second object to be assembled WK3 (the second target feature quantity FC), the control unit 120 moves the assembly object WK1 and the first object to be assembled WK2 so that the relative position and posture relationship between the first object to be assembled WK2 and the second object to be assembled WK3 becomes the second target relative position and posture relationship (S410).
Specifically, in the captured image, the assembly object WK1 and the first object to be assembled WK2 are moved so that the feature point of interest IP3 approaches the target feature point GP3 and the feature point of interest IP4 approaches the target feature point GP4. This movement corresponds to the movement of arrow YJ2 in FIG. 13A.
Then, the control unit 120 determines whether the first object to be assembled WK2 and the second object to be assembled WK3 are in the second target relative position and posture relationship (S411). When it is determined that they are not, processing returns to step S408 and is performed again.
On the other hand, when it is determined that the first object to be assembled WK2 and the second object to be assembled WK3 are in the assembled state, i.e., in the second target relative position and posture relationship, as in the captured image PIME shown in FIG. 14(C), processing ends.
In this way, visual servoing can be performed so that the feature points of interest (IP1 and IP2) of the assembly object WK1 approach the target feature points (GP1 and GP2) of the first object to be assembled WK2, and the feature points of interest (IP3 and IP4) of the first object to be assembled WK2 approach the target feature points (GP3 and GP4) of the second object to be assembled WK3.
Alternatively, instead of assembling the assembly object WK1 and the first object to be assembled WK2 in sequence as shown in the flowchart of FIG. 15, the three workpieces may be assembled simultaneously as shown in FIGS. 16(A) to 16(C).
The processing flow for this case is shown in the flowchart of FIG. 17. First, the captured image acquisition unit 110 acquires one or more captured images in which the assembly object WK1, the first object to be assembled WK2, and the second object to be assembled WK3 involved in the assembly work appear (S501). In this example, the captured image PIM51 shown in FIG. 16(A) is acquired.
Next, the control unit 120 performs feature quantity detection processing for the assembly object WK1, the first object to be assembled WK2, and the second object to be assembled WK3 based on the one or more captured images (S502 to S504).
In this example, as shown in FIG. 16(A), the target feature points GP3 and GP4 are detected as the feature quantity of the second object to be assembled WK3 (S502). Then, the target feature points GP1 and GP2 and the feature points of interest IP3 and IP4 are detected as the feature quantity of the first object to be assembled WK2 (S503). Furthermore, the feature points of interest IP1 and IP2 are detected as the feature quantity of the assembly object WK1 (S504). When the three workpieces appear in different captured images, the feature quantity detection processing may be performed separately on the respective captured images.
Next, based on the feature quantity of the assembly object WK1 and the feature quantity of the first object to be assembled WK2, the control unit 120 moves the assembly object WK1 so that the relative position and posture relationship between the assembly object WK1 and the first object to be assembled WK2 becomes the first target relative position and posture relationship, while at the same time, based on the feature quantity of the first object to be assembled WK2 and the feature quantity of the second object to be assembled WK3, moving the first object to be assembled WK2 so that the relative position and posture relationship between the first object to be assembled WK2 and the second object to be assembled WK3 becomes the second target relative position and posture relationship (S505).
That is, the assembly object WK1 and the first object to be assembled WK2 are moved simultaneously so that the feature point of interest IP1 approaches the target feature point GP1, IP2 approaches GP2, IP3 approaches GP3, and IP4 approaches GP4.
Then, the captured image acquisition unit 110 acquires a new captured image (S506), and the control unit 120 determines, based on the newly acquired captured image, whether the three workpieces, i.e., the assembly object WK1, the first object to be assembled WK2, and the second object to be assembled WK3, are in the target relative position and posture relationship (S507).
For example, when the captured image acquired in step S506 is the captured image PIM52 shown in FIG. 16(B) and it is determined that the three workpieces are not yet in the target relative position and posture relationship, processing returns to step S503 and is repeated. In that case, the processing from step S503 onward is performed based on the newly acquired captured image PIM52.
On the other hand, when the captured image acquired in step S506 is the captured image PIME shown in FIG. 16(C), it is determined that the three workpieces are in the target relative position and posture relationship, and processing ends.
In this way, the assembly work of the three workpieces can be performed simultaneously. As a result, the working time of the assembly work of the three workpieces can be shortened.
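Continuing with the same assumed helpers as the earlier sketches, the simultaneous variant of FIG. 17 can be illustrated by evaluating both feature-point residuals in each cycle and commanding the two hands at once (HD1 carrying WK1 and HD2 carrying WK2, as in FIG. 13A). The detect_features(image, obj, role) signature is a hypothetical extension used only for this sketch:

```python
import numpy as np

def simultaneous_assembly(camera, hand1, hand2, gain=0.1, tol=2.0):
    """Loop S501-S507: drive IP1/IP2 of WK1 toward GP1/GP2 of WK2 while,
    in the same cycle, driving IP3/IP4 of WK2 toward GP3/GP4 of WK3."""
    while True:
        image = camera.capture()                               # S501/S506: captured image
        gp34 = detect_features(image, obj="WK3", role="target")    # S502: GP3, GP4
        gp12 = detect_features(image, obj="WK2", role="target")    # S503: GP1, GP2
        ip34 = detect_features(image, obj="WK2", role="interest")  # S503: IP3, IP4
        ip12 = detect_features(image, obj="WK1", role="interest")  # S504: IP1, IP2
        e1 = gp12 - ip12                                       # WK1 -> WK2 residual
        e2 = gp34 - ip34                                       # WK2 -> WK3 residual
        # S507: are all three workpieces in the target relative pose relationship?
        if np.linalg.norm(np.vstack([e1, e2])) < tol:
            break
        hand1.move_endpoint(gain * e1.mean(axis=0))            # S505: move WK1 (hand HD1)
        hand2.move_endpoint(gain * e2.mean(axis=0))            # S505: move WK2 (hand HD2)
```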
Furthermore, when performing the assembly work of the three workpieces, the assembly may also proceed in the reverse of the order shown in the flowchart of FIG. 15. That is, as shown in FIGS. 18(A) to 18(C), after the first object to be assembled WK2 is assembled to the second object to be assembled WK3, the assembly object WK1 may be assembled to the first object to be assembled WK2.
In this case, as shown in FIG. 18(A), the control unit 120 performs feature quantity detection processing for the second object to be assembled WK3 based on a first captured image PIM61 in which at least the second object to be assembled WK3 involved in the assembly work appears, and moves the first object to be assembled WK2 based on the feature quantity of the second object to be assembled WK3. Since the details of the feature quantity detection processing are the same as in the example described with reference to FIG. 16(A), their description is omitted.
Next, as shown in FIG. 18(B), the control unit 120 performs feature quantity detection processing for the first object to be assembled WK2 based on a second captured image PIM71 in which at least the moved first object to be assembled WK2 appears, and, based on the feature quantity of the first object to be assembled WK2, moves the assembly object WK1 so as to reach the assembled state of FIG. 18(C).
This eliminates the need to move the assembly object WK1 and the first object to be assembled WK2 simultaneously, making control of the robot easier. In addition, the assembly work of three workpieces can be performed even by a single-arm robot rather than a multi-arm robot.
The imaging unit (camera) 200 used in the present embodiment described above includes, for example, an imaging element such as a CCD (charge-coupled device) and an optical system. The imaging unit 200 is arranged, for example, on a ceiling or on a workbench, at an angle such that the detection target of the visual servoing (the assembly object, the object to be assembled, the end effector 310 of the robot 300, or the like) falls within its angle of view. The imaging unit 200 then outputs the captured image information to the robot control system 100 or the like. In the present embodiment, the captured image information is output to the robot control system 100 as-is, but this is not limiting. For example, the imaging unit 200 may include a device (processor) used for image processing and the like.
3. Robot
Next, FIGS. 19A and 19B show configuration examples of a robot 300 to which the robot control system 100 of the present embodiment is applied. In both FIG. 19A and FIG. 19B, the robot 300 has an end effector 310.
The end effector 310 is a component attached to the end point of the arm in order to grip, lift, hoist, or suction a workpiece (work object) or to perform machining on it. The end effector 310 may be, for example, a hand (gripping unit), a hook, a suction pad, or the like. A plurality of end effectors may also be provided on a single arm. The arm is a component of the robot 300 and is a movable component including one or more joints.
For example, in the robot of FIG. 19A, the robot main body 300 (robot) and the robot control system 100 are configured separately. In this case, some or all of the functions of the robot control system 100 are realized by, for example, a PC (Personal Computer).
The robot of the present embodiment is not limited to the configuration of FIG. 19A; the robot main body 300 and the robot control system 100 may be configured integrally as shown in FIG. 19B. That is, the robot 300 may include the robot control system 100. Specifically, as shown in FIG. 19B, the robot 300 may have a robot main body (having an arm and an end effector 310) and a base unit that supports the robot main body, with the robot control system 100 housed in the base unit. In the robot 300 of FIG. 19B, wheels and the like are provided on the base unit so that the entire robot can move. FIG. 19A shows a single-arm example, but the robot 300 may also be a multi-arm robot such as the dual-arm robot shown in FIG. 19B. The robot 300 may be moved by human hands, or may be provided with motors that drive the wheels and be moved by controlling those motors with the robot control system 100. The robot control system 100 is also not limited to being provided in the base unit installed under the robot 300 as shown in FIG. 19B.
As shown in FIG. 20, the functions of the robot control system 100 may also be realized by a server 500 communicably connected to the robot 300 via a network 400 including at least one of wired and wireless connections.
Alternatively, in the present embodiment, part of the processing of the robot control system of the present invention may be performed by a robot control system on the server 500 side. In this case, the processing is realized by distributed processing together with a robot control system provided on the robot 300 side. The robot control system on the robot 300 side is realized by, for example, a terminal device 330 (control unit) provided in the robot 300.
In this case, the robot control system on the server 500 side performs, among the processes of the robot control system of the present invention, those assigned to the robot control system of the server 500. On the other hand, the robot control system provided in the robot 300 performs those processes assigned to the robot control system of the robot 300. Each process of the robot control system of the present invention may be assigned to the server 500 side or to the robot 300 side.
Thereby, for example, the server 500, which has higher processing capability than the terminal device 330, can perform processing with a large processing load. Furthermore, the server 500 can, for example, collectively control the operation of each robot 300 and can easily make a plurality of robots 300 operate in coordination.
In recent years, the manufacture of a wide variety of parts in small quantities has been increasing. When the type of parts to be manufactured is changed, the operations performed by the robot must be changed. With the configuration shown in FIG. 20, the server 500 can collectively change the operations performed by the robots 300 without redoing the teaching work for each of the plurality of robots 300.
Furthermore, with the configuration shown in FIG. 20, the effort involved in updating the software of the robot control system 100 can be greatly reduced compared with the case where one robot control system 100 is provided for each robot 300.
The robot control system, robot, and the like of the present embodiment may realize part or most of the above processing by means of a program. In this case, a processor such as a CPU executes the program, thereby realizing the robot control system, robot, and the like of the present embodiment. Specifically, a program stored in an information storage medium is read out, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) is a medium that stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (card-type memory, ROM, etc.), or the like. A processor such as a CPU performs the various processes of the present embodiment based on the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).
The present embodiment has been described in detail above, but it will be apparent to those skilled in the art that many modifications are possible without substantially departing from the novel matters and effects of the present invention. Accordingly, all such modifications are included within the scope of the present invention. For example, a term that appears at least once in the specification or drawings together with a different term having a broader or synonymous meaning can be replaced by that different term anywhere in the specification or drawings. The configurations and operations of the robot control system, robot, and program are also not limited to those described in the present embodiment, and various modified implementations are possible.
Second Embodiment
FIG. 21 is a system configuration diagram showing an example of the configuration of a robot system 1 according to an embodiment of the present invention. The robot system 1 of the present embodiment mainly includes a robot 10, a control unit 20, a first imaging unit 30, and a second imaging unit 40.
The robot 10 is an arm-type robot having an arm 11 including a plurality of joints 12 and a plurality of links 13. The robot 10 performs processing in accordance with control signals from the control unit 20.
Actuators (not shown) for operating the joints 12 are provided on them. The actuators include, for example, servo motors and encoders. The encoder values output by the encoders are used for feedback control of the robot 10 by the control unit 20.
A hand-eye camera 15 is provided near the tip of the arm 11. The hand-eye camera 15 is a unit that images an object at the tip of the arm 11 to generate image data. As the hand-eye camera 15, for example, a visible-light camera, an infrared camera, or the like can be employed.
Of the region at the tip portion of the arm 11, the region not connected to any other region of the robot 10 (excluding the hand 14, described later) is defined as the end point of the arm 11. In the present embodiment, the position of the point E shown in FIG. 21 is taken as the position of the end point.
The configuration of the robot 10 described above covers the main components for explaining the features of the present embodiment and is not limited to this configuration. Configurations found in general gripping robots are not excluded. For example, although a six-axis arm is shown in FIG. 21, the number of axes (number of joints) may be increased or decreased, and the number of links may also be increased or decreased. The shape, size, arrangement, structure, and the like of the various components such as the arm, links, and joints may also be changed as appropriate.
The control unit 20 performs processing to control the robot 10 as a whole. The control unit 20 may be installed at a location away from the main body of the robot 10, or may be built into the robot 10. When the control unit 20 is installed at a location away from the main body of the robot 10, it is connected to the robot 10 by wire or wirelessly.
The first imaging unit 30 and the second imaging unit 40 are units that each image the vicinity of the work area of the arm 11 from a different angle to generate image data. The first imaging unit 30 and the second imaging unit 40 include cameras, for example, and are installed on a workbench, a ceiling, a wall, or the like. As the first imaging unit 30 and the second imaging unit 40, visible-light cameras, infrared cameras, or the like can be employed. The first imaging unit 30 and the second imaging unit 40 are connected to the control unit 20 and input the images they capture to the control unit 20. The first imaging unit 30 and the second imaging unit 40 may alternatively be connected to the robot 10 rather than the control unit 20, or may be built into the robot 10. In that case, the images captured by the first imaging unit 30 and the second imaging unit 40 are input to the control unit 20 via the robot 10.
Next, an example of the functional configuration of the robot system 1 will be described. FIG. 22 shows a functional block diagram of the robot system 1.
The robot 10 includes a motion control unit 101 that controls the arm 11 based on the encoder values of the actuators, the sensor values of sensors, and the like.
The motion control unit 101 drives the actuators so as to move the arm 11 to the position output from the control unit 20, based on the information output from the control unit 20, the encoder values of the actuators, the sensor values of the sensors, and the like. The current position of the end point can be obtained from the encoder values and the like of the actuators provided on the joints 12 and the like.
The control unit 20 mainly includes a position control unit 2000, a visual servoing unit 210, and a drive control unit 220. The position control unit 2000 mainly includes a path acquisition unit 201 and a first control unit 202. The visual servoing unit 210 mainly includes an image acquisition unit 211, an image processing unit 212, and a second control unit 213.
The position control unit 2000 executes position control that moves the arm 11 along a predetermined path set in advance.
The path acquisition unit 201 acquires information related to the path. The path is formed based on teaching positions; for example, it is formed by connecting one or more teaching positions, set in advance through teaching, in a predetermined order set in advance. Information related to the path, for example information indicating the coordinates and the order within the path, is held in a memory 22 (described later; see FIG. 24 and the like). The path-related information held in the memory 22 may also be input via an input device 25 or the like. The path-related information also includes the final position of the end point, i.e., information related to the target position.
The first control unit 202 sets the next teaching position, i.e., the trajectory of the end point, based on the path-related information acquired by the path acquisition unit 201.
Furthermore, the first control unit 202 determines the next movement position of the arm 11, i.e., the target angle of each actuator provided on the joints 12, based on the trajectory of the end point. The first control unit 202 also generates a command value that moves the arm 11 by the target angle and outputs it to the drive control unit 220. Since the processing performed by the first control unit 202 is standard, its detailed description is omitted.
The visual servoing unit 210 moves the arm 11 by executing so-called visual servoing, a control method that, based on the images captured by the first imaging unit 30 and the second imaging unit 40, measures changes in position relative to a target object as visual information and uses them as feedback information to track the target object.
As visual servoing, methods such as a position-based method or a feature-based method can be used. The position-based method controls the robot based on three-dimensional position information of the object, computed by a method such as stereo vision, in which two images producing parallax are recognized as a stereoscopic image. The feature-based method controls the robot so that the difference between the images captured by two imaging units from orthogonal directions and the target images held in advance for the respective imaging units becomes zero (so that the error matrix over the pixels of each image becomes zero). In the present embodiment, the feature-based method is adopted, for example. Although the feature-based method can be performed using one imaging unit, it is preferable to use two imaging units to improve accuracy.
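As a rough illustration of the feature-based method, the sketch below treats each camera's error as the pixel offset between the end point detected in the current image and in the stored target image, and stacks the two errors into one residual that the servo loop drives to zero. locate_endpoint() is an assumed helper, and reducing the camera-to-robot mapping to a stacked pixel residual is a simplification for illustration; a real system would calibrate that mapping.

```python
import numpy as np

def feature_based_error(current1, goal1, current2, goal2):
    """Pixel-space residual for the feature-based method: the difference
    between each camera's current image and its target image, stacked into
    one vector. The servo loop drives this vector to zero.
    locate_endpoint(image) -> (u, v) pixel position is an assumed helper."""
    e1 = np.subtract(locate_endpoint(goal1), locate_endpoint(current1))
    e2 = np.subtract(locate_endpoint(goal2), locate_endpoint(current2))
    return np.concatenate([e1, e2])  # zero <=> both views match their target images
```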
The image acquisition unit 211 acquires the image captured by the first imaging unit 30 (hereinafter referred to as the first image) and the image captured by the second imaging unit 40 (hereinafter referred to as the second image). The first image and the second image acquired by the image acquisition unit 211 are output to the image processing unit 212.
The image processing unit 212 recognizes the tip of the end point in the first image and the second image acquired from the image acquisition unit 211 and extracts images including the end point. Since the image recognition processing performed by the image processing unit 212 can use a variety of common techniques, its description is omitted.
The second control unit 213 sets the trajectory of the end point, i.e., the movement amount and movement direction of the end point, based on the image extracted by the image processing unit 212 (hereinafter referred to as the current image) and the image when the end point is at the target position (hereinafter referred to as the target image). The target image may be acquired in advance and stored in the memory 22 or the like.
Furthermore, the second control unit 213 determines the target angle of each actuator provided on the joints 12 based on the movement amount and movement direction of the end point. The second control unit 213 then generates a command value that moves the arm 11 by the target angle and outputs it to the drive control unit 220. Since the processing performed by the second control unit 213 is standard, its detailed description is omitted.
In the robot 10 having joints, once the angle of each joint is determined, the position of the end point is uniquely determined by forward kinematics processing. That is, in an N-joint robot, one target position can be expressed by N joint angles, so if the combination of those N joint angles is taken as one target joint angle, the trajectory of the end point can be considered a set of joint angles. Accordingly, the command values output from the first control unit 202 and the second control unit 213 may be values related to position (target position) or values related to joint angles (target angle).
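The forward-kinematics fact used here, namely that a set of joint angles determines the end-point position uniquely, can be seen in a minimal planar two-link example. This is a sketch for intuition only; the link lengths are illustrative values, not parameters of the robot 10:

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """End-point position of a planar 2-link arm: each pair of joint angles
    maps to exactly one (x, y), so a trajectory of end-point positions can
    equivalently be stored as a sequence of joint angles."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y
```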
The drive control unit 220 outputs an instruction to the motion control unit 101 based on the information acquired from the first control unit 202 and the second control unit 213 so as to move the position of the end point, i.e., the arm 11. Details of the processing performed by the drive control unit 220 will be described later.
FIG. 23 is a data flow diagram of the robot system 1.
In the position control unit 2000, a feedback loop runs for bringing each joint of the robot close to its target angle through position control. The information of the path set in advance includes information related to the target position. Upon acquiring the information related to the target position, the first control unit 202 generates a trajectory and a command value (here, a target angle) based on the information related to the target position and the current position acquired by the path acquisition unit 201.
In the visual servoing unit 210, a visual feedback loop runs for approaching the target position using information from the first imaging unit 30 and the second imaging unit 40. The second control unit 213 acquires the target image as information related to the target position. Since the current image and the target position on the current image are expressed in the image coordinate system, the second control unit 213 transforms them into the robot coordinate system. The second control unit 213 then generates a trajectory and a command value (here, a target angle) based on the transformed current image and the target image.
The drive control unit 220 outputs to the robot 10 a command value formed from the command value output from the first control unit 202 and the command value output from the second control unit 213. Specifically, the drive control unit 220 multiplies the command value output from the first control unit 202 by a coefficient α, multiplies the command value output from the second control unit 213 by a coefficient 1-α, and outputs to the robot 10 the value obtained by combining the two. Here, α is a real number greater than 0 and less than 1.
The manner of forming a command value from the command value output from the first control unit 202 and the command value output from the second control unit 213 is not limited to this.
Here, in the present embodiment, command values are output from the first control unit 202 at constant intervals (for example, every 1 millisecond (msec)), and command values are output from the second control unit 213 at intervals longer than the output interval of the first control unit 202 (for example, every 30 msec). Therefore, when no command value is output from the second control unit 213, the drive control unit 220 multiplies the command value output from the first control unit 202 by the coefficient α, multiplies the command value last output from the second control unit 213 by the coefficient 1-α, and outputs to the robot 10 the value obtained by combining them. For this purpose, the command value last output from the second control unit 213 is temporarily stored in a storage device such as the memory 22 (see FIG. 24), and the drive control unit 220 reads it from the storage device and uses it.
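The blending rule of the drive control unit 220, α times the position-control command plus (1-α) times the most recent visual-servoing command, with the latter latched between its slower updates, can be sketched as follows. The class and attribute names are illustrative, not taken from the embodiment, and the default α is an assumption:

```python
import numpy as np

class DriveControl:
    """Combine command values: q_out = alpha * q_pos + (1 - alpha) * q_vs.
    Position-control commands arrive every cycle (e.g. 1 ms); visual-servoing
    commands arrive roughly every 30th cycle and are latched in between."""

    def __init__(self, alpha=0.7):
        assert 0.0 < alpha < 1.0          # alpha is a real number in (0, 1)
        self.alpha = alpha
        self.last_vs_command = None       # stands in for the value kept in memory 22

    def output(self, pos_command, vs_command=None):
        if vs_command is not None:        # a fresh visual-servoing command this cycle
            self.last_vs_command = np.asarray(vs_command)
        if self.last_vs_command is None:  # no visual-servoing command seen yet
            return np.asarray(pos_command)
        return (self.alpha * np.asarray(pos_command)
                + (1.0 - self.alpha) * self.last_vs_command)
```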
The motion control unit 101 acquires the command value (target angle) from the control unit 20. The motion control unit 101 acquires the current angle of the end point based on the encoder values and the like of the actuators provided on the joints 12 and the like, and calculates the difference (deviation angle) between the target angle and the current angle. The motion control unit 101 then calculates the movement speed of the arm 11 based on, for example, the deviation angle (the larger the deviation angle, the faster the movement speed), and moves the arm 11 through the calculated deviation angle at the calculated movement speed.
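At the joint level, this behavior amounts to a proportional rule: the commanded speed grows with the remaining deviation angle. A one-line sketch, where the gain value is an assumption:

```python
def joint_speed(target_angle, current_angle, k=2.0):
    """Motion control unit 101: the larger the deviation angle between the
    target angle and the current encoder angle, the faster the arm moves."""
    deviation = target_angle - current_angle
    return k * deviation  # speed proportional to the remaining deviation angle
```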
FIG. 24 is a block diagram showing an example of the schematic configuration of the control unit 20. As shown in the figure, the control unit 20, constituted by a computer or the like, includes: a CPU (Central Processing Unit) 21 as an arithmetic device; a memory 22 constituted by a RAM (Random Access Memory) as a volatile storage device and a ROM (Read Only Memory) as a non-volatile storage device; an external storage device 23; a communication device 24 that communicates with external devices such as the robot 10; an input device 25 such as a mouse or keyboard; an output device 26 such as a display; and an interface (I/F) 27 that connects the control unit 20 to other units.
Each of the functional units described above is realized by, for example, the CPU 21 reading out and executing a predetermined program stored in the memory 22. The predetermined program may, for example, be installed in the memory 22 in advance, or may be downloaded from a network via the communication device 24 and installed or updated.
The configuration of the robot system 1 described above covers the main components for explaining the features of the present embodiment and is not limited to this configuration. Configurations found in general robot systems are not excluded.
接下来,对本实施方式的由上述结构构成的机器人系统1的特征的处理进行说明。在本实施方式中,以使用手眼摄像机15按顺序对如图21所示的对象物O1、O2、O3进行目视检查的作业为例进行说明。Next, the characteristic processing of the robot system 1 having the above configuration according to the present embodiment will be described. In this embodiment, an operation of sequentially visually inspecting the objects O1, O2, and O3 shown in FIG. 21 using the hand-eye camera 15 will be described as an example.
若经由未图示的按钮等而输入控制开始指示,则控制部20通过位置控制以及视觉伺服而控制臂11。驱动控制部220在从第二控制部213输入指令值的情况(在本实施方式中每30次进行1次)下,使用将从第一控制部202输出的值(以下,称为基于位置控制的指令值)、与从第二控制部213输出的值(以下,称为基于视觉伺服的指令值)以任意的分量合成而成的指令值,并向动作控制部101输出指示。驱动控制部220在不从第二控制部213输入指令值的情况(在本实施方式中每30次进行29次)下,使用从第一控制部202输出的基于位置控制的指令值、与最后从第二控制部213输出并暂时存储于存储器22等的指令值,并向动作控制部101输出指示。When a control start instruction is input through a button (not shown) or the like, the control unit 20 controls the arm 11 through position control and visual servoing. The drive control unit 220 uses the value to be output from the first control unit 202 (hereinafter referred to as position-based control) when the command value is input from the second control unit 213 (in this embodiment, once every 30 times). command value) and a value output from the second control unit 213 (hereinafter referred to as a command value based on visual servoing) with arbitrary components, and outputs an instruction to the motion control unit 101 . When the drive control unit 220 does not input a command value from the second control unit 213 (29 times out of 30 in the present embodiment), it uses the command value based on the position control output from the first control unit 202 and the last The command value output from the second control unit 213 and temporarily stored in the memory 22 or the like is output as an instruction to the operation control unit 101 .
FIG. 25A is a diagram illustrating the trajectory of the end point when the arm 11 is controlled by position control and visual servoing. In FIG. 25A, the object O1 is placed at position 1, the object O2 at position 2, and the object O3 at position 3. In FIG. 25A, the objects O1, O2, and O3 lie on the same plane (the XY plane), and the hand-eye camera 15 is kept at a constant position in the Z direction.
In FIG. 25A, the trajectory shown by the solid line is the trajectory of the end point when only the position-control command value is used. This trajectory passes above positions 1, 2, and 3, so if the objects O1, O2, and O3 are always placed in the same positions and postures, they can be visually inspected by position control alone.
In contrast, in FIG. 25A, consider the case where the object O2 moves from position 2 on the solid line to the post-movement position 2. Since the end point moves above the original position of the object O2 shown by the solid line, when only the position-control command value is used, the inspection accuracy for the object O2 may decrease, or the inspection may become impossible.
Visual servoing is well suited to coping with such movement of the object's position. If visual servoing is used, the end point can be moved directly above the object even when the object's position has shifted. For example, if the object O2 is at the post-movement position 2 and the image shown in FIG. 25B is given as the target image, then, assuming only the visual-servoing command value is used, the end point follows the trajectory shown by the dotted line in FIG. 25A.
Visual servoing is a very useful control method that can cope with displacement of the object, but because of the frame rate of the first imaging unit 30 or the second imaging unit 40, the image processing time of the image processing unit 212, and so on, it has the problem of taking longer to reach the target position than position control does.
Therefore, by using the command values of position control and visual servoing simultaneously (performing position control and visual servoing at the same time, that is, parallel control), inspection accuracy is ensured against positional deviations of the objects O1, O2, and O3 while the end point moves faster than with visual servoing alone.
Note that "simultaneously" is not limited to exactly the same time or instant. For example, using the command values of position control and visual servoing simultaneously is a concept that includes both the case where the position-control command value and the visual-servoing command value are output at the same time and the case where they are output with a slight time offset. The slight time offset may be of any length as long as the same processing as in the truly simultaneous case can be performed.
In particular, in the case of visual inspection, the field of view of the hand-eye camera 15 only needs to include the object O2 (the object O2 need not be at the center of the field of view), so there is no problem even if the trajectory does not pass directly above the object O2.
Therefore, in this embodiment, the drive control unit 220 combines the position-control command value and the visual-servoing command value so as to form a trajectory in which the field of view of the hand-eye camera 15 includes the object O2. The trajectory in this case is indicated by the dash-dotted line in FIG. 25A. This trajectory does not pass directly above the object O2, but follows positions where the inspection accuracy can be ensured to the maximum extent.
In addition to displacement of the object's installation position, expansion of the members of the arm 11 due to temperature changes and the like is also a major factor causing deviation between positions on the path and the actual position of the object; this case, too, can be handled by using the command values of position control and visual servoing simultaneously.
The position of the trajectory shown by the dash-dotted line in FIG. 25A can be changed by the component α. FIG. 26 is a diagram explaining the component α.
FIG. 26A is a graph showing the relationship between the distance to the target (here, the objects O1, O2, and O3) and the component α. Line A is the case where the component α is constant regardless of the distance to the target position. Line B is the case where the component α is decreased stepwise according to the distance to the target position. Lines C and D are cases where the component α is decreased continuously according to the distance to the target position; line C is the case where the change in the component α becomes smaller in proportion to the distance, and line D is the case where the component α is proportional to the distance. Here, the component α satisfies 0 < α < 1.
In the cases of lines B, C, and D in FIG. 26A, the component α is set so that, as the distance to the target position becomes shorter, the weight of the position-control command value decreases and the weight of the visual-servoing command value increases. Thus, when the target position has moved, a trajectory can be generated so that the end point comes closer to the target position.
Moreover, since the component α is set so that the command values of position control and visual servoing are superimposed, α can be changed continuously according to the distance to the target position. By changing the component continuously with distance, the control can be switched smoothly from arm control based mostly on position control to arm control based mostly on visual servoing.
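The three shapes of the schedule in FIG. 26A could look like the following minimal sketch; the numeric thresholds and limits are invented for illustration, since the figure gives only the qualitative shapes.

```python
def alpha_constant(d, a0=0.5):
    # Line A: the same blend regardless of the distance d to the target
    return a0

def alpha_stepwise(d, steps=((0.2, 0.3), (0.5, 0.6), (1.0, 0.8))):
    # Line B: alpha drops in stages as d shrinks; thresholds are invented
    for threshold, a in steps:
        if d <= threshold:
            return a
    return 0.9

def alpha_linear(d, d_max=1.0, a_min=0.1, a_max=0.9):
    # Line D: alpha proportional to d, clamped so that 0 < alpha < 1
    d = min(max(d, 0.0), d_max)
    return a_min + (a_max - a_min) * (d / d_max)
```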
Note that, as shown in FIG. 26A, the component α is not limited to being defined by the distance to the target (here, the objects O1, O2, and O3). As shown in FIG. 26B, α may also be defined by the distance traveled from the start position. That is, the drive control unit 220 can determine the component α from the difference between the current position and the target position.
The distance to the target and the distance from the start position may be obtained from the path acquired by the path acquisition unit 201, or from the current image and the target image. For example, when they are obtained from the path, they can be calculated from the coordinates and order of the start position, the target, and the object positions included in the path-related information, together with the coordinates and order of the current position.
Since the arm 11 is controlled along a trajectory desired by the user, the relationship between the component and the difference between the current position and the target position shown in FIG. 26 can be input via an input unit such as the input device 25, for example. Alternatively, this relationship may be stored in advance in storage means such as the memory 22 and used from there. The relationship stored in the storage means may be content input via the input unit or content initially set in advance.
According to this embodiment, since the arm (hand-eye camera) is controlled using a command value obtained by combining the command values of position control and visual servoing at a constant ratio, high-speed inspection can be performed accurately even when the object's position has shifted. In particular, according to this embodiment, the speed can be made comparable to position control (faster than visual servoing), while the inspection is more robust against positional deviation than with position control.
In this embodiment, the position-control command value and the visual-servoing command value are normally combined, but, for example, when the positional deviation of the object O2 is larger than a predetermined threshold, the arm 11 may be moved using only the visual-servoing command value. The second control unit 213 need only determine from the current image whether the positional deviation of the object O2 is larger than the predetermined threshold.
Also, in this embodiment, the drive control unit 220 determines the component α from the difference between the current position and the target position, but the method of determining α is not limited to this. For example, the drive control unit 220 may change α with the passage of time. Alternatively, the drive control unit 220 may change α with the passage of time until a certain time has elapsed, and thereafter change α according to the difference between the current position and the target position.
Third Embodiment
The second embodiment of the present invention normally controls the arm using a command value obtained by combining the command values of position control and visual servoing at a constant ratio, but the scope of application of the present invention is not limited to this.
The third embodiment of the present invention combines, according to the position of the object, the case of using only the position-control command value with the case of using a command value obtained by combining the command values of position control and visual servoing at a constant ratio. The robot system 2 according to the third embodiment of the present invention is described below. Since the configuration of the robot system 2 is the same as that of the robot system 1 of the second embodiment, the description of the configuration is omitted and only the processing of the robot system 2 is described. Parts identical to those of the second embodiment are given the same reference numerals, and their description is omitted.
FIG. 27 is a flowchart showing the flow of the control processing of the arm 11 according to the present invention. This processing is started, for example, by inputting a control start instruction via a button (not shown) or the like. In this embodiment, visual inspection of the objects O1 and O2 is performed.
When the processing starts, the position control unit 200 performs position control (step S1000). That is, the first control unit 202 generates a command value based on the path-related information acquired by the path acquisition unit 201 and outputs it to the drive control unit 220. The drive control unit 220 outputs the command value output from the first control unit 202 to the robot 10. The motion control unit 101 then moves the arm 11 (that is, the end point) according to the command value.
Next, the first control unit 202 judges the result of moving the end point by position control, that is, whether the end point has passed switching point 1 (step S1002). Information indicating the position of switching point 1 is included in the preset path-related information.
FIG. 28 is a diagram explaining the positions of the objects O1 and O2, the positions of the switching points, and the trajectory of the end point. In this embodiment, switching point 1 is set between the start point and the object O1.
If the end point has not passed switching point 1 (No in step S1002), the control unit 20 repeats the processing of step S1000.
If the end point has passed switching point 1 (Yes in step S1002), the drive control unit 220 controls the arm 11 using position control and visual servoing (step S1004). That is, the first control unit 202 generates a command value based on the path-related information acquired by the path acquisition unit 201 and outputs it to the drive control unit 220. In addition, the second control unit 213 generates a command value based on the current image and the target image processed by the image processing unit 212 and outputs it to the drive control unit 220. The drive control unit 220 switches the component α stepwise with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 then moves the arm 11 (that is, the end point) according to the command value.
The processing of step S1004 is described concretely below. Before the processing of step S1004, that is, during the processing of step S1000, the command value from the visual servoing unit 210 is not used. Therefore, the component α of the command value from the position control unit 200 is 1 (the component 1-α of the command value from the visual servoing unit 210 is 0).
After the processing of step S1004 starts, when a certain time (for example, 10 msec) has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 1 to 0.9. The component 1-α of the command value from the visual servoing unit 210 thus becomes 0.1. The drive control unit 220 then combines the command values with the component α set to 0.9 and the component 1-α set to 0.1, and outputs the result to the robot 10.
Thereafter, when a further fixed time has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.9 to 0.8, and switches the component 1-α of the command value from the visual servoing unit 210 from 0.1 to 0.2. In this way, the component α is switched stepwise as the fixed time elapses, and the command value output from the first control unit 202 and the command value output from the second control unit 213 are combined using the switched component.
The drive control unit 220 repeats this switching of the component α and combination of the command values until the component α of the command value from the position control unit 200 becomes 0.5 and the component 1-α of the command value from the visual servoing unit 210 becomes 0.5. After α has reached 0.5 and 1-α has reached 0.5, the drive control unit 220 repeats the combination of the command values while maintaining the component α without further switching.
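A minimal sketch of this stepwise ramp follows; the step size and interval are taken from the text, while the names and loop structure are assumptions for illustration.

```python
import time

ALPHA_STEP = 0.1      # alpha moves by 0.1 per step, as in the text
STEP_PERIOD = 0.010   # a fixed interval of 10 msec between steps

def ramp_alpha(alpha, target, step=ALPHA_STEP):
    """Move alpha one step toward target, then hold once it is reached."""
    if alpha > target:
        return max(alpha - step, target)
    return min(alpha + step, target)

# Ramping down from pure position control (alpha = 1) to a 50/50 blend:
alpha = 1.0
while alpha > 0.5:
    time.sleep(STEP_PERIOD)
    alpha = ramp_alpha(alpha, 0.5)
    # each cycle issues: alpha * pos_cmd + (1 - alpha) * vs_cmd
```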
This enables visual inspection even when the position of the object O1 changes. When the end point is farther from the object than necessary, it is moved by position control alone, enabling high-speed processing. When the end point approaches the object, it is moved by position control and visual servoing, so that changes in the object's position can also be handled. Furthermore, by switching the component α gradually, sudden movements and vibrations of the arm 11 can be prevented.
In the processing of step S1004, if the end point passes switching point 2 (step S1006, described in detail later) while the switching of the component α and the combination of the command values are in progress, the processing proceeds to step S1006 without waiting for α to reach 0.5, and the switching of α and the combination of the command values are not continued.
Next, the first control unit 202 judges the result of moving the end point by position control and visual servoing, that is, whether the end point has passed switching point 2 (step S1006). Information indicating the position of switching point 2 is included in the path-related information. As shown in FIG. 28, switching point 2 is set at the object O1.
If the end point has not passed switching point 2 (No in step S1006), the control unit 20 repeats the processing of step S1004.
If the end point has passed switching point 2 (Yes in step S1006), the drive control unit 220 switches the component α so that it increases stepwise with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 moves the arm 11 (that is, the end point) according to the command value (step S1008).
The processing of step S1008 is described concretely below. Before the processing of step S1008, that is, during the processing of step S1006, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.5 and the component 1-α of the command value from the visual servoing unit 210 set to 0.5.
After the processing of step S1008 starts, when a certain time (for example, 10 msec) has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.5 to 0.6. The component 1-α of the command value from the visual servoing unit 210 thus becomes 0.4. The drive control unit 220 then combines the command values with α set to 0.6 and 1-α set to 0.4, and outputs the result to the robot 10.
Thereafter, when a further fixed time has elapsed, the drive control unit 220 switches α from 0.6 to 0.7 and 1-α from 0.4 to 0.3. In this way, the component α is switched stepwise as the fixed time elapses, and the command value output from the first control unit 202 and the command value output from the second control unit 213 are combined using the switched component.
The drive control unit 220 repeats the switching of the component α until α becomes 1. When α becomes 1, the component 1-α of the command value from the visual servoing unit 210 is 0. Therefore, the drive control unit 220 outputs the command value output from the first control unit 202 to the robot 10. The motion control unit 101 moves the arm 11 (that is, the end point) according to the command value (step S1010). As a result, the end point is moved by position control. The processing of step S1010 is the same as that of step S1000.
In this way, in the phase of passing the object O1, the end point is moved by position control, enabling high-speed processing. Moreover, by switching the component α gradually, sudden movements and vibrations of the arm 11 can be prevented.
Next, the first control unit 202 judges the result of moving the end point by position control, that is, whether the end point has passed switching point 3 (step S1012). Information indicating the position of switching point 3 is included in the preset path-related information. As shown in FIG. 28, switching point 3 is set between the object O1 (switching point 2) and the object O2.
If the end point has not passed switching point 3 (No in step S1012), the control unit 20 repeats the processing of step S1010.
If the end point has passed switching point 3 (Yes in step S1012), the drive control unit 220 switches the component α stepwise with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 then moves the arm 11 (that is, the end point) according to the command value (step S1014). The processing of step S1014 is the same as that of step S1004.
Next, the first control unit 202 judges the result of moving the end point by position control and visual servoing, that is, whether the end point has passed switching point 4 (step S1016). Information indicating the position of switching point 4 is included in the path-related information. As shown in FIG. 28, switching point 4 is set at the object O2.
If the end point has not passed switching point 4 (No in step S1016), the control unit 20 repeats the processing of step S1014.
If the end point has passed switching point 4 (Yes in step S1016), the drive control unit 220 switches the component α so that it increases stepwise with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 moves the arm 11 (that is, the end point) according to the command value (step S1018). The processing of step S1018 is the same as that of step S1008.
The drive control unit 220 repeats the switching of the component α until α becomes 1. When α becomes 1, the drive control unit 220 outputs the command value output from the first control unit 202 to the robot 10. The motion control unit 101 moves the arm 11 (that is, the end point) according to the command value (step S1020). The processing of step S1020 is the same as that of step S1010.
Next, the first control unit 202 judges the result of moving the end point by position control, that is, whether the end point has reached the target point (step S1022). Information indicating the position of the target point is included in the preset path-related information.
If the end point has not reached the target point (No in step S1022), the control unit 20 repeats the processing of step S1020.
If the end point has reached the target point (Yes in step S1022), the drive control unit 220 ends the processing.
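Taken together, the FIG. 27 flow could be sketched as the following sequence; `robot.step(alpha)` and `passed(name)` are hypothetical interfaces standing in for the drive control unit 220 and the switching-point checks, and `ramp_alpha` is the helper from the sketch above.

```python
# Switching points and step labels follow FIG. 27 / FIG. 28.
SEGMENTS = [
    ("hold", 1.0, "switch1"),   # S1000/S1002: position control only
    ("ramp", 0.5, "switch2"),   # S1004/S1006: blend in visual servoing
    ("ramp", 1.0, None),        # S1008: ramp back toward position control
    ("hold", 1.0, "switch3"),   # S1010/S1012
    ("ramp", 0.5, "switch4"),   # S1014/S1016
    ("ramp", 1.0, None),        # S1018
    ("hold", 1.0, "goal"),      # S1020/S1022: finish at the target point
]

def run_inspection(robot, passed):
    alpha = 1.0
    for mode, target, checkpoint in SEGMENTS:
        while True:
            if mode == "ramp":
                alpha = ramp_alpha(alpha, target)
            robot.step(alpha)       # one blended command per control cycle
            if checkpoint is not None and passed(checkpoint):
                break               # switching point / target point passed
            if checkpoint is None and alpha == target:
                break               # ramp back to pure position control done
```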
According to this embodiment, when the end point approaches an object, it is moved by position control and visual servoing, so that changes in the object's position can be handled. Moreover, when the end point (current position) is farther from the object than necessary, and when predetermined conditions are satisfied, such as the end point (current position) having passed the object, the end point is moved by position control alone, enabling high-speed processing.
Also, according to this embodiment, when switching between control by position control and visual servoing and control by position control alone, gradually switching the component α prevents sudden movements and vibrations of the arm.
In this embodiment, when gradually switching the component α, α is switched stepwise by 0.1 each time a fixed time elapses, but the method of gradually switching α is not limited to this. For example, as shown in FIG. 26, the component α may be changed according to the distance to the object (corresponding to the target position in FIG. 26A) or the distance from the object (corresponding to the start position in FIG. 26B). Also, as shown in FIG. 26, α may be changed continuously (see lines C and D in FIGS. 26A and 26B).
Also, in this embodiment, when the command values of position control and visual servoing are used (steps S1004, S1008, S1014, and S1018), the component α takes the values 0.5, 0.6, 0.7, 0.8, and 0.9, but α may take any real value greater than 0 and less than 1.
Fourth Embodiment
The second and third embodiments of the present invention perform visual inspection using the hand-eye camera, but the scope of application of the present invention is not limited to this.
The fourth embodiment of the present invention applies the present invention to assembly work such as inserting an object into a hole. The fourth embodiment of the present invention is described below. Parts identical to those of the second and third embodiments are given the same reference numerals, and their description is omitted.
FIG. 29 is a system configuration diagram showing an example of the configuration of a robot system 3 according to an embodiment of the present invention. The robot system 3 of this embodiment mainly includes a robot 10A, a control unit 20A, a first imaging unit 30, and a second imaging unit 40.
The robot 10A is an arm-type robot having an arm 11A that includes a plurality of joints 12 and a plurality of links 13. A hand 14 (a so-called end effector) for gripping a workpiece W or a tool is provided at the tip of the arm 11A. The position of the end point of the arm 11A is the position of the hand 14. Note that the end effector is not limited to the hand 14.
A force sensor 102 (not shown in FIG. 29; see FIG. 30) is provided on the arm portion of the arm 11A. The force sensor 102 detects the force and moment received as a reaction to the force output by the robot 10A. As the force sensor, for example, a six-axis force sensor capable of simultaneously detecting six components, namely the force components along three translational axes and the moment components about three rotational axes, can be used. The physical quantities used by force sensors include current, voltage, electric charge, inductance, strain, resistance, electromagnetic induction, magnetism, air pressure, and light. The force sensor 102 detects the six components by converting the desired physical quantity into an electric signal. The force sensor 102 is not limited to six axes and may, for example, have three axes.
Next, an example of the functional configuration of the robot system 3 is described. FIG. 30 shows a functional block diagram of the robot system 3.
The robot 10A includes the motion control unit 101, which controls the arm 11A based on the encoder values of the actuators, the sensor values of the sensors, and the like, and the force sensor 102.
The control unit 20A mainly includes the position control unit 200, the visual servoing unit 210, the image processing unit 212, the drive control unit 220, and a force control unit 230.
The force control unit 230 performs force control (force-sense control) based on the sensor information (force information and moment information) from the force sensor 102.
In this embodiment, impedance control is performed as the force control. Impedance control is a position-and-force control method for setting the mechanical impedance (inertia, damping coefficient, stiffness) generated when a force is applied to the tip of the robot (the hand 14) from the outside to values suitable for the target task. Specifically, in a model in which mass, viscosity, and elasticity elements are connected to the end effector of the robot, it is control that brings the end effector into contact with an object with the target mass, viscosity coefficient, and elastic coefficient.
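For reference, the target impedance described here is commonly written as the following second-order relation; this is the standard textbook form and an assumption on our part, since the text does not give the equation explicitly:

$$M\,\Delta\ddot{x}(t) + D\,\Delta\dot{x}(t) + K\,\Delta x(t) = F_{\mathrm{ext}}(t)$$

where $\Delta x$ is the deviation of the end point from its reference trajectory, $F_{\mathrm{ext}}$ is the external force measured by the force sensor 102, and $M$, $D$, and $K$ are the target inertia, damping, and stiffness.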
The force control unit 230 determines the movement direction and movement amount of the end point by impedance control. The force control unit 230 also determines the target angle of each actuator provided at the joints 12 based on the movement direction and movement amount of the end point. Furthermore, the force control unit 230 generates a command value that moves the arm 11A by the target angles and outputs it to the drive control unit 220. Since the processing performed by the force control unit 230 is conventional, a detailed description is omitted.
The force control is not limited to impedance control; other control methods that can finely handle disturbance forces, such as compliance control, can be adopted. In addition, to perform force control, the force applied to the end effector such as the hand 14 must be detected, but the method of detecting this force is not limited to using a force sensor. For example, the external force received by the end effector can also be estimated from the torque values of the axes of the arm 11A. Therefore, for force control it suffices that the arm 11A has a mechanism for directly or indirectly acquiring the force applied to the end effector.
Next, the characteristic processing of the robot system 3 configured as described above is explained. FIG. 31 is a flowchart showing the flow of the control processing of the arm 11A according to the present invention. This processing is started, for example, by inputting a control start instruction via a button (not shown) or the like. In this embodiment, as shown in FIG. 32, an assembly operation of inserting a workpiece W into a hole H is described as an example.
When a control start instruction is input via a button (not shown) or the like, the first control unit 202 controls the arm 11A by position control and moves the end point (step S130). The processing of step S130 is the same as that of step S1000.
In this embodiment, the component of the command value by position control is α, the component of the command value by visual servoing is β, and the component of the command value by force control is γ. The components α, β, and γ are set so that their sum is 1. In step S130, α is 1, and β and γ are 0.
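A minimal sketch of this three-way blend, with hypothetical names and the sum-to-one constraint taken from the text:

```python
import numpy as np

def blended_command3(pos_cmd, vs_cmd, force_cmd, alpha, beta, gamma):
    """Combine the three command values; the components must sum to 1."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return (alpha * np.asarray(pos_cmd)
            + beta * np.asarray(vs_cmd)
            + gamma * np.asarray(force_cmd))

# Step S130 uses pure position control:
# blended_command3(pos, vs, force, alpha=1.0, beta=0.0, gamma=0.0)
```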
Next, the first control unit 202 judges the result of moving the end point by position control, that is, whether the end point has passed switching point 1 (step S132). The processing of step S132 is the same as that of step S1002. Information indicating the position of switching point 1 is included in the preset path-related information.
FIG. 32 is a diagram explaining the trajectory of the end point and the positions of the switching points. In this embodiment, switching point 1 is set at a predetermined position in the workspace.
If the end point has not passed switching point 1 (No in step S132), the first control unit 202 repeats the processing of step S130.
If the end point has passed switching point 1 (Yes in step S132), the drive control unit 220 switches the components α and β stepwise with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched components α and β, and outputs the result to the robot 10A. The motion control unit 101 then moves the arm 11A (that is, the end point) according to the command value (step S134). That is, in step S134, the end point is moved by position control and visual servoing.
The processing of step S134 is described concretely below. Before the processing of step S134, that is, during the processing of step S132, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 1, the component β of the command value from the visual servoing unit 210 set to 0, and the component γ of the command value from the force control unit 230 set to 0.
After the processing of step S134 starts, when a certain time (for example, 10 msec) has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 1 to 0.95 and switches the component β of the command value from the visual servoing unit 210 to 0.05. The drive control unit 220 then combines the command values with α set to 0.95 and β set to 0.05, and outputs the result to the robot 10A.
Thereafter, when a further fixed time has elapsed, the drive control unit 220 switches α from 0.95 to 0.9 and β from 0.05 to 0.1.
In this way, the components α and β are switched stepwise as the fixed time elapses, and the command value output from the first control unit 202 and the command value output from the second control unit 213 are combined using the switched components. The drive control unit 220 repeats this switching until α becomes 0.05 and β becomes 0.95. As a result, the end point is moved by position control and visual servoing. In step S134, since force control is not used, the component γ remains 0.
The final ratio α:β of the components is not limited to 0.05:0.95. The components α and β can take various values whose sum is 1. In this kind of task, however, since the position of the hole H is not guaranteed to be constant, it is preferable to make the visual-servoing component β larger than the position-control component α.
The method of gradually switching the component α is not limited to this. For example, as shown in FIGS. 26A and 26B, α may be changed according to the distance to the object or the distance from the object. Also, as shown by lines C and D in FIG. 26, α may be changed continuously.
Next, the second control unit 213 judges the result of moving the end point by position control and visual servoing, that is, whether the end point has passed switching point 2 (step S136).
Switching point 2 is determined by the position relative to the hole H. For example, switching point 2 is a position at a distance L (for example, 10 cm) from the center of the opening of the hole H. The set of positions at the distance L from the center of the opening of the hole H forms a hemisphere in x, y, z space. FIG. 32 illustrates the position at the distance L from the center of the opening of the hole H in the z direction.
The image processing unit 212 extracts an image containing the tip of the workpiece W and the hole H from the current image and outputs it to the second control unit 213. The image processing unit 212 also calculates the relationship between distances in the image and distances in real space from the camera parameters (focal length, etc.) of the first imaging unit 30 or the second imaging unit 40, and outputs it to the second control unit 213. The second control unit 213 judges whether the end point has passed switching point 2 from the difference between the tip position of the workpiece W and the center position of the hole H in the extracted image.
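One common way to relate such an image distance to a real-space distance is the pinhole approximation sketched below; the text does not specify the conversion, so the method and all names here are assumptions.

```python
import numpy as np

def pixel_offset_to_meters(offset_px, depth_m, focal_px):
    # Pinhole approximation: a pixel offset observed at depth `depth_m`
    # with a focal length of `focal_px` pixels spans roughly this many meters.
    return offset_px * depth_m / focal_px

def passed_switch2(tip_px, hole_px, depth_m, focal_px, dist_l=0.10):
    # Switching point 2: the end point is within the distance L (10 cm)
    # of the center of the opening of the hole H.
    offset_px = np.linalg.norm(np.asarray(tip_px) - np.asarray(hole_px))
    return pixel_offset_to_meters(offset_px, depth_m, focal_px) <= dist_l
```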
If the end point has not passed switching point 2 (No in step S136), the first control unit 202, the second control unit 213, and the drive control unit 220 repeat the processing of step S134.
If the end point has passed switching point 2 (Yes in step S136), the drive control unit 220 combines the command value output from the first control unit 202 with the command value output from the force control unit 230 and outputs the result to the robot 10A. The motion control unit 101 moves the arm 11A (that is, the end point) according to the command value (step S138).
The processing of step S138 is described concretely below. Before the processing of step S138, that is, during the processing of step S134, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.05 and the component β of the command value from the visual servoing unit 210 set to 0.95.
After the processing of step S138 starts, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.05 to 0.5, and switches the component γ of the command value from the force control unit 230 from 0 to 0.5. As a result, the drive control unit 220 combines the command values with α set to 0.5, β set to 0, and γ set to 0.5, and outputs the result to the robot 10A. In step S138, since visual servoing is not used, the component β remains 0. The components α and γ may also be switched stepwise.
Next, the force control unit 230 judges the result of moving the end point by position control and force control, that is, whether the end point has reached the target point (step S140). Whether the target point has been reached can be judged from the output of the force sensor 102.
If the end point has not reached the target point (No in step S140), the position control unit 200, the force control unit 230, and the drive control unit 220 repeat the processing of step S138.
If the end point has reached the target point (Yes in step S140), the drive control unit 220 ends the processing.
According to this embodiment, the high speed of position control can be maintained while coping with varying target positions. Moreover, even when visual servoing cannot be used, for example when the target position cannot be confirmed, the work can be performed safely while maintaining the high speed of position control.
In this embodiment, switching point 1 is set in advance at an arbitrary position in the workspace, and switching point 2 is set at a position at a predetermined distance from the hole H, but the positions of switching points 1 and 2 are not limited to these. The positions of switching points 1 and 2 may also be set by the elapsed time from a predetermined position; for example, the position of switching point 2 can be set to 30 seconds after passing switching point 1. The positions may also be set by the distance from a predetermined position; for example, the position of switching point 1 can be set at a distance X from the start point. Furthermore, the positions of switching points 1 and 2 may be set based on an external signal input (for example, an input signal from the input device 25), as in the sketch below.
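The three alternative ways of defining a switching point named above could be expressed as predicates like the following; this is a hypothetical sketch, with all names and signatures invented for illustration.

```python
import time
import numpy as np

def time_trigger(t_passed_prev, delay_s=30.0):
    # e.g., switching point 2 placed 30 seconds after passing switching point 1
    return time.monotonic() - t_passed_prev >= delay_s

def distance_trigger(p_current, p_start, dist_x):
    # e.g., switching point 1 at a distance X from the start point
    return np.linalg.norm(np.asarray(p_current) - np.asarray(p_start)) >= dist_x

def signal_trigger(input_queue):
    # e.g., an external signal such as an input from the input device 25
    return not input_queue.empty()
```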
Fifth Embodiment
The fourth embodiment of the present invention performs assembly work such as inserting an object into a hole by position control and force control, but the scope of application of the present invention is not limited to this.
The fifth embodiment of the present invention applies the present invention to assembly work such as inserting an object into a hole by position control, visual servoing, and force control. The fifth embodiment of the present invention is described below. Since the configuration of the robot system 4 of the fifth embodiment is the same as that of the robot system 3, its description is omitted. In the processing performed by the robot system 4, parts identical to those of the second, third, and fourth embodiments are given the same reference numerals, and detailed description is omitted.
The characteristic processing of the robot system 4 of this embodiment is explained. FIG. 33 is a flowchart showing the flow of the control processing of the arm 11A of the robot system 4. This processing is started, for example, by inputting a control start instruction via a button (not shown) or the like. In this embodiment, as shown in FIG. 34, an assembly operation of inserting a workpiece W into a hole H formed in a moving stage is described as an example.
When a control start instruction is input via a button (not shown) or the like, the first control unit 202 controls the arm 11A by position control and moves the end point (step S130).
Next, the first control unit 202 judges the result of moving the end point by position control, that is, whether the end point has passed switching point 1 (step S132).
If the end point has not passed switching point 1 (No in step S132), the first control unit 202 repeats the processing of step S130.
If the end point has passed switching point 1 (Yes in step S132), the drive control unit 220 switches the components α and β stepwise with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched components α and β, and outputs the result to the robot 10A. The motion control unit 101 then moves the arm 11A (that is, the end point) according to the command value (step S134).
Next, the second control unit 213 judges the result of moving the end point by position control and visual servoing, that is, whether the end point has passed switching point 2 (step S136).
If the end point has not passed switching point 2 (No in step S136), the first control unit 202, the second control unit 213, and the drive control unit 220 repeat the processing of step S134.
If the end point has passed switching point 2 (Yes in step S136), the drive control unit 220 combines the command value output from the first control unit 202, the command value output from the second control unit 213, and the command value output from the force control unit 230, and outputs the result to the robot 10A. The motion control unit 101 moves the arm 11A (that is, the end point) according to the command value (step S139).
The processing of step S139 is described concretely below. Before the processing of step S139, that is, during the processing of step S134, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.05 and the component β of the command value from the visual servoing unit 210 set to 0.95.
After the processing of step S139 starts, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.05 to 0.34, switches the component β of the command value from the visual servoing unit 210 from 0.95 to 0.33, and switches the component γ of the command value from the force control unit 230 from 0 to 0.33. As a result, the drive control unit 220 combines the command values with α set to 0.34, β set to 0.33, and γ set to 0.33, and outputs the result to the robot 10A.
The ratio α:β:γ of the components is not limited to 0.34:0.33:0.33. The components α, β, and γ can take various values whose sum is 1, according to the task. The components α, β, and γ may also be switched gradually.
Next, the force control unit 230 judges the result of moving the end point by position control, visual servoing, and force control, that is, whether the end point has reached the target point (step S140).
If the end point has not reached the target point (No in step S140), the position control unit 200, the visual servoing unit 210, the force control unit 230, and the drive control unit 220 repeat the processing of step S139.
If the end point has reached the target point (Yes in step S140), the drive control unit 220 ends the processing.
According to this embodiment, the high speed of position control can be maintained while moving the end point to varying target positions. In particular, even when the target position moves and even when the target position cannot be confirmed, the work can be performed safely while maintaining the high speed of position control, because the control is performed by position control, visual servoing, and force control.
In this embodiment, the arm is controlled by simultaneously performing position control, visual servoing, and force control (parallel control), whereas in the fourth embodiment the arm is controlled by simultaneously performing position control and force control (parallel control). The drive control unit 220 can select whether to perform position control, visual servoing, and force control simultaneously or to perform position control and force control simultaneously, according to predetermined conditions set in advance and stored in the memory 22 or the like, such as whether the workpiece W, the hole H, and the like can be visually confirmed and whether they move.
In the above embodiments, the case of using a single-arm robot has been described, but the present invention can also be applied to the case of using a dual-arm robot. In the above embodiments, the case where the end point is provided at the tip of the arm of the robot has been described, but being provided on the robot is not limited to being provided on the arm. For example, the robot may be provided with a manipulator that is composed of a plurality of joints and links and that moves as a whole by moving the joints, and the tip of the manipulator may be used as the end point.
In the above embodiments, two imaging units, the first imaging unit 30 and the second imaging unit 40, are provided, but there may be a single imaging unit.
The present invention has been described above using embodiments, but the technical scope of the present invention is not limited to the scope described in the above embodiments. It is obvious to those skilled in the art that various changes or improvements can be added to the above embodiments. It is also clear from the description of the claims that embodiments to which such changes or improvements are added can be included in the technical scope of the present invention. In particular, the present invention may provide a robot system in which a robot, a control unit, and an imaging unit are provided separately, may provide a robot that includes the control unit and the like, and may also provide a robot control device composed of only the control unit or of the control unit and the imaging unit. The present invention can also provide a program for controlling a robot or the like and a storage medium storing the program.
Sixth Embodiment
1. Technique of This Embodiment
Robot control using image information is widely known. For example, visual servoing control is known in which image information is continuously acquired and the result of comparison processing between information acquired from the image information and target information is fed back. In visual servoing, the robot is controlled in the direction that reduces the difference between the information acquired from the latest image information and the target information. Specifically, control is performed in which the amount of change in the joint angles that brings the robot closer to the target is obtained, and the joints are driven based on that amount of change.
With a technique in which the target position and posture of the hand tip or the like of the robot are given and the robot is controlled so as to assume that target position and posture, it is difficult to improve the positioning accuracy, that is, to move the hand tip (hand) or the like accurately to the target position and posture. Ideally, once the model of the robot is determined, the hand-tip position and posture can be uniquely obtained from that model. The model here refers to information such as the length of the frame (link) provided between two joints and the structure of the joints (the rotation direction of each joint, the presence or absence of an offset, and so on).
However, a robot contains various errors, for example variations in link length and deflection due to gravity. Because of these error factors, when control that makes the robot take a given posture is performed (for example, control that determines the angle of each joint), the ideal position and posture and the actual position and posture take different values.
In this respect, in visual servoing control, the image processing result for the captured image is fed back; therefore, just as a person can finely adjust the movement direction of the arm or hand while observing the work situation with the eyes, even if the current position and posture deviate from the target position and posture, the deviation can be recognized and corrected.
In visual servoing control, as the "information acquired from the image" and the "target information" described above, three-dimensional position and posture information of the hand tip or the like of the robot can be used, and image feature values acquired from the image can also be used without converting them into position and posture information. Visual servoing that uses position and posture information is called position-based visual servoing, and visual servoing that uses image feature values is called feature-based visual servoing.
To perform visual servoing appropriately, the position and posture information or the image feature values must be detected from the image information with good accuracy. If the accuracy of this detection processing is low, the current state is erroneously recognized. The information fed back to the control loop then fails to bring the state of the robot appropriately closer to the target state, and highly accurate robot control cannot be realized.
It is assumed that both the position and posture information and the image feature values are obtained by some detection processing (for example, matching processing), but the accuracy of that detection processing is not necessarily sufficient. This is because, in the environment in which the robot actually operates, not only the object to be recognized (for example, the hand of the robot) but also workpieces, jigs, objects placed in the operating environment, and the like appear in the captured image. Since various objects appear in the background of the image, the recognition accuracy (detection accuracy) for the desired object decreases, and the accuracy of the obtained position and posture information and image feature values also decreases.
Patent Document 1 discloses a technique in which, in position-based visual servoing, an abnormality is detected by comparing the spatial position or movement speed calculated from the image with the spatial position or movement speed calculated from the encoders. Since the spatial position is information included in the position and posture information, and the movement speed is also information obtained from the amount of change in the position and posture information, the spatial position or movement speed will be described below as position and posture information.
It is considered that, by using the technique of Patent Document 1, when some abnormality occurs in the visual servoing, such as a large error arising in the position and posture information obtained from the image information, the abnormality can be detected. If the abnormality can be detected, the control of the robot can be stopped, or the detection of the position and posture information can be performed again, thereby at least suppressing the use of abnormal information as it is in the control.
However, the technique of Patent Document 1 presupposes position-based visual servoing. With a position basis, as described above, it suffices to compare the position and posture information easily obtained from information such as the encoders with the position and posture information obtained from the image information, so implementation is easy. On the other hand, in feature-based visual servoing, image feature values are used for the control of the robot. Even though the spatial position of the hand tip or the like of the robot is easily obtained from information such as the encoders, its relationship with the image feature values cannot be obtained directly. That is, when feature-based visual servoing is assumed, it is difficult to apply the technique of Patent Document 1.
Therefore, the present applicant proposes the following technique: in control using image feature values, an abnormality is detected using the image feature value change actually acquired from the image information and an estimated image feature value change estimated from information acquired from the results of robot control. Specifically, as shown in FIG. 35, the robot control device 1000 of the present embodiment includes a robot control unit 1110 that controls the robot 20000 based on image information; a change amount calculation unit 1120 that obtains the image feature value change from the image information; a change amount estimation unit 1130 that calculates the estimated image feature value change, which is an estimate of the image feature value change, based on change estimation information that is information on the robot 20000 or the object and that is information other than the image information; and an abnormality determination unit 1140 that performs abnormality determination by comparison processing between the image feature value change and the estimated image feature value change.
Here, the image feature value is, as described above, a value representing a feature such as a region in the image, an area, the length of a line segment, or the position of a feature point, and the image feature value change is information representing the change among a plurality of image feature values acquired from a plurality of (two, in a narrow sense) pieces of image information. As an example, if the two-dimensional positions on the image of three feature points are used, the image feature value is a six-dimensional vector, and the image feature value change is the difference between two six-dimensional vectors, that is, a six-dimensional vector whose elements are the differences between the elements of those vectors.
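For illustration, the following is a minimal sketch of this feature vector and its change, assuming the three feature points are given as (u, v) pixel coordinates; the coordinate values are hypothetical.

```python
import numpy as np

# Hypothetical image feature: the 2-D image positions of three feature
# points, stacked into one 6-dimensional feature vector [u1, v1, ..., u3, v3].
def feature_vector(points):
    """points: list of three (u, v) image coordinates."""
    return np.asarray(points, dtype=float).reshape(-1)

f_old = feature_vector([(120, 80), (200, 95), (160, 150)])
f_new = feature_vector([(118, 82), (197, 97), (158, 153)])

# The image feature value change is the element-wise difference of the
# two 6-dimensional vectors.
delta_f = f_new - f_old
print(delta_f)  # [-2.  2. -3.  2. -2.  3.]
```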
The change estimation information is information used for estimating the image feature value change, and is information other than the image information. The change estimation information may be, for example, information acquired (actually measured) from the results of robot control; specifically, it may be joint angle information of the robot 20000. The joint angle information can be acquired from encoders that measure and control the operation of the motors (actuators, in a broad sense) for driving the joints of the robot. Alternatively, the change estimation information may be position and posture information of the end effector 2220 of the robot 20000 or of the object of the work performed by the robot 20000. The position and posture information is, for example, a six-dimensional vector consisting of the three-dimensional position (x, y, z) of a reference point of the object and the rotations (R1, R2, R3) about the respective axes relative to a reference posture. Various techniques for obtaining the position and posture information of an object are conceivable; for example, distance measurement using ultrasonic waves, a technique using a surveying instrument, a technique in which an LED or the like is provided at the hand tip and measurement is performed by detecting the LED, a technique using a mechanical three-dimensional measuring device, or the like may be used.
In this way, an abnormality can be detected in robot control using image feature values (feature-based visual servoing, in a narrow sense). At that time, comparison processing is performed between the image feature value change obtained from the actually acquired image information and the estimated image feature value change obtained from the change estimation information acquired from a viewpoint different from the image information.
Control of the robot based on image information is not limited to visual servoing. For example, in visual servoing, feedback of information based on image information to the control loop is performed continuously, but a vision approach in which image information is acquired once, the movement amount toward the target position and posture is obtained from that image information, and position control is performed based on that movement amount can also be used as control of the robot based on image information. In addition to visual servoing and the vision approach, the technique of the present embodiment can also be applied as a means of detecting an abnormality in the control of a robot using image information, in the detection of information from image information, and the like.
However, as described later, the technique of the present embodiment assumes that a Jacobian matrix is used in the calculation of the estimated image feature value change. A Jacobian matrix is information representing the relationship between the amount of change of a given value and the amount of change of another value. For example, even if first information x and second information y have a nonlinear relationship (g in y = g(x) is a nonlinear function), in the vicinity of a given value the amount of change Δx of the first information and the amount of change Δy of the second information can be regarded as having a linear relationship (h in Δy = h(Δx) is a linear function), and the Jacobian matrix represents that linear relationship. That is, in the present embodiment, it is assumed that not the image feature value itself but the image feature value change is used in the processing. Accordingly, when the technique of the present embodiment is applied to control other than visual servoing, such as the vision approach, it should be noted that a technique that acquires image information only once cannot be used; image information must be acquired at least twice so that the image feature value change can be obtained. For example, when the technique of the present embodiment is applied to the vision approach, acquisition of image information and calculation of the target movement amount need to be performed a plurality of times.
In the following, after a system configuration example of the robot control device 1000 and the robot of the present embodiment is described, an outline of visual servoing is given. On that basis, the abnormality detection technique of the present embodiment is described, and finally modifications are also described. In the following, visual servoing is taken as the example of robot control using image information, but the description can be extended to robot control using other image information.
2. System Configuration Example
FIG. 36 shows a detailed system configuration example of the robot control device 1000 of the present embodiment. However, the robot control device 1000 is not limited to the configuration of FIG. 36, and various modifications such as omitting some of these constituent elements or adding other constituent elements are possible.
As shown in FIG. 36, the robot control device 1000 includes a target feature value input unit 111, a target trajectory generation unit 112, a joint angle control unit 113, a drive unit 114, a joint angle detection unit 115, an image information acquisition unit 116, an image feature value calculation unit 117, a change amount calculation unit 1120, a change amount estimation unit 1130, and an abnormality determination unit 1140.
The target feature value input unit 111 inputs the target image feature value fg to the target trajectory generation unit 112. The target feature value input unit 111 may be realized, for example, as an interface that accepts input of the target image feature value fg by the user. In the robot control, control is performed to bring the image feature value f obtained from the image information closer to the input target image feature value fg (in a narrow sense, to make them coincide). Alternatively, image information corresponding to the target state (a reference image or target image) may be acquired, and the target image feature value fg may be obtained from that image information. Or, instead of holding a reference image, the input of the target image feature value fg may be accepted directly.
The target trajectory generation unit 112 generates, based on the target image feature value fg and the image feature value f obtained from the image information, a target trajectory for operating the robot 20000. Specifically, processing is performed to obtain the joint angle change Δθg for bringing the robot 20000 closer to the target state (the state corresponding to fg). This Δθg serves as a tentative target value of the joint angles. The target trajectory generation unit 112 may also obtain the drive amount of the joint angles per unit time (the dotted θg in FIG. 36) from Δθg.
The joint angle control unit 113 controls the joint angles based on the joint angle target value Δθg and the current joint angle value θ. For example, since Δθg is the amount of change of the joint angles, processing is performed using θ and Δθg to obtain what values the joint angles should take. The drive unit 114 performs control to drive the joints of the robot 20000 in accordance with the control of the joint angle control unit 113.
The joint angle detection unit 115 performs processing to detect the values of the joint angles of the robot 20000. Specifically, after the joint angles are changed by drive control by the drive unit 114, it detects the changed joint angle values and outputs the current joint angle value θ to the joint angle control unit 113. The joint angle detection unit 115 may specifically be realized as an interface or the like for acquiring encoder information.
The image information acquisition unit 116 acquires image information from an imaging unit or the like. The imaging unit here may be an imaging unit arranged in the environment as shown in FIG. 37, or may be an imaging unit provided on the arm 2210 or the like of the robot 20000 (for example, a hand-eye camera). The image feature value calculation unit 117 performs calculation processing of image feature values based on the image information acquired by the image information acquisition unit 116. Various techniques such as edge detection processing and matching processing are known for calculating image feature values from image information, and they can be widely applied in the present embodiment, so a detailed description is omitted. The image feature value obtained by the image feature value calculation unit 117 is output to the target trajectory generation unit 112 as the latest image feature value f.
The change amount calculation unit 1120 holds the image feature values calculated by the image feature value calculation unit 117, and calculates the image feature value change Δf from the difference between an image feature value f_old acquired in the past and the image feature value f to be processed (the latest image feature value, in a narrow sense).
The change amount estimation unit 1130 holds the joint angle information detected by the joint angle detection unit 115, and calculates the joint angle change Δθ from the difference between joint angle information θ_old acquired in the past and the joint angle information θ to be processed (the latest joint angle information, in a narrow sense). Then, from Δθ, it obtains the estimated image feature value change Δfe. In FIG. 36, an example in which the change estimation information is joint angle information is described, but as noted above, position and posture information of the end effector 2220 of the robot 20000 or of the object may also be used as the change estimation information.
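A minimal sketch of the processing of the change amount estimation unit 1130, assuming joint angle information as the change estimation information and a known image Jacobian Jv (described in Section 3 below); the placeholder Jacobian and names are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

def estimate_feature_change(theta, theta_old, Jv):
    """Estimate the image feature value change from joint angle information.

    theta, theta_old: current and past joint angle vectors (from encoders).
    Jv: image Jacobian mapping joint angle changes to feature changes.
    Returns the estimated image feature value change Δfe = Jv · Δθ.
    """
    delta_theta = theta - theta_old
    return Jv @ delta_theta

# Example with a 6-joint arm and a 6-dimensional image feature.
Jv = np.eye(6) * 50.0          # placeholder Jacobian (pixels per radian)
theta_old = np.zeros(6)
theta = np.array([0.01, -0.02, 0.0, 0.005, 0.0, 0.01])
print(estimate_feature_change(theta, theta_old, Jv))
```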
The robot control unit 1110 of FIG. 35 may be a control unit corresponding to the target feature value input unit 111, the target trajectory generation unit 112, the joint angle control unit 113, the drive unit 114, the joint angle detection unit 115, the image information acquisition unit 116, and the image feature value calculation unit 117 of FIG. 36.
As shown in FIG. 38, the technique of the present embodiment can be applied to a robot that includes: a robot control unit 1110 that controls the robot (specifically, the robot main body 3000 including the arm 2210 and the end effector 2220) based on image information; a change amount calculation unit 1120 that obtains the image feature value change from the image information; a change amount estimation unit 1130 that calculates the estimated image feature value change, which is an estimate of the image feature value change, based on change estimation information that is information on the robot 20000 or the object and that is information other than the image information; and an abnormality determination unit 1140 that performs abnormality determination by comparison processing between the image feature value change and the estimated image feature value change.
As shown in FIGS. 19A and 19B, the robot here may be a robot including a control device 600 and a robot main body 300. In the configuration of FIGS. 19A and 19B, the control device 600 includes the robot control unit 1110 of FIG. 38 and the like. In this way, operation based on control using image information can be performed, and a robot that automatically detects abnormalities in the control can be realized.
The configuration example of the robot of the present embodiment is not limited to FIGS. 19A and 19B. For example, as shown in FIG. 39, the robot may include a robot main body 3000 and a base unit part 350. The robot of the present embodiment may be a dual-arm robot as shown in FIG. 39, which includes a first arm 2210-1 and a second arm 2210-2 in addition to parts corresponding to the head and the torso. In FIG. 39, the first arm 2210-1 is composed of joints 2211 and 2213 and frames 2215 and 2217 provided between the joints, and the second arm 2210-2 is configured similarly, but the configuration is not limited to this. Although FIG. 39 shows an example of a dual-arm robot having two arms, the robot of the present embodiment may have three or more arms.
The base unit part 350 is provided at the lower portion of the robot main body 3000 and supports the robot main body 3000. In the example of FIG. 39, wheels and the like are provided on the base unit part 350 so that the entire robot can move. However, the base unit part 350 may instead have a structure that has no wheels or the like and is fixed to the floor or the like. In FIG. 39, a device corresponding to the control device 600 of FIGS. 19A and 19B is not shown, but in the robot system of FIG. 39, the control device 600 is housed in the base unit part 350, whereby the robot main body 3000 and the control device 600 are configured as one body.
Alternatively, instead of providing a specific control device such as the control device 600, the robot control unit 1110 and the like described above may be realized by a board built into the robot (more specifically, an IC or the like provided on the board).
As shown in FIG. 20, the functions of the robot control device 1000 may also be realized by a server 500 that is communicatively connected to the robot via a network 400 including at least one of a wired and a wireless connection.
Alternatively, in the present embodiment, the server 500 as a robot control device may perform part of the processing of the robot control device of the present invention. In this case, the processing is realized by distributed processing with a robot control device provided on the robot side.
In this case, the server 500 as a robot control device performs, among the processes of the robot control device of the present invention, the processes assigned to the server 500. On the other hand, the robot control device provided in the robot performs, among the processes of the robot control device of the present invention, the processes assigned to the robot control device of the robot.
For example, suppose the robot control device of the present invention performs first to M-th processes (M is an integer), and each of the first to M-th processes can be divided into a plurality of sub-processes such that the first process is realized by sub-process 1a and sub-process 1b, and the second process is realized by sub-process 2a and sub-process 2b. In this case, distributed processing is conceivable in which the server 500 as a robot control device performs sub-process 1a, sub-process 2a, ..., sub-process Ma, and the robot control device provided on the robot side performs sub-process 1b, sub-process 2b, ..., sub-process Mb. The robot control device of the present embodiment, that is, the robot control device that executes the first to M-th processes, may then be a robot control device that executes sub-process 1a to sub-process Ma, may be a robot control device that executes sub-process 1b to sub-process Mb, or may be a robot control device that executes all of sub-process 1a to sub-process Ma and sub-process 1b to sub-process Mb. In other words, the robot control device of the present embodiment is a robot control device that executes at least one sub-process for each of the first to M-th processes.
Accordingly, for example, the server 500, which has higher processing capability than the terminal device on the robot side (for example, the control device 600 of FIGS. 19A and 19B), can perform processing with a high processing load. Furthermore, the server 500 can collectively control the operation of each robot, which makes it easy, for example, to make a plurality of robots operate in coordination.
In recent years, there has been an increasing tendency to manufacture many types of parts in small quantities. When the type of parts to be manufactured is changed, the operation performed by the robot needs to be changed. With the configuration shown in FIG. 20, the server 500 can collectively change the operations performed by the robots without redoing the teaching work for each of the plurality of robots. Furthermore, compared with the case where one robot control device 1000 is provided for each robot, the trouble involved in updating the software of the robot control device 1000 can be greatly reduced.
3. Visual Servoing Control
Before describing the abnormality detection technique of the present embodiment, general visual servoing control will be described. FIG. 40 shows a configuration example of a general visual servoing control system. As can be seen from FIG. 40, compared with the robot control device 1000 of the present embodiment shown in FIG. 36, the configuration excludes the change amount calculation unit 1120, the change amount estimation unit 1130, and the abnormality determination unit 1140.
When the number of dimensions of the image feature value used for visual servoing is n (n is an integer), the image feature value f is expressed as an image feature vector f = [f1, f2, ..., fn]^T. For each element of f, for example, coordinate values in the image of feature points (control points) may be used. In this case, the target image feature value fg input from the target feature value input unit 111 is similarly expressed as fg = [fg1, fg2, ..., fgn]^T.
The joint angles are also expressed as a joint angle vector whose number of dimensions corresponds to the number of joints included in the robot 20000 (the arm 2210, in a narrow sense). For example, if the arm 2210 is a six-degree-of-freedom arm having six joints, the joint angle vector θ is expressed as θ = [θ1, θ2, ..., θ6]^T.
In visual servoing, when the current image feature value f is acquired, the difference between the image feature value f and the target image feature value fg is fed back to the operation of the robot. Specifically, the robot is operated in the direction that reduces the difference between the image feature value f and the target image feature value fg. For this purpose, the relationship of how the image feature value f changes when the joint angles θ are moved must be known. In general, this relationship is nonlinear; for example, in the case of f1 = g(θ1, θ2, θ3, θ4, θ5, θ6), the function g is a nonlinear function.
Therefore, in visual servoing, a technique using a Jacobian matrix J is widely known. Even if two spaces are in a nonlinear relationship, small changes in the respective spaces can be expressed in a linear relationship. The Jacobian matrix J is a matrix that relates such small changes to each other.
Specifically, when the position and posture X of the hand tip of the robot 20000 is X = [x, y, z, R1, R2, R3]^T, the Jacobian matrix Ja between the joint angle change and the position and posture change is expressed by the following equation (1), and the Jacobian matrix Ji between the position and posture change and the image feature value change is expressed by the following equation (2).
Ja = ∂X/∂θ ····· (1)
Ji = ∂f/∂X ····· (2)
By using Ja and Ji, the relationships among Δθ, ΔX, and Δf can be expressed as shown in the following equations (3) and (4). Ja is generally called the robot Jacobian matrix, and Ja can be calculated analytically if mechanism information such as the link lengths and rotation axes of the robot 20000 is available. On the other hand, Ji can be estimated in advance from, for example, the change in the image feature values when the position and posture of the hand tip of the robot 20000 is changed slightly, and techniques for estimating Ji at any time during operation have also been proposed.
ΔX = JaΔθ ····· (3)
Δf = JiΔX ····· (4)
By using the above equations (3) and (4), the relationship between the image feature value change Δf and the joint angle change Δθ can be expressed as shown in the following equation (5).
Δf = JvΔθ ····· (5)
Here, Jv = JiJa, which represents the Jacobian matrix between the joint angle change and the image feature value change. Jv is also referred to as the image Jacobian matrix. The relationships of the above equations (3) to (5) are illustrated in FIG. 41.
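The composition of these Jacobians can be illustrated with the following sketch, assuming 6-dimensional Δθ, ΔX, and Δf; the matrices are random placeholders rather than values from a real mechanism model.

```python
import numpy as np

# Placeholder Jacobians; in practice Ja comes from the robot's mechanism
# model and Ji is estimated from small test motions of the hand tip.
Ja = np.random.rand(6, 6)   # robot Jacobian: ΔX = Ja · Δθ
Ji = np.random.rand(6, 6)   # feature Jacobian: Δf = Ji · ΔX
Jv = Ji @ Ja                # image Jacobian: Δf = Jv · Δθ, equation (5)

delta_theta = np.array([0.01, 0.0, -0.02, 0.0, 0.01, 0.0])
delta_X = Ja @ delta_theta  # equation (3)
delta_f = Ji @ delta_X      # equation (4)
assert np.allclose(delta_f, Jv @ delta_theta)  # consistent with (5)
```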
Based on the above, the target trajectory generation unit 112 may set the difference between f and fg as Δf and obtain the joint angle drive amount (joint angle change) Δθ. In this way, the joint angle change for bringing the image feature value f closer to fg can be obtained. Specifically, to obtain Δθ from Δf, both sides of the above equation (5) may be multiplied from the left by the inverse matrix Jv⁻¹ of Jv; further taking into account a control gain λ, the target joint angle change Δθg is obtained by the following equation (6).
Δθg = −λJv⁻¹(f − fg) ····· (6)
In the above equation (6), the inverse matrix Jv⁻¹ of Jv is used, but when Jv⁻¹ cannot be obtained, the generalized inverse matrix (pseudo-inverse matrix) Jv# of Jv may be used.
By using the above equation (6), a new Δθg is obtained every time a new image is acquired. Accordingly, using the acquired images, control that approaches the target state (the state in which the image feature value becomes fg) can be performed while the target joint angles are updated. This flow is illustrated in FIG. 42. When the image feature value f_{m-1} is obtained from the (m-1)-th image (m is an integer), Δθg_{m-1} can be obtained by setting f = f_{m-1} in the above equation (6). Then, between the (m-1)-th image and the next image, that is, the m-th image, the robot 20000 may be controlled with the obtained Δθg_{m-1} as the target. When the m-th image is acquired, the image feature value f_m is obtained from that m-th image, and a new target Δθg_m is calculated using the above equation (6). Between the m-th image and the (m+1)-th image, the calculated Δθg_m is used for control. This processing may be continued until it is terminated (until the image feature value is sufficiently close to fg).
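A minimal sketch of this update, implementing equation (6) with the pseudo-inverse (via np.linalg.pinv) so that it also covers the case where Jv is not invertible; the Jacobian and feature values below are hypothetical.

```python
import numpy as np

def target_joint_change(f, fg, Jv, lam=0.1):
    """Visual servoing update of equation (6): Δθg = -λ · Jv⁺ · (f - fg).

    The pseudo-inverse plays the role of Jv# when Jv⁻¹ does not exist;
    lam is the control gain λ.
    """
    return -lam * np.linalg.pinv(Jv) @ (f - fg)

# Hypothetical example: 6-dimensional feature, 6-joint arm.
Jv = np.eye(6) * 50.0                            # placeholder image Jacobian
fg = np.array([100.0, 80, 200, 95, 160, 150])    # target image feature
f = np.array([120.0, 82, 197, 97, 158, 153])     # current image feature
print(target_joint_change(f, fg, Jv))            # joint change toward fg
```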
Although the target joint angle change is obtained, it is not necessarily required to change the joint angles by the full target amount. For example, between the m-th image and the (m+1)-th image, control is performed with Δθg_m as the target value, but it is often the case that the next image, the (m+1)-th image, is acquired before the actual change reaches Δθg_m, and a new target value Δθg_{m+1} is calculated from it.
4. Abnormality Detection Technique
The abnormality detection technique of the present embodiment will now be described. As shown in FIG. 43A, when the joint angles of the robot 20000 are θp, the p-th image information is acquired, and the image feature value fp is calculated from that p-th image information. Then, at a time later than the acquisition time of the p-th image information, when the joint angles of the robot 20000 are θq, the q-th image information is acquired, and the image feature value fq is calculated from that q-th image information. Here, the p-th image information and the q-th image information may be image information adjacent in the time series, or may be non-adjacent (other image information is acquired after the acquisition of the p-th image information and before the acquisition of the q-th image information).
In visual servoing, as described above, the differences of fp and fq from fg are used as Δf in the calculation of Δθg, but the difference fq − fp between fp and fq is nothing other than an image feature value change. Since the joint angles θp and θq are acquired by the joint angle detection unit 115 from the encoders or the like, they can be obtained as actually measured values, and the difference θq − θp between θp and θq is the joint angle change Δθ. That is, for the two pieces of image information, the corresponding image feature value f and joint angles θ are obtained for each; the image feature value change is obtained as Δf = fq − fp, and the corresponding joint angle change as Δθ = θq − θp.
As shown in the above equation (5), the relationship Δf = JvΔθ holds. That is, if Δfe = JvΔθ is obtained using the actually measured Δθ = θq − θp and the Jacobian matrix Jv, the obtained Δfe should coincide with the actually measured Δf = fq − fp in an ideal environment in which no error occurs at all.
Accordingly, the change amount estimation unit 1130 calculates the estimated image feature value change Δfe by applying, to the change in the joint angle information, the Jacobian matrix Jv that associates the joint angle information with the image feature values (specifically, that associates the joint angle change with the image feature value change). As described above, in an ideal environment, the obtained estimated image feature value change Δfe should coincide with the image feature value change Δf obtained as Δf = fq − fp in the change amount calculation unit 1120; conversely, when Δf and Δfe differ greatly, it can be determined that some abnormality has occurred.
Here, as factors that cause an error between Δf and Δfe, an error in calculating the image feature values from the image information, an error in the encoders reading the joint angle values, an error contained in the Jacobian matrix Jv, and the like are conceivable. However, the possibility of an error occurring when the encoders read the joint angle values is low compared with the other two. The error contained in the Jacobian matrix Jv is also not a very large one. In contrast, since many objects that are not the recognition target appear in the image, errors in calculating the image feature values from the image information occur relatively frequently. Moreover, when an abnormality occurs in the image feature value calculation, the error may become very large. For example, if the recognition processing for recognizing the desired object in the image fails, the object may be erroneously recognized as existing at a position in the image greatly different from the original object position. Therefore, the present embodiment mainly detects abnormalities in the calculation of image feature values. However, errors caused by other factors may also be detected as abnormalities.
For the abnormality determination, for example, determination processing using a threshold value may be performed. Specifically, the abnormality determination unit 1140 performs comparison processing between a threshold value and difference information of the image feature value change Δf and the estimated image feature value change Δfe, and determines an abnormality when the difference information is larger than the threshold value. For example, a given threshold value Th may be set, and when the following equation (7) is satisfied, it may be determined that an abnormality has occurred. In this way, an abnormality can be detected by a simple calculation such as the following equation (7).
|Δf − Δfe| > Th ····· (7)
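A minimal sketch of this determination, implementing equation (7) with the Euclidean norm as the difference information (the choice of norm is an assumption, since the equation leaves the metric implicit); the example values are hypothetical.

```python
import numpy as np

def is_abnormal(delta_f, delta_fe, th):
    """Abnormality determination of equation (7): report an abnormality
    when the difference between the measured image feature change Δf and
    the estimated change Δfe exceeds the threshold Th."""
    return np.linalg.norm(delta_f - delta_fe) > th

# Hypothetical values: the measured change disagrees strongly with the
# estimate, as when feature detection locks onto the wrong object.
delta_f = np.array([-1.8, 2.1, 40.0, 2.2, -1.9, 2.8])   # from images
delta_fe = np.array([-2.0, 2.0, -3.0, 2.0, -2.0, 3.0])  # from encoders
print(is_abnormal(delta_f, delta_fe, th=10.0))  # True
```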
The threshold value Th need not be a fixed value, and its value may be changed according to the situation. For example, the abnormality determination unit 1140 may be configured to set the threshold value larger as the difference between the acquisition times of the two pieces of image information used in the calculation of the image feature value change in the change amount calculation unit 1120 becomes larger.
As shown in FIG. 41 and elsewhere, the Jacobian matrix Jv is a matrix that relates Δθ to Δf. As shown in FIG. 44, even when the same Jacobian matrix Jv is applied, Δfe' obtained by applying it to Δθ', which is a larger change than Δθ, changes more than Δfe obtained by applying it to Δθ. It is difficult to assume that the Jacobian matrix Jv contains no error at all, so as shown in FIG. 44, Δfe and Δfe' deviate from the ideal image feature value changes Δfi and Δfi' corresponding to the joint angle changes Δθ and Δθ'. As can be seen from the comparison of A1 and A2 in FIG. 44, the larger the change, the larger this deviation.
If it is assumed that no error at all occurs in the image feature value calculation, the image feature value change Δf obtained from the image information equals Δfi or Δfi'. In that case, the left side of the above equation (7) represents the error caused by the Jacobian matrix, and takes a value corresponding to A1 when the changes are small, as with Δθ and Δfe, and a value corresponding to A2 when the changes are large, as with Δθ' and Δfe'. However, as described above, the Jacobian matrix Jv used for both Δfe and Δfe' is the same; although the value of the left side of the above equation (7) becomes larger, it is not appropriate to determine that the A2 case has a higher degree of abnormality than the A1 case. That is, it is not appropriate that the above equation (7) is not satisfied (no abnormality determined) in the situation corresponding to A1 while it is satisfied (abnormality determined) in the situation corresponding to A2. Therefore, the abnormality determination unit 1140 sets the threshold value Th larger as the changes such as Δθ and Δfe become larger. In this way, since the threshold value Th is larger in the situation corresponding to A2 than in the situation corresponding to A1, appropriate abnormality determination can be performed. Since Δθ, Δfe, and the like are considered to become larger as the difference between the acquisition times of the two pieces of image information (the p-th image information and the q-th image information in FIG. 43A) becomes larger, in the processing, for example, the threshold value Th may be set in accordance with the difference between the image acquisition times.
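As one possible realization, the following sketch scales the threshold linearly with the acquisition-time difference; the linear form and the coefficients are illustrative assumptions, not values given in this embodiment.

```python
def adaptive_threshold(t_p, t_q, th_base=5.0, th_per_sec=20.0):
    """Scale the abnormality threshold with the gap between the acquisition
    times of the two images, since a larger Δθ (and hence a larger
    Jacobian-induced error) is expected over a longer interval."""
    dt = t_q - t_p  # seconds between the p-th and q-th image acquisitions
    return th_base + th_per_sec * dt

# A pair of images 0.1 s apart tolerates less deviation than a pair
# 0.5 s apart.
print(adaptive_threshold(0.0, 0.1))  # 7.0
print(adaptive_threshold(0.0, 0.5))  # 15.0
```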
Various controls are conceivable for the case where the abnormality determination unit 1140 detects an abnormality. For example, when an abnormality is detected by the abnormality determination unit 1140, the robot control unit 1110 may perform control to stop the robot 20000. As described above, a case where an abnormality is detected is, for example, a case where a large error has occurred in the calculation of the image feature values from the image information. That is, if the robot 20000 is controlled using that image feature value (fq, in the example of FIG. 43A), the robot 20000 may be moved in a direction far from the direction that brings the image feature value closer to the target image feature value fg. As a result, the arm 2210 or the like may collide with another object, or an unreasonable posture may cause the object grasped by the hand or the like to drop. Therefore, as one example of control at the time of an abnormality, stopping the operation of the robot 20000 itself so as not to perform such a risky operation is conceivable.
If it is inferred that a large error has occurred in the image feature value fq and control using fq is undesirable, the robot operation need not be stopped immediately; it suffices not to use fq for control. Thus, for example, when an abnormality is detected by the abnormality determination unit 1140, the robot control unit 1110 may skip the control based on the abnormality determination image information, which is the image information acquired at the later time in the time series among the two pieces of image information used in the calculation of the image feature value change in the change amount calculation unit 1120, and instead perform control based on image information acquired at a time earlier than the abnormality determination image information.
In the example of FIG. 43A, the abnormality determination image information is the q-th image. In the example of FIG. 42, abnormality determination is performed using two adjacent pieces of image information, and it is determined that there is no abnormality between the (m-2)-th image information and the (m-1)-th image information, no abnormality between the (m-1)-th image information and the m-th image information, and an abnormality between the m-th image information and the (m+1)-th image information. In this case, it can be seen that f_{m-1} and f_m are not abnormal while f_{m+1} is abnormal, so Δθg_{m-1} and Δθg_m can be used for control, but it is not appropriate to use Δθg_{m+1} for control. Originally, Δθg_{m+1} would be used for control between the (m+1)-th image information and the next, (m+2)-th, image information, but here that control is not performed because it is not appropriate. In this case, the robot 20000 may be operated using the previously obtained Δθg_m between the (m+1)-th image information and the (m+2)-th image information as well. Since Δθg_m is, at least at the time of the calculation of f_m, information that moves the robot 20000 in the target direction, it is unlikely that a large error will arise even if it continues to be used after the calculation of f_{m+1}. In this way, even when an abnormality is detected, the operation of the robot 20000 can be roughly controlled and continued using the information obtained before that point, in particular information that was acquired at a time earlier than the abnormality detection time and in which no abnormality was detected. Thereafter, when new image information is acquired (the (m+2)-th image information, in the example of FIG. 42), control may be performed using the new image feature value obtained from that new image information.
在图48的流程图中,示出了到异常检测时为止考虑的本实施方式的处理流程。若开始进行该处理,则首先进行由图像信息获取部116实现的图像的获取、与由图像特征量运算部117实现的图像特征量的运算,并在变化量运算部1120中对图像特征量变化量进行运算(S10001)。另外,进行由关节角检测部115实现的关节角的检测,并且在变化量推断部1130中对推断图像特征量变化量进行推断(S10002)。然后,根据图像特征量变化量与推断图像特征量变化量的差分是否在阈值以下来进行异常判定(S10003)。In the flowchart of FIG. 48 , the processing flow of this embodiment considered up to abnormality detection is shown. When this process is started, the acquisition of the image by the image information acquisition unit 116 and the calculation of the image feature by the image feature calculation unit 117 are first performed, and the variation of the image feature is changed in the change calculation unit 1120. Quantity calculation (S10001). In addition, the detection of the joint angle by the joint angle detection unit 115 is performed, and the change amount of the estimated image feature value is estimated in the change amount estimating unit 1130 ( S10002 ). Then, an abnormality determination is performed based on whether the difference between the change amount of the image feature amount and the estimated image feature amount change amount is equal to or less than a threshold value (S10003).
在差分在阈值以下(S10003中为是)的情况下,不产生异常,从而使用S10001中求出的图像特征量进行控制(S10004)。然后,进行当前的图像特征量是否与成为目标的图像特征量充分接近(狭义而言为一致)的判定,在为是的情况下,正常地到达目标而结束处理。另一方面,在S10005中为否的情况下,动作本身不产生异常,但是未到达目标,从而回到S10001而继续进行控制。When the difference is equal to or less than the threshold (YES in S10003), no abnormality occurs, and control is performed using the image feature value obtained in S10001 (S10004). Then, it is determined whether or not the current image feature value is sufficiently close to the target image feature value (in a narrow sense, coincides). If yes, the target is reached normally, and the process ends. On the other hand, in the case of No in S10005, the operation itself does not generate an abnormality, but the target has not been reached, and the control returns to S10001 to continue.
On the other hand, when the difference between the image feature value change amount and the estimated image feature value change amount is larger than the threshold (NO in S10003), it is determined that an abnormality has occurred. It is then determined whether the abnormality has occurred N times in succession (S10006); if it has, the abnormality is severe enough that continuing the operation is not preferable, and the operation is stopped. If the abnormality has not occurred N times in succession, control is performed using a past image feature value for which no abnormality was determined (S10007), and the processing returns to S10001 to continue with the image processing at the next time. As described above, in the flowchart of FIG. 48, until the abnormality reaches a certain level (here, up to N-1 consecutive occurrences), the operation is not stopped immediately; instead, control is performed in the direction of continuing the operation.
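As a rough illustration of the flow of FIG. 48, the following Python sketch shows one possible shape of the control loop with abnormality handling. The robot interface (acquire_image_feature, estimate_feature_change_from_joints, control_step, apply_joint_delta, stop) and the constants N_MAX_CONSECUTIVE, THRESHOLD, and TARGET_EPS are all assumptions introduced for illustration, not part of this embodiment.

```python
import numpy as np

N_MAX_CONSECUTIVE = 3   # assumed value of N; N consecutive abnormalities stop the robot
THRESHOLD = 0.1         # assumed abnormality-determination threshold
TARGET_EPS = 1e-3       # assumed tolerance for "sufficiently close to the target"

def control_loop(robot, target_feature):
    consecutive_abnormal = 0
    last_good_delta = None  # Δθg from the most recent cycle judged normal

    while True:
        # S10001: acquire image, compute image feature f and the measured change Δf
        f, delta_f = robot.acquire_image_feature()
        # S10002: detect joint angles and estimate the feature change Δfe from them
        delta_fe = robot.estimate_feature_change_from_joints()

        # S10003: abnormality determination on |Δf - Δfe|
        if np.linalg.norm(delta_f - delta_fe) <= THRESHOLD:
            consecutive_abnormal = 0
            # S10004: control using the feature value obtained in S10001
            last_good_delta = robot.control_step(f, target_feature)
            # S10005: finished when the current feature is sufficiently close to the target
            if np.linalg.norm(f - target_feature) < TARGET_EPS:
                return "reached target"
        else:
            consecutive_abnormal += 1
            # S10006: stop if the abnormality occurred N times in a row
            if consecutive_abnormal >= N_MAX_CONSECUTIVE:
                robot.stop()
                return "stopped on abnormality"
            # S10007: otherwise keep moving with the last Δθg judged normal
            if last_good_delta is not None:
                robot.apply_joint_delta(last_good_delta)
```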
In the description so far, no particular attention was paid to the time differences among the acquisition time of the image information, the acquisition time of the joint angle information, and the acquisition time of the image feature value (the time at which its calculation ends). In practice, however, as shown in FIG. 45, even if image information is acquired at a given time, a time lag arises before the encoder reads the joint angle information corresponding to that acquisition and transmits the read information to the joint angle detection unit 115. In addition, since the image feature value is calculated after the image is acquired, a time lag also arises here, and because the calculation load differs depending on the image information, the length of this lag also varies. For example, when no object other than the recognition target is captured and the background is a single plain color, the image feature value can be calculated at high speed, whereas when various objects are captured, the calculation takes time.
That is, while FIG. 43A simply described abnormality determination using the p-th and q-th image information, in practice, as shown in FIG. 45, it is necessary to consider the time lag tθp from the acquisition of the p-th image information to the acquisition of the corresponding joint angle information, and the time lag tfp from the acquisition of the p-th image information to the end of the calculation of its image feature value; for the q-th image information, tθq and tfq must likewise be considered.
The abnormality determination processing starts, for example, at the time when the image feature value fq of the q-th image information is obtained, but it must be determined appropriately how long ago the image feature value fp used for the difference was acquired, and at what times θq and θp were acquired.
Specifically, when the image feature value f1 of the first image information is acquired at an i-th time (i being a natural number) and the image feature value f2 of the second image information is acquired at a j-th time (j being a natural number satisfying j ≠ i), the change amount calculation unit 1120 obtains the difference between the image feature value f1 and the image feature value f2 as the image feature value change amount. When the change amount estimation information p1 corresponding to the first image information is acquired at a k-th time (k being a natural number) and the change amount estimation information p2 corresponding to the second image information is acquired at an l-th time (l being a natural number), the change amount estimation unit 1130 obtains the estimated image feature value change amount from the change amount estimation information p1 and the change amount estimation information p2.
In the example of FIG. 45, the image feature values and joint angle information are acquired at various times, but when the acquisition time of fq (for example, the j-th time) is taken as the reference, the image feature value fp corresponding to the p-th image information is the one acquired (tfq + ti - tfp) earlier; that is, the i-th time is determined to be (tfq + ti - tfp) before the j-th time. Here, ti denotes the difference between the image acquisition times, as shown in FIG. 45.
Similarly, the l-th time, the acquisition time of θq, is determined to be (tfq - tθq) before the j-th time, and the k-th time, the acquisition time of θp, is determined to be (tfq + ti - tθp) before the j-th time. In the method of this embodiment, the correspondence between Δf and Δθ must be maintained; specifically, if Δf is obtained from the p-th and q-th image information, then Δθ must also correspond to the p-th and q-th image information. Otherwise, the estimated image feature value change amount Δfe obtained from Δθ has no correspondence with Δf at all, and the comparison processing becomes meaningless. Thus, as described above, establishing the correspondence of the times is important. In FIG. 45, since the driving of the joint angles itself is performed at very high speed and high frequency, it is treated as a continuous process.
In a practical robot 20000 and robot control device 1000, the difference between the acquisition time of the image information and the acquisition time of the corresponding joint angle information can be considered sufficiently small. Accordingly, the k-th time may be regarded as the acquisition time of the first image information and the l-th time as the acquisition time of the second image information. In this case, tθp and tθq in FIG. 45 can be set to 0, which simplifies the processing.
As a more concrete example, consider a scheme in which the next image information is acquired at the time when the image feature value of the previous image information has been calculated. An example of this case is shown in FIG. 46. The vertical axis of FIG. 46 is the value of the image feature; the "actual feature value" is the value that would be obtained if the image feature value corresponding to the joint angle information at that moment were acquired, and it cannot be confirmed in the processing. The smooth transition of the actual feature value reflects the fact that the driving of the joint angles can be regarded as continuous.
In this case, since the image feature value corresponding to the image information acquired at time B1 is obtained at time B2, after t2 has elapsed, the actual feature value at B1 corresponds to the image feature value at B2 (they coincide if there is no error). The next image information is then acquired at time B2.
Similarly, for the image feature value of the image information acquired at B2, the calculation ends at B3, and the next image information is acquired at B3. In the same manner, for the image feature value of the image information acquired at B4, the calculation ends at B5, after t1 has elapsed, and the next image information is acquired at B5.
In the example of FIG. 46, when the image feature value calculated at time B5 and the image feature value calculated at time B2 are used for the abnormality determination processing, the acquisition times of the corresponding image information are B4 and B1, respectively. As described above, when the difference between the acquisition time of the image information and that of the corresponding joint angle information is sufficiently small, the joint angle information at time B4 and at time B1 may be used. That is, as shown in FIG. 46, when the difference between B2 and B5 is denoted Ts and time B5 is taken as the reference, the image feature value used for comparison is the one at the time Ts earlier. The two pieces of joint angle information used to obtain the joint angle difference may be the information at the time t1 earlier and the information at the time Ts + t2 earlier. The difference between the acquisition times of the two pieces of image information is therefore (Ts + t2 - t1). Accordingly, when the threshold Th is determined from the difference between the image acquisition times, the value (Ts + t2 - t1) may be used.
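Under the scheduling of FIG. 46, the offsets described above reduce to simple arithmetic, as in the following sketch. It assumes Ts, t1, and t2 are known, and that the threshold Th scales linearly with the image acquisition interval; both assumptions are for illustration only.

```python
def reference_times(Ts, t1, t2):
    """Offsets measured backwards from time B5, at which the newer
    feature value becomes available (all values in seconds)."""
    feature_offset = Ts              # older feature value was computed Ts earlier (at B2)
    joint_newer_offset = t1          # joint angles paired with the newer image (time B4)
    joint_older_offset = Ts + t2     # joint angles paired with the older image (time B1)
    image_interval = Ts + t2 - t1    # difference between the two image acquisition times
    return feature_offset, joint_newer_offset, joint_older_offset, image_interval

def scaled_threshold(base_threshold_per_sec, Ts, t1, t2):
    # Assumed policy: threshold Th proportional to the image acquisition interval
    return base_threshold_per_sec * (Ts + t2 - t1)
```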
Various acquisition times of the information are conceivable, but as described above, the point that the times can be specified so that a correspondence exists between Δf and Δθ remains the same.
5. Modifications
In the description so far, Δf and Δθ are acquired, the estimated image feature value change amount Δfe is obtained from Δθ, and Δf and Δfe are compared. However, the method of this embodiment is not limited to this. For example, as with the measurement means described above, the position and posture information of the hand tip of the robot 20000, or of an object grasped by the hand tip, may be acquired by some means.
In this case, since the position and posture information X is acquired as the change amount estimation information, its change amount ΔX can be obtained. Then, as shown in the above equation (4), by applying the Jacobian matrix Ji to ΔX, the estimated image feature value change amount Δfe can be obtained in the same way as in the case of Δθ. Once Δfe is obtained, the subsequent processing is the same as in the above example. That is, the change amount estimation unit 1130 calculates the estimated image feature value change amount by applying, to the change amount of the position and posture information, the Jacobian matrix that associates the position and posture information with the image feature values (specifically, that associates the change amount of the position and posture information with the change amount of the image feature values). FIG. 43B shows the flow of this processing, corresponding to FIG. 43A.
Here, when the position and posture of the hand tip of the robot 20000 (the hand or the end effector 2220) is used as the position and posture information, Ji is information that associates the change amount of the position and posture information of the hand tip with the change amount of the image feature values. When the position and posture of the object is used as the position and posture information, Ji is information that associates the change amount of the position and posture information of the object with the change amount of the image feature values. Alternatively, if it is known with what relative position and posture the end effector grasps the object, then since the position and posture information of the end effector 2220 corresponds one-to-one with that of the object, one can also be converted into the other. That is, various embodiments are conceivable, such as acquiring the position and posture information of the end effector 2220, converting it into the position and posture information of the object, and then obtaining Δfe using the Jacobian matrix Ji that associates the change amount of the object's position and posture information with the change amount of the image feature values.
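In code form, the estimation from the position/posture change is a single matrix product, as in the following minimal sketch. The Jacobian Ji is assumed to be given (for example, identified in advance), and the grasp-offset transform in the second function is a hypothetical illustration of the end-effector-to-object conversion mentioned above.

```python
import numpy as np

def estimate_feature_change_from_pose(J_i, delta_X):
    """Δfe = Ji · ΔX (equation (4)): the image Jacobian Ji maps a change of
    position/posture information to a change of the image feature values."""
    return J_i @ delta_X

# When the grasp is known, the end-effector pose determines the object pose
# one-to-one, so ΔX of the object can be obtained from ΔX of the end effector
# before applying the object's Jacobian. A sketch for poses expressed as
# 4x4 homogeneous transforms, assuming a fixed grasp offset T_hand_object:
def object_pose_from_hand(T_world_hand, T_hand_object):
    return T_world_hand @ T_hand_object
```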
The comparison processing for the abnormality determination of this embodiment is not limited to using the image feature value change amount Δf and the estimated image feature value change amount Δfe. The image feature value change amount Δf, the change amount ΔX of the position and posture information, and the change amount Δθ of the joint angle information can be converted into one another using the Jacobian matrix and its inverse (in a broad sense, a generalized inverse matrix), that is, the inverse Jacobian matrix.
That is, as shown in FIG. 49, the method of this embodiment is applicable to a robot control device configured as follows: a robot control unit 1110 that controls the robot 20000 based on image information; a change amount calculation unit 1120 that obtains a position/posture change amount representing the change amount of the position and posture information of the end effector 2220 of the robot 20000 or of an object, or a joint angle change amount representing the change amount of the joint angle information of the robot 20000; a change amount estimation unit 1130 that obtains the image feature value change amount from the image information and, from the image feature value change amount, obtains the estimated position/posture change amount, i.e., the estimated amount of the position/posture change amount, or the estimated joint angle change amount, i.e., the estimated amount of the joint angle change amount; and an abnormality determination unit 1140 that performs abnormality determination by comparing the position/posture change amount with the estimated position/posture change amount, or by comparing the joint angle change amount with the estimated joint angle change amount.
Compared with FIG. 36, FIG. 49 has a structure in which the roles of the change amount calculation unit 1120 and the change amount estimation unit 1130 are interchanged. That is, the change amount calculation unit 1120 obtains the change amount (here, the joint angle change amount or the position/posture change amount) from the joint angle information, and the change amount estimation unit 1130 estimates the change amount from the difference of the image feature values (obtains the estimated joint angle change amount or the estimated position/posture change amount). In FIG. 49, the change amount calculation unit 1120 is drawn as a unit that acquires joint angle information, but as described above, the change amount calculation unit 1120 may also acquire position and posture information using measurement results or the like.
Specifically, when Δf and Δθ are acquired, the estimated joint angle change amount Δθe may be obtained using the following equation (8), derived from the above equation (5), and Δθ and Δθe may be compared. Specifically, using a given threshold Th2, it suffices to determine an abnormality when the following equation (9) holds.
Δθe = Jv⁻¹Δf ····· (8)
|Δθ - Δθe| > Th2 ····· (9)
Alternatively, when Δf and ΔX are acquired using the measurement means described above, the estimated position/posture change amount ΔXe may be obtained using the following equation (10), derived from the above equation (4), and ΔX and ΔXe may be compared. Specifically, using a given threshold Th3, it suffices to determine an abnormality when the following equation (11) holds.
ΔXe = Ji⁻¹Δf ····· (10)
|ΔX - ΔXe| > Th3 ····· (11)
The comparison is also not limited to information obtained directly. For example, when Δf and Δθ are acquired, the estimated position/posture change amount ΔXe may be obtained from Δf using the above equation (10), and the position/posture change amount ΔX may be obtained from Δθ using the above equation (3) (strictly speaking, this ΔX is also not a measured value but an estimated value), and the determination using the above equation (11) may then be performed.
Alternatively, when Δf and ΔX are acquired, the estimated joint angle change amount Δθe may be obtained from Δf using the above equation (8), and the joint angle change amount Δθ may be obtained from ΔX using the following equation (12), derived from the above equation (3) (strictly speaking, this Δθ is also not a measured value but an estimated value), and the determination using the above equation (9) may then be performed.
Δθ = Ja⁻¹ΔX ····· (12)
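The conversions of equations (8) to (12) can all be written with (pseudo-)inverse Jacobians. The sketch below assumes the Jacobians Jv, Ji, and Ja are available as numpy arrays, and uses the Moore-Penrose pseudo-inverse as the "generalized inverse matrix" mentioned above; the thresholds Th2 and Th3 are placeholders.

```python
import numpy as np

def estimated_joint_change(J_v, delta_f):
    # Equation (8): Δθe = Jv⁻¹ Δf (pseudo-inverse covers the non-square case)
    return np.linalg.pinv(J_v) @ delta_f

def joint_abnormal(delta_theta, delta_theta_e, Th2):
    # Equation (9): abnormal when |Δθ - Δθe| > Th2
    return np.linalg.norm(delta_theta - delta_theta_e) > Th2

def estimated_pose_change(J_i, delta_f):
    # Equation (10): ΔXe = Ji⁻¹ Δf
    return np.linalg.pinv(J_i) @ delta_f

def pose_abnormal(delta_X, delta_X_e, Th3):
    # Equation (11): abnormal when |ΔX - ΔXe| > Th3
    return np.linalg.norm(delta_X - delta_X_e) > Th3

def joint_change_from_pose(J_a, delta_X):
    # Equation (12): Δθ = Ja⁻¹ ΔX
    return np.linalg.pinv(J_a) @ delta_X
```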
That is, the change amount calculation unit 1120 performs any one of the following processes: acquiring a plurality of pieces of position and posture information and obtaining their difference as the position/posture change amount; acquiring a plurality of pieces of position and posture information and obtaining the joint angle change amount from their difference; acquiring a plurality of pieces of joint angle information and obtaining their difference as the joint angle change amount; and acquiring a plurality of pieces of joint angle information and obtaining the position/posture change amount from their difference.
FIG. 47 summarizes the relationships among Δf, ΔX, and Δθ shown above, together with the equation numbers used in this specification. That is, in the method of this embodiment, once any two of Δf, ΔX, and Δθ are acquired, they can be converted into any one of Δf, ΔX, and Δθ and compared; the method of this embodiment can thus be realized, and various modifications are possible as to which information is acquired and which information is used for the comparison.
The robot control device 1000 and the like of this embodiment may realize part or most of their processing by a program. In that case, a processor such as a CPU executes the program, whereby the robot control device 1000 and the like of this embodiment are realized. Specifically, a program stored in a non-transitory information storage medium is read out, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (card-type memory, ROM, etc.), or the like. A processor such as a CPU performs the various processes of this embodiment based on the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as each unit of this embodiment (a program for causing the computer to execute the processing of each unit).
Although this embodiment has been described above in detail, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matters and effects of the present invention. Accordingly, all such modified examples are included within the scope of the present invention. For example, a term that appears at least once in the specification or drawings together with a different term having a broader or synonymous meaning can be replaced by that different term anywhere in the specification or drawings. The configurations and operations of the robot control device 1000 and the like are also not limited to those described in this embodiment, and various modifications are possible.
Seventh Embodiment
1. Means of This Embodiment
First, the means of this embodiment will be described. Inspection of an inspection object (in particular, appearance inspection) is used in many situations. For appearance inspection (visual inspection), the basic method is for a person to look at and observe the object, but from the viewpoints of saving labor for the user performing the inspection and increasing the inspection accuracy, means for automating the inspection with an inspection device have been proposed.
The inspection device here may be a dedicated device; for example, as a dedicated inspection device, a device including an imaging unit CA, a processing unit PR, and an interface unit IF, as shown in FIG. 54, is conceivable. In this case, the inspection device acquires a captured image of the inspection object OB photographed using the imaging unit CA, and the processing unit PR performs inspection processing using the captured image. Various contents are conceivable for the inspection processing here; for example, an image of the inspection object OB in a state judged acceptable (which may be a captured image, or may be created from model data) is acquired in advance as a pass image, and comparison processing between the pass image and the actual captured image is performed. If the captured image is close to the pass image, the inspection object OB appearing in the captured image can be judged acceptable; if the difference between the captured image and the pass image is large, the inspection object OB can be judged to have some problem and be unacceptable. Patent Document 1 discloses a means of using a robot as an inspection device.
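As one hedged illustration of such a comparison, the following sketch scores the similarity between the pass image and the captured image over an inspection region with a normalized correlation and applies a pass/fail threshold. The similarity measure, the region format, and the threshold are assumptions for illustration; the embodiment leaves the concrete image processing open.

```python
import numpy as np

def inspect(captured, pass_image, region, threshold=0.9):
    """captured, pass_image: grayscale images as 2-D numpy arrays of equal size;
    region: (top, left, height, width) inspection area; returns True when passing."""
    t, l, h, w = region
    a = captured[t:t+h, l:l+w].astype(np.float64)
    b = pass_image[t:t+h, l:l+w].astype(np.float64)
    # Normalized correlation as a crude similarity score in [-1, 1]
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    score = (a * b).sum() / denom if denom > 0 else 0.0
    return score >= threshold
```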
However, as is also clear from the above example of the pass image, in order to perform inspection with an inspection device, information used for the inspection must be set in advance. For example, although it depends on how the inspection object OB is arranged, information such as the direction from which the inspection object OB is observed needs to be set in advance.
In general, how the inspection object OB is observed (in a narrow sense, what shape and size it has when it appears in the captured image) changes depending on the relative relationship between the inspection object OB and the position and direction of observation. Hereinafter, the position from which the inspection object OB is observed is referred to as the viewpoint position; in a narrow sense, the viewpoint position means the position at which the imaging unit is arranged. The direction in which the inspection object OB is observed is referred to as the line-of-sight direction; in a narrow sense, the line-of-sight direction means the imaging direction (the direction of the optical axis) of the imaging unit. If no reference is set for the viewpoint position and the line-of-sight direction, the way the inspection object OB is observed may change at every inspection, so it is fundamentally impossible to perform an appearance inspection that determines whether the inspection object OB is normal or abnormal in accordance with how it is observed.
Moreover, as the pass image serving as the reference for judging the inspection object OB to be free of abnormality, it cannot be decided which viewpoint position and line-of-sight direction the stored image should correspond to. That is, if the position and direction of observation at inspection time are undetermined, the comparison target (inspection reference) for the captured image acquired at inspection time is also undetermined, and appropriate inspection cannot be performed. If images of the inspection object OB judged acceptable were stored as observed from all viewpoint positions and line-of-sight directions, a situation with no applicable pass image could be avoided; however, the number of viewpoint positions and line-of-sight directions in that case would be enormous, and so would the number of pass images, which is unrealistic. For these reasons as well, the pass images must be prepared in advance.
Furthermore, in general, pass images and captured images also contain information unnecessary for the inspection, so if inspection processing (comparison processing) is performed using the entire image, the inspection accuracy may become low. For example, tools, jigs, and the like may appear in the captured image in addition to the inspection object, and it is not preferable to use such information for the inspection. Also, when only a part of the inspection object is the inspection target, information from regions of the inspection object that are not the inspection target may lower the inspection accuracy. Specifically, as described later using FIGS. 64A to 64D, when considering an operation of assembling a smaller object B onto a larger object A, the inspection target should be the surroundings of the assembled object B; there is little need to inspect the whole of object A, and making the whole of A the inspection target also increases the possibility of misjudgment. Therefore, in view of improving the accuracy of the inspection processing, the inspection region is also important information in the inspection.
Conventionally, however, the information used for inspection, such as the inspection region, the viewpoint position, the line-of-sight direction, and the pass image, has been set by a user with specialized knowledge of image processing. This is because, although the comparison processing between the pass image and the captured image is performed by image processing, the settings of the information required for the inspection must also be changed in accordance with the specific content of that image processing.
For example, whether to apply image processing that uses edges in the image, image processing that uses all pixel values, image processing that uses luminance or chrominance/hue, or some other kind of image processing for the comparison processing between the pass image and the captured image (in a narrow sense, similarity determination processing) may vary depending on the shape, color tone, texture, and so on of the inspection object OB. Therefore, in an inspection in which the content of the image processing can be changed, the user performing the inspection must appropriately set which image processing is to be performed.
Moreover, even when the content of the image processing has been set, or when highly versatile image-processing content has been set in advance, the user still needs to properly understand that content. This is because, if the specific content of the image processing changes, the viewpoint position and line-of-sight direction suitable for the inspection may also change. For example, when comparison processing uses edge information, a position and direction from which a complex part of the shape of the inspection object OB can be observed may be set as the viewpoint position and line-of-sight direction, whereas a position and direction observing a flat part is inappropriate. When comparison processing uses pixel values, it is preferable to use, as the viewpoint position and line-of-sight direction, a position and direction from which a region with large variation in color tone, or a region that can be observed brightly because it is sufficiently illuminated by the light source, can be observed. That is, with conventional means, specialized knowledge of image processing is required to set the information needed for inspection, including the viewpoint position, the line-of-sight direction, and the pass image. Furthermore, if the content of the image processing differs, the criterion for the comparison processing between the pass image and the captured image must also be changed. For example, a criterion must be decided, in accordance with the content of the image processing, for how similar the captured image must be to the pass image to pass and how different it must be to fail; without specialized knowledge of image processing, this criterion cannot be set either.
That is, even if the inspection can be automated by using a robot or the like, it is difficult to set the information required for that inspection, and for a user without specialized knowledge, automating the inspection cannot be said to be easy.
The robot envisioned by the present applicant is one that makes it easy for the user to teach the robot its work, and that is equipped with various sensors and the like so that the robot itself can recognize the work environment, enabling it to perform diverse work flexibly. Such a robot is suited to multi-product manufacturing (in a narrow sense, multi-product low-volume manufacturing in which the production volume per product is small). However, even if teaching at manufacturing time is easy, whether inspection of the manufactured products is easy is another question. This is because the positions of the objects to be inspected differ from product to product, and as a result the inspection regions to be compared in the captured image and the pass image also differ for each product. That is, in a situation assuming multi-product manufacturing, if the setting of the inspection region is left to the user, the burden of that setting processing is large, leading to a decrease in productivity.
The present applicant therefore proposes the following means: generating second inspection information used for the inspection processing from first inspection information, thereby reducing the user's burden in the inspection processing and improving productivity in robot work. Specifically, the robot 30000 of this embodiment is a robot that performs inspection processing for inspecting an inspection object using a captured image of the inspection object photographed by an imaging unit (for example, the imaging unit 5000 in FIG. 52); it generates, from the first inspection information, second inspection information including the inspection region of the inspection processing, and performs the inspection processing based on the second inspection information.
Here, the first inspection information is information that the robot 30000 can acquire before executing the inspection processing, and denotes information used for generating the second inspection information. Since the first inspection information is acquired beforehand, it can also be described as prior information. In this embodiment, the first inspection information may be input by the user or may be generated in the robot 30000. Even when the first inspection information is input by the user, it requires no specialized knowledge of image processing at input time and is information that can be input easily. Specifically, it may be information including at least one of shape information of the inspection object OB, position and posture information of the inspection object OB, and the inspection processing target position relative to the inspection object OB.
As described later, the second inspection information can be generated by using the shape information (in a narrow sense, three-dimensional model data), the position and posture information, and the information on the inspection processing target position. The shape information is generally acquired in advance as CAD data or the like, and when the user inputs the shape information, it suffices to select from existing information. For example, in a situation where data of various objects that are candidates for the inspection object OB is held, the user may simply select the inspection object OB from among the candidates. As for the position and posture information, if it is known how the inspection object OB is arranged at inspection time (for example, at what position on the workbench and in what posture), the position and posture information can also be set easily, and its input requires no specialized knowledge of image processing. The inspection processing target position is information indicating the position on the inspection object OB that is to be inspected; for example, when inspecting a given portion of the inspection object OB for damage, it is information indicating the position of that portion. When the inspection object OB is one in which an object B is assembled onto an object A, and it is checked whether the assembly of objects A and B has been performed normally, the assembly position of objects A and B (contact surface, contact point, insertion position, etc.) becomes the inspection processing target position. Likewise, the inspection processing target position can be input easily if the content of the inspection is understood, and no specialized knowledge of image processing is required at input time.
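One way to picture the relation between the two kinds of information is as plain data records, as in the sketch below. All field names and the placeholder generation logic are hypothetical; the embodiment only requires that the first inspection information contain at least one of shape, position/posture, and inspection target position, and that the second contain the viewpoint information and inspection region.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FirstInspectionInfo:
    shape_model: str                              # e.g. path to CAD / 3-D model data
    object_pose: Tuple[float, ...]                # position and posture of the inspection object
    target_position: Tuple[float, float, float]   # inspection target position relative to the object

@dataclass
class SecondInspectionInfo:
    viewpoint_position: Tuple[float, float, float]  # where to place the imaging unit
    viewing_direction: Tuple[float, float, float]   # optical-axis direction
    inspection_region: Tuple[int, int, int, int]    # region of the captured image to compare
    pass_image: str                                 # reference image, e.g. rendered from the model

def generate_second_info(first: FirstInspectionInfo,
                         standoff: float = 0.3) -> SecondInspectionInfo:
    # Minimal placeholder: one viewpoint looking straight down at the target
    # position from a fixed standoff distance; a real implementation would
    # derive candidate viewpoints from the shape model and object pose.
    x, y, z = first.target_position
    return SecondInspectionInfo(
        viewpoint_position=(x, y, z + standoff),
        viewing_direction=(0.0, 0.0, -1.0),
        inspection_region=(0, 0, 128, 128),   # placeholder region in pixels
        pass_image="pass_image.png",          # assumed to be rendered offline
    )
```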
Note that the means of this embodiment is not limited to automatically generating all of the second inspection information. For example, part of the second inspection information may be generated by the means of this embodiment while the rest is input manually by the user. In that case, the user cannot entirely omit the input of the second inspection information, but at least in that viewpoint information and the like, which are difficult to set, can be generated automatically, the advantage that the inspection can be performed easily by the means of this embodiment remains.
The inspection processing is processing performed on the result of robot work by the robot 30000, so the first inspection information may also be information acquired during the robot work.
Here, robot work means work performed by a robot, and various kinds of work are conceivable, such as joining by screw fastening, welding, pressure welding, snap-fitting, and the like, and deformation using a hand, tool, or jig. When inspection processing is performed on the result of robot work, the inspection processing determines whether the robot work was performed normally. In that case, in order to start executing the robot work, various information on the inspection object OB and the work content must be acquired. For example, at what position and in what posture the work object (all or part of the inspection object OB) is placed before the work, and to what position and posture it changes after the work, are known information. Also, when screw fastening or welding is performed, the positions of the fastening screws and the welding positions on the work object are known. Similarly, when multiple objects are joined, at what position and from what direction object A is joined with what object is known information, and when deformation is applied to the work object, the deformation position on the work object and the shape after deformation are also known information.
That is, when robot work is the target, for the information corresponding to the above-described shape information, position and posture information, and inspection processing target position, and the other information included in the first inspection information, a considerable part (depending on the situation, all of the required first inspection information) is known on the premise that the robot work has been completed. That is, in the robot 30000 of this embodiment, it suffices for the first inspection information to reuse information held by the unit that controls the robot (for example, the processing unit 11120 in FIG. 50) or the like. Moreover, even when the means of this embodiment is applied to a processing device 10000 different from the robot 30000, as described later using FIGS. 51A and 51B, the processing device 10000 may acquire the first inspection information from the control unit 3500 or the like included in the robot. Therefore, from the user's point of view, the second inspection information can be generated easily without re-inputting the first inspection information for the inspection.
Thus, even a user without specialized knowledge of image processing can easily execute the inspection (at least acquire the second inspection information), or the burden of setting the second inspection information at inspection time can be reduced. In the following description of this specification, an example in which the target of the inspection processing is the result of robot work is described. That is, the user need not input the first inspection information; however, as described above, the user may input part or all of the first inspection information. Even when the user inputs the first inspection information, the advantage that the inspection is easy for the user remains, in that no specialized knowledge is required for inputting the first inspection information.
In the following description, as described later using FIGS. 52 and 53, an example in which the robot 30000 generates the second inspection information and executes the inspection processing is mainly described. However, the means of this embodiment is not limited to this; the following description can be extended, as shown in FIG. 51A, to a means in which the second inspection information is generated in the processing device 10000 and the robot 30000 acquires the second inspection information and executes the inspection processing. Alternatively, as shown in FIG. 51B, it can be extended to a means in which the second inspection information is generated in the processing device 10000 and the inspection processing using the second inspection information is executed not in a robot but in a dedicated inspection device or the like.
In the following, system configuration examples of the robot 30000 and the processing device 10000 of this embodiment are described, after which the specific processing flow is described. More specifically, the flow from the acquisition of the first inspection information to the generation of the second inspection information is described as offline processing, and the flow of the actual inspection processing performed by the robot using the generated second inspection information is described as online processing.
2. System Configuration Example
Next, system configuration examples of the robot 30000 and the processing device 10000 of this embodiment are described. As shown in FIG. 50, the robot of this embodiment includes an information acquisition unit 11110, a processing unit 11120, a robot mechanism 300000, and an imaging unit 5000. However, the robot 30000 is not limited to the configuration of FIG. 50, and various modifications are possible, such as omitting some of these components or adding other components.
The information acquisition unit 11110 acquires the first inspection information before the inspection processing. When the first inspection information is input by the user, the information acquisition unit 11110 performs processing for accepting the input information from the user. When information used for the robot work serves as the first inspection information, the information acquisition unit 11110 may perform processing such as reading out the control information used in the processing unit 11120 during the work from a storage unit (not shown in FIG. 50) or the like.
The processing unit 11120 performs generation processing of the second inspection information based on the first inspection information acquired by the information acquisition unit 11110, and performs inspection processing using the second inspection information. The processing in the processing unit 11120 is described in detail later. The processing unit 11120 also controls the robot 30000 both in the inspection processing and outside it (for example, in robot work such as assembly). For example, the processing unit 11120 controls the arm 3100 included in the robot mechanism 300000, the imaging unit 5000, and so on. The imaging unit 5000 may be a hand-eye camera attached to the arm 3100 of the robot.
As shown in FIG. 51A, the means of this embodiment is applicable to the following processing device: a processing device 10000 that outputs information for inspection processing to a device that performs inspection processing of an inspection object using a captured image of the inspection object photographed by an imaging unit (the imaging unit 5000 is shown in FIG. 51A, but the device is not limited to this); based on the first inspection information, it generates second inspection information including viewpoint information, comprising the viewpoint position and line-of-sight direction of the imaging unit for the inspection processing, and the inspection region of the inspection processing, and outputs the second inspection information to the device that performs the inspection processing. In this case, the acquisition of the first inspection information and the generation of the second inspection information are performed by the processing device 10000, which can be realized, for example, as a processing device including an information acquisition unit 11110 and a processing unit 11120, as shown in FIG. 51A.
Here, the device that performs the inspection processing may be the robot 30000, as described above. In that case, as shown in FIG. 51A, the robot 30000 includes the arm 3100, the imaging unit 5000 for the inspection processing of the inspection object, and the control unit 3500 that controls the arm 3100 and the imaging unit 5000. The control unit 3500 acquires, from the processing device 10000 as the second inspection information, information including the viewpoint information indicating the viewpoint position and line-of-sight direction of the imaging unit 5000 and the inspection region, and based on the second inspection information, performs control to move the imaging unit 5000 to the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and executes the inspection processing using the acquired captured image and the inspection region.
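The online side of the control unit 3500 can then be pictured as the following loop. The robot and camera API is hypothetical, the inspect routine is the region-based comparison sketched earlier, load_image is an assumed helper for reading the stored pass image, and the records follow the SecondInspectionInfo sketch above.

```python
def run_inspection(robot, camera, second_infos):
    """second_infos: iterable of SecondInspectionInfo records as sketched above."""
    results = []
    for info in second_infos:
        # Move the hand-eye camera so that it matches the viewpoint information
        robot.move_camera_to(info.viewpoint_position, info.viewing_direction)
        captured = camera.capture()
        ok = inspect(captured, load_image(info.pass_image), info.inspection_region)
        results.append(ok)
    # Assumed policy: the object passes only if every viewpoint passes
    return all(results)
```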
In this way, the second inspection information can be generated in the processing device 10000, and the inspection processing can be appropriately executed in another machine using the second inspection information. If the device performing the inspection processing is the robot 30000, a robot that performs inspection processing using the second inspection information can be realized as in FIG. 50; however, FIG. 51A differs from FIG. 50 in that the generation of the second inspection information and the execution of the inspection processing using it are performed by different machines.
The processing device 10000 may perform not only the generation processing of the second inspection information but also, in conjunction with it, the control processing of the robot 30000. For example, the processing unit 11120 of the processing device 10000 generates the second inspection information and also generates control information for the robot based on the second inspection information. In that case, the control unit 3500 of the robot operates the arm 3100 and the like in accordance with the control information generated by the processing unit 11120 of the processing device 10000. That is, the processing device 10000 takes charge of the substantial part of the control of the robot, and in this case the processing device 10000 can also be understood as a robot control device.
The entity that executes the inspection processing using the second inspection information generated by the processing device 10000 is not limited to the robot 30000. For example, the inspection processing may be performed using the second inspection information in a dedicated machine as shown in FIG. 54; a configuration for this case is shown in FIG. 51B. FIG. 51B shows an example in which the inspection device accepts the input of the first inspection information (for example, using the interface unit IF of FIG. 54) and outputs the first inspection information to the processing device 10000. In this case, the processing device 10000 generates the second inspection information using the first inspection information input from the inspection device. However, various modifications of the first inspection information are possible, such as an example in which the user inputs it directly to the processing device.
As shown in FIG. 52, the robot 30000 of this embodiment may be a single-arm robot with one arm. In FIG. 52, the imaging unit 5000 (hand-eye camera) is provided as the end effector of the arm 3100. However, various modifications are possible, such as providing a grasping part such as a hand as the end effector and providing the imaging unit 5000 on the grasping part, at another position on the arm 3100, or the like. In FIG. 52, a machine such as a PC is shown as the machine corresponding to the control unit 3500 of FIG. 51A, but this machine may also correspond to the information acquisition unit 11110 and the processing unit 11120 of FIG. 50. FIG. 52 also includes the interface unit 6000, showing an operation unit 6100 and a display unit 6200 as the interface unit 6000; however, whether the interface unit 6000 is included, and how the interface unit 6000 is configured when it is included, can be modified.
另外,本实施方式的机器人30000的结构并不限定于图52。例如,如图53所示,机器人30000也可以至少包括第一臂3100、以及与第一臂3100不同的第二臂3200,并且拍摄部5000是设置于第一臂3100以及第二臂3200的至少一方的手眼摄像机。在图53中,第一臂3100是由关节3110、3130与设置于关节之间的框架3150、3170构成的,第二臂3200也同样,但是并不限定于此。另外,在图53中,示出了具有两支臂的双臂机器人的例子,但是本实施方式的机器人也可以具有3支以上的臂。虽然也记载了拍摄部5000是设置于第一臂3100的手眼摄像机(5000-1)与设置于第二臂3200的手眼摄像机(5000-2)的两方上,但是也可以是设置在其中一方上。In addition, the structure of the robot 30000 of this embodiment is not limited to FIG. 52 . For example, as shown in FIG. 53 , the robot 30000 may also include at least a first arm 3100 and a second arm 3200 different from the first arm 3100 , and the photographing unit 5000 is provided on at least one of the first arm 3100 and the second arm 3200 One hand-eye camera. In FIG. 53, the first arm 3100 is composed of joints 3110, 3130 and frames 3150, 3170 provided between the joints, and the same is true for the second arm 3200, but the present invention is not limited thereto. In addition, in FIG. 53 , an example of a dual-arm robot having two arms is shown, but the robot of this embodiment may have three or more arms. Although it is also described that the imaging unit 5000 is installed on both the hand-eye camera (5000-1) of the first arm 3100 and the hand-eye camera (5000-2) of the second arm 3200, it may be installed on one of them. superior.
另外,图53的机器人30000包括基座单元部4000。基座单元部4000设置于机器人主体的下部,并支承机器人主体。在图53的例子中,形成为在基座单元部4000设置有车轮等、并且机器人整体能够移动的结构。但是,也可以是基座单元部4000不具有车轮等,而固定于地面等的结构。在图53的机器人中,通过在基座单元部4000收纳控制装置(在图52中是作为控制部3500而示出的装置),从而使机器人机构300000与控制部3500作为一体而构成。或者,也可以如相当于图52的控制部3500的装置那样,不设置特定的控制用的机器,而通过内置于机器人的基板(更具体而言为设置于基板上的IC等),实现上述控制部3500。In addition, the robot 30000 of FIG. 53 includes a base unit part 4000 . The base unit part 4000 is provided at the lower part of the robot main body, and supports the robot main body. In the example of FIG. 53 , wheels and the like are provided on the base unit 4000 and the entire robot is movable. However, the base unit 4000 may be fixed to the ground or the like without having wheels or the like. In the robot of FIG. 53 , the robot mechanism 300000 and the control unit 3500 are integrally configured by accommodating the control device (the device shown as the control unit 3500 in FIG. 52 ) in the base unit unit 4000 . Alternatively, like the device corresponding to the control unit 3500 in FIG. 52 , no specific control equipment may be provided, and the above-mentioned control may be realized by incorporating a substrate of the robot (more specifically, an IC or the like provided on the substrate). Control part 3500.
在使用具有两支以上的臂的机器人的情况下,能够进行灵活的检查处理。例如,在设置多个拍摄部5000的情况下,能够从多个视点位置、视线方向同时进行检查处理。另外,也能够用设置于给定的臂的手眼摄像机,对由设置于其他臂的把持部把持的检查对象物OB进行检查。在该情况下,不仅是拍摄部5000的视点位置、视线方向,也能够使检查对象物OB的位置姿势变化。In the case of using a robot having two or more arms, flexible inspection processing can be performed. For example, when a plurality of imaging units 5000 are installed, inspection processing can be performed simultaneously from a plurality of viewpoint positions and line-of-sight directions. In addition, it is also possible to inspect the inspection object OB grasped by the grasping part provided on the other arm using the hand-eye camera provided on a given arm. In this case, it is possible to change not only the viewpoint position and the line-of-sight direction of the imaging unit 5000 but also the position and posture of the inspection object OB.
Furthermore, as shown in FIG. 20, the functions of the parts corresponding to the processing unit 11120 and the like in the processing device or robot 30000 of this embodiment may be realized by a server 700 that is communicably connected to the robot 30 via a network 20 including at least one of a wired and a wireless connection.
Alternatively, in this embodiment, part of the processing of the processing device and the like of the present invention may be performed on the side of the server 700 serving as a processing device. In that case, the processing is realized by distributed processing between the processing device provided on the robot side and the server 700 serving as a processing device. Specifically, the server 700 side performs, among the processes of the processing device of the present invention, the processes assigned to the server 700. Meanwhile, the processing device 10000 provided in the robot performs, among the processes of the processing device of the present invention, the processes assigned to the processing unit of the robot and the like.
For example, suppose the processing device of the present invention performs first to M-th processes (M being an integer), and that each of the first to M-th processes can be divided into a plurality of sub-processes, such that the first process is realized by sub-process 1a and sub-process 1b and the second process is realized by sub-process 2a and sub-process 2b. In this case, consider distributed processing in which the server 700 side performs sub-process 1a, sub-process 2a, ..., sub-process Ma, and the processing device 10000 provided on the robot side performs sub-process 1b, sub-process 2b, ..., sub-process Mb. Here, the processing device of this embodiment, that is, the device that executes the first to M-th processes, may be the device that executes sub-processes 1a to Ma, the device that executes sub-processes 1b to Mb, or a device that executes all of sub-processes 1a to Ma and sub-processes 1b to Mb. Put another way, the processing device of this embodiment is a device that executes at least one sub-process of each of the first to M-th processes.
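As an illustrative sketch only (the callables server_sub and local_sub are hypothetical stand-ins, not elements of this disclosure), such a split might be organized as follows in Python:

```python
def run_distributed(tasks, server_sub, local_sub):
    """Hypothetical sketch of the sub-process split: for each of the
    first to M-th processes, the server 700 runs sub-process ka
    (server_sub) and the robot-side processing device runs sub-process
    kb (local_sub) on the server's intermediate result."""
    results = []
    for task in tasks:                           # processes 1..M
        intermediate = server_sub(task)          # sub-process ka (server side)
        results.append(local_sub(intermediate))  # sub-process kb (robot side)
    return results
```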
In this way, for example, the server 700, which has a higher processing capability than the processing device 10000 on the robot side, can take on processing with a high processing load. Moreover, when the processing device also performs robot control, the server 700 can collectively control the operations of each robot, making it easy, for example, to have a plurality of robots operate cooperatively.
In recent years, there has also been an increasing tendency to manufacture a wide variety of parts in small quantities. When the type of part being manufactured is changed, the operation performed by the robot must be changed as well. With the configuration shown in FIG. 20, the server 700 can collectively change the operations performed by the robots without redoing the teaching work for each of the plurality of robots. Furthermore, compared with providing one processing device for each robot, the trouble involved in updating the software of the processing devices can be greatly reduced.
3. Processing flow
Next, the processing flow of this embodiment will be described: specifically, the flow for acquiring the first inspection information and generating the second inspection information, and the flow for executing the inspection processing based on the generated second inspection information. Assuming the inspection processing is executed by a robot, the acquisition of the first inspection information and the generation of the second inspection information can be performed without any robot motion for the inspection processing, so they are described as offline processing. The execution of the inspection processing, on the other hand, is accompanied by robot motion, and is therefore described as online processing.
In the following, an example is described in which the target of the inspection processing is the result of assembly work by the robot and the inspection processing itself is also executed by the robot; as noted above, however, various modifications are possible on these points.
3.1 Offline processing
First, FIG. 55 shows a concrete example of the first inspection information and the second inspection information of this embodiment. The second inspection information includes viewpoint information (viewpoint position and line-of-sight direction), an inspection region (ROI, confirmation region), and a pass image. The first inspection information includes shape information (three-dimensional model data), position-posture information of the inspection object, and the inspection-processing target position.
The flowchart of FIG. 56 shows the specific flow of the offline processing. When offline processing starts, the information acquisition unit 11110 first acquires, as first inspection information, the three-dimensional model data (shape information) of the inspection object (S100001). In inspection (appearance inspection), what matters is how the inspection object is observed, and how it appears from a given viewpoint position and line-of-sight direction depends on its shape. In particular, because the three-dimensional model data describes the inspection object in an ideal state, free of defects and deformation, it is useful information for inspection processing of the actual inspection object.
When the inspection processing is performed on the result of robot work, the information acquisition unit 11110 acquires post-work three-dimensional model data, i.e., the three-dimensional model data of the inspection object as it should be after the robot work, and pre-work three-dimensional model data, i.e., the three-dimensional model data of the inspection object before the robot work.
When inspecting the result of robot work, it is necessary to judge whether the work was performed properly. If the work is an assembly operation in which an object B is assembled to an object A, it is judged whether the object B has been assembled to the object A at the prescribed position from the prescribed direction. Acquiring the individual three-dimensional model data of object A and object B alone is therefore not sufficient; what matters is data in which object B is assembled to object A at the prescribed position from the prescribed direction, that is, the three-dimensional model data of the state in which the work has been ideally completed. The information acquisition unit 11110 of this embodiment therefore acquires the post-work three-dimensional model data. In addition, as in the setting of the inspection region and the pass threshold described later, there are also situations in which the difference in appearance before and after the work is the key point, so the pre-work three-dimensional model data is also acquired in advance.
FIGS. 57A and 57B show examples of the pre-work and post-work three-dimensional model data. In FIGS. 57A and 57B, the example work is assembling a cubic block-shaped object B onto a cubic block-shaped object A, in the same posture as object A, at a position offset along one given axis. In this case, the pre-work three-dimensional model data, which precedes the assembly of object B, is the three-dimensional model data of object A alone, as shown in FIG. 57A. The post-work three-dimensional model data, as shown in FIG. 57B, is data in which object A and object B are assembled under the above conditions. Although FIGS. 57A and 57B are drawn as planar views, and thus appear as if observed from a given viewpoint position and line-of-sight direction, the word "three-dimensional" makes clear that the shape data acquired as first inspection information is three-dimensional data whose observation position and direction are not restricted.
In S100001, viewpoint candidate information, i.e., candidates for the viewpoint information (information including the viewpoint position and line-of-sight direction), is also acquired. This viewpoint candidate information is assumed not to be information input by the user or generated by the processing unit 11120 or the like, but rather information set in advance by, for example, the manufacturer of the processing device 10000 (or the robot 30000) before shipment.
Although the viewpoint candidate information is, as stated above, a set of candidates for the viewpoint information, the points that could serve as such candidates are extremely numerous (in a narrow sense, infinite). For example, when the viewpoint information is set in an object coordinate system based on the inspection object, every point other than those inside the inspection object could become viewpoint candidate information. Of course, using that many candidates (i.e., placing no processing limit on the candidate set) would allow the viewpoint information to be set flexibly and finely according to the situation. Thus, if the processing load when setting the viewpoint information poses no problem, the viewpoint candidate information need not be acquired in S100001. In the following description, however, viewpoint candidate information is set in advance so that it can be used generically even when various objects become inspection objects, without increasing the processing load of setting the viewpoint information.
The position and posture at which the inspection object OB will be placed at inspection time are not necessarily known. Consequently, since it is unclear whether the imaging unit 5000 can actually be moved to the position and posture corresponding to a given piece of viewpoint information, it is unrealistic to limit the viewpoint information to a very small number (for example, one or two): if only a few pieces of viewpoint information were generated and the imaging unit 5000 could not be moved to any of them, the inspection processing could not be executed at all. To suppress this risk, a certain number of pieces of viewpoint information must be generated, and as a result the number of pieces of viewpoint candidate information must also be of a certain size.
FIG. 58 shows an example of viewpoint candidate information: 18 pieces of viewpoint candidate information are set around the origin of the object coordinate system, with the specific coordinate values shown in FIG. 59. For viewpoint candidate information A, the viewpoint position lies on the x-axis at a given distance from the origin (200 in the example of FIG. 59). The line-of-sight direction corresponds to the vector (ax, ay, az); for viewpoint candidate information A it is the negative x direction, i.e., toward the origin. Note that fixing the line-of-sight vector alone does not fix the posture, because the imaging unit 5000 can still rotate about that vector; a second vector (bx, by, bz) specifying the rotation angle about the line-of-sight vector is therefore set in advance. As shown in FIG. 58, a total of 18 points, two on each of the x, y, and z axes and the points between each pair of axes, serve as viewpoint candidate information. By setting the viewpoint candidate information so as to surround the origin of the object coordinate system in this way, appropriate viewpoint information can be set in the world coordinate system (robot coordinate system) no matter how the inspection object is placed. Specifically, this suppresses the possibility that the imaging unit 5000 cannot be moved to all (or most) of the viewpoint information set from the candidates, or that inspection is impossible even after moving because of occluding objects and the like, and thereby ensures inspection from at least a sufficient number of viewpoints.
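As a minimal sketch (an illustration assumed here, not prescribed by the disclosure; in particular the up-vector standing in for (bx, by, bz) is chosen arbitrarily), the 18 candidates of FIGS. 58 and 59 could be generated as follows:

```python
import numpy as np
from itertools import combinations

def viewpoint_candidates(distance=200.0):
    """18 candidate viewpoints: 2 on each of the x/y/z axes plus the 12
    points between pairs of axes, all at `distance` from the object
    coordinate origin and gazing toward it."""
    dirs = []
    for axis in range(3):
        for sign in (1.0, -1.0):                 # 6 on-axis directions
            d = np.zeros(3)
            d[axis] = sign
            dirs.append(d)
    for a, b in combinations(range(3), 2):       # 12 between-axes directions
        for sa in (1.0, -1.0):
            for sb in (1.0, -1.0):
                d = np.zeros(3)
                d[a], d[b] = sa, sb
                dirs.append(d / np.linalg.norm(d))
    candidates = []
    for d in dirs:
        position = distance * d                  # (x, y, z)
        gaze = -d                                # (ax, ay, az): toward origin
        # Arbitrary roll reference standing in for (bx, by, bz):
        up = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
        candidates.append((position, gaze, up))
    return candidates

assert len(viewpoint_candidates()) == 18
```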
In an appearance inspection, inspecting from only one viewpoint position and line-of-sight direction is acceptable, but for accuracy it is preferable to inspect from a plurality of viewpoint positions and line-of-sight directions, since with only one direction the region to be inspected may not be observable sufficiently well (for example, at a sufficiently large size in the image). The second inspection information is therefore preferably not a single piece of viewpoint information but a viewpoint information group containing a plurality of pieces. This is realized, for example, by generating viewpoint information from a plurality of the candidates described above (basically all of them); even when the viewpoint candidate information is not used, it suffices to obtain a plurality of pieces of viewpoint information. That is, the second inspection information includes a viewpoint information group containing a plurality of pieces of viewpoint information, each of which includes the viewpoint position and line-of-sight direction of the imaging unit 5000 during the inspection processing. Specifically, the processing unit 11120 generates, from the first inspection information, a viewpoint information group containing a plurality of pieces of viewpoint information for the imaging unit 5000 as second inspection information.
The viewpoint candidate information above consists of positions in the object coordinate system, but at the stage when the candidates are set, the specific shape and dimensions of the inspection object are undetermined. In other words, although FIG. 58 shows the object coordinate system based on the inspection object, the position and posture of the object within that coordinate system are still indeterminate. Since viewpoint information must specify at least a relative positional relationship with the inspection object, a correspondence with the inspection object is needed in order to generate concrete viewpoint information from the viewpoint candidate information.
Here, given how the viewpoint candidate information is defined, the origin of the coordinate system in which it is set is the position at the center of all the candidates, and when the imaging unit 5000 is placed at any candidate, the origin lies in the imaging direction (optical-axis direction) of the imaging unit 5000. The origin of the coordinate system can therefore be said to be the best-observed position for the imaging unit 5000. Since the position that most needs to be observed during inspection processing is the inspection-processing target position described above (in a narrow sense it may be the fitting position, or the assembly position as shown in FIG. 58), viewpoint information corresponding to the inspection object is generated using the inspection-processing target position acquired as first inspection information.
That is, the first inspection information includes the inspection-processing target position relative to the inspection object, and the robot 30000 sets an object coordinate system corresponding to the inspection object with the inspection-processing target position as its reference, then generates the viewpoint information using that object coordinate system. Specifically, the information acquisition unit 11110 acquires, as first inspection information, the inspection-processing target position relative to the inspection object, and the processing unit 11120 sets the object coordinate system corresponding to the inspection object with the inspection-processing target position as its reference and generates the viewpoint information using that coordinate system (S100002).
For example, suppose the shape data of the inspection object has the shape shown in FIG. 60, and the first inspection information specifies the point O in it as the inspection-processing target position. In this case, it suffices to set the object coordinate system with point O as the origin and with the posture of the inspection object as shown in FIG. 60. Once the position and posture of the inspection object in the object coordinate system are determined, the relative relationship between each piece of viewpoint candidate information and the inspection object becomes clear, so each candidate can be used as viewpoint information.
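A minimal sketch of S100002 under these assumptions (candidates expressed relative to the object coordinate origin, with target_position corresponding to the point O above):

```python
import numpy as np

def viewpoints_from_candidates(candidates, target_position):
    """Anchor the candidate set at the inspection-processing target
    position so that each candidate camera gazes at the point that
    matters most during inspection; gaze and roll are unchanged."""
    t = np.asarray(target_position, dtype=float)
    return [(t + pos, gaze, up) for pos, gaze, up in candidates]
```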
Once a viewpoint information group containing a plurality of pieces of viewpoint information has been generated, the various kinds of second inspection information are generated. First, a pass image corresponding to each piece of viewpoint information is generated (S100003). Specifically, the processing unit 11120 acquires, as the pass image used in the inspection processing, an image of the three-dimensional model data captured by a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information.
The pass image must show the inspection object in its ideal state. Since the three-dimensional model data (shape information) acquired as first inspection information is ideal shape data of the inspection object, the image of that data as observed from an imaging unit placed according to the viewpoint information can be used as the pass image. When three-dimensional model data is used, no actual capture by the imaging unit 5000 takes place; instead, processing with a virtual camera is performed (specifically, a conversion that projects the three-dimensional data onto two-dimensional data). If inspection of the result of robot work is assumed, the pass image shows the ideal state of the inspection object at the end of the robot work; since that ideal state is represented by the post-work three-dimensional model data, an image of the post-work three-dimensional model data captured by the virtual camera can serve as the pass image. Because a pass image is obtained for each piece of viewpoint information, when 18 pieces of viewpoint information are set as above, there are also 18 pass images. The right-hand image in each of FIGS. 61A to 61G corresponds to the pass image when the assembly work of FIG. 57B is performed; FIGS. 61A to 61G show images for seven viewpoints, but as noted above the number of images equals the number of pieces of viewpoint information. In the processing of S100003, pre-work images of the pre-work three-dimensional model data captured by the virtual camera (the left side of each of FIGS. 61A to 61G) are also obtained in advance, with a view to the inspection-region and pass-threshold processing described later.
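The disclosure does not prescribe a renderer for the virtual camera; as a hedged approximation, the projection step can be sketched by splatting model vertices through a pinhole model (a real implementation would rasterize the full mesh with hidden-surface removal):

```python
import numpy as np

def look_at(cam_pos, gaze, up):
    """Rotation whose rows are the camera axes, with the z-axis along the
    gaze direction (one common convention; `up` must not be parallel to
    the gaze)."""
    z = gaze / np.linalg.norm(gaze)
    x = np.cross(up, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])                    # world -> camera rotation

def render_points(vertices, cam_pos, gaze, up, f=500.0, size=480):
    """Crude virtual camera: project model vertices with a pinhole model
    and mark them in a binary image."""
    R = look_at(np.asarray(cam_pos, float), np.asarray(gaze, float),
                np.asarray(up, float))
    pc = (np.asarray(vertices, float) - cam_pos) @ R.T   # camera coordinates
    pc = pc[pc[:, 2] > 1e-6]                             # keep points in front
    u = (f * pc[:, 0] / pc[:, 2] + size / 2).astype(int)
    v = (f * pc[:, 1] / pc[:, 2] + size / 2).astype(int)
    img = np.zeros((size, size), np.uint8)
    ok = (u >= 0) & (u < size) & (v >= 0) & (v < size)
    img[v[ok], u[ok]] = 255
    return img
```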
Next, the inspection region, the region within the pass image and the captured image that is used for the inspection processing, is obtained as second inspection information (S100004). The inspection region is as described above. Since how the parts important to the inspection appear changes with the viewpoint information, the inspection region is set for each piece of viewpoint information contained in the viewpoint information group.
Specifically, the processing unit 11120 acquires, as the pass image, an image of the post-work three-dimensional model data captured by a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and acquires, as the pre-work image, an image of the pre-work three-dimensional model data captured by a virtual camera placed at that same viewpoint position and line-of-sight direction; it then obtains the inspection region, the region within the image used for the inspection processing, from a comparison of the pre-work image and the pass image.
FIGS. 62A to 62D show a concrete example of the inspection-region setting processing. When the result of robot work in which object B is assembled to object A from the right is the inspection target, an image of the post-work three-dimensional model data captured by the virtual camera is acquired as the pass image, as shown in FIG. 62B, and an image of the pre-work three-dimensional model data captured by the virtual camera is acquired as the pre-work image, as shown in FIG. 62A. In robot work that involves a change of state, the changed part is the more important. In the example of FIG. 62B, what should be judged in the inspection is whether object B has been assembled to object A and whether its assembly position is correct. Parts of object A other than those related to the work (for example, the mating surface in the assembly) could also be inspected, but their importance is comparatively low.
That is, the regions of high importance in the pass image and the captured image can be regarded as the regions that change before and after the work. In this embodiment, therefore, the processing unit 11120 performs, as the comparison processing, a process of obtaining a difference image, the difference between the pre-work image and the pass image, and obtains as the inspection region the region of the difference image that contains the inspection object. In the example of FIGS. 62A and 62B, the difference image is FIG. 62C, so an inspection region is set that contains the region of object B included in FIG. 62C.
In this way, the region of the difference image that contains the inspection object, that is, the region inferred to be of high importance for the inspection, can be used as the inspection region.
Here, the inspection-processing target position (the fitting position in FIG. 62A and elsewhere) is known as first inspection information, and where that position falls in the image is also known. Since the inspection-processing target position serves as the reference for the inspection, the inspection region can also be obtained from the difference image and the inspection-processing target position. For example, as shown in FIG. 62C, the processing unit 11120 obtains BlobHeight, the maximum vertical extent between the inspection-processing target position and the region remaining in the difference image, and BlobWidth, the maximum horizontal extent. If the region within BlobHeight above and below and BlobWidth to the left and right of the inspection-processing target position is then taken as the inspection region, the region of the difference image containing the inspection object is obtained as the inspection region. Margins may also be added vertically and horizontally; in the example of FIG. 62D, a region with a margin of 30 pixels on each of the top, bottom, left, and right is taken as the inspection region.
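Assuming grayscale images and a simple nonzero test on the difference (both assumptions for illustration), the BlobHeight/BlobWidth construction can be sketched as:

```python
import cv2
import numpy as np

def inspection_region(pre_img, pass_img, target_px, margin=30):
    """ROI from the pre-work/pass difference: maximum vertical and
    horizontal extents of the changed pixels measured from the
    inspection-processing target position, plus a fixed margin.
    Clipping to the image bounds is omitted for brevity."""
    diff = cv2.absdiff(pass_img, pre_img)
    ys, xs = np.nonzero(diff > 0)
    if len(xs) == 0:
        return None                         # no change between the two images
    cx, cy = target_px
    blob_w = int(np.max(np.abs(xs - cx)))   # BlobWidth
    blob_h = int(np.max(np.abs(ys - cy)))   # BlobHeight
    return (cx - blob_w - margin, cy - blob_h - margin,
            cx + blob_w + margin, cy + blob_h + margin)
```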
The same applies to FIGS. 63A to 63D and FIGS. 64A to 64D. FIGS. 63A to 63D show work in which a comparatively thin object B is assembled on the far side of object A as seen from the viewpoint position (or work in which a rod-shaped object B is inserted into an object A that has a hole running horizontally in the image). In this case, the region of the inspection object in the difference image is split into a plurality of discontinuous regions, but the processing can proceed just as in FIGS. 62A to 62D. Here, object B is assumed thinner than object A, and the vicinity of the upper and lower ends of object A in the image is of low importance to the inspection. With the method of this embodiment, as shown in FIG. 63D, the regions of object A considered to be of low importance can be excluded from the inspection region.
FIGS. 64A to 64D show work in which an object B smaller than a large object A is assembled onto it. This corresponds, for example, to fastening a screw (object B) at a prescribed position on an object A such as a PC or a printer. In such work, there is little need to inspect the PC or printer as a whole, while the position where the screw is fastened is highly important. In this respect, with the method of this embodiment, as shown in FIG. 64D, most of object A can be excluded from the inspection region, and the surroundings of the object B to be inspected can be set as the inspection region.
The above method is a highly general-purpose means of setting the inspection region, but the inspection-region setting of this embodiment is not limited to it, and the inspection region may be set by other means. For example, in FIG. 62D an even narrower inspection region would suffice, so a means of setting a narrower region could be used.
Next, the setting processing of the threshold (pass threshold) used in the comparison between the pass image and the actually captured image is performed (S100005). Specifically, the processing unit 11120 acquires the pass image and the pre-work image described above, and sets the threshold used in the inspection processing based on the captured image and the pass image according to the similarity between the pre-work image and the pass image.
FIGS. 65A to 65D show a concrete example of the threshold-setting processing. FIG. 65A is the pass image; if the robot work is performed ideally (in a broad sense, if the inspection object is in its ideal state), the actually captured image should match the pass image, and the similarity takes its maximum value (here, 1000). Conversely, if the captured image shares no element at all with the pass image, the similarity is the minimum value (here, 0). The threshold here is a value such that the inspection is judged to pass if the similarity between the pass image and the captured image is at or above it, and to fail if the similarity is below it; that is, the threshold is some value between 0 and 1000.
Here, FIG. 65B is the pre-work image corresponding to FIG. 65A, but because FIG. 65B also contains components shared with FIG. 65A, the similarity between the pre-work image and the pass image is nonzero. For example, when the similarity is judged using image edge information, FIG. 65C, the edge information of FIG. 65A, is used for the comparison, but FIG. 65D, the edge information of the pre-work image, also contains portions that match FIG. 65C. In the example of FIGS. 65C and 65D, the similarity value is just under 700. Thus, even if an inspection object on which no work has been performed at all is captured, the similarity between that captured image and the pass image still reaches a value of around 700. Capturing an inspection object in a completely unworked state means, for example, a state in which the work itself could not be executed, or the work was executed but the assembled object fell off and does not appear in the image; in such cases the robot work has most likely failed. In other words, since a similarity of around 700 appears even in a situation that should be judged "fail", setting the threshold below that value would be inappropriate.
In this embodiment, therefore, a value between the maximum similarity (for example, 1000) and the similarity between the pre-work image and the pass image (for example, 700) is set as the threshold. As one example, the average may be used, giving the threshold by the following equation (13).
threshold = {1000 + (similarity between the pass image and the pre-work image)} / 2 ... (13)
The threshold setting can be modified in various ways; for example, the formula used to obtain the threshold may be changed according to the value of the similarity between the pass image and the pre-work image.
For example, the following modification is possible: when the similarity between the pass image and the pre-work image is 600 or below, the threshold is fixed at 800; when it is 900 or above, the threshold is fixed at 1000; otherwise, equation (13) above is used.
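Combining equation (13) with this clamped variant (the numeric bounds below are the example values from the text):

```python
def pass_threshold(sim_pre_vs_pass, s_max=1000.0):
    """Pass threshold from the pre-work/pass-image similarity, using the
    clamped form described above with equation (13) in between."""
    if sim_pre_vs_pass <= 600:
        return 800.0
    if sim_pre_vs_pass >= 900:
        return 1000.0
    return (s_max + sim_pre_vs_pass) / 2.0   # equation (13)
```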
Note that the similarity between the pass image and the pre-work image varies with how the appearance of the inspection object changes before and after the work. For example, in the case of FIGS. 66A to 66D, which correspond to viewpoint information different from that of FIGS. 65A to 65D, the difference in the appearance of the inspection object before and after assembly is small, and as a result the similarity between the pass image and the pre-work image is higher than in the example above. That is, just as for the pass image and the inspection region, the similarity and threshold are computed for each piece of viewpoint information contained in the viewpoint information group.
Finally, for each piece of viewpoint information in the viewpoint information group, the processing unit 11120 sets priority information indicating the priority with which the imaging unit 5000 should be moved to the viewpoint position and line-of-sight direction corresponding to that viewpoint information (S100006). As described above, the appearance of the inspection object changes with the viewpoint position and line-of-sight direction represented by the viewpoint information. A situation can therefore arise in which the region to be inspected is clearly visible from one piece of viewpoint information but cannot be observed from another. Moreover, since the viewpoint information group contains a sufficient number of pieces of viewpoint information, not all of them need to be inspected; if the checks pass for a prescribed set of viewpoint information (for example, two positions), the final result is a pass, and viewpoint information not yet used need not be processed. Given this, for efficient inspection processing it is preferable to process first the viewpoint information that is most useful, such as viewpoints from which the region to be inspected is clearly visible. In this embodiment, therefore, a priority is set for each piece of viewpoint information.
When the inspection processing targets the result of robot work, viewpoints that make the before-and-after difference clear are useful for the inspection. As an extreme example, as shown in FIG. 67A, consider work in which a small object B is assembled onto a large object A from the left side of the drawing. When viewpoint information 1, corresponding to viewpoint position 1 and line-of-sight direction 1 in FIG. 67A, is used, the pre-work image is as shown in FIG. 67B and the pass image as in FIG. 67C, and no difference arises; viewpoint information 1 is thus not useful for inspecting this work. When viewpoint information 2, corresponding to viewpoint position 2 and line-of-sight direction 2, is used, the pre-work image is as shown in FIG. 67D and the pass image as in FIG. 67E, and the difference is clear. In this case the priority of viewpoint information 2 can be set higher than that of viewpoint information 1.
In other words, the larger the change before and after the work, the higher the priority should be set, and a large change before and after the work corresponds to a low similarity between the pre-work image and the pass image, as explained with FIGS. 65A to 66D. In the processing of S100006, then, the similarity between the pre-work image and the pass image is computed for each piece of viewpoint information (it is already obtained when setting the threshold in S100005), and the lower the similarity, the higher the priority that is set. When the inspection processing is executed, the imaging unit 5000 is moved through the viewpoint information in order of descending priority to perform the inspection.
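Since the priority is simply the inverse ordering of the pre-work/pass similarity, the visiting order can be sketched as:

```python
def visit_order(similarities):
    """Viewpoint indices sorted by ascending pre-work/pass similarity,
    i.e., by descending priority (largest work-induced change first)."""
    return sorted(range(len(similarities)), key=lambda k: similarities[k])
```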
3.2 Online processing
Next, the flow of online processing, i.e., the inspection processing that uses the second inspection information, is described with the flowchart of FIG. 68. When online processing starts, the second inspection information generated by the offline processing described above is first read in (S2001).
The robot 30000 then moves the imaging unit 5000 to the viewpoint position and line-of-sight direction corresponding to each piece of viewpoint information in the viewpoint information group, following a movement order set based on the priorities indicated by the priority information. This can be realized, for example, under the control of the processing unit 11120 of FIG. 50 or the control unit 3500 of FIG. 51A. Specifically, the piece of viewpoint information with the highest priority among those in the viewpoint information group is selected (S2002), and the imaging unit 5000 is moved to the viewpoint position and line-of-sight direction corresponding to it (S2003). Controlling the imaging unit 5000 according to the priorities in this way realizes efficient inspection processing.
However, the viewpoint information from the offline processing is essentially information defined in the object coordinate system and does not take positions in real space (the world coordinate system or robot coordinate system) into account. For example, as shown in FIG. 69A, suppose a viewpoint position and line-of-sight direction are set in the object coordinate system facing a given surface F1 of the inspection object. If, during the inspection processing of that object, the object is placed on the workbench with surface F1 facing downward, as shown in FIG. 69B, the viewpoint position and line-of-sight direction end up below the workbench, and the imaging unit 5000 (the hand-eye camera of the robot 30000) cannot be moved there.
S2003 is therefore a control that does not necessarily move the imaging unit 5000 to the position and posture corresponding to the viewpoint information; rather, a judgment is made as to whether the movement is possible (S2004), and the movement is performed only if it is. Specifically, when the processing unit 11120 judges, from the movable-range information of the robot, that the imaging unit 5000 cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th piece of viewpoint information (i being a natural number) among the plurality of pieces, it skips the movement of the imaging unit 5000 for the i-th piece and performs control for the j-th piece of viewpoint information (j being a natural number satisfying i ≠ j), the next one after the i-th in the movement order. Concretely, when the judgment in S2004 is negative, the inspection processing from S2005 onward is skipped, and the flow returns to S2002 to select the next piece of viewpoint information.
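S2002 to S2004 can be sketched as a generator that walks the viewpoints in priority order and silently skips unreachable ones; is_reachable is a stand-in for the movable-range judgment detailed below:

```python
def reachable_viewpoints(ordered_ids, viewpoints, is_reachable):
    """Yield viewpoint indices in priority order, skipping any viewpoint
    the movable-range check rejects; skipped viewpoints are not retried."""
    for vid in ordered_ids:
        if is_reachable(viewpoints[vid]):
            yield vid
```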
Here, if the number of pieces of viewpoint information in the viewpoint information group is N (N being a natural number; N = 18 in the example of FIG. 58 above), then i is an integer satisfying 1 ≤ i ≤ N, and j is an integer satisfying 1 ≤ j ≤ N and j ≠ i. The movable-range information of the robot indicates the range over which the part of the robot carrying the imaging unit 5000 in particular can move. For each joint of the robot, the range of joint angles that joint can take is determined by design, and once the joint-angle values are determined, any given position on the robot can be computed by forward kinematics. That is, the movable-range information is derived from the robot's design; it may be the set of admissible joint-angle values, the set of positions and postures in space that the imaging unit 5000 can take, or other information.
The movable-range information of the robot is expressed in the robot coordinate system or the world coordinate system. Therefore, to compare the viewpoint information with the movable-range information, the viewpoint information in the object coordinate system shown in FIG. 69A must be converted into the positional relationship in real space shown in FIG. 69B, i.e., into viewpoint information in the robot coordinate system.
Accordingly, the information acquisition unit 11110 acquires in advance, as first inspection information, object position-posture information indicating the position and posture of the inspection object in the global coordinate system. In the processing of S2004, the processing unit 11120 obtains the viewpoint information expressed in the global coordinate system from the relative relationship between the global coordinate system and the object coordinate system, which is derived from the object position-posture information, and judges, from the robot's movable-range information expressed in the global coordinate system and the viewpoint information expressed in the global coordinate system, whether the imaging unit 5000 can be moved to the viewpoint position and line-of-sight direction indicated by the viewpoint information.
Since this processing is a coordinate transformation, the relative relationship between the two coordinate systems is required, and it can be obtained from the reference position and posture of the object coordinate system in the global coordinate system, i.e., from the object position-posture information.
Note that the line-of-sight direction represented by the viewpoint information need not uniquely determine the posture of the imaging unit 5000. Specifically, as described above for the viewpoint candidate information of FIGS. 58 and 59, the viewpoint position is determined by (x, y, z) and the posture of the imaging unit 5000 by (ax, ay, az) together with (bx, by, bz), but (bx, by, bz) need not be taken into account here. If the judgment of whether the imaging unit 5000 can be moved to the viewpoint position and line-of-sight direction required all of (x, y, z), (ax, ay, az), and (bx, by, bz) as conditions, realizing a movement that satisfies them would be difficult: even if (ax, ay, az), the direction toward the origin, can be imaged from the position (x, y, z), the vector representing the rotation angle about (ax, ay, az) can only take a limited range, and (bx, by, bz) might not be satisfiable. In this embodiment, therefore, the line-of-sight direction need not include (bx, by, bz); if (x, y, z) and (ax, ay, az) are satisfied, the imaging unit 5000 is judged able to move to that viewpoint information.
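A hedged sketch of the S2004 judgment follows: the object-frame viewpoint is transformed into the global frame with the object position-posture (R, t), and reachability is tested constraining only the position and gaze, leaving the roll about the gaze axis free. ik_solver is a hypothetical stand-in for the robot's kinematics, not an API of this disclosure:

```python
import numpy as np

def viewpoint_to_global(R_obj, t_obj, pos_obj, gaze_obj):
    """Express an object-coordinate viewpoint in the global (robot/world)
    frame, given the object position-posture information (R_obj, t_obj)."""
    pos_w = R_obj @ np.asarray(pos_obj, float) + t_obj
    gaze_w = R_obj @ np.asarray(gaze_obj, float)
    return pos_w, gaze_w

def can_move_to(pos_w, gaze_w, ik_solver):
    """Reachability test on (x, y, z) and (ax, ay, az) only; the rotation
    (bx, by, bz) about the gaze axis is deliberately left unconstrained."""
    return ik_solver.solve(position=pos_w, gaze=gaze_w) is not None
```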
When the imaging unit 5000 has finished moving to the position and posture corresponding to the viewpoint information, imaging is performed by the imaging unit 5000 at that position and posture to acquire a captured image (S2005). The inspection processing is performed by comparing the captured image with the pass image. Here, the pass image uses a prescribed value for (bx, by, bz), whereas the imaging unit 5000 that captures the captured image may have a rotation angle about the line-of-sight direction different from the angle represented by (bx, by, bz). For example, as when the pass image is FIG. 70A and the captured image FIG. 70B, a rotation at some angle can arise between the two images; in that situation, cutting out the inspection region would be inappropriate, as would the similarity computation. For ease of explanation, FIG. 70B has a plain single-color background like the pass image generated from the model data, but since FIG. 70B is a captured image, other objects could also appear in it, and depending on the illumination and the like, the color tone of the inspection object could also differ from the pass image.
Therefore, the rotation angle of the image between the captured image and the pass image is computed (S2006). Specifically, because (bx, by, bz) was used when generating the pass image, the rotation angle about the line-of-sight direction of the imaging unit (virtual camera) corresponding to the pass image is known. The position and posture of the imaging unit 5000 when the captured image was taken must also be known to the robot control that moved the imaging unit 5000 to the position and posture corresponding to the viewpoint information; otherwise the movement could not have been performed at all. Information on the rotation angle of the imaging unit 5000 about the line-of-sight direction at capture time can therefore also be obtained. In the processing of S2006, the rotation angle between the images is obtained from the difference of the two rotation angles about the line-of-sight direction, and using the obtained image rotation angle, a rotational transformation of at least one of the pass image and the captured image is performed to correct the angular difference between them.
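Assuming the two roll angles about the line-of-sight direction are available in degrees, the S2006 correction can be sketched with OpenCV:

```python
import cv2

def cancel_image_rotation(captured, roll_pass_deg, roll_shot_deg):
    """Rotate the captured image about its center by the difference of the
    two roll angles so that it is angularly aligned with the pass image."""
    h, w = captured.shape[:2]
    delta = roll_pass_deg - roll_shot_deg          # image rotation angle
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), delta, 1.0)
    return cv2.warpAffine(captured, M, (w, h))
```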
Since the above processing brings the pass image and the captured image into angular correspondence, the inspection region obtained in S100004 is extracted from each image (S2007), and the check process is performed using that region (S2008). In S2008, the similarity between the pass image and the captured image is calculated; if the similarity is at or above the threshold obtained in S100005, the result is judged as pass, and otherwise as fail.
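A minimal sketch of S2007–S2008 follows. The rectangular `region` layout and the use of normalized cross-correlation as the similarity measure are assumptions of this illustration; the embodiment does not fix a particular similarity measure here.

```python
import numpy as np

def check_region(pass_img: np.ndarray, captured_img: np.ndarray,
                 region, threshold: float) -> bool:
    """Cut out the inspection region from both images (S2007) and judge
    pass/fail against the threshold from S100005 (S2008).

    `region` is a hypothetical (top, left, height, width) rectangle; the
    similarity is a zero-mean normalized cross-correlation.
    """
    t, l, h, w = region
    a = pass_img[t:t + h, l:l + w].astype(np.float64).ravel()
    b = captured_img[t:t + h, l:l + w].astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    similarity = float(a @ b / denom) if denom else 0.0
    return similarity >= threshold
```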
However, the inspection process need not be performed from only one piece of viewpoint information as described above; a plurality of pieces of viewpoint information may be used. Accordingly, it is judged whether the check process has been executed a designated number of times (S2009), and if so, the process ends. For example, in an inspection process that passes only when the check process raises no problem at three positions, the judgment in S2009 becomes YES once a pass has been determined three times in S2008, the inspection object is judged to pass, and the inspection process ends. On the other hand, even if the result in S2008 is a pass, if it is only the first or second such judgment, the judgment in S2009 is NO, and processing continues with the next piece of viewpoint information.
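The multi-viewpoint flow of S2005–S2009 might look like the following sketch. `capture` and `check` are hypothetical callables standing in for the imaging step (S2005) and the pass/fail judgment (S2008), and treating any single failed check as an overall fail is one plausible reading of the flow described above.

```python
def inspect(viewpoints, capture, check, required_passes: int) -> bool:
    """Run the check process over multiple viewpoints (S2005-S2009).

    The object passes once the check has succeeded at `required_passes`
    viewpoints; a single failed check fails the inspection (assumption).
    """
    passes = 0
    for vp in viewpoints:
        img = capture(vp)        # S2005: image from this viewpoint
        if not check(vp, img):   # S2008: similarity vs. threshold
            return False
        passes += 1
        if passes >= required_passes:  # S2009: designated count reached
            return True
    return False
```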
In the above description, the online processing is likewise performed by the information acquisition unit 11110 and the processing unit 11120, but this is not a limitation; the processing may instead be performed by the control unit 3500 of the robot 30000 as described above. That is, the online processing may be performed by the information acquisition unit 11110 and the processing unit 11120 of the robot 30000 in FIG. 50, or by the control unit 3500 of the robot in FIG. 51A. Alternatively, it may be performed by the information acquisition unit 11110 and the processing unit 11120 of the processing device in FIG. 51A; in that case, the processing device 10000 can be regarded as a robot control device, as described above.
When the online processing is performed by the control unit 3500 of the robot 30000, if the control unit 3500 judges, based on the movable-range information of the robot 30000, that the imaging unit 5000 cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th piece of viewpoint information (i being a natural number) among the plurality of pieces of viewpoint information, it skips the movement of the imaging unit 5000 corresponding to the i-th viewpoint information and performs control for the j-th piece of viewpoint information (j being a natural number satisfying i ≠ j) that follows the i-th in the movement order.
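A sketch of this skip-and-continue behavior, assuming a reachability predicate such as the one sketched earlier is supplied:

```python
def reachable_viewpoints(viewpoints, can_move_to):
    """Yield viewpoint information in movement order, skipping the i-th
    viewpoint when the movable-range information rules it out and
    continuing with the next one (the j-th), as described above.

    `can_move_to` is a hypothetical predicate, e.g. a wrapper around the
    reachability sketch given earlier.
    """
    for vp in viewpoints:
        if can_move_to(vp):
            yield vp  # control proceeds only for reachable viewpoints
        # unreachable viewpoints are skipped rather than aborting
```

Feeding `reachable_viewpoints(...)` into the inspection loop sketched above would reproduce the control flow in which unreachable viewpoints are passed over without ending the inspection.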
In addition, part or most of the processing of the processing device 10000 and the like of the present embodiment may be realized by a program. In that case, a processor such as a CPU executes the program, thereby realizing the processing device 10000 and the like of the present embodiment. Specifically, a program stored in a non-transitory information storage medium is read out, and a processor such as a CPU executes the read-out program. Here, the information storage medium (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (card-type memory, ROM, etc.), or the like. A processor such as a CPU performs the various processes of the present embodiment based on the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).
Claims (12)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510137541.3A CN104802166B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510136619.XA CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201711203574.9A CN108081268A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137542.8A CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-212930 | 2013-10-10 | ||
JP2013212930A JP6322949B2 (en) | 2013-10-10 | 2013-10-10 | Robot control apparatus, robot system, robot, robot control method, and robot control program |
JP2013226536A JP6390088B2 (en) | 2013-10-31 | 2013-10-31 | Robot control system, robot, program, and robot control method |
JP2013-226536 | 2013-10-31 | ||
JP2013-228655 | 2013-11-01 | ||
JP2013-228653 | 2013-11-01 | ||
JP2013228655A JP6337445B2 (en) | 2013-11-01 | 2013-11-01 | Robot, processing apparatus, and inspection method |
JP2013228653A JP6217322B2 (en) | 2013-11-01 | 2013-11-01 | Robot control apparatus, robot, and robot control method |
Related Child Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510136619.XA Division CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137541.3A Division CN104802166B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201711203574.9A Division CN108081268A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137542.8A Division CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104552292A true CN104552292A (en) | 2015-04-29 |
Family
ID=53069890
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510137542.8A Expired - Fee Related CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137541.3A Expired - Fee Related CN104802166B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510136619.XA Pending CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201711203574.9A Pending CN108081268A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201410531769.6A Pending CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510137542.8A Expired - Fee Related CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137541.3A Expired - Fee Related CN104802166B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510136619.XA Pending CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201711203574.9A Pending CN108081268A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Country Status (1)
Country | Link |
---|---|
CN (5) | CN104802174B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104965489A (en) * | 2015-07-03 | 2015-10-07 | 昆山市佰奥自动化设备科技有限公司 | CCD automatic positioning assembly system and method based on robot |
CN104959982A (en) * | 2013-10-10 | 2015-10-07 | 精工爱普生株式会社 | Robot control system, robot, program and robot control method |
CN108573504A (en) * | 2017-03-13 | 2018-09-25 | 韩国科学技术研究院 | 3D image generation method and system for analyzing plant phenotype |
CN110102490A (en) * | 2019-05-23 | 2019-08-09 | 北京阿丘机器人科技有限公司 | The assembly line packages device and electronic equipment of view-based access control model technology |
CN111095139A (en) * | 2017-07-20 | 2020-05-01 | 西门子股份公司 | Method and system for detecting abnormal state of machine |
CN111565293A (en) * | 2019-02-14 | 2020-08-21 | 电装波动株式会社 | Device, method, and program for analyzing state of manual work by operator |
US20200368918A1 (en) * | 2017-08-25 | 2020-11-26 | Fanuc Corporation | Robot system |
CN112076947A (en) * | 2020-08-31 | 2020-12-15 | 博众精工科技股份有限公司 | Bonding equipment |
CN114466180A (en) * | 2021-12-29 | 2022-05-10 | 北京奕斯伟计算技术有限公司 | Camera testing method, testing device, mounting method and mounting device |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11039895B2 (en) * | 2015-08-25 | 2021-06-22 | Kawasaki Jukogyo Kabushiki Kaisha | Industrial remote control robot system |
CN109328126B (en) * | 2016-07-06 | 2022-06-07 | 株式会社富士 | Imaging device and imaging system |
JP6490032B2 (en) * | 2016-08-10 | 2019-03-27 | ファナック株式会社 | Robot controller for assembly robot |
US11373286B2 (en) * | 2016-11-07 | 2022-06-28 | Nabtesco Corporation | Status checking device for built-in object, operation checking device and method for checking built-in object |
JP6833460B2 (en) * | 2016-11-08 | 2021-02-24 | 株式会社東芝 | Work support system, work method, and processing equipment |
JP7314475B2 (en) * | 2016-11-11 | 2023-07-26 | セイコーエプソン株式会社 | ROBOT CONTROL DEVICE AND ROBOT CONTROL METHOD |
WO2018123433A1 (en) * | 2016-12-28 | 2018-07-05 | パナソニックIpマネジメント株式会社 | Tool system |
JP2018126799A (en) * | 2017-02-06 | 2018-08-16 | セイコーエプソン株式会社 | Control device, robot and robot system |
CN106926241A (en) * | 2017-03-20 | 2017-07-07 | 深圳市智能机器人研究院 | A kind of the tow-armed robot assembly method and system of view-based access control model guiding |
JP6788845B2 (en) * | 2017-06-23 | 2020-11-25 | パナソニックIpマネジメント株式会社 | Remote communication methods, remote communication systems and autonomous mobile devices |
JP6963748B2 (en) * | 2017-11-24 | 2021-11-10 | 株式会社安川電機 | Robot system and robot system control method |
CN109992093B (en) * | 2017-12-29 | 2024-05-03 | 博世汽车部件(苏州)有限公司 | Gesture comparison method and gesture comparison system |
JP6873941B2 (en) * | 2018-03-02 | 2021-05-19 | 株式会社日立製作所 | Robot work system and control method of robot work system |
US11276194B2 (en) * | 2018-03-29 | 2022-03-15 | National University Corporation NARA Institute of Science and Technology | Learning dataset creation method and device |
JP6845180B2 (en) * | 2018-04-16 | 2021-03-17 | ファナック株式会社 | Control device and control system |
TWI681487B (en) | 2018-10-02 | 2020-01-01 | 南韓商Komico有限公司 | System for obtaining image of 3d shape |
JP6904327B2 (en) * | 2018-11-30 | 2021-07-14 | オムロン株式会社 | Control device, control method, and control program |
CN109571477B (en) * | 2018-12-17 | 2020-09-22 | 西安工程大学 | An Improved Robot Vision and Conveyor Belt Comprehensive Calibration Method |
CN109625118B (en) * | 2018-12-29 | 2020-09-01 | 深圳市优必选科技有限公司 | Impedance control method and device for biped robot |
JP6892461B2 (en) * | 2019-02-05 | 2021-06-23 | ファナック株式会社 | Machine control device |
CN113439013B (en) * | 2019-02-25 | 2024-05-14 | 国立大学法人东京大学 | Robot system, control device for robot, and control program for robot |
JP2020142323A (en) * | 2019-03-06 | 2020-09-10 | オムロン株式会社 | Robot control device, robot control method and robot control program |
JP6717401B1 (en) * | 2019-04-01 | 2020-07-01 | 株式会社安川電機 | Programming support device, robot system, and programming support method |
JP7424800B2 (en) * | 2019-11-06 | 2024-01-30 | ファナック株式会社 | Control device, control method, and control system |
JP2021094677A (en) * | 2019-12-19 | 2021-06-24 | 本田技研工業株式会社 | Robot control device, robot control method, program and learning model |
US12236663B2 (en) * | 2020-01-17 | 2025-02-25 | Fanuc Corporation | Image processing system |
JP2021133470A (en) * | 2020-02-28 | 2021-09-13 | セイコーエプソン株式会社 | Robot control method and robot system |
CN111482800B (en) * | 2020-04-15 | 2021-07-06 | 深圳市欧盛自动化有限公司 | Electricity core top bridge equipment |
CN111761575B (en) * | 2020-06-01 | 2023-03-03 | 湖南视比特机器人有限公司 | Workpiece, grabbing method thereof and production line |
CN111993423A (en) * | 2020-08-17 | 2020-11-27 | 北京理工大学 | A modular intelligent assembly system |
JP7547940B2 (en) * | 2020-10-30 | 2024-09-10 | セイコーエプソン株式会社 | How to control a robot |
CN112989982B (en) * | 2021-03-05 | 2024-04-30 | 佛山科学技术学院 | Unmanned vehicle image acquisition control method and system |
US11772272B2 (en) * | 2021-03-16 | 2023-10-03 | Google Llc | System(s) and method(s) of using imitation learning in training and refining robotic control policies |
CN113305839B (en) * | 2021-05-26 | 2022-08-19 | 深圳市优必选科技股份有限公司 | Admittance control method and admittance control system of robot and robot |
CN114310063B (en) * | 2022-01-28 | 2023-06-06 | 长春职业技术学院 | A welding optimization method based on six-axis robot |
CN115401695A (en) * | 2022-09-26 | 2022-11-29 | 无锡中嵌教育装备科技有限公司 | A robot adaptive sliding-mode compliance force control method |
CN115937136A (en) * | 2022-12-05 | 2023-04-07 | 上海飒智智能科技有限公司 | Embedded active visual servo control method, system and defect identification method |
CN115741725A (en) * | 2022-12-26 | 2023-03-07 | 山东旗帜信息有限公司 | Control method and system for robot in net-free warehouse |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0152594A2 (en) * | 1984-02-18 | 1985-08-28 | Telefunken Systemtechnik Gmbh | Means for recording, analysing by measuring techniques, and/or controlling successive phases of technical processes |
US5608847A (en) * | 1981-05-11 | 1997-03-04 | Sensor Adaptive Machines, Inc. | Vision target based assembly |
US20030187548A1 (en) * | 2002-03-29 | 2003-10-02 | Farhang Sakhitab | Methods and apparatus for precision placement of an optical component on a substrate and precision assembly thereof into a fiberoptic telecommunication package |
JP2004009209A (en) * | 2002-06-06 | 2004-01-15 | Yaskawa Electric Corp | Teaching device for robot |
CN102785249A (en) * | 2011-05-16 | 2012-11-21 | 精工爱普生株式会社 | Robot control system, robot system and program |
CN104959982A (en) * | 2013-10-10 | 2015-10-07 | 精工爱普生株式会社 | Robot control system, robot, program and robot control method |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62192807A (en) * | 1986-02-20 | 1987-08-24 | Fujitsu Ltd | Robot control method |
JPH03220603A (en) * | 1990-01-26 | 1991-09-27 | Citizen Watch Co Ltd | Robot control method |
CN101522377B (en) * | 2006-10-20 | 2011-09-14 | 株式会社日立制作所 | Manipulator |
US8864652B2 (en) * | 2008-06-27 | 2014-10-21 | Intuitive Surgical Operations, Inc. | Medical robotic system providing computer generated auxiliary views of a camera instrument for controlling the positioning and orienting of its tip |
JP5239901B2 (en) * | 2009-01-27 | 2013-07-17 | 株式会社安川電機 | Robot system and robot control method |
JP5509859B2 (en) * | 2010-01-13 | 2014-06-04 | 株式会社Ihi | Robot control apparatus and method |
JP4837116B2 (en) * | 2010-03-05 | 2011-12-14 | ファナック株式会社 | Robot system with visual sensor |
CN102059703A (en) * | 2010-11-22 | 2011-05-18 | 北京理工大学 | Self-adaptive particle filter-based robot vision servo control method |
JP5074640B2 (en) * | 2010-12-17 | 2012-11-14 | パナソニック株式会社 | Control device and control method for elastic actuator drive mechanism, and control program |
CN103517789B (en) * | 2011-05-12 | 2015-11-25 | 株式会社Ihi | motion prediction control device and method |
JP5834545B2 (en) * | 2011-07-01 | 2015-12-24 | セイコーエプソン株式会社 | Robot, robot control apparatus, robot control method, and robot control program |
CN102501252A (en) * | 2011-09-28 | 2012-06-20 | 三一重工股份有限公司 | Method and system for controlling movement of tail end of executing arm |
JP6000579B2 (en) * | 2012-03-09 | 2016-09-28 | キヤノン株式会社 | Information processing apparatus and information processing method |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5608847A (en) * | 1981-05-11 | 1997-03-04 | Sensor Adaptive Machines, Inc. | Vision target based assembly |
EP0152594A2 (en) * | 1984-02-18 | 1985-08-28 | Telefunken Systemtechnik Gmbh | Means for recording, analysing by measuring techniques, and/or controlling successive phases of technical processes |
US20030187548A1 (en) * | 2002-03-29 | 2003-10-02 | Farhang Sakhitab | Methods and apparatus for precision placement of an optical component on a substrate and precision assembly thereof into a fiberoptic telecommunication package |
JP2004009209A (en) * | 2002-06-06 | 2004-01-15 | Yaskawa Electric Corp | Teaching device for robot |
CN102785249A (en) * | 2011-05-16 | 2012-11-21 | 精工爱普生株式会社 | Robot control system, robot system and program |
CN104959982A (en) * | 2013-10-10 | 2015-10-07 | 精工爱普生株式会社 | Robot control system, robot, program and robot control method |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104959982A (en) * | 2013-10-10 | 2015-10-07 | 精工爱普生株式会社 | Robot control system, robot, program and robot control method |
CN104965489A (en) * | 2015-07-03 | 2015-10-07 | 昆山市佰奥自动化设备科技有限公司 | CCD automatic positioning assembly system and method based on robot |
CN108573504A (en) * | 2017-03-13 | 2018-09-25 | 韩国科学技术研究院 | 3D image generation method and system for analyzing plant phenotype |
CN108573504B (en) * | 2017-03-13 | 2022-10-04 | 韩国科学技术研究院 | 3D image generation method and system for analyzing plant phenotype |
CN111095139A (en) * | 2017-07-20 | 2020-05-01 | 西门子股份公司 | Method and system for detecting abnormal state of machine |
CN111095139B (en) * | 2017-07-20 | 2024-03-12 | 西门子股份公司 | Method and system for detecting abnormal state of machine |
US20200368918A1 (en) * | 2017-08-25 | 2020-11-26 | Fanuc Corporation | Robot system |
US11565427B2 (en) * | 2017-08-25 | 2023-01-31 | Fanuc Corporation | Robot system |
CN111565293B (en) * | 2019-02-14 | 2022-07-22 | 电装波动株式会社 | Apparatus and method for analyzing state of manual work of worker, and medium |
CN111565293A (en) * | 2019-02-14 | 2020-08-21 | 电装波动株式会社 | Device, method, and program for analyzing state of manual work by operator |
CN110102490A (en) * | 2019-05-23 | 2019-08-09 | 北京阿丘机器人科技有限公司 | The assembly line packages device and electronic equipment of view-based access control model technology |
CN110102490B (en) * | 2019-05-23 | 2021-06-01 | 北京阿丘机器人科技有限公司 | Assembly line parcel sorting device based on vision technology and electronic equipment |
CN112076947A (en) * | 2020-08-31 | 2020-12-15 | 博众精工科技股份有限公司 | Bonding equipment |
CN114466180A (en) * | 2021-12-29 | 2022-05-10 | 北京奕斯伟计算技术有限公司 | Camera testing method, testing device, mounting method and mounting device |
Also Published As
Publication number | Publication date |
---|---|
CN104802174A (en) | 2015-07-29 |
CN104959982A (en) | 2015-10-07 |
CN104802166A (en) | 2015-07-29 |
CN104802166B (en) | 2016-09-28 |
CN108081268A (en) | 2018-05-29 |
CN104802174B (en) | 2016-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104802166B (en) | Robot control system, robot, program and robot control method | |
CN108453701B (en) | Method for controlling robot, method for teaching robot, and robot system | |
US9089971B2 (en) | Information processing apparatus, control method thereof and storage medium | |
JP5904676B2 (en) | Apparatus and method for robust calibration between machine vision system and robot | |
US9884425B2 (en) | Robot, robot control device, and robotic system | |
JP2018176334A (en) | Information processing device, measurement device, system, interference determination method and article manufacturing method | |
CN107639653A (en) | control device, robot and robot system | |
WO2005072917A1 (en) | Machine vision controlled robot tool system | |
JP2016000442A (en) | Robot, robot system and control device | |
JP2018167334A (en) | Teaching device and teaching method | |
JP2019069493A (en) | Robot system | |
US20190030722A1 (en) | Control device, robot system, and control method | |
EP3936286A1 (en) | Robot control device, robot control method, and robot control program | |
CN115003464A (en) | Robot system | |
CN115194755A (en) | Apparatus and method for controlling robot to insert object into insertion part | |
JP2018051634A (en) | Robot control device, robot, robot system, and posture identification device | |
JP2015074061A (en) | Robot control device, robot system, robot, robot control method and robot control program | |
JP6390088B2 (en) | Robot control system, robot, program, and robot control method | |
JP2682763B2 (en) | Automatic measurement method of operation error of robot body | |
JP6217322B2 (en) | Robot control apparatus, robot, and robot control method | |
JP2016182648A (en) | Robot, robot control device and robot system | |
EP3224004B1 (en) | Robotic system comprising a telemetric device with a laser measuring device and a passive video camera | |
JP6337445B2 (en) | Robot, processing apparatus, and inspection method | |
JP2016203282A (en) | Robot with mechanism for changing end effector attitude | |
Shauri et al. | Sensor integration and fusion for autonomous screwing task by dual-manipulator hand robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20150429 |