CN109863365B - Method, electronic device and system for picking up objects from container - Google Patents
- Publication number
- CN109863365B CN109863365B CN201680089846.4A CN201680089846A CN109863365B CN 109863365 B CN109863365 B CN 109863365B CN 201680089846 A CN201680089846 A CN 201680089846A CN 109863365 B CN109863365 B CN 109863365B
- Authority
- CN
- China
- Prior art keywords
- image
- container
- pixels
- robotic manipulator
- electronic device
- Prior art date
- Legal status: Active (the status listed is an assumption by Google Patents and is not a legal conclusion)
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1612—Programme controls characterised by the hand, wrist, grip control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39543—Recognize object and plan hand shapes in grasping movements
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40053—Pick 3-D object from pile of objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Orthopedic Medicine & Surgery (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
A method (100) of picking up an object from a container (230) by means of a robotic manipulator (211) is disclosed and comprises: acquiring a first image comprising pixels representing grayscale information of an object (220) and a container (230); acquiring a second image comprising pixels representing 3D spatial information of the object (220) and the container (230); identifying at least one surface of the object (220) in the first image based on the grayscale information; identifying pixels of the at least one surface of the object (220) in the second image based on the identified at least one surface of the object (220) in the first image and the 3D spatial information; and controlling the robotic manipulator (211), based on the identified pixels in the second image, to approach the identified at least one surface to pick up the object (220). An apparatus, system or method according to the present disclosure provides an improved solution that lowers the computational resource requirements for image analysis. In addition, the processing speed is greatly increased, so the picking action of the robotic manipulator (211) is fast and accurate.
Description
Technical Field
Example embodiments disclosed herein relate generally to a method of picking objects from a container by a robotic manipulator, and also to an electronic device for implementing the method and a system for picking objects from a container.
Background
In the logistics industry and other industries, robots are widely used to pick objects from large containers. Typically, objects such as boxes are randomly placed in the container. The containers are typically moved on a conveyor belt, and the conveyor belt stops when the robotic manipulator is ready to pick a box from the container.
Accordingly, there is a need in the industry to quickly pick up boxes from a container so that the container can be emptied in a shorter time. After a container is emptied, the next container is put in place for the robotic manipulator to empty. Faster emptying of boxes from a container can result in significant overall efficiency improvements.
Currently, 3D cameras can be used to facilitate the pick-up process. Based on the 3D images captured by the 3D camera, the robotic manipulator can approach one of the many surfaces of a box in the container that is relatively easy to pick up from. In general, a robot or system may treat the top surface of an object as the easiest surface to pick from, since the top surface is less likely to be blocked by other objects. However, computing the surface distribution from the captured 3D image in order to determine the surface to be approached requires a significant amount of computing resources. Therefore, the analysis of the 3D image takes time, which is not satisfactory for the logistics industry.
Disclosure of Invention
Example embodiments disclosed herein provide a method of picking objects from a container with a robotic manipulator.
In one aspect, example embodiments disclosed herein provide a method of picking objects from a container with a robotic manipulator. The method comprises the following steps: acquiring a first image including pixels representing grayscale information of an object and a container; acquiring a second image comprising pixels representing 3D spatial information of the object and the container; identifying at least one surface of an object in the first image based on the grayscale information; identifying pixels of at least one surface of the object in the second image based on the identified at least one surface of the object in the first image and the 3D spatial information; and controlling the robotic manipulator to approach the identified at least one surface to pick up the object based on the identified pixels in the second image.
In another aspect, example embodiments disclosed herein provide an electronic device. The electronic device includes a processing unit and a memory coupled to the processing unit and storing instructions for execution by the processing unit. The instructions, when executed by the processing unit, cause the apparatus to perform acts comprising: identifying at least one surface of the object in a first image based on the grayscale information of the object and the container, the first image containing pixels representing grayscale information of the object and the container; identifying pixels of the at least one surface of the object in a second image based on the identified at least one surface of the object in the first image and the 3D spatial information of the object and the container, the second image containing pixels representing the 3D spatial information of the object and the container; and controlling the robotic manipulator to approach the identified at least one surface to pick up the object based on the identified pixels in the second image.
In yet another aspect, example embodiments disclosed herein provide a system for picking objects from a container. The system comprises: a robot comprising a robotic manipulator for picking up an object, a camera, and an electronic device as described above. The camera is configured to: acquire a first image containing pixels representing grayscale information of the object and the container; and acquire a second image containing pixels representing 3D spatial information of the object and the container.
It will be appreciated from the following description that an apparatus, system or method according to the present disclosure provides an improved solution that lowers the computational resource requirements for image analysis. By taking both the grayscale information and the 3D spatial information into account, the processing speed is greatly increased, and thus the picking action of the robotic manipulator is fast and accurate.
Drawings
The foregoing and other objects, features and advantages of the example embodiments disclosed herein will become more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Several exemplary embodiments disclosed herein will be illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 shows a flow diagram of a method of picking an object from a container by a robotic manipulator according to an example embodiment;
FIG. 2 illustrates an example work environment for a robot to pick up objects in a container, according to another example embodiment; and
fig. 3-9 show images after each processing stage, respectively, according to an example embodiment.
Throughout the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The subject matter described herein will now be discussed with reference to several example embodiments. These examples are discussed only for the purpose of enabling those skilled in the art to better understand and thus implement the subject matter described herein, and do not imply any limitation on the scope of the subject matter.
The terms "include" or "comprise" and variations thereof should be understood as an open-ended term meaning "including, but not limited to". The term "or" should be understood as "and/or" unless the context clearly dictates otherwise. The term "based on" should be understood as "based at least in part on". The term "operable to" means that a function, action, motion, or state may be achieved through an operation caused by a user or an external mechanism. The terms "one embodiment" and "an embodiment" should be understood as "at least one embodiment". The term "another embodiment" should be understood as "at least one other embodiment". Unless specified or limited otherwise, the terms "mounted," "connected," "supported," and "coupled" and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, "connected" and "coupled" are not restricted to physical or mechanical connections or couplings. In the description below, like reference numerals and characters are used to describe the same, similar or corresponding parts in the several views of fig. 1-10. Other definitions, both explicit and implicit, may be included below.
Fig. 1 shows a flow diagram of a method 100 for picking objects from a container by a robotic manipulator according to an example embodiment. Fig. 2 illustrates an example work environment 200 for a robot 210 to pick up an object 220 in a container 230. Figs. 3-9 show images 300-900 after each processing stage, respectively. Hereinafter, the method according to the present disclosure will be explained in the order shown in the flowchart of fig. 1. For each block and sub-block (if any) in the method 100, the corresponding image as shown in any of figs. 2-9 will be described accordingly.
Fig. 2 shows a system 200 for picking up an object 220 from a container 230 in an actual working environment. The system 200 includes a robot 210 having a robotic manipulator 211 for picking up an object 220 and a camera (not shown) that may be located near a tip of the robotic manipulator 211. The camera may be used to acquire images containing the necessary information about the object 220 and the container 230, as will be described in detail with reference to the method 100 in the following paragraphs. Further, the system 200 includes an electronic device, typically located somewhere in the robot 210. The electronic device includes a processing unit and a memory coupled to the processing unit and storing instructions for execution by the processing unit; the instructions, when executed by the processing unit, cause the device to perform the actions in method 100.
The container 230 is placed on a conveyor belt 240, and the robot 210 is secured to a table 250. When a container 230 moving on the conveyor belt 240 reaches an area where the robotic manipulator 211 of the robot 210 can access the entire container 230, the conveyor belt 240 stops. In the configuration of fig. 2, the robotic manipulator 211 may enter the space within the container 230 and grasp any object 220. Such a gripping action may be accomplished by any means known in the art. For example, a pneumatic suction cup may be used to adhere to a surface of the object. Other gripping means typically used on robotic manipulators exist, and they generally require a specific surface of the object to be identified in space in order to facilitate the gripping action.
Fig. 2 shows the robotic manipulator 211 approaching a top surface of the object 220 in a direction substantially perpendicular to that surface, so that the top surface adheres to the robotic manipulator 211. If the robotic manipulator 211 attempted to approach the object 220 from another angle or at another surface, it might be difficult to pick up the object 220 due to the shape of the object or the limited space within the container 230.
It will be appreciated that how the robotic manipulator 211 approaches the object 220 depends to a large extent on the shape and size of the object 220 and the type of robotic manipulator 211. For example, where the object in the container is a rectangular box, approaching one of the six faces of the box may be effective. In other cases, approaching one edge of an object may be effective. Before actually running the conveyor belt, the user can configure how objects of a given size are to be picked up by a given type of robotic manipulator. In any case, in order to successfully pick up an object, it is always beneficial to identify the exposed surfaces of the object and their positions within the space of the container.
For example, the method 100 may be performed for picking up one object 220 from the container 230, assuming that the container 230 is stationary with respect to the robot 210 and placed within an area where the robotic manipulator 211 is able to pick up any object 220 in the container 230. By repeating the method 100, the robotic manipulator 211 may pick up all objects 220 one after the other. Once a particular container 230 is emptied (i.e., leaving an empty container), another container having multiple objects is moved by the conveyor belt to the area to replace the empty container. The robotic manipulator is then controlled to pick up the objects one by one from the new container.
At block 101, a first image is acquired containing pixels representing grayscale information of an object and a container. The first image may be captured by a digital camera that includes a lens, a sensor that converts light into a signal, and a processor for generating the first image. The captured image is typically a bitmap image characterized by a matrix of pixels. Each pixel is assigned some value representing color and/or brightness. To effectively identify the boundary or edge of an object within the perspective of the first image, only the luminance information is used. This means that the captured first image is converted into a grayscale image.
It should be understood that, herein, luminance information may be used interchangeably with grayscale information, and an image containing grayscale information does not mean that it excludes color information. In other words, the captured first image may contain both color information and grayscale information, although only the grayscale information is utilized in the example of block 101. That is, according to embodiments of the present disclosure, the color information of an image may be retained in some cases, and the use of the color information may also be beneficial. In addition, throughout the specification, "boundary" and "edge" may be used interchangeably; both terms may refer to the edge between surfaces or to the peripheral outline of the object.
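As a minimal illustration only (the patent does not name any library or file format), the conversion of a captured color bitmap into the grayscale first image could look as follows in Python with OpenCV; the file name is a placeholder:

```python
import cv2

# Illustrative only: load a captured first image (the file name is an assumption).
first_image = cv2.imread("first_image.png")  # BGR bitmap, i.e. a matrix of pixels

# Keep only the luminance: convert the color bitmap to a single-channel grayscale image.
first_image_gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
```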
Fig. 3 shows an example of a first image: a grayscale photograph showing a perspective view of a portion of a robot 310, a plurality of objects 320 in the form of rectangular boxes, and the border of a container 330. As can be seen from the grayscale picture, each object 320 has edges and surfaces that are distinguishable due to the abrupt change in brightness from one surface to another.
At block 102, a second image is acquired containing pixels representing 3D spatial information of the object and the container. The second image may be captured by a 3D camera. One example of such a 3D camera is a binocular camera that outputs 3D point cloud data of objects in space. The 3D point cloud data records the position and orientation of the objects in the perspective view. In some embodiments, the 3D camera used at block 102 may be integrated with the camera used at block 101. In other words, the same binocular camera is capable of capturing a grayscale bitmap image as well as an image including 3D spatial information. In this way, the two images share the same perspective, and as a result, the positions of the surfaces and edges are consistent in both images. In an alternative embodiment, two separate cameras may be used for blocks 101 and 102.
As an example, fig. 4 shows an actual image including 3D spatial information. The image shares the same perspective as the image generated at block 101 and thus records the 3D spatial information of the container 430 and the objects 420. The container 430 and the objects 420 correspond to the container 330 and the objects 320 in fig. 3, respectively. However, whereas the first image of fig. 3 is a matrix of pixels each carrying grayscale or brightness information, the second image of fig. 4 is a matrix of pixels each carrying 3D point cloud data. For one pixel, the 3D point cloud data may include X, Y and Z values (the three axes in 3D space) representing the coordinates of the spatial location of the point. For example, one method of recording 3D point cloud data is to map the three axes to the red, green and blue channels of a pixel, so that the magnitude along each axis is represented by the level of the corresponding color. It should be understood that other three-channel representations for recording spatial information are also possible.
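A rough sketch of how such a three-channel encoding might be decoded back into per-pixel X, Y, Z coordinates is given below; the function name and the scale and offset parameters are assumptions for illustration, since a real 3D camera SDK would normally supply calibrated coordinates directly:

```python
import numpy as np

def decode_point_cloud(second_image, scale=(1.0, 1.0, 1.0), offset=(0.0, 0.0, 0.0)):
    """Decode a 3-channel second image whose channel levels encode the X, Y, Z
    coordinates of each pixel's 3D point. Scale and offset are assumed
    calibration values, not values from the patent."""
    channels = second_image.astype(np.float32)
    x = channels[:, :, 0] * scale[0] + offset[0]
    y = channels[:, :, 1] * scale[1] + offset[1]
    z = channels[:, :, 2] * scale[2] + offset[2]
    # H x W x 3 array of 3D points sharing the first image's perspective.
    return np.dstack([x, y, z])
```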
From the second image captured by the 3D camera, the position and orientation of each object and of each surface can be directly extracted. However, direct extraction is typically time- and resource-consuming. Also, as can be seen from fig. 4, when objects are close to each other, the boundaries of different objects 420 are sometimes too close. In such a case, the generated second image may have a large region in which several surfaces are connected or merged together, resulting in boundaries that are difficult to identify. By analyzing the second image alone, false identifications of surfaces may often occur. To solve the above problem, the first image containing the grayscale information is useful for determining the distribution of the boundaries/edges in the 2D picture before analyzing the second image.
At block 103, at least one surface of an object in the first image is identified based on the grayscale information from the first image. The operations at block 103 may be completed in three sub-blocks, as will be described with reference to figs. 5-7. Fig. 5 shows a perspective view of the first image adjusted by removing extraneous regions such as the border of the container. As a result, the objects 520 are left, which correspond to the objects 320 in fig. 3.
Then, as shown in fig. 6, the edges of the objects are extracted based on the grayscale information. Because each surface of an object faces a different direction (orientation), the reflected light from each surface is captured differently by the camera. As a result, by analyzing the grayscale information in the first image, the edges of the objects can be identified. As shown, the objects 620 (corresponding to the objects 320 in fig. 3 and the objects 520 in fig. 5) are marked by highlighting the edges/boundaries of the objects 620 in the image of fig. 6.
Fig. 7 shows another highlighted image in which the edges/borders are thickened to further distinguish the surfaces of the same object or of different objects. In some embodiments, this may be accomplished, for example, by performing an erosion process on the edges. The erosion process is a common image-processing operation for eliminating noisy or meaningless points within a boundary. In addition, an expansion (dilation) process, which is typically used to eliminate noisy or meaningless points outside a boundary, may be performed before the erosion process. Thus, by means of these image processing techniques, further distinguished surfaces 720 (corresponding to the objects 320 in fig. 3, the objects 520 in fig. 5 and the objects 620 in fig. 6) in the first image may be acquired. Since morphological operations such as dilation and erosion are prior art, they are not discussed in detail in this context.
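The edge extraction and expansion/erosion steps of block 103 could, as one possible reading, be sketched with standard OpenCV operations as below; the Canny thresholds and the kernel size are assumptions, not values from the patent:

```python
import cv2
import numpy as np

def extract_object_edges(gray, low_threshold=50, high_threshold=150, kernel_size=5):
    """Sketch only: extract edges from the grayscale first image, then apply an
    expansion (dilation) followed by an erosion to clean and consolidate them."""
    edges = cv2.Canny(gray, low_threshold, high_threshold)  # edges from grayscale information
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    edges = cv2.dilate(edges, kernel, iterations=1)          # expansion: close small gaps in the edge map
    edges = cv2.erode(edges, kernel, iterations=1)           # erosion: remove small noisy points
    return edges
```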
It should be noted that certain operations are not necessarily required in the above process. For example, the boundaries of the container may be maintained in the image, and the expansion or erosion operations may be omitted. With the operations at block 103, edges/boundaries of the object are identified and extracted. Thus, after block 103, the distribution of the edges in terms of their positions is obtained, which will be used later at block 104. Furthermore, after block 103, the surface and edges of the object are typically identified at the same time, since the surface is surrounded by some edges, and the edges in turn surround the surface. Surfaces or edges cannot exist alone, and thus a labeled surface also represents some labeled edge, and vice versa.
Still referring to fig. 1, at block 104, pixels of at least one surface of an object in the second image are identified based on the identified at least one surface (or edge) of the object in the first image and the 3D spatial information. In particular, the identified edges or surfaces in the first image indicate the distribution of the surfaces or edges of the objects, and the perspective of the first image substantially corresponds to the perspective of the second image. Therefore, the position information extracted from the first image can be seamlessly applied to the second image. For example, only the regions surrounded by the identified edges in the first image may be retained, with the remaining pixels in the second image removed. With the processing at block 104, the number of active pixels is greatly reduced, as shown in fig. 8. As a result, the computational power required to analyze the position and orientation of the surfaces in space is significantly reduced, and thus the time required to perform the calculations is advantageously reduced.
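One possible way to apply the edge distribution from the first image to the second image, keeping only the 3D data of pixels inside enclosed regions, is sketched below; treating components that touch the image border as background is a choice made for this illustration, not something specified in the patent:

```python
import cv2
import numpy as np

def mask_point_cloud_by_edges(edge_map, point_cloud):
    """Keep 3D data only for pixels inside regions enclosed by the identified edges;
    all remaining pixels of the second image are cleared. Illustrative sketch only."""
    h, w = edge_map.shape
    # Non-edge pixels form connected regions; each enclosed surface becomes one component.
    num_labels, labels = cv2.connectedComponents(cv2.bitwise_not(edge_map))

    masked = np.zeros_like(point_cloud)
    for label in range(1, num_labels):
        component = labels == label
        ys, xs = np.nonzero(component)
        # Components touching the image border are treated as background, not surfaces.
        if ys.min() == 0 or xs.min() == 0 or ys.max() == h - 1 or xs.max() == w - 1:
            continue
        masked[component] = point_cloud[component]  # retain 3D data only inside enclosed regions
    return masked, labels
```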
In fig. 8, there are still some unwanted regions 801. This may be due to some erroneously identified edges in the analysis of the first image; the areas 801 surrounded by these edges may remain in the second image after block 104. These unwanted regions 801 may additionally be removed using existing image processing algorithms, so that pixels outside the objects are removed from the second image. For example, where only cubes or rectangular boxes are involved, the surfaces are known to be quadrilateral, and thus any region whose shape is not quadrilateral may be eliminated. One example of a resulting image containing mainly the spatial information of the object surfaces is shown in fig. 9. By using this spatial information, the robot can easily determine the spatial distribution of the boxes in the container and then decide which surface is the easiest for the robotic manipulator to pick from.
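For the box-only case described above, a sketch of this cleanup step might filter the retained regions by shape, keeping only roughly convex quadrilateral outlines; the 2% approximation tolerance and the input (a binary mask of the pixels retained at block 104) are assumptions:

```python
import cv2
import numpy as np

def keep_quadrilateral_regions(region_mask):
    """Sketch only: where the objects are rectangular boxes, discard any retained
    region whose outline is not roughly a convex quadrilateral."""
    mask = (region_mask > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    cleaned = np.zeros_like(mask)
    for contour in contours:
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            # Keep only regions that look like box faces (quadrilaterals).
            cv2.drawContours(cleaned, [contour], -1, 255, thickness=cv2.FILLED)
    return cleaned
```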
Then, at block 105, based on the identified pixels in the second image, the robotic manipulator is controlled to approach the identified at least one surface to pick up the object. Generally, the top surface of an object is the most accessible for the robotic manipulator. Thus, in one embodiment, the top surface is determined from among the surfaces identified in the second image. A control command is then issued so that the robotic manipulator approaches the top surface to pick up the corresponding object.
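A simplified sketch of the surface selection in block 105 is given below; it assumes that the surface with the largest mean Z value is the top surface closest to the manipulator, which depends on the camera's coordinate convention, and the function and variable names are illustrative:

```python
import numpy as np

def choose_top_surface(masked_point_cloud, labels):
    """Among the retained surface regions, pick the one whose 3D points are highest
    on average (assumed to be 'closest to the manipulator') and return its 3D
    centroid as the approach target. Sketch only."""
    best_label, best_height = None, -np.inf
    for label in np.unique(labels):
        if label == 0:                                   # label 0 marks edges/background here
            continue
        points = masked_point_cloud[labels == label]     # N x 3 points of this surface
        if points.size == 0:
            continue
        mean_height = points[:, 2].mean()                # mean Z of the surface's 3D points
        if mean_height > best_height:
            best_label, best_height = label, mean_height

    if best_label is None:
        return None, None
    target = masked_point_cloud[labels == best_label].mean(axis=0)  # surface centroid in 3D
    # A motion controller (not shown) would then command the manipulator to approach
    # this point along a direction roughly perpendicular to the surface.
    return best_label, target
```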
It should be understood that the sequence of method 100 need not be the sequence discussed above. For example, block 102 for acquiring the second image may precede block 101 for acquiring the first image, or may follow block 103 for identifying a surface or edge of the object. Further, the objects may have different numbers of surfaces, and the shape of the objects may be regular (e.g., cubes, boxes, pyramids, etc.) or irregular (e.g., chamfered or rounded objects).
While operations are depicted in the above description in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Likewise, although several details are included in the above discussion, these should not be construed as limitations on the scope of the subject matter described herein, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. In another aspect, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (17)
1. A method of picking an object from a container with a robotic manipulator, comprising:
acquiring a first image comprising pixels representing grayscale information of the object and the container;
acquiring a second image comprising pixels representing 3D spatial information of the object and the container;
identifying at least one surface of the object in the first image based on the grayscale information;
identifying pixels of the at least one surface of the object in the second image based on the identified at least one surface of the object in the first image and the 3D spatial information; and
controlling the robotic manipulator to approach the identified at least one surface to pick up the object based on the identified pixels in the second image.
2. The method of claim 1, wherein acquiring the second image comprises:
capturing the second image by a 3D camera; and
determining a value of the 3D point cloud data for each pixel in the captured second image.
3. The method of claim 2, wherein the values of the 3D point cloud data for each pixel comprise values in a first dimension, values in a second dimension, and values in a third dimension.
4. The method of claim 1, wherein identifying at least one surface of the object in the first image comprises:
extracting edges of the object in the first image based on the grayscale information; and
performing an erosion process on the edge to identify the at least one surface of the object in the first image.
5. The method of claim 4, wherein identifying at least one surface of the object in the first image further comprises, prior to performing the erosion process:
performing an expansion process on the edge.
6. The method of claim 1, wherein identifying pixels of the at least one surface of the object comprises:
removing pixels outside the object from the second image.
7. The method of claim 1, wherein controlling the robotic manipulator further comprises:
determining a top surface from the at least one surface that is closest to the robotic manipulator; and
controlling the robotic manipulator to approach the closest surface to pick up the object.
8. The method of any one of claims 1 to 7, wherein the object is a rectangular box.
9. An electronic device, comprising:
a processing unit; and
a memory coupled to the processing unit and storing instructions for execution by the processing unit, the instructions, when executed by the processing unit, causing the apparatus to perform acts comprising:
identifying at least one surface of an object in a first image based on grayscale information of the object and a container, the first image containing pixels representing grayscale information of the object and the container;
identifying pixels of the at least one surface of the object in a second image based on the identified at least one surface of the object in the first image and the 3D spatial information of the object and the container, the second image containing pixels representing the 3D spatial information of the object and the container; and
controlling a robotic manipulator to approach the identified at least one surface to pick up the object based on the identified pixels in the second image.
10. The electronic device of claim 9, wherein the second image is captured by a 3D camera, and the instructions, when executed by the processing unit, further cause the device to perform acts comprising:
determining a value of the 3D point cloud data for each pixel in the captured second image.
11. The electronic device of claim 10, wherein the values of the 3D point cloud data for each pixel comprise values in a first dimension, values in a second dimension, and values in a third dimension.
12. The electronic device of claim 9, wherein the instructions, when executed by the processing unit, further cause the device to perform acts comprising:
extracting edges of the object in the first image based on the grayscale information; and
performing an erosion process on the edge to identify the at least one surface of the object in the first image.
13. The electronic device of claim 12, wherein the instructions, when executed by the processing unit, further cause the device to perform actions comprising, prior to performing the erosion process:
performing an expansion process on the edge.
14. The electronic device of claim 9, wherein the instructions, when executed by the processing unit, further cause the device to perform acts comprising:
removing pixels outside the object from the second image.
15. The electronic device of claim 9, wherein the instructions, when executed by the processing unit, further cause the device to perform acts comprising:
determining a top surface from the at least one surface that is closest to the robotic manipulator; and
controlling the robotic manipulator to approach the closest surface to pick up the object.
16. The electronic device of any of claims 9-15, wherein the object is a rectangular box.
17. A system for picking objects from a container, comprising:
a robot including a robot manipulator for picking up the object;
a camera configured to:
acquiring a first image comprising pixels representing grayscale information of the object and the container; and
acquiring a second image comprising pixels representing 3D spatial information of the object and the container; and
the electronic device of any of claims 9-16.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/102932 WO2018072208A1 (en) | 2016-10-21 | 2016-10-21 | Method, electronic device and system of picking an object from a container |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109863365A CN109863365A (en) | 2019-06-07 |
CN109863365B true CN109863365B (en) | 2021-05-07 |
Family
ID=62018244
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680089846.4A Active CN109863365B (en) | 2016-10-21 | 2016-10-21 | Method, electronic device and system for picking up objects from container |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109863365B (en) |
WO (1) | WO2018072208A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615658B (en) * | 2018-12-04 | 2021-06-01 | 广东拓斯达科技股份有限公司 | Method and device for taking articles by robot, computer equipment and storage medium |
CN110000783B (en) * | 2019-04-04 | 2021-04-30 | 上海节卡机器人科技有限公司 | Visual grabbing method and device for robot |
WO2021053750A1 (en) * | 2019-09-18 | 2021-03-25 | 株式会社Fuji | Work robot and work system |
FR3135555B1 (en) * | 2022-05-03 | 2024-08-30 | Innodura Tb | Method for gripping objects arranged in bulk |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1833258A (en) * | 2003-08-07 | 2006-09-13 | 皇家飞利浦电子股份有限公司 | Image object processing |
CN1834582A (en) * | 2005-03-15 | 2006-09-20 | 欧姆龙株式会社 | Image processing method, three-dimensional position measuring method and image processing apparatus |
CN102601797A (en) * | 2012-04-07 | 2012-07-25 | 大连镔海自控股份有限公司 | A high-speed parallel robot with three-dimensional translation and one-dimensional rotation |
CN103150544A (en) * | 2011-08-30 | 2013-06-12 | 精工爱普生株式会社 | Method and apparatus for object pose estimation |
US8565515B2 (en) * | 2009-03-12 | 2013-10-22 | Omron Corporation | Three-dimensional recognition result displaying method and three-dimensional visual sensor |
CN105333819A (en) * | 2014-08-15 | 2016-02-17 | 苏州北硕检测技术有限公司 | Robot workpiece assembly and form and location tolerance detection system and method based on face laser sensor |
CN106204620A (en) * | 2016-07-21 | 2016-12-07 | 清华大学 | A kind of tactile three-dimensional power detection method based on micro-vision |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100357974C (en) * | 2004-12-28 | 2007-12-26 | 北京航空航天大学 | Quick method for picking up stepped edge in sub pixel level |
JP2011083883A (en) * | 2009-10-19 | 2011-04-28 | Yaskawa Electric Corp | Robot device |
CN102721376B (en) * | 2012-06-20 | 2014-12-31 | 北京航空航天大学 | Calibrating method of large-field three-dimensional visual sensor |
DE102014105456B4 (en) * | 2014-04-16 | 2020-01-30 | Minikomp Bogner GmbH | Method for measuring the outer contour of three-dimensional measuring objects and associated measuring system |
- 2016-10-21: WO PCT/CN2016/102932 patent/WO2018072208A1/en active Application Filing
- 2016-10-21: CN CN201680089846.4A patent/CN109863365B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1833258A (en) * | 2003-08-07 | 2006-09-13 | 皇家飞利浦电子股份有限公司 | Image object processing |
CN1834582A (en) * | 2005-03-15 | 2006-09-20 | 欧姆龙株式会社 | Image processing method, three-dimensional position measuring method and image processing apparatus |
US8565515B2 (en) * | 2009-03-12 | 2013-10-22 | Omron Corporation | Three-dimensional recognition result displaying method and three-dimensional visual sensor |
CN103150544A (en) * | 2011-08-30 | 2013-06-12 | 精工爱普生株式会社 | Method and apparatus for object pose estimation |
CN102601797A (en) * | 2012-04-07 | 2012-07-25 | 大连镔海自控股份有限公司 | A high-speed parallel robot with three-dimensional translation and one-dimensional rotation |
CN105333819A (en) * | 2014-08-15 | 2016-02-17 | 苏州北硕检测技术有限公司 | Robot workpiece assembly and form and location tolerance detection system and method based on face laser sensor |
CN106204620A (en) * | 2016-07-21 | 2016-12-07 | 清华大学 | A kind of tactile three-dimensional power detection method based on micro-vision |
Non-Patent Citations (3)
Title |
---|
Automatic post-picking using MAPPOS improves particle image detection from cryo-EM micrographs; Ramin Norousi; Journal of Structural Biology; 2013-12-31; full text *
Design of an embedded automatic table-tennis-ball picking system based on OpenCV; Wang Xiaolong; Computer Measurement & Control; 2015-12-31; full text *
Design of an egg-picking manipulator based on reverse engineering; Wei Zhigang; Manufacturing Informatization; 2013-12-31; full text *
Also Published As
Publication number | Publication date |
---|---|
WO2018072208A1 (en) | 2018-04-26 |
CN109863365A (en) | 2019-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104842361B (en) | Robotic system with 3d box location functionality | |
CN110660104A (en) | Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium | |
CN109863365B (en) | Method, electronic device and system for picking up objects from container | |
WO2019189661A1 (en) | Learning dataset creation method and device | |
CN107992881A (en) | A kind of Robotic Dynamic grasping means and system | |
CN113191174B (en) | Article positioning method and device, robot and computer readable storage medium | |
US11772271B2 (en) | Method and computing system for object recognition or object registration based on image classification | |
CN105865329A (en) | Vision-based acquisition system for end surface center coordinates of bundles of round steel and acquisition method thereof | |
CN113172636B (en) | Automatic hand-eye calibration method and device and storage medium | |
CN112566758A (en) | Robot control device, robot control method, and robot control program | |
CN114419437A (en) | Workpiece sorting system based on 2D vision and control method and control device thereof | |
JP7408107B2 (en) | Systems and methods for robotic systems with object handling | |
CN116228854B (en) | Automatic parcel sorting method based on deep learning | |
CN114638891A (en) | Target detection positioning method and system based on image and point cloud fusion | |
CN117689716A (en) | Plate visual positioning, identifying and grabbing method, control system and plate production line | |
CN114092428A (en) | Image data processing method, image data processing device, electronic equipment and storage medium | |
CN109388131B (en) | Robot attitude control method and system based on angular point feature recognition and robot | |
WO2023082417A1 (en) | Grabbing point information obtaining method and apparatus, electronic device, and storage medium | |
Kucarov et al. | Transparent slide detection and gripper design for slide transport by robotic arm | |
WO2023083273A1 (en) | Grip point information acquisition method and apparatus, electronic device, and storage medium | |
US20250242498A1 (en) | System and method for pick pose estimation for robotic picking with arbitrarily sized end effectors | |
EP4592040A1 (en) | System and method for pick pose estimation for robotic picking with arbitrarily sized end effectors | |
EP4332900A1 (en) | Automatic bin detection for robotic applications | |
CN119897684B (en) | Memory assembly method, system, electronic device, storage medium and product | |
EP4406709A1 (en) | Adaptive region of interest (roi) for vision guided robotic bin picking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |