
WO2025048792A1 - End-to-end robotic grasping lifecycle and operations - Google Patents


Info

Publication number
WO2025048792A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
network model
objects
robot
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/031464
Other languages
French (fr)
Inventor
Yash SHAHAPURKAR
Eugen SOLOWJOW
Ines UGALDE DIAZ
Husnu Melih ERDOGAN
Kyle COELHO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Corp
Original Assignee
Siemens Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Corp filed Critical Siemens Corp
Priority to PCT/US2023/031464 priority Critical patent/WO2025048792A1/en
Publication of WO2025048792A1 publication Critical patent/WO2025048792A1/en
Anticipated expiration legal-status Critical
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1612Programme controls characterised by the hand, wrist, grip control

Definitions

  • Autonomous operations such as robotic grasping and manipulation, in unknown or dynamic environments present various technical challenges.
  • Autonomous operations in dynamic environments may be applied to mass customization (e.g., high-mix, low-volume manufacturing), on-demand flexible manufacturing processes in smart factories, warehouse automation in smart stores, automated deliveries from distribution centers in smart logistics, and the like.
  • robots may learn skills using machine learning or artificial intelligence (AI), in particular deep neural networks or reinforcement learning.
  • Embodiments of the invention address and overcome one or more of the shortcomings described herein by providing methods, systems, and apparatuses that automatically perform real-world tests on robots, based on results of machine learning operations and testing, so as to automatically generate neural network models suited for various applications.
  • an autonomous system includes a robot that defines an end effector configured to grasp objects.
  • the system can include a first neural network model configured to determine grasp locations on objects.
  • the system can further include a processor and a memory storing instructions that, when executed by the processor, configure the system to perform various operations.
  • the system can generate a second neural network model configured to determine grasp locations on objects.
  • the system can test the second neural network model using offline data, so as to generate machine learning results associated with the second neural network model. Based on the machine learning results, an automated test with the robot can be triggered, in which the robot implements a grasping application.
  • the system can perform the automated test with the robot.
  • the automated test can include capturing an image of a plurality of objects.
  • the second neural network can determine grasp locations on the plurality of objects.
  • the automated test can further include the robot grasping the plurality of objects at the grasp locations.
  • the system can record the robot grasping the plurality of objects, so as to generate real-world test data. Based on the real-world test data, it can be determined whether the second neural network model replaces the first neural network model in the grasping application.
  • the system determines that the second neural network model replaces the first neural network model in the grasping application, and inserts the second neural network model into a new release of the grasping application.
  • the second neural network model can be trained using first training data.
  • the system can make a determination that the second neural network model does not replace the first neural network model.
  • the first training data can be revised so as to generate second training data that is different than the first training data.
  • the second neural network model can be trained with the second training data, so as to generate a third neural network model configured to determine grasp locations on objects.
  • the system can test the third neural network model using offline data, so as to generate second machine learning results associated with the third neural network model.
  • the system can trigger a second automated test with the robot in which the robot implements the grasping application.
  • FIG. 1 shows an example autonomous system in an example physical environment that includes a robot configured to grasp objects, in accordance with an example embodiment.
  • FIG. 2 is a flow diagram that illustrates robotic operations (RobOps) that are triggered by machine learning operations (MLOps), in accordance with example embodiments.
  • RobOps robotic operations
  • MLOps machine learning operations
  • FIG. 3 illustrates a neural network model that can be included in a computing system, for instance the system shown in FIG. 1, in accordance with an example embodiment.
  • FIG. 4 illustrates example automated testing that can be defined by robotic operations depicted in FIG. 2.
  • FIG. 5 illustrates a computing environment within which embodiments of the disclosure may be implemented.
  • a well-defined operations strategy can be essential.
  • the lifecycle of operations might include, for example, and without limitation, dataset generation (e.g., synthetic or real), neural network training, neural network evaluation on synthetic or real-world data, analysis of metrics for comparing new models to legacy models, performance of live tests on target devices, test result collection, analysis of test results, and iterative improvement of the cycle.
  • the iterative cycle can refer to the cycle of operations above that are performed with the intention of improving metrics.
  • a cycle can begin with data generation and can complete with testing, so that the results of various metrics can be analyzed to create new data/models that are further trained and tested to obtain improvements in the outcome of those metrics.
  • an example of an iterative process or cycle is adding more training examples to a dataset or tweaking hyperparameters of a neural network model, with the goal of improving the underlying key performance indicators (KPIs) (e.g., grasp accuracy, runtime statistics, generalization to new objects, grasp efficiency, ergonomics, etc.).
  • KPIs key performance indicators
  • an automated machine learning (ML) operations pipeline for robotic grasping using deep learning is defined.
  • the pipeline is triggered when data or a grasping model is changed, and the pipeline is complete when the change is automatically tested with robots in a loop in the real world.
  • a physical environment or workspace can refer to any unknown or dynamic industrial environment.
  • physical environment and workspace can be used interchangeably herein, without limitation.
  • a reconstruction or model may define a virtual representation of the physical environment or workspace 100, or one or more objects 106 within the physical environment 100.
  • the object 106 can be disposed in a bin or container, for instance a bin 107, so as to be positioned for grasping.
  • bin, container, tray, box, or the like can be used interchangeably, without limitation.
  • the objects 106 can be picked from the bin 107 by one or more robots, and transported or placed in another location, for instance outside the bin 107. It will be understood that the objects 106 in FIG. 1 are mere examples, such that the objects can be alternatively shaped or define alternative structures as desired, and all such objects are contemplated as being within the scope of this disclosure.
  • the physical environment 100 can include a computerized autonomous system 102 configured to perform one or more manufacturing operations, such as assembly, transport, or the like.
  • the autonomous system 102 can include one or more robot devices or autonomous machines, for instance an autonomous machine or robot 104, configured to perform one or more industrial tasks, such as bin picking, grasping, or the like.
  • the system 102 can include one or more computing processors configured to process information and control operations of the system 102, in particular the autonomous machine 104.
  • the autonomous machine 104 can include one or more processors, for instance a processor 108, configured to process information and/or control various operations associated with the autonomous machine 104.
  • An autonomous system for operating an autonomous machine within a physical environment can further include a memory for storing modules.
  • the processors can further be configured to execute the modules so as to process information and generate models based on the information. It will be understood that the illustrated environment 100 and the system 102 are simplified for purposes of example. The environment 100 and the system 102 may vary as desired, and all such systems and environments are contemplated as being within the scope of this disclosure.
  • the robot 104 can further include a robotic arm or manipulator 110 and a base 112 configured to support the robotic manipulator 110.
  • the base 112 can include wheels 114 or can otherwise be configured to move within the physical environment 100.
  • the robot 104 can further include an end effector 116 attached to the robotic manipulator 110.
  • the end effector 116 can include one or more tools configured to grasp and/or move objects 106.
  • Example end effectors 116 include finger grippers or vacuum-based grippers.
  • the robotic manipulator 110 can be configured to move so as to change the position of the end effector 116, for example, so as to place or move objects 106 within the physical environment 100.
  • the system 102 can further include one or more cameras or sensors, for instance a depth camera or three-dimensional (3D) point cloud camera 118, configured to detect or record objects 106 within the physical environment 100.
  • the camera 118 can be mounted to the robotic manipulator 110 or otherwise configured to generate a 3D point cloud of a given scene, for instance the physical environment 100.
  • the one or more cameras of the system 102 can include one or more standard two-dimensional (2D) cameras that can record or capture images (e.g., RGB images or depth images) from different viewpoints. Those images can be used to construct 3D images.
  • a 2D camera can be mounted to the robotic manipulator 110 so as to capture images from perspectives along a given trajectory defined by the manipulator 110.
  • the camera 118 can be configured to capture images of the bin 107, and thus the objects 106, along a first or transverse direction 120.
  • a deep neural network is trained on a set of objects. Based on its training, the deep neural network can calculate grasp scores for respective regions of a given object, for instance an object within the bin 107.
  • the robot 104 and/or the system 102 can define one or more neural networks configured to learn various objects so as to identify poses, grasp points (or locations), and/or affordances of various objects that can be found within various industrial environments.
  • An example system or neural network model can be configured to learn objects and grasp locations, based on images for example, in accordance with various example embodiments. After the neural network is trained, for example, images of objects can be sent to the neural network by the robot device 104 for classification, in particular classification of grasp locations or affordances.
  • the camera 118 can define a depth camera configured to capture depth images of the workspace 100 from a perspective along the transverse direction 120.
  • the bin 107 can define a top end 109 and a bottom end 111 opposite the top end 109 along the transverse direction 120.
  • the bin 107 can further define a first side 113 and a second side 115 opposite the first side 113 along a second or lateral direction 122 that is substantially perpendicular to the transverse direction 120.
  • the bin 107 can further define a front end 117 and a rear end 119 opposite the front end 117 along a third or longitudinal direction 124 that is substantially perpendicular to both the transverse and lateral directions 120 and 122, respectively.
  • the illustrated bin 107 defines a rectangular shape, it will be understood that bins or containers can be alternatively shaped or sized, and all such bins or containers are contemplated as being within the scope of this disclosure.
  • MLOps machine learning operations
  • purely data driven deployments e.g., recommendation engines, predictive analytics etc.
  • the key metric for the business is often closely tied to the metrics of the model performance (e.g., accuracy, precision, statistical scores such as F1 scores, etc.).
  • integration of MLOps with real-world deployments at the robotics and automation level can involve key metrics, in particular the performance of the robotic systems, that can only be obtained on the robotic systems.
  • complex robotic systems e.g., robotic grasping
  • errors can result from hardware sensors (e.g., cameras), motion, calibration issues of the robot, or the like, which are separate from the traditional performance of an AI system for grasp prediction. It is further recognized herein that simulation tools can be used for robotic systems to perform experimentation and benchmarking, but there is often a gap between benchmarking results in simulation as compared to benchmarking results in the real world, which include significant noise from various sources.
  • the system 102 in particular the camera 118, can capture an image, for instance a red-green-blue depth (RGBD) image, of a given scene so as to define a captured image.
  • the scene in the captured image can include the bin 107 and a plurality of objects 106 within the bin 107.
  • the captured image can be fed into one or more grasping neural networks, for instance an example neural network or model 300 (FIG. 3), to generate a grasp output in the image frame.
  • the system 102 can determine a pose of a given grasp. For example, the grasp location on a particular object can be predicted by the grasp neural network 300.
  • the grasp location can define a 3D translation point in camera/robot world coordinates (e.g., x, y, z) together with the orientation of the point by means of normal vectors. Based on the extrinsic hand-eye calibration, that 3D point and its associated normal can be used to calculate the robot pose for executing the grasp.
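  • As an illustration of that last step, the sketch below back-projects a predicted grasp pixel into a 3D point using the camera intrinsics and then uses an extrinsic hand-eye transform to express the grasp pose in robot base coordinates, with the approach axis aligned to the surface normal. The function names and the generic NumPy formulation are assumptions for illustration, not the actual implementation.

```python
# Hypothetical sketch: from a predicted grasp pixel to a robot pose.
# Assumes camera intrinsics K (3x3), an extrinsic hand-eye transform
# T_base_cam (4x4, camera frame -> robot base frame), metric depth, and a
# surface normal expressed in the camera frame.
import numpy as np

def pixel_to_camera_point(u, v, depth, K):
    """Back-project pixel (u, v) with metric depth into the camera frame."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def grasp_pose_in_base(u, v, depth, normal_cam, K, T_base_cam):
    """Return a 4x4 grasp pose in base coordinates, with the approach (z) axis
    pointing into the surface along the negated normal."""
    p_cam = np.append(pixel_to_camera_point(u, v, depth, K), 1.0)
    p_base = (T_base_cam @ p_cam)[:3]
    n_base = T_base_cam[:3, :3] @ np.asarray(normal_cam, dtype=float)
    z_axis = -n_base / np.linalg.norm(n_base)
    # Build an orthonormal frame around the approach axis.
    ref = np.array([1.0, 0.0, 0.0]) if abs(z_axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(ref, z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x_axis, y_axis, z_axis, p_base
    return pose
```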
  • a computing system, for instance the system 102, can define one or more models or networks 300 that can be trained on a plurality of input images or input data 304.
  • the input data 304 can include depth images or maps in pixels.
  • the network can generate an output map or output 306 that can define a grasp candidate map, which can be compared to a ground truth label map, such that the parameters of the network can be continuously updated based on the differences or similarities that result from the comparison.
  • Depth images can be synthetically generated using physics and rendering engines (e.g., PyBullet and PyRender).
  • the system 102 can perform post-processing on depth images, for instance by adding simulated noise, such that the input data 304 more closely resembles real-world images.
  • the training input data is not limited to the examples described herein. That is, the data in various depth images can vary, for instance the data can include various objects (e.g., different shapes and sizes) positioned in a variety of configurations, and all such input data is contemplated as being within the scope of this disclosure.
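  • A minimal sketch of the noise post-processing described above is shown below; the noise model (additive Gaussian noise plus random depth dropout) and its parameters are illustrative assumptions, not the specific augmentation used in this disclosure.

```python
import numpy as np

def add_sensor_noise(depth, noise_std=0.005, dropout_prob=0.01, rng=None):
    """Post-process a clean synthetic depth map (meters) so it better resembles
    a real depth sensor: additive Gaussian noise plus random missing-depth
    holes. Parameters are illustrative."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = depth + rng.normal(0.0, noise_std, size=depth.shape)
    holes = rng.random(depth.shape) < dropout_prob
    noisy[holes] = 0.0  # zero depth encodes "no return", as many sensors do
    return noisy
```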
  • the example neural network 300 includes a plurality of layers, for instance an input layer 302a configured to receive data, and an output layer 303 configured to generate an image based on the input data 304.
  • the output layer 303 can define an output layer 303b that can be configured to determine grasp scores for each pixel of a given image based on gripper information.
  • the neural network 300 further includes a plurality of intermediate layers connected between the input layer 302a and the output layer 303.
  • the intermediate layers and the input layer 302a can define a plurality of convolutional layers 302.
  • the intermediate layers can further include one or more fully connected layers.
  • the convolutional layers 302 can include the input layer 302a configured to receive training and test data, such as depth images from a variety of camera heights, or gripper dimensions for a variety of sized suction-based or finger-based grippers.
  • the convolutional layers 302 can further include a final convolutional or last feature layer 302c, and one or more intermediate or second convolutional layers 302b disposed between the input layer 302a and the final convolutional layer 302c.
  • the illustrated model 300 is simplified for purposes of example.
  • models may include any number of layers as desired, in particular any number of intermediate layers, and all such models are contemplated as being within the scope of this disclosure.
  • the output layer 303 can include a first layer 303a and a second or output layer 303b connected to the first layer 303a. It will again be understood that the model is simplified for purposes of explanation, and that the model 300 is not limited to the number of layers 303.
  • the convolutional layers 302 may be locally connected, such that, for example, the neurons in the intermediate layer 302b might be connected to a limited number of neurons in the final convolutional layer 302c.
  • the convolutional layers 302 can also be configured to share connection strengths associated with the strength of each neuron.
  • the input layer 302a can be configured to receive inputs 304, for instance RGBD images of objects 106.
  • the output 306 can include one or more classifications or scores associated with the input 304.
  • the output 306 can define an output image or map that indicates a plurality of scores 308 (e.g., grasp scores) associated with various portions, for instance pixels, of the corresponding input 304.
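  • The following PyTorch sketch illustrates a model in the spirit of the network 300: a depth image in, a stack of convolutional layers, and an output head that emits one grasp score per pixel. The layer counts and sizes are illustrative assumptions, not the actual architecture of this disclosure.

```python
import torch
import torch.nn as nn

class GraspScoreNet(nn.Module):
    """Minimal fully convolutional sketch: depth image -> per-pixel grasp-score map."""

    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(            # convolutional layers (302a/302b/302c analog)
            nn.Conv2d(in_channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(                 # output layers (303a/303b analog)
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                   # one grasp score per pixel
        )

    def forward(self, depth):
        return torch.sigmoid(self.head(self.features(depth)))  # scores in [0, 1]

# Example: a batch of two 480x640 depth images -> a 480x640 grasp-score map each.
model = GraspScoreNet()
scores = model(torch.rand(2, 1, 480, 640))
```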
  • the system 102 can perform example operations 200 so as to define an operations lifecycle that includes MLOps 402 and robotic operations 404 that defines various automated testing.
  • the system 102 can determine, for instance based on images 201 of objects that are captured by the camera 118, that the class of objects is graspable by the end effector 116. For example, a physical test of a class of objects using an end-effector can be performed by feeding a given object to a vacuum gripper and checking if seal formation succeeds and the grasp remains stable.
  • the system can perform image processing-based operations such as finding planar and centric regions on an object surface in the image to validate a mathematical model of suction grasp feasibility.
  • properties of the object class can be defined, at 204.
  • test cases for the object class can be defined. Identifying test cases can include identifying properties of objects (e.g., rigidity, porosity, etc.) to evaluate the effects those properties have on maintaining a solid grasp. For example, identifying test cases can include finding faces of objects that cannot be grasped physically and/or finding challenging faces of objects that can lead to potential grasp failures. In various examples, objects with holes are not used for vacuum grasping, and objects that define a cross-sectional area that is less than the gripper size do not work on a vacuum gripper.
  • test cases further include positioning a physical object in a physical bin that exposes very few graspable regions on the object surface, thereby challenging the neural networks to find the few viable grasp poses in the experimentation.
  • object classes can be added into the MLOps 203 processes.
  • steps 208 and 212 can be triggered, thereby also triggering 402 and 404 (described herein) in order to improve the neural network so as to improve grasping performance on such new objects.
  • the system 102 can run experiments (e.g., see real-world testing operations or experiments 400 in FIG. 4) with the current version of the grasping model or neural network and record performances of such experiments.
  • the system 102 can determine whether the camera 118 limits the graspability of the object class. For example, the system 102 can assess the quality of the RGBD image for the objects in the scene (e.g., singular and in clutter). Qualities that are checked can include completeness of depth values and areas of missing depth information. For objects of a transparent/reflective nature, depth profiles of such objects can be poor, and thus the camera can become a bottleneck. Similarly, color images can depend on the tuning of the camera settings to obtain clear color images with less blur, enough brightness, etc. Thus, the camera settings can be altered to achieve acceptable quality of RGBD images for the images to be used in the MLOps pipeline 203.
  • objects that provide very low quality of RGBD images can be discarded.
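  • A simple way to express the depth-completeness check described above is sketched below; the validity threshold and the optional region-of-interest argument are illustrative assumptions.

```python
import numpy as np

def depth_quality(depth_m, roi=None, min_valid_ratio=0.95):
    """Assess completeness of a depth image: the fraction of pixels (optionally
    restricted to a bin region of interest, e.g. a boolean mask) that carry a
    valid, non-zero depth value."""
    d = depth_m if roi is None else depth_m[roi]
    valid_ratio = np.count_nonzero(np.isfinite(d) & (d > 0)) / d.size
    return valid_ratio, valid_ratio >= min_valid_ratio
```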
  • the process can proceed to 210, where the camera 118 is replaced with a better camera, for instance a camera defining a higher resolution. Thereafter the process can return to 206 so that the experiments can be performed again.
  • the process can proceed to 212, where the system can determine whether there is an issue with the data.
  • the existing datasets used to train the neural network models can be evaluated so as to determine the distribution of samples used, and to check if data is diverse so as to contain various edge cases and challenging cases.
  • the associated grasp labels used for training can be validated. Validation can include, for example and without limitation, physically examining datasets, or performing statistical operations such as ANOVA (analysis of variance).
  • the model architecture can also be evaluated with respect to the training data. For example, certain model architectures may fail to identify objects that are physically small in size, and hence the model architecture might be changed to enable it to identify small objects. Thus, the operations at 212 can determine why the performance of the current system is poor on a new object, so that the training data and/or model architecture can be changed.
  • the real-world testing operations or experiments 400 can validate or invalidate any changes to the training data or model architecture.
  • the grasping neural network (e.g., neural network 300) is based on supervised neural network training. Therefore the training dataset that is generated can consist of RGBD images and associated labels for each image.
  • training datasets can be generated locally (e.g., on-premise dataset generation) or remotely on remote servers (e.g., AWS EC2 servers, MS Azure cloud, etc.). Additionally, or alternatively, at 214, various training parameters (e.g., network architecture, network hyperparameters, etc.) can be revised based on the determination at 212. At 218, the grasping neural network can be trained with the revised training parameters and the additional training data from 214 and 216, respectively. When training datasets are generated, testing can also be performed on the datasets. For example, the datasets can be scanned for validity. In particular, for example, tests can validate the size of the dataset, the format of the dataset, redundancies in the dataset, or the like.
  • neural networks can be trained either on premise or using cloud services.
  • local tests are performed based on the updated training from 218.
  • the tests can validate the changes made to the model architecture and training data at 214 and 216, respectively. When the tests fail, the process can return to 216, where additional or alternative training data is generated.
  • the tests can fail, for example, when there are missing or incomplete data/labels; the distribution of object classes is below a predetermined threshold (e.g., a dataset can be discarded if the data count for the respective class is less than 20% of the average count of all classes); or when there are duplicate data samples.
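  • The dataset validity tests described above might be sketched as follows; the sample schema (field names such as image, labels, object_class, and image_id) is hypothetical, while the 20% class-distribution threshold follows the example given above.

```python
from collections import Counter

def validate_dataset(samples, min_class_fraction=0.2):
    """Sanity checks on a grasping dataset before training. `samples` is assumed
    to be a list of dicts with 'image', 'labels', 'object_class' and 'image_id'
    keys; the schema is illustrative, not the disclosure's actual format."""
    issues = []
    # Missing or incomplete data/labels.
    for i, s in enumerate(samples):
        if s.get("image") is None or not s.get("labels"):
            issues.append(f"sample {i}: missing image or labels")
    # Class distribution: flag classes below 20% of the average class count.
    counts = Counter(s["object_class"] for s in samples)
    avg = sum(counts.values()) / len(counts)
    for cls, n in counts.items():
        if n < min_class_fraction * avg:
            issues.append(f"class {cls}: only {n} samples (< 20% of average {avg:.0f})")
    # Duplicate data samples (here identified by image id).
    ids = [s.get("image_id") for s in samples]
    if len(ids) != len(set(ids)):
        issues.append("duplicate samples detected")
    return issues
```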
  • the model architectures can be tested to ensure that input/output layers follow the mathematical dimensionality guidelines of the given neural network, and input/output normalization is maintained with respect to the data and labels.
  • the tests at 220 can define various experiments. Each experiment can be defined as a result obtained by training some version of the neural network with some version of the dataset. In theory, unlimited such experiments can be performed with various permutations and iterations of datasets and model architectures/hyperparameters. Thus, in some cases, an experiment tracker (e.g., data version control or the like) runs an automated methodology with DevOps tools. For each experiment, various metadata can be recorded. In particular, metadata related to datasets that are used and metadata related to the models that are used can be recorded. In some cases, a series of experiments are performed that use an underlying common dataset but use different learning hyperparameters (changed at 214) for training the model at 218.
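  • As a lightweight stand-in for such experiment tracking, the sketch below records per-experiment metadata (dataset version, model configuration, offline metrics) as a JSON artifact so runs can be compared and reproduced; the function and field names are assumptions, and a real pipeline would typically rely on a dedicated tool such as data version control.

```python
import hashlib
import json
import time
from pathlib import Path

def record_experiment(dataset_version, model_config, metrics, out_dir="experiments"):
    """Persist one experiment's metadata as a JSON file and return a short run id.
    All arguments are assumed to be JSON-serializable."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "dataset_version": dataset_version,
        "model_config": model_config,
        "metrics": metrics,
    }
    run_id = hashlib.sha1(json.dumps(record, sort_keys=True).encode()).hexdigest()[:8]
    Path(out_dir).mkdir(exist_ok=True)
    Path(out_dir, f"run_{run_id}.json").write_text(json.dumps(record, indent=2))
    return run_id
```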
  • an experiment tracker e.g., data version control or the like
  • the best performing experiment can be further tested during a model testing phase of the testing performed at 220.
  • during the model testing phase at 220, a series of tests can be performed to assess the sanity of the model. Such tests can focus on local model performance for expected results. For example, tests can be run to ensure the format of the model output is the same as the expectation, values are in an expected range, etc. Invariance and perturbation tests can be performed to assess whether results of the model change when certain inputs are changed. Such tests can identify if the trained model is susceptible to any noise or if the model displays any bias or overfitting.
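  • A perturbation test of the kind described above could look like the following sketch: small input noise should not materially change the predicted grasp-score map. The noise level and tolerance are illustrative assumptions, and the model is assumed to be any PyTorch module mapping a depth tensor to a score map.

```python
import torch

def perturbation_test(model, depth, noise_std=0.002, tol=0.05):
    """Invariance/perturbation check: compare predictions on the original and a
    slightly noised input; flag the model if the mean absolute drift exceeds tol."""
    model.eval()
    with torch.no_grad():
        base = model(depth)
        perturbed = model(depth + torch.randn_like(depth) * noise_std)
    drift = (base - perturbed).abs().mean().item()
    return drift, drift <= tol
```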
  • the model can be used with real world data samples or subsets of data samples to assess the general performance of the model.
  • a master model can refer to the model that exists in production, or the model that is the current best performing model to date. Such a model can serve as the benchmark against which new experimental models are compared for possible improvement. If a given experimental model proves to be performing better at steps 402 and 404, then it can replace the master model used in production, as further described herein.
  • the results of the experimental model testing can determine which model is automatically tested, at 224. Thus, when a given experimental model performs better than the current master model with the offline test samples, the process can proceed to 224, where the experimental model can be run with the physical robotic system.
  • benchmarking tools can be used to compare the results.
  • common synthetic datasets and common real world RGBD inputs can define test cases for checking grasp accuracy with synthetic and real-world ground truth samples.
  • a series of edge cases observed in the real world can also be used as test inputs.
  • soft metrics such as, for example and without limitation, grasp distribution, number of false positives and false negatives, grasp selection for objects in a heap, number of grasps on non-graspable regions (e.g., bin bottom), and the like can be calculated.
  • the benchmarking results can be published as reports, for instance using open-source tools (e.g., CML).
  • the reports might include statistical scores, error cases of both models, and results of perturbation tests on both models.
  • a developer can then view these reports of the benchmarking to determine whether an experimental model demonstrated better performance than the current master model.
  • the given model can be used in deployment with a runtime codebase for grasping of the system 102.
  • the model can be implemented with other portions of the grasping application pipeline, such as, for example, and without limitation, image acquisition, image preprocessing, and post-inference processing.
  • the model can be tested for compatibility with the grasping application and tests for runtime, memory, etc. can be performed.
  • the system 102 can determine that the software tests at 220 are sufficiently completed, such that the operations can proceed to 224, where live tests are performed on the robot 104 using the results (model) from 220, so as to trigger the robotic operations 205.
  • example real-world testing operations 400 can be performed at 224 and 206. It will be understood that operations 400 are presented as an example, such that additional or alternative real world tests can be performed, and all such additional or alternative tests are contemplated as being within the scope of this disclosure.
  • the model from 222 and the runtime application can be deployed on a machine within the system 102 that controls the robot 104 for automated tests.
  • the physical environment 100 defines an automated testing suite that includes multiple bins, for instance two bins 107, having objects 106 randomly disposed within them.
  • the system 102, in particular the camera 118, can capture an image, for instance a red-green-blue depth (RGBD) image, of a given scene so as to define a captured image.
  • RGBD red-green-blue depth
  • the scene in the captured image can include multiple bins and the plurality of objects 106 within the bins.
  • the captured image can be input into the model from the MLOps 203.
  • the model from the MLOps 203 can compute grasp locations on the objects 106, based on the captured image, at 402.
  • the robot 104 can perform grasps based on the computed grasp locations. In some examples, the robot 104 picks and drops objects from one bin to the other and vice versa, so as to perform a plurality of grasp executions. Alternatively, or additionally, the robot 104 can drop objects outside of the bin from which the objects are grasped, or at a random position within the bin in which the object is grasped.
  • the robot 104 can be configured to shake the object after the object is grasped and before the object is dropped or placed, so as to collect further data related to the quality or strength of the grasp. While the robot 104 performs grasps, at 406, the grasps are recorded, for instance by the camera 118 or one or more other cameras within the environment 100.
  • the grasps can be recorded so as to define recorded data that can include raw data of video or photographic images, neural network outputs, whether a given grasp defines a success or failure, and video snippets of trials (grasps).
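  • A hedged sketch of this automated test loop is shown below; camera, model, robot, and bins are assumed interfaces introduced only for illustration, not the system's actual APIs.

```python
def run_automated_grasp_test(camera, model, robot, bins, num_trials=100):
    """Illustrative live test loop: capture an image, ask the model for a grasp,
    execute a pick-and-place between two bins, and record each trial outcome."""
    trials = []
    for i in range(num_trials):
        rgbd = camera.capture()                      # RGBD image of the scene
        grasp = model.predict_grasp(rgbd)            # e.g., pixel location, score, normal
        source, target = bins[i % 2], bins[(i + 1) % 2]
        success = robot.pick_and_place(grasp, source, target)  # e.g., vacuum seal check
        trials.append({
            "trial": i,
            "grasp": grasp,
            "success": bool(success),
            "image": rgbd,                           # raw data kept for the central dataset
        })
    return trials
```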
  • the end effector 116 defines a suction-based gripper that can indicate whether it is holding an object, so that a grasp success or failure can be determined after each grasp is attempted.
  • new production data, for instance data associated with the robot 104 grasping new objects that were not used in training the model
  • the results of each attempted grasp can be stored, at 408, so that the results can be fetched for analysis.
  • the grasping data can be added to a central common real world test dataset, at 208, and failure cases for the model can be tagged to the model and added to a central failure samples dataset.
  • the system 102 can compute various metrics related to the grasps, for instance a grasp success rate. Additionally, the system 102 can compute other metrics based on the real-world experiments, such as, for example and without limitation, overall grasp accuracy, various runtime statistics, success in generalizing to new objects, grasp efficiency or ergonomics, failure modes for long tail scenarios, and success of edge cases observed in production data.
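  • For example, the headline metric and a few simple counts could be computed from the recorded trials as follows; the field names follow the illustrative trial records sketched above.

```python
def grasp_metrics(trials):
    """Compute the grasp success rate (successful grasps over grasp trials) plus
    basic counts from recorded trial data."""
    n = len(trials)
    successes = sum(t["success"] for t in trials)
    return {
        "grasp_success_rate": successes / n if n else 0.0,
        "num_trials": n,
        "num_failures": n - successes,
    }
```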
  • it can be determined whether the model used in the robotic operations 205 produces results that are better than the current master model. For example, at 226, the system 102 can determine whether the newly generated model from the MLOps 203 replaces the current master model. When it is determined that the results of the robotic operations 205 associated with the newly generated model are better than the current master model, the newly generated model can be inserted into a new release of the grasping application, at 228.
  • the metrics are related to each other.
  • the most important metric is the grasp success rate on the physical system (total percentage of successful grasps over number of grasp trials).
  • an improvement in metrics like edge case success or grasp ergonomics can directly contribute to the main metric of grasp success.
  • it is not a weighted decision and there is no mathematical formulation for decision making criteria.
  • an autonomous system includes a robot that defines an end effector configured to grasp objects.
  • the system can include a first neural network model configured to determine grasp locations on objects.
  • the system can further include a processor and a memory storing instructions that, when executed by the processor, configure the system to perform various operations.
  • the system can generate a second neural network model configured to determine grasp locations on objects.
  • the system can test the second neural network model using offline data, so as to generate machine learning results associated with the second neural network model. Based on the machine learning results, an automated test with the robot can be triggered, in which the robot implements a grasping application.
  • the system can perform the automated test with the robot.
  • the automated test can include capturing an image of a plurality of objects.
  • the second neural network can determine grasp locations on the plurality of objects.
  • the automated test can further include the robot grasping the plurality of objects at the grasp locations.
  • the system can record the robot grasping the plurality of objects, so as to generate real-world test data. Based on the real-world test data, it can be determined whether the second neural network model performs better than the first neural network model in the grasping application.
  • the system determines that the second neural network model replaces the first neural network model in the grasping application, and inserts the second neural network model into a new release of the grasping application.
  • the second neural network model can be trained using first training data.
  • the system can make a determination that the second neural network model does not replace the first neural network model.
  • the first training data can be revised so as to generate second training data that is different than the first training data.
  • the second neural network model can be trained with the second training data, so as to generate a third neural network model configured to determine grasp locations on objects.
  • the system can test the third neural network model using offline data, so as to generate second machine learning results associated with the third neural network model. Based on the second machine learning results, the system can trigger a second automated test with the robot in which the robot implements the grasping application.
  • FIG. 5 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented.
  • a computing environment 600 includes a computer system 610 that may include a communication mechanism such as a system bus 621 or other communication mechanism for communicating information within the computer system 610.
  • the computer system 610 further includes one or more processors 620 coupled with the system bus 621 for processing the information.
  • the system 102 may include, or be coupled to, the one or more processors 620.
  • the processors 620 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer.
  • CPUs central processing units
  • GPUs graphical processing units
  • a processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth.
  • the processor(s) 620 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like.
  • the microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets.
  • a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
  • a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof.
  • a user interface comprises one or more display images enabling user interaction with a processor or other device.
  • the system bus 621 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 610.
  • the system bus 621 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth.
  • the system bus 621 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnect (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
  • ISA Industry Standard Architecture
  • MCA Micro Channel Architecture
  • EISA Enhanced ISA
  • VESA Video Electronics Standards Association
  • AGP Accelerated Graphics Port
  • PCI Peripheral Component Interconnect
  • PCI-Express PCI-Express
  • PCMCIA Personal Computer Memory Card International Association
  • USB Universal Serial Bus
  • the computer system 610 may also include a system memory 630 coupled to the system bus 621 for storing information and instructions to be executed by processors 620.
  • the system memory 630 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 631 and/or random access memory (RAM) 632.
  • the RAM 632 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
  • the ROM 631 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
  • system memory 630 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 620.
  • a basic input/output system 633 (BIOS) containing the basic routines that help to transfer information between elements within computer system 610, such as during start-up, may be stored in the ROM 631.
  • RAM 632 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 620.
  • System memory 630 may additionally include, for example, operating system 634, application programs 635, and other program modules 636.
  • Application programs 635 may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary.
  • the operating system 634 may be loaded into the memory 630 and may provide an interface between other application software executing on the computer system 610 and hardware resources of the computer system 610. More specifically, the operating system 634 may include a set of computer-executable instructions for managing hardware resources of the computer system 610 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 634 may control execution of one or more of the program modules depicted as being stored in the data storage 640.
  • the operating system 634 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
  • the computer system 610 may also include a disk/media controller 643 coupled to the system bus 621 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 641 and/or a removable media drive 642 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive).
  • Storage devices 640 may be added to the computer system 610 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
  • Storage devices 641, 642 may be external to the computer system 610.
  • the computer system 610 may also include a field device interface 665 coupled to the system bus 621 to control a field device 666, such as a device used in a production line.
  • the computer system 610 may include a user input interface or GUI 661, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 620.
  • the computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630. Such instructions may be read into the system memory 630 from another computer readable medium of storage 640, such as the magnetic hard disk 641 or the removable media drive 642.
  • the magnetic hard disk 641 and/or removable media drive 642 may contain one or more data stores and data files used by embodiments of the present disclosure.
  • the data store 640 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like.
  • the data stores may store various types of data such as, for example, skill data, sensor data, or any other data generated in accordance with the embodiments of the disclosure.
  • Data store contents and data files may be encrypted to improve security.
  • the processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630.
  • hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • the computer system 610 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
  • the term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 620 for execution.
  • a computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media.
  • Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 641 or removable media drive 642.
  • Non-limiting examples of volatile media include dynamic memory, such as system memory 630.
  • Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 621.
  • Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • the computing environment 600 may further include the computer system 610 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 680.
  • the network interface 670 may enable communication, for example, with other remote devices 680 or systems and/or the storage devices 641, 642 via the network 671.
  • Remote computing device 680 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 610.
  • computer system 610 may include modem 672 for establishing communications over a network 671, such as the Internet.
  • Modem 672 may be connected to system bus 621 via user network interface 670, or via another appropriate mechanism.
  • Network 671 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 610 and other computers (e.g., remote computing device 680).
  • the network 671 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art.
  • Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 671.
  • program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 5 as being stored in the system memory 630 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module.
  • various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 610, the remote device 680, and/or hosted on other computing device(s) accessible via one or more of the network(s) 671 may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 5.
  • functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 5 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module.
  • program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth.
  • any of the functionality described as being supported by any of the program modules depicted in FIG. 5 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
  • the computer system 610 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 610 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in system memory 630, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality.
  • This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules.
  • any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the Figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Image Analysis (AREA)

Abstract

Methods, systems, and apparatuses can automatically perform real-world tests on robots, based on results of machine learning operations and testing, so as to automatically generate new neural network models suited for various applications.

Description

END-TO-END ROBOTIC GRASPING LIFECYCLE AND OPERATIONS
BACKGROUND
[1] Autonomous operations, such as robotic grasping and manipulation, in unknown or dynamic environments present various technical challenges. Autonomous operations in dynamic environments may be applied to mass customization (e.g., high-mix, low-volume manufacturing), on-demand flexible manufacturing processes in smart factories, warehouse automation in smart stores, automated deliveries from distribution centers in smart logistics, and the like. In order to perform autonomous operations, such as grasping and manipulation, in some cases, robots may learn skills using machine learning or artificial intelligence (AI), in particular deep neural networks or reinforcement learning.
[2] Incorporation of AI systems in production can introduce challenges beyond traditional machine learning metrics like accuracy, precision, etc. Deployment of machine learning based systems in production often needs to be tied to business metrics and key performance indicators (KPIs) that are different for each relevant business. Maintaining a machine learning operation lifecycle presents various technical challenges. For example, in some cases, thousands of machine learning models can be trained in a matter of days or weeks. There might be millions of different recipes to generate new models that can vary in various ways. For example, the types of datasets used can vary, the type of model architecture used can vary, the hyperparameters used to train neural networks can vary, etc. Due to this wide experiment space, comparing the performance of each new experiment to determine which model serves a particular deployment best presents various challenges, particularly in scenarios involving robotic grasping of unknown objects.
BRIEF SUMMARY
[3] Embodiments of the invention address and overcome one or more of the shortcomings described herein by providing methods, systems, and apparatuses that automatically perform real-world tests on robots, based on results of machine learning operations and testing, so as to automatically generate neural network models suited for various applications.
[4] In an example aspect, an autonomous system includes a robot that defines an end effector configured to grasp objects. The system can include a first neural network model configured to determine grasp locations on objects. The system can further include a processor and a memory storing instructions that, when executed by the processor, configure the system to perform various operations. For example, the system can generate a second neural network model configured to determine grasp locations on objects. The system can test the second neural network model using offline data, so as to generate machine learning results associated with the second neural network model. Based on the machine learning results, an automated test with the robot can be triggered, in which the robot implements a grasping application. The system can perform the automated test with the robot. For example, the automated test can include capturing an image of a plurality of objects. Based on the image, the second neural network model can determine grasp locations on the plurality of objects. The automated test can further include the robot grasping the plurality of objects at the grasp locations. Furthermore, the system can record the robot grasping the plurality of objects, so as to generate real-world test data. Based on the real-world test data, it can be determined whether the second neural network model replaces the first neural network model in the grasping application.
[5] In an example, based on the real-world test data, the system determines that the second neural network model replaces the first neural network model in the grasping application, and inserts the second neural network model into a new release of the grasping application. The second neural network model can be trained using first training data. In another example, the system can make a determination that the second neural network model does not replace the first neural network model. Based on the determination and the real-world data, the first training data can be revised so as to generate second training data that is different than the first training data. The second neural network model can be trained with the second training data, so as to generate a third neural network model configured to determine grasp locations on objects. The system can test the third neural network model using offline data, so as to generate second machine learning results associated with the third neural network model.
Based on the second machine learning results, the system can trigger a second automated test with the robot in which the robot implements the grasping application.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[6] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures:
[7] FIG. 1 shows an example autonomous system in an example physical environment that includes a robot configured to grasp objects, in accordance with an example embodiment.
[8] FIG. 2 is a flow diagram that illustrates robotic operations (RobOps) that are triggered by machine learning operations (MLOps), in accordance with example embodiments.
[9] FIG. 3 illustrates a neural network model that can be included in a computing system, for instance the system shown in FIG. 1, in accordance with an example embodiment.
[10] FIG. 4 illustrates example automated testing that can be defined by robotic operations depicted in FIG. 2.
[11] FIG. 5 illustrates a computing environment within which embodiments of the disclosure may be implemented.
DETAILED DESCRIPTION
[12] As an initial matter, it is recognized herein that the variety and number of models that can be generated for robotic grasping of unknown objects can present various technical challenges in establishing a complete operations pipeline for robotic grasping. For example, comparing performances to determine which model serves a given grasping deployment the best presents technical challenges. By way of further example, beyond the obtained final trained model, it can also be necessary to determine the effect of different pre- and post-processing operations on the model, as well as the interaction of a given model with the target runtime application used to serve the model in production. Further still, the results of a given deployment often need to be studied in order to improve the performance of the next set of models. Such study might include analyzing the runtime performance, scoping the failure cases, and coming up with strategies to mitigate the failure cases, for instance by retraining new models or making changes to the runtime application.
[13] In the case of robotic grasping of unknown objects using machine learning, key business metrics might include grasp accuracy, runtime statistics, generalization to new objects, grasp efficiency, ergonomics (e.g., the way objects are grasped so as to cause minimal damage to objects in a given scene), or the like. Thus, in order to maintain a healthy lifecycle of operations, a well-defined operations strategy can be essential. The lifecycle of operations might include, for example, and without limitation, dataset generation (e.g., synthetic or real), neural network training, neural network evaluation on synthetic or real world data, analysis of metrics for comparing new models to legacy models, performance of live tests on target devices, test result collection, analysis of test results, and iterative improvement of the cycle. The iterative cycle can refer to the cycle of operations above that are performed with the intention of improving metrics. For example, a cycle can begin with data generation and can complete with testing, so that the results of various metrics can be analyzed to create new data/models that are further trained and tested to obtain improvements in the outcome of those metrics. By way of example, an iterative process or cycle is adding more training examples to a dataset or tweaking hyperparameters of a neural network model, with the goal of improving the underlying key performance indicators (KPIs) (e.g., grasp accuracy, runtime statistics, generalization to new objects, grasp efficiency, ergonomics, etc.).
[14] In accordance with various embodiments described herein, an automated machine learning (ML) operations pipeline for robotic grasping using deep learning is defined. In some cases, the pipeline is triggered when data or a grasping model is changed, and the pipeline is complete when the change is automatically tested with robots in a loop in the real world.
[15] Referring initially to FIG. 1, an example industrial or physical environment or workspace 100 is shown. As used herein, a physical environment or workspace can refer to any unknown or dynamic industrial environment. Unless otherwise specified, physical environment and workspace can be used interchangeably herein, without limitation. A reconstruction or model may define a virtual representation of the physical environment or workspace 100, or one or more objects 106 within the physical environment 100. For purposes of example, the object 106 can be disposed in a bin or container, for instance a bin 107, so as to be positioned for grasping. Unless otherwise specified herein, bin, container, tray, box, or the like can be used interchangeably, without limitation. By way of example, the objects 106 can be picked from the bin 107 by one or more robots, and transported or placed in another location, for instance outside the bin 107. It will be understood that the objects 106 in FIG. 1 are mere examples, such that the objects can be alternatively shaped or define alternative structures as desired, and all such objects are contemplated as being within the scope of this disclosure.
[16] The physical environment 100 can include a computerized autonomous system 102 configured to perform one or more manufacturing operations, such as assembly, transport, or the like. The autonomous system 102 can include one or more robot devices or autonomous machines, for instance an autonomous machine or robot 104, configured to perform one or more industrial tasks, such as bin picking, grasping, or the like. The system 102 can include one or more computing processors configured to process information and control operations of the system 102, in particular the autonomous machine 104. The autonomous machine 104 can include one or more processors, for instance a processor 108, configured to process information and/or control various operations associated with the autonomous machine 104. An autonomous system for operating an autonomous machine within a physical environment can further include a memory for storing modules. The processors can further be configured to execute the modules so as to process information and generate models based on the information. It will be understood that the illustrated environment 100 and the system 102 are simplified for purposes of example. The environment 100 and the system 102 may vary as desired, and all such systems and environments are contemplated as being within the scope of this disclosure.
[17] Still referring to FIG. 1, the robot 104 can further include a robotic arm or manipulator 110 and a base 112 configured to support the robotic manipulator 110. The base 112 can include wheels 114 or can otherwise be configured to move within the physical environment 100. The robot 104 can further include an end effector 116 attached to the robotic manipulator 110. The end effector 116 can include one or more tools configured to grasp and/or move objects 106. Example end effectors 116 include finger grippers or vacuum-based grippers. The robotic manipulator 110 can be configured to move so as to change the position of the end effector 116, for example, so as to place or move objects 106 within the physical environment 100. The system 102 can further include one or more cameras or sensors, for instance a depth camera or three-dimensional (3D) point cloud camera 118, configured to detect or record objects 106 within the physical environment 100. The camera 118 can be mounted to the robotic manipulator 110 or otherwise configured to generate a 3D point cloud of a given scene, for instance the physical environment 100. Alternatively, or additionally, the one or more cameras of the system 102 can include one or more standard two-dimensional (2D) cameras that can record or capture images (e.g., RGB images or depth images) from different viewpoints. Those images can be used to construct 3D images. For example, a 2D camera can be mounted to the robotic manipulator 110 so as to capture images from perspectives along a given trajectory defined by the manipulator 110. [18] Still referring to FIG. 1, the camera 118 can be configured to capture images of the bin 107, and thus the objects 106, along a first or transverse direction 120. In some cases, a deep neural network is trained on a set of objects. Based on its training, the deep neural network can calculate grasp scores for respective regions of a given object, for instance an object within the bin 107. For example, the robot 104 and/or the system 102 can define one or more neural networks configured to learn various objects so as to identify poses, grasp points (or locations), and/or affordances of various objects that can be found within various industrial environments. An example system or neural network model can be configured to learn objects and grasp locations, based on images for example, in accordance with various example embodiments. After the neural network is trained, for example, images of objects can be sent to the neural network by the robot device 104 for classification, in particular classification of grasp locations or affordances.
[19] Referring again to FIG. 1, the camera 118 can define a depth camera configured to capture depth images of the workspace 100 from a perspective along the transverse direction 120. For example, the bin 107 can define a top end 109 and a bottom end 111 opposite the top end 109 along the transverse direction 120. The bin 107 can further define a first side 113 and a second side 115 opposite the first side 113 along a second or lateral direction 122 that is substantially perpendicular to the transverse direction 120. The bin 107 can further define a front end 117 and a rear end 119 opposite the front end 117 along a third or longitudinal direction 124 that is substantially perpendicular to both the transverse and lateral directions 120 and 122, respectively. Though the illustrated bin 107 defines a rectangular shape, it will be understood that bins or containers can be alternatively shaped or sized, and all such bins or containers are contemplated as being within the scope of this disclosure.
[20] It is recognized herein that current machine learning operations (MLOps) are generally focused on purely data driven deployments (e.g., recommendation engines, predictive analytics, etc.) in which most of the data is structured and can be obtained at deployment. In such systems, the key metric for the business is often closely tied to the metrics of the model performance (e.g., accuracy, precision, statistical scores such as F1 scores, etc.). It is further recognized herein, however, that integrating MLOps with real world deployments at the robotics and automation level can involve key metrics, in particular the performance of the robotic systems, that can only be obtained on the robotic systems. For example, complex robotic systems (e.g., robotic grasping) can induce various sources of errors. In some cases, errors can result from hardware sensors (e.g., cameras), motion, calibration issues of the robot, or the like, which are separate from the traditional performance of an AI system for grasp prediction. It is further recognized herein that simulation tools can be used for robotic systems to perform experimentation and benchmarking, but there is often a gap between benchmarking results in simulation as compared to benchmarking results in the real world that include significant noise from various sources.
[21] In an example, the system 102, in particular the camera 118, can capture an image, for instance a red-green-blue depth (RGBD) image, of a given scene so as to define a captured image. For example, the scene in the captured image can include the bin 107 and a plurality of objects 106 within the bin 107. The captured image can be fed into one or more grasping neural networks, for instance an example neural network or model 300 (FIG. 3), to generate a grasp output in the image frame. Based on the extrinsic hand-eye calibration of the robot, the system 102 can determine a pose of a given grasp. For example, the grasp location on a particular object can be predicted by the grasp neural network 300. The grasp location can define a 3D translation point in camera/robot world coordinates (e.g., x, y, z) together with the orientation of the point by means of normal vectors. Based on the extrinsic hand-eye calibration, that 3D point and its associated normal can be used to calculate the robot pose for executing the grasp.
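For illustration only, a minimal sketch of converting a predicted grasp point and surface normal into a robot pose via an extrinsic hand-eye calibration might look like the following. The function name, the choice of reference axis, and the convention that the tool z-axis approaches along the negative normal are assumptions introduced here for the example, not details prescribed by the disclosure.

```python
import numpy as np

def grasp_point_to_robot_pose(p_cam, n_cam, T_base_cam):
    """Convert a grasp point and surface normal from camera coordinates into a
    4x4 end-effector pose in the robot base frame.

    p_cam:      (3,) grasp location in the camera frame (x, y, z).
    n_cam:      (3,) outward surface normal at the grasp point, camera frame.
    T_base_cam: (4, 4) extrinsic hand-eye calibration, camera -> robot base.
    """
    # Transform the point and the normal into the robot base frame.
    p_base = (T_base_cam @ np.append(p_cam, 1.0))[:3]
    n_base = T_base_cam[:3, :3] @ n_cam
    n_base /= np.linalg.norm(n_base)

    # Tool z-axis approaches the surface (against the outward normal);
    # x/y axes are chosen arbitrarily but perpendicular to z.
    z_axis = -n_base
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, z_axis)) > 0.9:   # avoid a near-parallel reference axis
        ref = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(ref, z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)

    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x_axis, y_axis, z_axis
    pose[:3, 3] = p_base
    return pose
```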
[22] Referring now to FIG. 3, a computing system, for instance the system 102, can define one or more models or networks 300 that can be trained on a plurality of input images or input data 304. The input data 304 can include depth images or maps in pixels. During training of the networks, the network can generate an output map or output 306 that can define a grasp candidate map, which can be compared to a ground truth label map, such that the parameters of the network can be continuously updated based on the differences or similarities that result from the comparison. Depth images can be synthetically generated using physics and rendering engines (e.g., PyBullet and PyRender). Furthermore, in some cases, the system 102 can perform post-processing on depth images, for instance by adding simulated noise, such that the input data 304 more closely resembles real world images. It will be understood that the training input data is not limited to the examples described herein. That is, the data in various depth images can vary, for instance the data can include various objects (e.g., different shapes and sizes) positioned in a variety of configurations, and all such input data are contemplated as being within the scope of this disclosure. [23] With continuing reference to FIG. 3, the example neural network 300 includes a plurality of layers, for instance an input layer 302a configured to receive data, and an output layer 303 configured to generate an image based on the input data 304. For example, the output layer 303 can define an output layer 303b that can be configured to determine grasp scores for each pixel of a given image based on gripper information. The neural network 300 further includes a plurality of intermediate layers connected between the input layer 302a and the output layer 303. In particular, in some cases, the intermediate layers and the input layer 302a can define a plurality of convolutional layers 302. The intermediate layers can further include one or more fully connected layers. The convolutional layers 302 can include the input layer 302a configured to receive training and test data, such as depth images from a variety of camera heights, or gripper dimensions for a variety of sized suction-based or finger-based grippers. The convolutional layers 302 can further include a final convolutional or last feature layer 302c, and one or more intermediate or second convolutional layers 302b disposed between the input layer 302a and the final convolutional layer 302c. It will be understood that the illustrated model 300 is simplified for purposes of example. In particular, for example, models may include any number of layers as desired, in particular any number of intermediate layers, and all such models are contemplated as being within the scope of this disclosure.
[24] The output layer 303 can include a first layer 303a and a second or output layer 303b connected to the first layer 303a. It will again be understood that the model is simplified for purposes of explanation, and that the model 300 is not limited to the number of layers 303. The convolutional layers 302 may be locally connected, such that, for example, the neurons in the intermediate layer 302b might be connected to a limited number of neurons in the final convolutional layer 302c. The convolutional layers 302 can also be configured to share connection strengths associated with the strength of each neuron.
[25] Still referring to FIG. 3, the input layer 302a can be configured to receive inputs 304, for instance RGBD images of objects 106. The output 306 can include one or more classifications or scores associated with the input 304. For example, the output 306 can define an output image or map that indicates a plurality of scores 308 (e.g., grasp scores) associated with various portions, for instance pixels, of the corresponding input 304.
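For illustration only, a minimal fully-convolutional sketch of a network that maps a depth image to a per-pixel grasp-score map is shown below. The use of PyTorch, the channel counts, kernel sizes, and the loose mapping of layers onto 302a/302b/302c/303a/303b are assumptions made for the example and do not characterize the actual architecture of model 300.

```python
import torch
import torch.nn as nn

class GraspScoreNet(nn.Module):
    """Illustrative fully-convolutional network: depth image in, per-pixel
    grasp-score map out."""

    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # ~input layer 302a
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),           # ~intermediate layer 302b
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),           # ~last feature layer 302c
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv2d(32, 16, kernel_size=1),                      # ~layer 303a
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),                       # ~output layer 303b
            nn.Sigmoid(),                                          # grasp score in [0, 1] per pixel
        )

    def forward(self, depth):
        # depth: (batch, 1, H, W) -> score map: (batch, 1, H, W)
        return self.head(self.features(depth))

# Example usage on a random depth-image batch.
scores = GraspScoreNet()(torch.rand(2, 1, 224, 224))
```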
[26] Referring now to FIG. 2, the system 102 can perform example operations 200 so as to define an operations lifecycle that includes MLOps 203 and robotic operations 205 that define various automated testing. At 202, the system 102 can determine, for instance based on images 201 of objects that are captured by the camera 118, whether the class of objects is graspable by the end effector 116. For example, a physical test of a class of objects using an end-effector can be performed by feeding a given object to a vacuum gripper, and checking if seal formation succeeds and the grasp remains stable. Alternatively, or additionally, the system can perform image processing-based operations such as finding planar and centric regions on an object surface in the image to validate a mathematical model of suction grasp feasibility. After determining that a given class of objects is not graspable, properties of the object class can be defined, at 204. Additionally, at 204, test cases for the object class can be defined. Identifying test cases can include identifying properties of objects (e.g., rigidity, porosity, etc.) to evaluate the effects those properties have on maintaining a solid grasp. For example, identifying test cases can include finding faces of objects that cannot be grasped physically and/or finding challenging faces of objects that can lead to potential grasp failures. In various examples, objects with holes are not used for vacuum grasping, and objects that define a cross section area that is less than the gripper size do not work on a vacuum gripper. Examples of test cases further include positioning a physical object in a physical bin so that very few graspable regions on the object surface are exposed, thereby challenging the neural networks to find the few viable grasp poses in the experimentation. Such object classes can be added into the MLOps 203 processes. In an example, when the system determines that there are physically graspable regions on the new object, but the neural network struggles to find those grasp locations at runtime, then steps 208 and 212 can be triggered, thereby also triggering 402 and 404 (described herein) in order to improve the neural network so as to improve grasping performance on such new objects. At 206 (and 224), the system 102 can run experiments (e.g., see real-world testing operations or experiments 400 in FIG. 4) with the current version of the grasping model or neural network and record performances of such experiments.
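For illustration only, an image-based feasibility heuristic of the kind mentioned for step 202 (rejecting objects whose visible cross section is smaller than the suction-cup footprint) might resemble the following sketch. The function name, the assumed segmentation mask input, and the default cup diameter are assumptions for the example, not parameters defined by the disclosure.

```python
import numpy as np

def suction_feasible(object_mask, pixel_area_m2, cup_diameter_m=0.03):
    """Rough check of vacuum-grasp feasibility from an image.

    object_mask:    boolean HxW segmentation mask of the object in the image.
    pixel_area_m2:  approximate real-world area covered by one pixel.
    cup_diameter_m: suction-cup diameter (illustrative assumption).
    """
    cross_section = object_mask.sum() * pixel_area_m2      # visible cross-section area
    cup_area = np.pi * (cup_diameter_m / 2.0) ** 2          # suction-cup footprint
    return cross_section >= cup_area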
[27] Based on the experiments, at 208, the system 102 can determine whether the camera 118 limits the graspability of the object class. For example, the system 102 can assess the quality of the RGBD image for the objects in the scene (e.g., singular and in clutter). Qualities that are checked can include completeness of depth values and areas of missing depth information. For objects of a transparent/reflective nature, depth profiles of such objects can be poor, and thus the camera can become a bottleneck. Similarly, color images can depend on the tuning of the camera settings to obtain clear color images with less blur, enough brightness, etc. Thus, the camera settings can be altered to achieve acceptable quality of RGBD images for the images to be used in the MLOps pipeline 203. In some cases, objects that provide very low quality RGBD images can be discarded. In some cases, it is determined that the camera images are of acceptable quality yet the performance of the neural network on those images is not acceptable; those cases can then be tested at 402 and 404. When it is determined that the camera 118 is a limiting factor, the process can proceed to 210, where the camera 118 is replaced with a better camera, for instance a camera defining a higher resolution. Thereafter the process can return to 206 so that the experiments can be performed again. When it is determined that the camera 118 is not a limiting factor, the process can proceed to 212, where the system can determine whether there is an issue with the data.
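For illustration only, the RGBD quality checks discussed for step 208 (missing depth, blur, brightness) could be sketched as follows. The thresholds, the use of OpenCV's Laplacian variance as a sharpness proxy, and the function name are assumptions introduced for the example.

```python
import cv2
import numpy as np

def rgbd_quality(color, depth, max_missing=0.15, min_sharpness=100.0):
    """Heuristic image-quality gate for a captured RGBD frame.

    color: HxWx3 uint8 color image.
    depth: HxW float depth map where 0 or NaN indicates missing depth.
    """
    # Fraction of pixels with missing/invalid depth (often high for
    # transparent or reflective objects).
    missing = np.mean((depth <= 0) | ~np.isfinite(depth))

    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance suggests blur
    brightness = gray.mean()

    ok = (missing <= max_missing) and (sharpness >= min_sharpness) \
         and (40 <= brightness <= 220)
    return ok, {"missing_depth": missing,
                "sharpness": sharpness,
                "brightness": brightness}
```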
[28] At 212, it may have been confirmed that camera images are of acceptable quality yet the overall performance with respect to metrics is not good. Thus, at 212, the existing datasets used to train the neural network models can be evaluated so as to determine the distribution of samples used, and to check whether the data is diverse so as to contain various edge cases and challenging cases. Additionally, the associated grasp labels used for training can be validated. Validation can include, for example and without limitation, physical examination of datasets, or performing statistical operations such as ANOVA (analysis of variance). The model architecture can also be evaluated with respect to the training data. For example, certain model architectures may fail to identify objects that are physically small in size, and hence the model architecture might be changed to enable it to identify small objects. Thus, the operations at 212 can determine why the performance of the current system is poor on a new object, so that the training data and/or model architecture can be changed. The real-world testing operations or experiments 400 can validate or invalidate any changes to the training data or model architecture.
[29] It is recognized herein that generating a robust model for machine learning in robotic grasping applications relies on a combination of quality datasets and a quality model architecture that is trained on the datasets. Thus, still referring to FIG. 2, based on the determination at 212, revisions can be made to the model architecture and/or to the training data that is generated. At 216, alternative or additional training data can be generated. In various examples, the grasping neural network (e.g., neural network 300) is based on supervised neural network training. Therefore, the training dataset that is generated can consist of RGBD images and associated labels for each image. At 216, training datasets can be generated locally (e.g., on-premise dataset generation) or remotely on remote servers (e.g., AWS EC2 servers, MS Azure cloud, etc.). Additionally, or alternatively, at 214, various training parameters (e.g., network architecture, network hyperparameters, etc.) can be revised based on the determination at 212. At 218, the grasping neural network can be trained with the revised training parameters and the additional training data from 214 and 216, respectively. When training datasets are generated, testing can also be performed on the datasets. For example, the datasets can be scanned for validity. In particular, for example, tests can validate the size of the dataset, the format of the dataset, redundancies in the dataset, or the like. Similarly, at 218, neural networks can be trained either on premise or using cloud services. At 220, local tests are performed based on the updated training from 218. The tests can validate the changes made to the model architecture and training data at 214 and 216, respectively. When the tests fail, the process can return to 216, where additional or alternative training data is generated. The tests can fail, for example, when there are missing or incomplete data/labels; when the distribution of object classes is below a predetermined threshold (e.g., a dataset can be discarded if the data count for the respective class is less than 20% of the average count of all classes); or when there are duplicate data samples. The model architectures can be tested to ensure that input/output layers follow the mathematical dimensionality guidelines of the given neural network, and that input/output normalization is maintained with respect to the data and labels.
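For illustration only, the dataset checks described above (size, duplicates, missing labels, and the 20%-of-average class-count rule) might be sketched as follows. The assumed sample format, hashing strategy, and default minimum size are assumptions for the example.

```python
import hashlib
from collections import Counter

import numpy as np

def validate_dataset(samples, min_size=1000, min_class_fraction=0.2):
    """Return a list of validity issues for a training dataset.

    `samples` is assumed to be a list of (depth_image, label_map, class_name)
    tuples, where depth_image and label_map are numpy arrays.
    """
    issues = []
    if len(samples) < min_size:
        issues.append(f"dataset too small: {len(samples)} < {min_size}")

    # Duplicate samples detected by hashing the raw image bytes.
    hashes = [hashlib.md5(img.tobytes()).hexdigest() for img, _, _ in samples]
    if len(set(hashes)) < len(hashes):
        issues.append("duplicate data samples detected")

    # Missing or incomplete labels.
    if any(lbl is None or not np.isfinite(lbl).all() for _, lbl, _ in samples):
        issues.append("missing or incomplete labels")

    # Class balance: flag classes far below the average class count.
    counts = Counter(cls for _, _, cls in samples)
    avg = sum(counts.values()) / len(counts)
    for cls, n in counts.items():
        if n < min_class_fraction * avg:
            issues.append(f"class '{cls}' underrepresented "
                          f"({n} < {min_class_fraction:.0%} of average)")
    return issues
```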
[30] Still referring to FIG. 2, the tests at 220 can define various experiments. Each experiment can be defined as a result obtained by training some version of the neural network with some version of the dataset. In theory, unlimited such experiments can be performed with various permutations and iterations of datasets and model architectures/hyperparameters. Thus, in some cases, an experiment tracker (e.g., data version control or the like) runs an automated methodology with DevOps tools. For each experiment, various metadata can be recorded. In particular, metadata related to the datasets that are used and metadata related to the models that are used can be recorded. In some cases, a series of experiments are performed that use an underlying common dataset but use different learning hyperparameters (changed at 214) for training the model at 218. In such an example, the best performing experiment can be further tested during a model testing phase of the testing performed at 220. In an example model testing phase, at 220, a series of tests can be performed to assess the sanity of the model. Such tests can focus on local model performance for expected results. For example, tests can be run to ensure that the format of the model output is the same as the expectation, that values are in an expected range, etc. Invariance and perturbation tests can be performed to assess whether results of the model change when certain inputs are changed. Such tests can identify whether the trained model is susceptible to any noise or whether the model displays any bias or overfitting. At 220, in some examples, the model can be used with real world data samples or subsets of data samples to assess the general performance of the model. In some cases, visual analysis of samples can be performed, for instance on samples that are often problematic for master models. A master model can refer to the model that exists in production, or the model that is the current best performing model to-date. Such a model can be used as the benchmark for new experimental models to be compared against for possible improvement. If a given experimental model proves to be performing better at steps 402 and 404, then it can replace the master model used in production, as further described herein. At 222, the results of the experimental model testing can determine which model is automatically tested, at 224. Thus, when a given experimental model performs better than the current master model with the offline test samples, the process can proceed to 224, where the experimental model can be run with the physical robotic system.
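For illustration only, the model sanity checks described for step 220 (output format, value range, and a simple perturbation test) could be sketched as follows. The PyTorch interface, noise level, and returned statistics are assumptions for the example.

```python
import torch

def model_sanity_checks(model, sample_depth, noise_std=0.005):
    """Run local sanity checks on a trained grasp-score model.

    sample_depth: (batch, 1, H, W) tensor of representative depth images.
    """
    model.eval()
    with torch.no_grad():
        out = model(sample_depth)

        # Output format: score map with the same spatial size as the input.
        assert out.shape[-2:] == sample_depth.shape[-2:], "unexpected output size"

        # Value range: grasp scores expected in [0, 1].
        assert out.min() >= 0.0 and out.max() <= 1.0, "scores outside expected range"

        # Perturbation/invariance test: small depth noise should rarely move
        # the highest-scoring grasp pixel.
        noisy = model(sample_depth + noise_std * torch.randn_like(sample_depth))
        changed = (out.flatten(1).argmax(1) != noisy.flatten(1).argmax(1)).float().mean()

        return {"max_score": out.max().item(),
                "best_grasp_changed_fraction": changed.item()}
```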
[31] In some examples, after a given experimental model passes tests with offline samples, benchmarking tools can be used to compare the results. For example, common synthetic datasets and common real world RGBD inputs can define test cases for checking grasp accuracy with synthetic and real-world ground truth samples. A series of edge cases observed in the real world can also be used as test inputs. Along with hard metrics such as grasp success, soft metrics such as, for example and without limitation, grasp distribution, number of false positives and false negatives, grasp selection for objects in a heap, number of grasps on non-graspable regions (e.g., bin bottom), and the like can be calculated. In some examples, the benchmarking results can be published as reports, for instance using open-source tools (e.g., CML). The reports might include statistical scores, error cases of both models, and results of perturbation tests on both models. In some cases, a developer can then view these reports of the benchmarking to determine whether an experimental model demonstrated better performance than the current master model.
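For illustration only, aggregating the hard and soft metrics named above over a set of benchmark cases might look like the sketch below. The per-case dictionary keys are hypothetical placeholders for the example, not a schema defined by the disclosure.

```python
def benchmark_report(results):
    """Aggregate benchmark metrics over a list of per-case result dicts, e.g.
    {"success": True, "false_positive": 0, "false_negative": 1,
     "grasp_on_bin_bottom": False, "runtime_s": 0.21}.
    """
    n = len(results)
    return {
        "grasp_success_rate": sum(r["success"] for r in results) / n,
        "false_positives": sum(r["false_positive"] for r in results),
        "false_negatives": sum(r["false_negative"] for r in results),
        "grasps_on_non_graspable": sum(r["grasp_on_bin_bottom"] for r in results),
        "mean_runtime_s": sum(r["runtime_s"] for r in results) / n,
    }
```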
[32] Still referring to FIG. 2, at 220, further testing can be automatically triggered after an experimental model demonstrates better performance than the current master model. In particular, for example, the given model can be used in deployment with a runtime codebase for grasping of the system 102. Thus, the model can be implemented with other portions of the grasping application pipeline, such as, for example, and without limitation, image acquisition, image preprocessing, and post-inference processing. At 220, the model can be tested for compatibility with the grasping application and tests for runtime, memory, etc. can be performed. At 222, the system 102 can determine that the software tests at 220 are sufficiently completed, such that the operations can proceed to 224, where live tests are performed on the robot 104 using the results (model) from 220, so as to trigger the robotic operations 205.
[33] Referring also to FIG. 4, example real-world testing operations 400 can be performed at 224 and 206. It will be understood that the operations 400 are presented as an example, such that additional or alternative real world tests can be performed, and all such additional or alternative tests are contemplated as being within the scope of this disclosure. In some examples, the model from 222 and the runtime application can be deployed on a machine within the system 102 that controls the robot 104 for automated tests. In an example, the physical environment 100 defines an automated testing suite that includes multiple bins, for instance two bins 107, having objects 106 randomly disposed within them. The system 102, in particular the camera 118, can capture an image, for instance a red-green-blue depth (RGBD) image, of a given scene so as to define a captured image. The scene in the captured image can include the multiple bins and the plurality of objects 106 within the bins. The captured image can be input into the model from the MLOps 203. The model from the MLOps 203 can compute grasp locations on the objects 106, based on the captured image, at 402. At 404, the robot 104 can perform grasps based on the computed grasp locations. In some examples, the robot 104 picks and drops objects from one bin to the other and vice versa, so as to perform a plurality of grasp executions. Alternatively, or additionally, the robot 104 can drop objects outside of the bin from which the objects are grasped, or at a random position within the bin in which the object is grasped. In some cases, the robot 104 can be configured to shake the object after the object is grasped and before the object is dropped or placed, so as to collect further data related to the quality or strength of the grasp. While the robot 104 performs grasps, at 406, the grasps are recorded, for instance by the camera 118 or one or more other cameras within the environment 100. The grasps can be recorded so as to define recorded data that can include raw data of video or photographic images, neural network outputs, whether a given grasp defines a success or failure, and video snippets of trials (grasps). In an example, the end effector 116 defines a suction-based gripper that can indicate whether it is holding an object, so that a grasp success or failure can be determined after each grasp is attempted. In various examples, new production data, for instance data associated with the robot 104 grasping new objects that were not used in training the model, is tested and recorded at 404 and 406, respectively. The results of each attempted grasp can be stored, at 408, so that the results can be fetched for analysis. In particular, for example, the grasping data can be added to a central common real world test dataset, at 208, and failure cases for the model can be tagged to the model and added to a central failure samples dataset.
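For illustration only, the automated test loop of FIG. 4 (capture an image, compute grasps at 402, execute at 404, record results at 406/408) could be sketched as follows. The camera, model, and robot interfaces shown here are hypothetical placeholders for the example and are not APIs defined by the disclosure.

```python
def run_automated_grasp_test(camera, model, robot, bins, trials=100):
    """Run a batch of automated grasp trials, alternating between two bins,
    and return the recorded per-trial results."""
    records = []
    for trial in range(trials):
        source, target = bins[trial % 2], bins[(trial + 1) % 2]   # pick from one bin, drop in the other
        rgbd = camera.capture()                                   # capture RGBD image of the scene
        grasp_pose = model.predict_grasp(rgbd, bin_id=source)     # step 402: compute grasp location
        robot.execute_grasp(grasp_pose)                           # step 404: execute the grasp
        success = robot.gripper.is_holding_object()               # suction feedback: success/failure
        if success:
            robot.drop_into(target)
        records.append({"trial": trial,                           # steps 406/408: record and store
                        "pose": grasp_pose,
                        "success": success,
                        "image": rgbd})
    return records
```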
[34] At 410, when the grasp executions are completed, the system 102 can compute various metrics related to the grasps, for instance a grasp success rate. Additionally, the system 102 can compute other metrics based on the real-world experiments, such as, for example and without limitation, overall grasp accuracy, various runtime statistics, success in generalizing to new objects, grasp efficiency or ergonomics, failure modes for long tail scenarios, and success on edge cases observed in production data. Referring again to 226, based on the real-world experiments and associated recorded data, it can be determined whether the model used in the robotic operations 205 produces results that are better than the current master model. For example, at 226, the system 102 can determine whether the newly generated model from the MLOps 203 replaces the current master model. When it is determined that the results of the robotic operations 205 associated with the newly generated model are better than the current master model, the newly generated model can be inserted into a new release of the grasping application, at 228.
[35] In some cases, the metrics are related to each other. For example, in some cases, the most important metric is the grasp success rate on the physical system (total percentage of successful grasps over the number of grasp trials). In an example, an improvement in metrics like edge case success or grasp ergonomics can directly contribute to the main metric of grasp success. Thus, in some cases, it is not a weighted decision and there is no mathematical formulation for the decision making criteria. In another example, if there is an increase in grasp success over n% from previous model results (where n can be determined by a user), it is accepted. The n% number can be obtained in an exponential fashion. For example, as the grasp success rate gets closer to the 100% mark, the threshold factor can go down from n=5 to n=1. In an example, beyond a consistent 95% success rate, improvements of 0.5% are accepted. It will be understood that the threshold factors can vary as desired. When it is determined that the results of the robotic operations 205 are not better than the master model, the process can return to 216 where the training datasets can be modified or additional training datasets can be generated, based on the results of the robotic operations 205. [36] Thus, as described herein, robotic operations can be efficiently tested and data can be generated via a feedback loop from automated robot operations to machine learning operations. Consequently, grasping models and training data can be improved with efficient use of costly real-world testing, while maximizing the statistical results of experiments performed on online and offline data.
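For illustration only, an acceptance rule following the example above (a larger required improvement when the master model's success rate is low, shrinking to roughly 0.5 percentage points beyond a consistent 95%) might be sketched as follows. The exact breakpoints are assumptions chosen for the example; as noted, the threshold factors can vary as desired.

```python
def accept_new_model(new_success_rate, master_success_rate):
    """Decide whether an experimental model's real-world grasp success rate
    is enough of an improvement over the master model to replace it.
    Rates are fractions in [0, 1]."""
    if master_success_rate >= 0.95:
        required_gain = 0.005      # beyond a consistent 95%, accept 0.5-point gains
    elif master_success_rate >= 0.90:
        required_gain = 0.01       # n = 1 percentage point
    else:
        required_gain = 0.05       # n = 5 percentage points
    return new_success_rate - master_success_rate >= required_gain
```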
[37] Furthermore, as described herein, an autonomous system includes a robot that defines an end effector configured to grasp objects. The system can include a first neural network model configured to determine grasp locations on objects. The system can further include a processor and a memory storing instructions that, when executed by the processor, configure the system to perform various operations. For example, the system can generate a second neural network model configured to determine grasp locations on objects. The system can test the second neural network model using offline data, so as to generate machine learning results associated with the second neural network model. Based on the machine learning results, an automated test with the robot can be triggered, in which the robot implements a grasping application. The system can perform the automated test with the robot. For example, the automated test can include capturing an image of a plurality of objects. Based on the image, the second neural network model can determine grasp locations on the plurality of objects. The automated test can further include the robot grasping the plurality of objects at the grasp locations. Furthermore, the system can record the robot grasping the plurality of objects, so as to generate real-world test data. Based on the real-world test data, it can be determined whether the second neural network model performs better than the first neural network model in the grasping application.
[38] In an example, based on the real-world test data, the system determines that the second neural network model replaces the first neural network model in the grasping application, and inserts the second neural network model into a new release of the grasping application. The second neural network model can be trained using first training data. In another example, the system can make a determination that the second neural network model does not replace the first neural network model. Based on the determination and the real-world data, the first training data can be revised so as to generate second training data that is different than the first training data. The second neural network model can be trained with the second training data, so as to generate a third neural network model configured to determine grasp locations on objects. The system can test the third neural network model using offline data, so as to generate second machine learning results associated with the third neural network model. Based on the second machine learning results, the system can trigger a second automated test with the robot in which the robot implements the grasping application.
[39] FIG. 5 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented. A computing environment 600 includes a computer system 610 that may include a communication mechanism such as a system bus 621 or other communication mechanism for communicating information within the computer system 610. The computer system 610 further includes one or more processors 620 coupled with the system bus 621 for processing the information. The system 102 may include, or be coupled to, the one or more processors 620.
[40] The processors 620 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or a combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 620 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.
[41] The system bus 621 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 610. The system bus 621 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The system bus 621 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnect (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
[42] Continuing with reference to FIG. 5, the computer system 610 may also include a system memory 630 coupled to the system bus 621 for storing information and instructions to be executed by processors 620. The system memory 630 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 631 and/or random access memory (RAM) 632. The RAM 632 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 631 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 630 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 620. A basic input/output system 633 (BIOS) containing the basic routines that help to transfer information between elements within computer system 610, such as during start-up, may be stored in the ROM 631. RAM 632 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 620. System memory 630 may additionally include, for example, operating system 634, application programs 635, and other program modules 636. Application programs 635 may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary. [43] The operating system 634 may be loaded into the memory 630 and may provide an interface between other application software executing on the computer system 610 and hardware resources of the computer system 610. More specifically, the operating system 634 may include a set of computer-executable instructions for managing hardware resources of the computer system 610 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 634 may control execution of one or more of the program modules depicted as being stored in the data storage 640. The operating system 634 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
[44] The computer system 610 may also include a disk/media controller 643 coupled to the system bus 621 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 641 and/or a removable media drive 642 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive). Storage devices 640 may be added to the computer system 610 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire). Storage devices 641, 642 may be external to the computer system 610.
[45] The computer system 610 may also include a field device interface 665 coupled to the system bus 621 to control a field device 666, such as a device used in a production line. The computer system 610 may include a user input interface or GUI 661, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 620.
[46] The computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630. Such instructions may be read into the system memory 630 from another computer readable medium of storage 640, such as the magnetic hard disk 641 or the removable media drive 642. The magnetic hard disk 641 and/or removable media drive 642 may contain one or more data stores and data files used by embodiments of the present disclosure. The data store 640 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. The data stores may store various types of data such as, for example, skill data, sensor data, or any other data generated in accordance with the embodiments of the disclosure. Data store contents and data files may be encrypted to improve security. The processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
[47] As stated above, the computer system 610 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 620 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 641 or removable media drive 642. Non-limiting examples of volatile media include dynamic memory, such as system memory 630. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 621. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
[48] Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
[49] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable medium instructions. The computing environment 600 may further include the computer system 610 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 680. The network interface 670 may enable communication, for example, with other remote devices 680 or systems and/or the storage devices 641, 642 via the network 671. Remote computing device 680 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 610. When used in a networking environment, computer system 610 may include modem 672 for establishing communications over a network 671, such as the Internet. Modem 672 may be connected to system bus 621 via user network interface 670, or via another appropriate mechanism.
[50] Network 671 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 610 and other computers (e.g., remote computing device 680). The network 671 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 671.
[51] It should be appreciated that the program modules, applications, computer-executable instructions, code, or the like depicted in FIG. 5 as being stored in the system memory 630 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 610, the remote device 680, and/or hosted on other computing device(s) accessible via one or more of the network(s) 671, may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 5 and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 5 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the program modules depicted in FIG. 5 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
[52] It should further be appreciated that the computer system 610 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 610 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in system memory 630, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules.
[53] Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. In addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”
[54] Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.
[55] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

CLAIMS

What is claimed is:
1. A method performed within an autonomous system that includes a robot configured to grasp objects, and further includes a first neural network model configured to determine grasp locations on objects, the method comprising:
generating a second neural network model configured to determine grasp locations on objects;
testing the second neural network model using offline data, so as to generate machine learning results associated with the second neural network model; and
based on the machine learning results, triggering an automated test with the robot in which the robot implements a grasping application.
2. The method as recited in claim 1, the method further comprising: performing the automated test with the robot.
3. The method as recited in claim 2, wherein performing the automated test with the robot comprises:
capturing an image of a plurality of objects;
based on the image, the second neural network determining grasp locations on the plurality of objects;
the robot grasping the plurality of objects at the grasp locations;
recording the robot grasping the plurality of objects, so as to generate real-world test data; and
based on the real-world test data, determining whether the second neural network model performs better than the first neural network model.
4. The method as recited in claim 3, the method further comprising: based on the real-world test data, determining that the second neural network model replaces the first neural network model in the grasping application.
5. The method as recited in claim 4, the method further comprising: inserting the second neural network model into a new release of the grasping application.
6. The method as recited in claim 3, the method further comprising:
training the second neural network model using first training data;
making a determination that the second neural network model does not replace the first neural network model; and
based on the determination and the real-world test data, revising the first training data so as to generate second training data that is different than the first training data.
7. The method as recited in claim 6, the method further comprising: training the second neural network model with the second training data, so as to generate a third neural network model configured to determine grasp locations on objects.
8. The method as recited in claim 7, the method further comprising:
testing the third neural network model using offline data, so as to generate second machine learning results associated with the third neural network model; and
based on the second machine learning results, triggering a second automated test with the robot in which the robot implements the grasping application.
9. An autonomous system comprising:
a robot configured to grasp objects;
a first neural network model configured to determine grasp locations on objects;
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the autonomous system to:
generate a second neural network model configured to determine grasp locations on objects;
test the second neural network model using offline data, so as to generate machine learning results associated with the second neural network model; and
based on the machine learning results, trigger an automated test with the robot in which the robot implements a grasping application.
10. The system as recited in claim 9, the memory further storing instructions that, when executed by the one or more processors, further cause the system to:
capture an image of a plurality of objects;
based on the image, determine grasp locations on the plurality of objects with the second neural network;
grasp the plurality of objects at the grasp locations;
record the robot grasping the plurality of objects, so as to generate real-world test data; and
based on the real-world test data, determine whether the second neural network model performs better than the first neural network model.
11. The system as recited in claim 10, the memory further storing instructions that, when executed by the one or more processors, further cause the system to: based on the real-world test data, determine that the second neural network model replaces the first neural network model in the grasping application.
12. The system as recited in claim 11, the memory further storing instructions that, when executed by the one or more processors, further cause the system to: insert the second neural network model into a new release of the grasping application.
13. The system as recited in claim 10, the memory further storing instructions that, when executed by the one or more processors, further cause the system to:
train the second neural network model using first training data;
make a determination that the second neural network model does not replace the first neural network model; and
based on the determination and the real-world test data, revise the first training data so as to generate second training data that is different than the first training data.
14. The system as recited in claim 13, the memory further storing instructions that, when executed by the one or more processors, further cause the system to: train the second neural network model with the second training data, so as to generate a third neural network model configured to determine grasp locations on objects.
15. The system as recited in claim 14, the memory further storing instructions that, when executed by the one or more processors, further cause the system to:
test the third neural network model using offline data, so as to generate second machine learning results associated with the third neural network model; and
based on the second machine learning results, trigger a second automated test with the robot in which the robot implements the grasping application.
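
For illustration, the following is a minimal Python sketch of the lifecycle recited in claims 1 through 8: a candidate (second) neural network model is trained and tested offline, the offline machine learning results gate an automated real-world test against the deployed (first) model, and the outcome of that test either promotes the candidate into a new release or revises the training data from which a further (third) model is trained. The function, field, and threshold names used below (for example, lifecycle_iteration, RealWorldTestData, trigger_threshold) are illustrative assumptions and are not part of the claims or of any particular implementation.

# Illustrative sketch only; all names are assumptions, not part of the claims.
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple


@dataclass
class RealWorldTestData:
    """Recorded outcomes of an automated robot test (cf. claims 3 and 10)."""
    candidate_success_rate: float   # success rate of the second neural network model
    incumbent_success_rate: float   # success rate of the first neural network model
    failure_samples: List[Any]      # recorded failed grasps, usable to revise training data


def lifecycle_iteration(
    first_model: Any,
    training_data: List[Any],
    offline_data: List[Any],
    train: Callable[[List[Any]], Any],
    evaluate_offline: Callable[[Any, List[Any]], float],
    run_robot_test: Callable[[Any, Any], RealWorldTestData],
    release: Callable[[Any], None],
    trigger_threshold: float = 0.8,
) -> Tuple[Any, List[Any]]:
    """One pass through the lifecycle; returns the model now deployed and the
    training data to use for the next candidate."""
    # Generate a second neural network model configured to determine grasp locations.
    second_model = train(training_data)

    # Test the second model using offline data, generating machine learning results.
    offline_score = evaluate_offline(second_model, offline_data)

    # Based on the machine learning results, decide whether to trigger an
    # automated test with the robot.
    if offline_score < trigger_threshold:
        return first_model, training_data  # candidate not promising; keep the incumbent

    real_world = run_robot_test(second_model, first_model)

    # Based on the real-world test data, determine whether the second model
    # performs better than, and therefore replaces, the first model.
    if real_world.candidate_success_rate > real_world.incumbent_success_rate:
        release(second_model)  # insert the second model into a new application release
        return second_model, training_data

    # Otherwise revise the training data using the recorded failures, so that a
    # third model can be trained and the cycle repeated (cf. claims 6 through 8).
    revised_data = training_data + real_world.failure_samples
    return first_model, revised_data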
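
The automated real-world test itself, as recited in claims 3 and 10, could be sketched as follows; the capture_image, propose_grasps, grasp_at, and record callables are assumed stand-ins for whatever camera, model-inference, robot, and logging interfaces a given deployment provides, and the returned boolean would feed the replace-or-revise decision sketched above.

# Illustrative sketch of the automated robot test of claims 3 and 10; the
# injected callables are assumptions, not a specific hardware or software API.
from typing import Any, Callable, Sequence


def automated_grasp_test(
    capture_image: Callable[[], Any],
    propose_grasps: Callable[[Any], Sequence[Any]],  # second neural network inference
    grasp_at: Callable[[Any], bool],                 # robot grasp attempt -> success flag
    record: Callable[[Any, bool], None],             # log attempts as real-world test data
    incumbent_success_rate: float,
) -> bool:
    """Run one automated test and report whether the candidate (second) model
    outperforms the deployed (first) model's recorded success rate."""
    # Capture an image of a plurality of objects.
    image = capture_image()

    # Based on the image, the second neural network determines grasp locations.
    grasp_locations = propose_grasps(image)

    # The robot grasps the objects at those locations; each attempt is recorded
    # so as to generate real-world test data.
    successes = 0
    for location in grasp_locations:
        succeeded = grasp_at(location)
        record(location, succeeded)
        if succeeded:
            successes += 1

    candidate_success_rate = successes / max(len(grasp_locations), 1)

    # Based on the real-world test data, determine whether the second model
    # performs better than the first model.
    return candidate_success_rate > incumbent_success_rate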