Disclosure of Invention
In view of the above, the present invention provides a method and a system for constructing a robot process package, which at least partially solve the above-mentioned problems in the prior art.
In order to achieve the above object, the present invention provides the following technical solutions:
The robot process package construction method is characterized by comprising the following steps:
S100, acquiring process scene information, wherein the process scene information comprises process image information, and the process image information comprises a plurality of static pictures or videos reflecting the actual process;
S200, analyzing the process image information, and acquiring process data information from the process image information, wherein the process data information comprises the component data of the robot and the process data of the process;
S300, selecting parts of the robot, selecting parts matched with the component data of the robot from an equipment database according to the acquired process data information, wherein the motion attributes of the parts meet the requirements of the process data of the process;
S400, constructing a robot model, constructing visual models according to the selected parts of the robot, and combining the visual models into a visual model of the robot, wherein the visual model is used by a user for interacting with the component data of the robot model;
S500, setting a technological process of the robot model, and configuring the state and the action flow of the robot model in the technological process according to the acquired process data of the technological process to form a robot technological package.
In S100 of the foregoing aspect, the acquiring process scene information according to the actual application scene includes,
S110, a user operates the robot to execute the process under the actual application scene, and the camera records the process image information when the robot executes the process and transmits the process image information to the interactive terminal, or
S120, the user directly transmits the pre-manufactured process image information to the interactive terminal;
the acquiring process scene information according to the actual application scene further comprises,
S130, inputting process description information to the interactive terminal by a user, wherein the process description information is used for providing reference for the step of acquiring process data information.
In S200 of the above aspect, the acquiring process data information from the process image information includes,
S210, identifying a robot image from the process image information, and further determining the component data of the robot in the process image information through image identification.
In S210 of the above-described aspect, determining the component data of the robot in the process image information by image recognition includes,
S211, extracting image features of the robot and each part in the process image information;
S212, obtaining image features of various robots and various components from an equipment database;
S213, comparing the image features extracted from the process image information with the image features acquired from the equipment database, and determining the model of the robot in the process image information and the model of each part forming the robot according to the similarity;
S214, taking the model of the robot and the model of each part forming the robot as the part data of the robot;
S215, when the model of the robot and the model of each part forming the robot are difficult to determine, extracting the structural size characteristics of the robot and each part in the process image information;
and S216, taking the structural size characteristics of the robot and each part as the part data of the robot.
In S200 of the above aspect, the acquiring process data information from the process image information includes,
S220, recognizing the starting point, the end point, the running track, the speed, the function and the state transformation of each action in the process from the process image information, and taking the starting point, the end point, the running track, the speed, the function and the state transformation as process data of the process.
In S300 of the above aspect, the selecting parts of the robot specifically includes,
S301, when the component data of the robot comprises the model of the robot and the models of all the components forming the robot, selecting the corresponding components of the robot in the equipment database directly according to the models;
S302, when the component data of the robot does not comprise the model but comprises the structural size characteristics, selecting the matched parts of the robot in the equipment database according to the structural size characteristics;
S303, screening the parts of the robot which can meet the requirements of the technological process according to the process data of the technological process.
In S400 of the above solution, the constructing a visualization model according to the selected part of the robot specifically includes,
S401, acquiring a visual model of a corresponding part from an equipment database according to the selected part of the robot, or generating a visual model of the part according to a preset visual rule according to the selected part of the robot;
and S402, combining the visual models of all the parts to obtain the visual model of the robot, displaying the visual model of the robot at the interactive terminal, providing the data of all the components of the robot model, and allowing a user to change or set parameters for each component.
In S500 of the above solution, configuring the state and the action flow of the robot model in the process includes configuring, on the interactive interface of the interactive terminal, a process of controlling the robot terminal to switch the state or execute the action according to the process data of the process.
In S500 of the above solution, configuring the state and the action flow of the robot model in the process includes automatically generating a control program for controlling the robot terminal to switch the state or execute the action according to the process data of the process in a form of graphic programming or code programming, where the control program can be executed on the robot terminal to control the robot terminal to complete the process;
and packaging the control program and the data of each part of the robot model as a process package.
The scheme further comprises S600, in which the constructed process package is stored in the equipment database as a new process package or an upgrade of an existing process package, and the theme of the actual application scene, the key actions of the process, or the key components of the robot model are stored as labels of the process package so that it can be searched when reused later.
In the scheme, the equipment database stores detailed data of various parts of the robot, wherein the detailed data comprises the model of each part and one or more of the following parameters including structural size, an electrical interface, a movement range, movement speed, maximum load, applicable functions and labels, and the equipment database also stores image information or image characteristics of each part.
A robot process package construction system comprising a processor and a memory, the memory having stored therein computer-executable instructions that, when executed by the processor, implement the method of any of the above aspects.
By implementing the scheme of the invention, the design of the robot control program starts from the overall task purpose: the system acquires scene image information, intelligently analyzes the hardware and steps required to complete the process, and adaptively selects suitable candidate components from a database. With the intelligent assistance of the system, a new user can quickly implement a robot program for executing a process, which improves design efficiency; the constructed process package can be reused many times and can be regenerated after modification even if the equipment changes, providing a process package design scheme and system with a higher degree of automation.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
It is noted that the following embodiments and features of the embodiments may be combined with each other without conflict, and that all other embodiments obtained by persons of ordinary skill in the art without creative efforts based on the embodiments in the present disclosure are within the scope of protection of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
An embodiment of a first aspect of the present invention provides a method for constructing a robot process package, including the steps of:
S100, acquiring process scene information, wherein the process scene information comprises process image information, and the process image information comprises a plurality of static pictures or videos reflecting the actual process.
In this embodiment, the first problem to be solved is how to describe the process requirements. The requirements of the actual process scene do not need to be described by the user in a specific programming language; they can be obtained in the most intuitive way from a video of the actual process scene, with the specific requirements resolved through image analysis. For example, a user may manually operate the robot terminal on site to perform a set of operations for carrying goods, while a camera of the robot terminal simultaneously records the various information of the carrying process as an input information source for constructing a process package. Considering that recorded video places higher demands on the shooting quality of the equipment, as well as on storage, transmission, and processing, a plurality of static pictures may be used instead of video as an alternative scheme, so long as the pictures show the key information from which the requirements of the process can be analyzed.
In addition to recording the process video on site, other methods may be used to obtain the relevant video. For example, if the video has been recorded in advance, it can be transmitted directly to the system through the interactive terminal instead of having to be recorded on site. It is also possible to acquire, through a network, a video of another robot performing the process; the source of the video is not limited, as long as it can be input into the system for analysis.
Although the use of images is advocated here as the input of the process requirements, it is not excluded that the user may provide descriptive text or codes as an aid through the interactive terminal. For example, if the user already knows the purpose of the process, the process type can be input as goods handling, screw assembling, etc.; such information naturally facilitates subsequent image analysis.
This technical scheme facilitates convenient and rapid acquisition of the requirement information of the actual application scene; the input of pictures or videos is direct and intuitive, and the user's burden of describing requirements is reduced.
S200, analyzing the process image information, and acquiring process data information from the process image information, wherein the process data information comprises the component data of the robot and the process data of the process.
The obtaining process data information from the process image information includes,
S210, identifying a robot image from the process image information, and further determining the component data of the robot in the process image information through image identification.
It is not difficult to identify the robot from the picture or video by means of object detection or the like, but it is necessary to identify not only the robot itself but also the components it contains. A simple implementation is to pre-attach to the robot in the video its model number, or an information graphic such as a one-dimensional or two-dimensional bar code that identifies it, so that the model of the robot and the models of its parts can easily be resolved simply by extracting the image features in the video or picture.
However, in many cases such information is not available on the equipment in the process image, the source of the video is unknown, or a video with such information cannot be re-shot. It is then necessary to analyze the image features of the robot or components in the image, advantageously against images of various types of equipment stored in the equipment database, for example five robotic arms, three clamping jaws, etc. By extracting the image features of these components and comparing them one by one with the image features of the components in the handling scene, it can be determined, for instance, that the A3-type robotic arm and the C2-type clamping jaw are used in the handling scene image. To relieve the image storage pressure of the equipment database and the processing pressure of feature extraction, the image features of the respective components may be extracted in advance and only the features stored in the equipment database, improving efficiency. The image features may be any features used in the image processing field, such as shape, color, texture, spatial features, frequency-domain features, and the like, without limitation, as long as they facilitate image recognition. Storing the original images of the various components, by contrast, fits more processing scenarios and increases flexibility, and both forms can be used in combination. If images of certain components are identified as being included in the scene image, the models of those components are recorded for subsequent use. If the recognition yields multiple similar component images, multiple candidate components may be recommended for the user to select from, ordered from high to low similarity.
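The comparison step above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the feature vectors, model names, and similarity threshold are all assumed values, and a real system would use richer image descriptors than these short lists.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_component(scene_features, database, threshold=0.9):
    """Rank database components by similarity to features extracted
    from the scene image; return candidates above the threshold,
    ordered from most to least similar."""
    scored = [(model, cosine_similarity(scene_features, feats))
              for model, feats in database.items()]
    scored.sort(key=lambda mf: mf[1], reverse=True)
    return [(model, score) for model, score in scored if score >= threshold]

# Hypothetical pre-extracted feature vectors stored in the equipment database.
equipment_db = {
    "A3-arm": [0.9, 0.1, 0.4, 0.8],
    "A5-arm": [0.2, 0.9, 0.7, 0.1],
    "C2-jaw": [0.5, 0.5, 0.9, 0.3],
}
# Features extracted from the handling scene image (assumed).
candidates = match_component([0.88, 0.12, 0.42, 0.79], equipment_db)
```

When several candidates clear the threshold, the sorted list directly gives the high-to-low similarity order in which they would be recommended to the user.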
Meanwhile, there may be cases where no device with a similar image can be found in the equipment database, either because the images of the actual application scene are not clear enough or because the appearance of the device is genuinely different. In such a case, it is necessary to acquire the structural size characteristics of the robot and each component from the process image information and to search for a matching component based on that size information. To identify the dimensions of the components more accurately, an advantageous implementation is to place a reference object of known size, such as a scale, in the photographed scene; the dimensions of other objects are then easily obtained by comparison with the reference. Alternatively, if the scale of the camera is calibrated in advance, the dimensions of the components can also be obtained through geometric relationships.
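The reference-object approach reduces to a single scale factor. A minimal sketch, assuming the measured object lies roughly in the same plane as the reference (the pixel lengths and the 100 mm scale bar below are illustrative values):

```python
def mm_per_pixel(ref_real_mm, ref_pixel_len):
    """Scale factor derived from a reference object of known real size."""
    return ref_real_mm / ref_pixel_len

def estimate_size_mm(pixel_len, scale):
    """Estimate a component's real size from its pixel length."""
    return pixel_len * scale

# A 100 mm scale bar spans 50 pixels in the scene image (assumed values);
# an arm segment spanning 110 pixels is then estimated at 220 mm.
scale = mm_per_pixel(100.0, 50.0)
segment_mm = estimate_size_mm(110.0, scale)
```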
S220, recognizing the starting point, the end point, the running track, the speed, the function and the state transformation of each action in the process from the process image information, and taking the starting point, the end point, the running track, the speed, the function and the state transformation as process data of the process.
For example, in a handling process, it is necessary to identify, for each component, the start and end point coordinates, the movement trajectory and speed, whether the action is clamping or unclamping, and the like, as the process data of the process; these process data are combined to realize the function of carrying the cargo from point A to point B. Specifically, it needs to be identified that the mechanical arm translates from the initial point O to point A; the clamping jaw opens, descends, clamps, and ascends; the mechanical arm translates from point A to point B; the clamping jaw descends, opens, ascends, and closes; and the mechanical arm translates from point B back to point O. In particular, the three-dimensional coordinate values of each point need to be identified.
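One plausible in-memory form for such recognized process data is a list of typed action records. The field names, coordinates, and speed below are illustrative assumptions, not part of the invention's defined format:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Action:
    component: str                               # e.g. "arm" or "jaw"
    function: str                                # e.g. "translate", "clamp"
    start: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    end: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    speed: float = 0.0                           # mm/s, 0 if not applicable

# Beginning of a handling sequence recognized from the scene images
# (coordinates and speed are assumed example values).
handling = [
    Action("arm", "translate", (0, 0, 0), (350, 0, 0), speed=100),
    Action("jaw", "release"),
    Action("jaw", "descend"),
    Action("jaw", "clamp"),
    Action("jaw", "ascend"),
    Action("arm", "translate", (350, 0, 0), (350, 350, 0), speed=100),
]
```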
Analyzing the process image information to obtain the process data information gives full play to the intelligent auxiliary analysis function of the system. Image analysis helps the user identify the components and recognize the actions and states of the process, reducing the difficulty of decomposing components and actions when the user is unfamiliar with the equipment and components of the system.
S300, selecting parts of the robot, selecting parts matched with the component data of the robot from the equipment database according to the acquired process data information, the motion properties of the parts meeting the requirements of the process data of the process.
When the component data of the robot does not comprise the model, matching components of the robot are selected in the equipment database according to the structural size characteristics. For example, the identified component data indicate that the mechanical arm is a four-axis arm whose two segments are 220 mm and 170 mm long; querying the equipment database shows that the A3-type mechanical arm meets these requirements.
In addition, it is also necessary to screen the components of the robot that meet the process requirements based on the process data of the process. For example, image analysis of the application scene shows that the movement range during carrying lies within a circle of 350 mm diameter, so it can be determined from the movement range of the A3-type mechanical arm that it meets the requirement. When several components or combinations of components can meet the needs, multiple alternatives may be presented for the user to choose from.
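The two screening criteria, structural size and motion range, can be combined in a single database filter. A sketch under stated assumptions: the record fields, the 5% size tolerance, and the reach figures are hypothetical, and here the 350 mm circle is interpreted as a 175 mm radius requirement:

```python
def screen_components(components, need_len_mm=None, need_radius_mm=None, tol=0.05):
    """Filter candidate components: segment lengths must lie within a
    relative tolerance of the identified sizes, and the reach must cover
    the movement range required by the process."""
    matched = []
    for comp in components:
        if need_len_mm is not None:
            lengths = comp.get("segment_lengths_mm", [])
            if len(lengths) != len(need_len_mm):
                continue
            if any(abs(l - n) > n * tol for l, n in zip(lengths, need_len_mm)):
                continue
        if need_radius_mm is not None and comp.get("reach_mm", 0) < need_radius_mm:
            continue
        matched.append(comp["model"])
    return matched

# Hypothetical arm records from the equipment database.
db = [
    {"model": "A3", "segment_lengths_mm": [220, 170], "reach_mm": 390},
    {"model": "A5", "segment_lengths_mm": [300, 250], "reach_mm": 550},
]
# Identified segment sizes 220/170 mm; handling needs a 175 mm radius.
result = screen_components(db, need_len_mm=[220, 170], need_radius_mm=175)
```

If several models survive the filter, the whole list would be shown so the user can choose among the alternatives.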
This scheme helps the user select suitable components to form the robot, reducing the user's confusion when faced with a massive number of components. The selection also takes into account the process data requirements of the process, avoiding the problem of selecting unsuitable components that cannot meet the requirements in the actual scene.
S400, constructing a robot model, constructing visual models according to the selected parts of the robot, and combining the visual models into the visual model of the robot, for the user to interact with the component data of the robot model.
A visual model of the corresponding part is obtained from the equipment database according to the selected part of the robot, or a visual model of the part is generated according to a preset visualization rule, and the visual models of the parts are combined to obtain the visual model of the robot. The visualization model here may be a 3D model file, such as one in STP or DWG format. However, the visual model need not be a 3D component graphic in a specific format; it may be a 2D graphic, or even a simplified graphic in the form of an icon, so long as the user can intuitively recognize the components on the interactive interface. These model files may be stored in the equipment database and queried and obtained by part model number.
The visual model is mainly presented on the interactive interface so that the user knows which components are available; the data of each component of the robot model are provided so that the user can conveniently select and view them, and the user is allowed to change or set parameters for each component. If the user feels that certain components or their parameters are unreasonable, or wishes to modify them, the components can be clicked very conveniently on the interactive interface, without repeated searching through command-line code.
S500, setting a technological process of the robot model, and configuring the state and the action flow of the robot model in the technological process according to the acquired process data of the technological process to form a robot technological package.
Configuring the state and the action flow of the robot model in the process comprises providing a graphical user interface on the interactive terminal, on which the user can configure, step by step according to the process data of the process, the flow that controls the robot terminal to switch states or execute actions; the robot terminal can then be controlled to execute the corresponding instructions according to these configurations so as to realize the functions of the process.
Alternatively, according to the process data of the technological process, a control program for controlling the robot terminal to switch states or execute actions is automatically generated in a form of graphic programming or code programming, and the control program can be executed on the robot terminal to control the robot terminal to complete the technological process;
and packaging the control program and the data of each part of the robot model as a process package.
For example, the handling process needs to realize the following actions and state changes: the robot is powered on and returns to zero; the mechanical arm translates from the initial point O to point A; the clamping jaw opens, descends, clamps, and ascends; the mechanical arm translates from point A to point B; the clamping jaw descends, opens, ascends, and closes; the mechanical arm translates from point B to point O and returns to zero; and the robot is powered off. These specific actions are performed on the A3-type mechanical arm and the C2-type clamping jaw determined through image analysis and identification. Based on this information, a control program is automatically generated that controls the A3-type mechanical arm and the C2-type clamping jaw to complete the handling process. The control program is packaged together with the component data of these two components as a process package for the handling process. After the user constructs the process package of the handling process on the interactive terminal, the interactive terminal is connected with a robot terminal that consists of an A3-type mechanical arm and a C2-type clamping jaw, or is at least compatible with these two types; the control program in the process package can then be loaded on the robot terminal and control it to complete the handling process.
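The automatic generation step can be sketched as a simple translation from the recognized action list to a textual program. The command names (`POWER_ON`, `HOME`, etc.) are placeholders invented for illustration, not a real robot instruction set:

```python
def generate_control_program(arm_model, actions):
    """Render a recognized action list as a minimal textual control
    program, bracketed by power-on/zeroing and zeroing/power-off."""
    lines = ["POWER_ON", f"HOME {arm_model}"]
    for comp, func, *args in actions:
        target = " ".join(str(a) for a in args)
        lines.append(f"{func.upper()} {comp} {target}".rstrip())
    lines += [f"HOME {arm_model}", "POWER_OFF"]
    return "\n".join(lines)

# Handling sequence for the A3 arm and C2 jaw (points A and B symbolic).
handling = [
    ("A3", "move_to", "A"),
    ("C2", "open"),
    ("C2", "descend"),
    ("C2", "clamp"),
    ("C2", "ascend"),
    ("A3", "move_to", "B"),
    ("C2", "descend"),
    ("C2", "open"),
    ("C2", "ascend"),
]
program = generate_control_program("A3", handling)
```

Packaging would then amount to bundling this generated program text with the component data records of the A3 arm and C2 jaw.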
A process package formed in this way hardly requires the user to master programming techniques: most functions are completed automatically, and the user only needs to adjust some parameters according to individual requirements, which reduces the user's workload and greatly improves the speed and efficiency of generating the process package. Because the program is mainly generated by the system, the possibility of bugs is reduced and code security is improved.
And S600, storing the constructed process package in the equipment database as a new process package or an upgrade of an existing process package, and storing the theme of the actual application scene, the key actions of the process, or the key components of the robot model as labels of the process package so that it can be searched when reused later. A stored process package can be read again from the equipment database and displayed on the interactive terminal, so that the user can conveniently re-adjust and confirm its parameters; after such adjustment and modification, an updated process package can be automatically regenerated and quickly loaded and executed on a new robot terminal.
The process package can be tagged with keywords, such as "handling process" or "A3C2", that are easy to remember and reflect its characteristics, which facilitates searching for or reusing the process package later. The name of the process package can also be regarded as a label. Components in the equipment database, such as the A3 mechanical arm or the C2 clamping jaw, can likewise be labeled with "handling process", making it convenient to find components used in similar processes when constructing a new handling process package later.
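Label-based retrieval can be as simple as ranking stored packages by tag overlap with the query. The package names and tags below are assumed examples:

```python
def search_packages(packages, query_tags):
    """Return package names whose tag set intersects the query tags,
    ranked by the number of matching tags (highest first)."""
    hits = []
    for name, tags in packages.items():
        overlap = len(set(tags) & set(query_tags))
        if overlap:
            hits.append((overlap, name))
    return [name for _, name in sorted(hits, reverse=True)]

# Hypothetical process packages stored in the equipment database.
packages = {
    "handling-v1": ["handling process", "A3", "C2"],
    "assembly-v2": ["screw assembling", "A5"],
}
found = search_packages(packages, ["handling process", "A3"])
```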
In this embodiment, the equipment database stores detailed data of the various components of the robot, including the model of each component and one or more of the following parameters: structural size, electrical interface, movement range, movement speed, maximum load, applicable functions, and labels; it also stores image information or image features of each component. The equipment database can be arranged on a network server or on the same terminal device as the interactive terminal, and the step of analyzing the process image information can be processed by a local processor on the interactive terminal or by a cloud server.
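A single component record in such a database might look as follows. This is only a sketch of one possible shape; every field name and value here is an assumption for illustration, matching the parameter list above:

```python
# Hypothetical equipment-database record for one component: the model
# plus optional structural, electrical, motion, and labeling parameters.
component_record = {
    "model": "A3",
    "type": "mechanical arm",
    "structural_size_mm": {"segments": [220, 170]},
    "electrical_interface": "24V / CAN",
    "movement_range_mm": 390,
    "movement_speed_mm_s": 500,
    "max_load_kg": 3.0,
    "applicable_functions": ["translate", "rotate"],
    "tags": ["handling process"],
    "image_features": [0.9, 0.1, 0.4, 0.8],
}
```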
An embodiment of a second aspect of the invention provides a robot process package construction system comprising a processor and a memory, the memory having stored therein computer-executable instructions which, when executed by the processor, implement a method as described in any of the above aspects.
The foregoing is merely illustrative of the present invention; the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included therein. Therefore, the protection scope of the invention is subject to the protection scope of the claims.