CN109909998B - Method and device for controlling movement of mechanical arm - Google Patents
Abstract
The embodiment of the invention provides a method and a device for controlling the movement of a mechanical arm. In the method, the control device controls the mechanical arm to move a preset distance in a preset movement direction; obtains target images of a target object continuously acquired by the camera while the mechanical arm moves, together with the position information of the mechanical arm at the moment each target image is acquired; performs feature extraction on the target images to obtain their image features; obtains the label corresponding to each target image according to a preset correspondence between image features and labels; and judges whether any label meeting a preset condition exists among the labels corresponding to the target images. If such a label exists, the control device determines the target position information of the mechanical arm at the moment the target image corresponding to that label was acquired, and controls the mechanical arm to move to the position corresponding to the target position information. The mechanical arm can therefore be moved to the preset position with only two controlled movements, which reduces the time spent controlling the movement of the mechanical arm.
Description
Technical Field
The invention relates to the field of mechanical arm control, and in particular to a method and a device for controlling the movement of a mechanical arm.
Background
At present, mechanical arms are used in a very wide range of application scenarios, for example automatic sorting of goods and automatic assembly of parts. In the process of using a mechanical arm, its movement needs to be controlled.
A typical process for controlling the motion of a mechanical arm is as follows: the control device obtains an image of a target object acquired by a camera mounted on the mechanical arm and inputs the image into a pre-trained convolutional neural network; the network calculates, from the position of the object in the image and the current position of the mechanical arm, the next action the arm should perform (for example, move left, or grab) and outputs it; the control device then makes the mechanical arm execute that action. The process repeats until the mechanical arm reaches the preset position.
It can be seen that this method of controlling the movement of the mechanical arm has a disadvantage: the next action is calculated from the current state of the mechanical arm, the arm is moved, the following action is calculated from the arm's new state, the arm is moved again, and so on. That is, the control device must control the mechanical arm many times before it reaches the preset position, and since every step requires both an action calculation and time for the arm to execute the action, the whole procedure is slow. For example, if calculating an action takes time t1, executing it takes time t2, and the arm reaches the preset position in n steps, controlling the motion takes n(t1 + t2) in total; since n is generally large, the process is time-consuming.
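The timing comparison above can be sketched numerically. The function names and the sample values of t1, t2 and n below are illustrative assumptions, not figures from the patent:

```python
def stepwise_time(n_steps: int, t_calc: float, t_move: float) -> float:
    """Total time when each of n steps needs one action calculation plus one move."""
    return n_steps * (t_calc + t_move)

def two_move_time(t_calc: float, t_move_out: float, t_move_back: float) -> float:
    """Total time for the proposed scheme: one long move out, one return move."""
    return t_calc + t_move_out + t_move_back

# With assumed t1 = 0.2 s per calculation, t2 = 0.5 s per move and n = 10 steps:
print(stepwise_time(10, 0.2, 0.5))   # about 7.0 s for the step-by-step scheme
print(two_move_time(0.2, 0.5, 0.5))  # about 1.2 s for the two-move scheme
```

The gap widens linearly with n, which is the patent's motivation for replacing n controlled moves with two.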
Disclosure of Invention
An object of the embodiments of the present invention is to provide a method and an apparatus for controlling a motion of a robot arm, so as to reduce a time for controlling the motion of the robot arm. The specific technical scheme is as follows:
a method of controlling motion of a robotic arm for use with a control device communicatively coupled to the robotic arm, the control device further communicatively coupled to a camera, the method comprising:
controlling the mechanical arm to move a preset distance in a preset movement direction, obtaining target images of the target object continuously acquired by the camera while the mechanical arm moves, and acquiring the position information of the mechanical arm when each target image is acquired;
performing feature extraction on the target images to obtain the image features of the target images, and obtaining the label corresponding to each target image according to a preset correspondence between image features and labels, wherein the labels identify the relative position relationship between the mechanical arm and the target object;
judging whether a label meeting a preset condition exists among the labels corresponding to the target images;
if a label meeting the preset condition exists, determining the target position information of the mechanical arm when the target image corresponding to that label was acquired;
and controlling the mechanical arm to move to the position corresponding to the target position information.
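The claimed steps can be sketched as a single control routine. Every function name below (move_arm, capture_image, arm_position, label_of) is a hypothetical stand-in for the hardware and model calls the patent does not specify:

```python
def coarse_then_return(direction, preset_distance, step, accepted_labels,
                       move_arm, capture_image, arm_position, label_of):
    """One long move collecting (label, position) pairs en route, then report the
    position whose image label meets the preset condition (the caller moves back)."""
    samples = []
    moved = 0.0
    while moved < preset_distance:              # one long move in the preset direction
        move_arm(direction, step)
        moved += step
        image = capture_image()                 # target image acquired during the move
        samples.append((label_of(image), arm_position()))  # label each target image
    for label, position in samples:             # look for a qualifying label
        if label in accepted_labels:
            return position                     # target position information
    return None                                 # no qualifying label: repeat the long move
```

Returning `None` corresponds to the optional claim below, in which the long move is repeated when no label meets the preset condition.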
Optionally, the process of obtaining the preset movement direction includes:
acquiring a current image of the target object acquired by the camera at the current moment;
extracting features of the current image to obtain image features of the current image, and obtaining a label corresponding to the current image according to a preset corresponding relation between the image features and the label;
and determining a preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
Optionally, the method further includes:
and if the label meeting the preset condition does not exist, returning to execute the step of controlling the mechanical arm to move for the preset distance according to the preset movement direction.
Optionally, the step of extracting features of the target image to obtain image features of the target image, and obtaining a label corresponding to each target image according to a preset correspondence between the image features and the labels includes:
inputting the target images into a pre-trained target convolutional neural network, so that the target convolutional neural network performs feature extraction on the target images to obtain their image features and obtains the label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of image samples and labels;
wherein the target convolutional neural network is a convolutional neural network obtained by training a pre-constructed initial convolutional neural network on the image samples and their corresponding labels, and the target convolutional neural network contains the correspondence between the image features of the image samples and the labels.
Optionally, the training mode of the target convolutional neural network includes:
constructing an initial convolutional neural network;
placing the target object at a preset position, changing the motion direction of the mechanical arm, and acquiring a plurality of image samples of the target object, which are continuously acquired by the camera when the mechanical arm moves according to each motion direction;
determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and a preset label generation rule;
inputting the image sample and the label corresponding to the image sample into the initial convolutional neural network for training;
and finishing training when the value of the objective function of the initial convolutional neural network no longer changes, or when the accuracy of the output results for the image samples reaches a preset accuracy, to obtain the target convolutional neural network containing the correspondence between the image features of the image samples and the labels.
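The label-generation rule used in this training procedure is left unspecified by the patent. As one plausible sketch, the relative position of the arm end and the target object can be bucketed by distance; the label names and the thresholds (in centimetres) are assumptions:

```python
def label_for_sample(arm_xyz, target_xyz, near=2.0, mid=10.0):
    """Map the relative position of arm end and target to a discrete label.
    Thresholds and label names are illustrative, not taken from the patent."""
    dist = sum((a - t) ** 2 for a, t in zip(arm_xyz, target_xyz)) ** 0.5
    if dist <= near:
        return "graspable"      # end label: distance within the preset range
    return "near" if dist <= mid else "far"
```

Each image sample would then be paired with `label_for_sample(arm_position, target_position)` before being fed to the initial convolutional neural network.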
Optionally, after the step of controlling the mechanical arm to move to the position corresponding to the target position information, the method further includes:
judging whether the existing label is the same as a preset end label;
if so, controlling the mechanical arm to grab the target object.
optionally, when the robot arm successfully grasps the target object, the method further includes:
and outputting the information of successful grabbing.
A device for controlling motion of a robotic arm for use with a control apparatus communicatively coupled to the robotic arm, the control apparatus further communicatively coupled to a camera, the device comprising:
the mechanical arm moving module is used for controlling the mechanical arm to move for a preset distance according to a preset moving direction, acquiring target images of target objects continuously acquired by the camera when the mechanical arm moves, and acquiring position information of the mechanical arm when each target image is acquired;
the label determining module is used for extracting the features of the target images to obtain the image features of the target images, and obtaining labels corresponding to the target images according to the corresponding relation between the preset image features and the labels, wherein the labels are used for identifying the relative position relation between the mechanical arm and the target object;
the first judging module is used for judging whether a label meeting a preset condition exists in labels corresponding to each target image, and if so, the target position information determining module is triggered;
the target position information determining module is used for determining the target position information of the mechanical arm when the target image corresponding to the existing label is acquired;
and the control module is used for controlling the mechanical arm to move to the position corresponding to the target position information.
Optionally, the apparatus further includes an obtaining module, where the obtaining module is configured to obtain the preset movement direction, and the obtaining module includes:
the current image acquisition unit is used for acquiring a current image of the target object acquired by the camera at the current moment;
the label determining unit is used for extracting the features of the current image to obtain the image features of the current image, and obtaining a label corresponding to the current image according to the corresponding relation between the preset image features and the label;
and the preset movement direction determining unit is used for determining the preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
Optionally, the apparatus further comprises:
and the returning module is used for triggering the mechanical arm moving module when judging that no label meeting preset conditions exists in the labels corresponding to the target images.
Optionally, the tag determination module is specifically configured to:
inputting the target images into a target convolutional neural network trained in advance by a convolutional neural network training module, so that the target convolutional neural network performs feature extraction on the target images to obtain their image features and obtains the label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of image samples and labels;
wherein the target convolutional neural network is a convolutional neural network obtained by training a pre-constructed initial convolutional neural network on the image samples and their corresponding labels, and the target convolutional neural network contains the correspondence between the image features of the image samples and the labels.
Optionally, the convolutional neural network training module includes:
the model building unit is used for building an initial convolutional neural network;
the image sample acquisition unit is used for placing the target object at a preset position, changing the motion direction of the mechanical arm and acquiring a plurality of image samples of the target object continuously acquired by the camera when the mechanical arm moves according to each motion direction;
the label generating unit is used for determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and according to a preset label generating rule;
the model training unit is used for inputting the image sample and the corresponding label into the initial convolutional neural network for training;
and the training completion unit is used for finishing training when the value of the objective function of the initial convolutional neural network no longer changes, or when the accuracy of the output results for the image samples reaches a preset accuracy, to obtain the target convolutional neural network containing the correspondence between the image features of the image samples and the labels.
Optionally, the apparatus further comprises:
the second judgment module is used for judging, after the mechanical arm is controlled to move to the position corresponding to the target position information, whether the existing label is the same as a preset end label, and if so, triggering the grabbing module;
the grabbing module is used for controlling the mechanical arm to grab the target object.
Optionally, the apparatus further comprises:
and the success information output module is used for outputting the grabbing success information when the mechanical arm successfully grabs the target object.
A control device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing any of the above method steps when executing a program stored in the memory.
A computer-readable storage medium in which a computer program is stored, the computer program, when executed by a processor, implementing any of the above method steps.
In the scheme provided by the embodiment of the invention, the control device controls the mechanical arm to move a preset distance in a preset movement direction; obtains target images of the target object continuously acquired by the camera while the mechanical arm moves, together with the position information of the mechanical arm when each target image is acquired; performs feature extraction on the target images to obtain their image features; obtains the label corresponding to each target image according to a preset correspondence between image features and labels; and judges whether a label meeting a preset condition exists among those labels. If so, it determines the target position information of the mechanical arm at the moment the target image corresponding to that label was acquired and controls the mechanical arm to move to the position corresponding to the target position information. In other words, when the mechanical arm is controlled to move, it is first made to travel a long distance in the preset movement direction while target images are continuously collected; the optimal position passed during that long move is determined from the target images, and the arm is then controlled to return to that optimal position.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of a method for controlling the movement of a robotic arm according to an embodiment of the present invention;
fig. 2 is a flowchart of acquiring a preset movement direction according to an embodiment of the present invention;
FIG. 3 is a flow chart of a training method of a target convolutional neural network according to an embodiment of the present invention;
FIG. 4 is a second flowchart of a method for controlling the movement of a robotic arm according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus for controlling the motion of a robot according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a control device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art on the basis of the embodiments given herein, without creative effort, fall within the protection scope of the present invention.
In order to reduce the time for controlling the operation of the mechanical arm, embodiments of the present invention provide a method, an apparatus, a control device, and a computer-readable storage medium for controlling the motion of the mechanical arm.
First, a method for controlling the movement of a robot arm according to an embodiment of the present invention will be described.
It should be noted that, the method for controlling the motion of the robot arm provided by the embodiment of the present invention may be applied to any control device that establishes a communication connection with the robot arm, and it is understood that data and commands may be sent between the control device and the robot arm. The control device may be an electronic device such as a computer, and is not limited herein.
The control device is further communicatively connected to a camera, which is generally mounted on the mechanical arm for capturing an image of the target object, but of course, the camera may be mounted at any position where the image of the target object can be captured, and is not limited herein.
As shown in fig. 1, a method for controlling the motion of a robot arm is applied to a control device in communication connection with the robot arm, the control device is also in communication connection with a camera, and the method comprises the following steps:
s101: and controlling the mechanical arm to move a preset distance according to a preset movement direction, acquiring a target image of a target object continuously acquired by the camera when the mechanical arm moves, and acquiring position information of the mechanical arm when each target image is acquired.
It is understood that the camera photographs the target object. Depending on the actual situation, the target object may be an object to be grabbed by the mechanical arm, or an object or person to be tracked by the mechanical arm.
In order to grab the target object or track the target object, the direction in which the target object is located may be predetermined, and the direction may be taken as the preset movement direction. In order to reduce the time for controlling the movement of the robot arm, the control device needs to control the robot arm to move a preset distance according to the preset movement direction.
The preset distance may be determined according to the actual motion scene of the mechanical arm. For example, suppose the target object is placed on a workbench for the mechanical arm to grab. If the resting position of the end of the mechanical arm is far from the target object, the preset distance may be set large, for example 20 cm, 25 cm or 30 cm. If it is close to the target object, the preset distance may be set small, for example 10 cm, 8 cm or 5 cm; no specific limit is imposed here.
For example, if the preset moving direction is to the right and the preset distance is 10cm, the control device may control the robot arm to move to the right by 10 cm.
Because the target object is positioned in the preset movement direction, when the mechanical arm moves, a target image of the target object continuously acquired by the camera can be acquired.
The camera may continuously acquire target images in various ways. For example, it may acquire 15 to 30 frames per second; alternatively, the distance the arm moves between frames may be determined by the arm's movement speed. For example, if the mechanical arm moves at 15 cm/s, the camera may acquire one frame for every 1 cm of movement.
It can be understood that when acquiring the target image acquired by the camera, the current position of the mechanical arm is known, and therefore, when the mechanical arm moves, the position information of the mechanical arm when acquiring each target image can be acquired.
For example, the data acquired during the movement of the mechanical arm may be represented as {(I_1, S_1), (I_2, S_2), ..., (I_n, S_n)}, where I_n is the nth target picture and S_n is the state information of the mechanical arm itself, such as its position and attitude, at the moment the nth target picture was acquired. Because the attitude of the mechanical arm remains essentially unchanged while it moves, S_n mainly represents the arm's position information in this application.
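As an illustration of this record, the following sketch pairs each frame identifier with the arm's position, assuming (as in the example above) a speed of 15 cm/s and one frame per centimetre of travel:

```python
speed_cm_s = 15.0                 # assumed arm speed, as in the example above
frames_per_cm = 1                 # one frame per centimetre of travel
frame_rate = speed_cm_s * frames_per_cm   # resulting capture rate in frames/second

# {(I_1, S_1), ..., (I_10, S_10)}: S_i here carries only position information,
# since the patent notes the arm's attitude stays essentially unchanged.
samples = [(f"I_{i}", {"position_cm": float(i)}) for i in range(1, 11)]
```

Each `(I_i, S_i)` pair is what the label lookup and the later return move operate on.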
S102: and extracting the features of the target images to obtain the image features of the target images, and obtaining labels corresponding to the target images according to the corresponding relation between the preset image features and the labels, wherein the labels are used for identifying the relative position relation between the mechanical arm and the target object.
In order to obtain the labels corresponding to the target images, a large number of image samples are obtained in advance, wherein the image samples are images of the target objects obtained in advance, the target objects are located at various different positions in the image samples, and the shapes of the target objects in the image samples can be different. For example, the target object is a cup, the cup may be located at the center, the edge, etc. of the image sample, and the cup may be upright, upside down, horizontal, oblique, etc. in the image sample.
It can be understood that, when the image samples are obtained, the current position of the mechanical arm and the position of the target object are known, so that the control device can determine the label corresponding to each image sample according to the current position of the mechanical arm and the position of the target object, that is, the relative position relationship between the mechanical arm and the target object, and thus, the label can be used for identifying the relative position relationship between the mechanical arm and the target object.
Since the image characteristics of each image sample are different, after the label corresponding to each image sample is determined, the correspondence between the image characteristics of the image sample and the label can be determined, and thus, the correspondence between the image characteristics of the image sample and the label is preset.
After the control device obtains the target image, feature extraction can be carried out on the target image to obtain the image feature of the target image, and then the label corresponding to the target image is obtained according to the preset corresponding relation between the image feature and the label and the image feature of the target image.
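A minimal sketch of this lookup step, assuming a nearest-neighbour match against a stored feature-to-label table; a real implementation would instead take the label from the trained convolutional neural network's forward pass, and the feature vectors and label names below are invented for illustration:

```python
def nearest_label(features, feature_label_pairs):
    """Return the label whose stored feature vector is closest to `features`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(feature_label_pairs, key=lambda pair: sq_dist(pair[0], features))[1]

# Hypothetical preset correspondence between image features and labels:
table = [((0.0, 0.0), "far"), ((0.5, 0.5), "near"), ((1.0, 1.0), "graspable")]
print(nearest_label((0.9, 1.1), table))   # graspable
```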
For clarity and layout, the specific generation manner of the label will be described as an example.
S103: and judging whether the labels meeting preset conditions exist in the labels corresponding to the target images, if so, executing the step S104, and if not, not performing any processing.
After the control device obtains the tags corresponding to the target images, it needs to determine whether the tags meeting the preset conditions exist in the tags corresponding to the target images.
In general, the relative position relationship identified by a label satisfying the preset condition may be that the distance between the end of the mechanical arm and the target object is within a preset range. If the target object is an object to be grabbed by the mechanical arm, the object can then be grabbed; if the target object is an object or person to be tracked, the mechanical arm can then perform a preset action while avoiding being discovered. In other words, the position occupied by the mechanical arm when it acquired the image corresponding to such a label is an optimal state, which may be the state most suitable for grabbing the object, for performing the preset action, and so on. The preset range may be determined according to factors such as the size of the target object and is not specifically limited here.
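Treating "meets the preset condition" as membership in a set of acceptable labels is an assumption consistent with the distance-within-range description above; under that assumption the check in step S103 can be sketched as:

```python
def first_qualifying(labels, accepted=frozenset({"graspable"})):
    """Index of the first label meeting the preset condition, or None if absent.
    The label name "graspable" is a hypothetical end label, not from the patent."""
    for i, label in enumerate(labels):
        if label in accepted:
            return i
    return None
```

The returned index identifies which target image (and hence which recorded arm position) step S104 should use.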
If there is a tag satisfying the preset condition, the control apparatus may perform step S104 at this time.
S104: and determining the target position information of the mechanical arm when the target image corresponding to the existing label is acquired.
When determining that a label meeting a preset condition exists, the control device indicates that the mechanical arm is in an optimal state at a position when the mechanical arm moves to acquire an image corresponding to the label, for example: the optimal state may be the most suitable for grasping an object, or the most suitable for swinging out a preset action, etc., and therefore, the optimal position corresponding to the optimal state needs to be determined.
When the target image is acquired, the position information of the mechanical arm when the target image is acquired at the same time, so that the target position information of the mechanical arm when the target image corresponding to the existing label is acquired can be determined.
S105: and controlling the mechanical arm to move to a position corresponding to the target position information.
After the target position information is determined, the control device may control the mechanical arm to move to the position corresponding to the target position information, i.e., the optimal position, so that subsequent steps, such as grabbing the object or performing the preset action, can be carried out.
It is understood that over steps S101 to S105 the mechanical arm performs one long-distance movement and one return movement. That is, to bring the mechanical arm to the position corresponding to the target position information, the control device controls only two movements in total; compared with schemes that must control the arm's movement many times, the time spent controlling the mechanical arm is greatly reduced.
It can be seen that in the scheme provided by the embodiment of the present invention, the control device controls the mechanical arm to move a preset distance in a preset movement direction; obtains target images of the target object continuously acquired by the camera while the mechanical arm moves, together with the position information of the mechanical arm when each target image is acquired; performs feature extraction on the target images to obtain their image features; obtains the label corresponding to each target image according to a preset correspondence between image features and labels; and judges whether a label meeting a preset condition exists among those labels. If so, it determines the target position information of the mechanical arm when the target image corresponding to that label was acquired and controls the mechanical arm to move to the position corresponding to the target position information. When the mechanical arm is controlled to move, it is thus first made to travel a long distance in the preset movement direction while target images are continuously collected; the optimal position passed during the long move is determined from the target images, and the arm is then controlled to return to that optimal position.
The method shown in fig. 1 is described in detail below with a specific embodiment:
For example, assume the preset movement direction is leftward, the preset movement distance is 10 cm, and one image is acquired for each 1 cm the mechanical arm moves.
The control device controls the mechanical arm to move 10 cm to the left, obtains the target images of the target object continuously acquired by the camera during the movement together with the position information of the mechanical arm when each target image was acquired, and thus obtains {(I_1, S_1), (I_2, S_2), ..., (I_10, S_10)}, where I_1 is the 1st target picture, acquired by the camera when the mechanical arm had moved 1 cm, and S_1 is the position information of the mechanical arm when the 1st target picture was acquired;
extracting the features of the target images (I _1-I _10) to obtain the image features of the target images (I _1-I _10), and obtaining labels corresponding to the target images according to the corresponding relation between the preset image features and the labels;
judging whether a label meeting a preset condition exists in labels corresponding to the target images;
assuming that the target image corresponding to a label meeting the preset condition is the 7th target image, determining the target position information of the mechanical arm when the 7th target image is acquired;
and controlling the mechanical arm to move to the position corresponding to the target position information, namely 7 cm to the left of the initial position of the mechanical arm.
Thus, the control device controls the mechanical arm to move 10 cm to the left and then 3 cm to the right, back to the point 7 cm to the left of the initial position; in total, the control device controls the mechanical arm to move only twice.
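The two-move flow of this example can be sketched in code; every name below (the `label_of` stand-in, the sample data, the signed offsets) is a hypothetical illustration, not part of the patent or any real robot API.

```python
# Illustrative sketch of the worked example above: the arm makes one
# long move, (image, position) pairs are collected along the way, and
# the position of the first image whose label meets the preset
# condition is returned so the arm can move back to it.

def label_of(image):
    # Stand-in for feature extraction plus the preset
    # feature-to-label correspondence (e.g. a trained CNN).
    return image["label"]

def find_target_position(samples, meets_condition):
    """samples: (image, position) pairs collected during the long move."""
    for image, position in samples:
        if meets_condition(label_of(image)):
            return position
    return None  # no label met the condition; repeat the long move

# Ten images at 1 cm steps while moving 10 cm left; positions are the
# signed offsets from the start (left = negative). The 7th image is
# assumed to carry the label that meets the preset condition.
samples = [({"label": 7 if i == 7 else 0}, -i) for i in range(1, 11)]
target = find_target_position(samples, lambda lab: lab == 7)
```

With an image acquired every 1 cm over a 10 cm leftward move, the sketch reproduces the example's result: `target` is -7, so the arm need only return 3 cm to reach the point 7 cm left of the start.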
Referring to fig. 2, in an implementation manner of the embodiment of the present invention, the process of acquiring the preset movement direction may include:
S201: and acquiring a current image of the target object acquired by the camera at the current moment.
It can be understood that, after the camera acquires the current image of the target object at the current moment, the current image is sent to the control device, and the control device obtains the current image. It should be noted that the target object may refer to an object to be grasped by the mechanical arm, or may refer to an object or a person to be tracked by the mechanical arm, which is not specifically limited herein.
In an embodiment, the camera may acquire an image of the target object in real time, and send the acquired image to the control device, so that the image of the target object received by the control device at the current time is the current image. In another embodiment, the camera may acquire an image of the target object at the current moment when receiving a shooting instruction of the control device, and send the image to the control device, so that the control device may also receive the image, that is, the current image. Of course, the camera may also acquire the image of the target object in real time, and when receiving the acquisition instruction of the control device, the camera may send the current image of the target object acquired at the current time to the control device, which is also reasonable.
S202: and extracting the features of the current image to obtain the image features of the current image, and obtaining the label corresponding to the current image according to the corresponding relation between the preset image features and the label.
In order to obtain a label corresponding to a current image, a large number of image samples are obtained in advance, wherein the image samples are images of a target object obtained in advance, the target object is located at various different positions in the image samples, and the shapes of the target object in the image samples can be different. For example, the target object is a cup, the cup may be located at the center, the edge, etc. of the image sample, and the cup may be upright, upside down, horizontal, oblique, etc. in the image sample.
It can be understood that, when the image samples are obtained, the current position of the mechanical arm and the position of the target object are known, so that the control device can determine the label corresponding to each image sample according to the current position of the mechanical arm and the position of the target object, that is, the relative position relationship between the mechanical arm and the target object, and thus, the label can be used for identifying the relative position relationship between the mechanical arm and the target object.
Since the image characteristics of each image sample are different, after the label corresponding to each image sample is determined, the correspondence between the image characteristics of the image sample and the label can be determined, and thus, the correspondence between the image characteristics of the image sample and the label is preset.
After the control device obtains the current image, feature extraction can be carried out on the current image to obtain the image feature of the current image, and then the label corresponding to the current image is obtained according to the preset corresponding relation between the image feature and the label and the image feature of the current image.
There are various ways of performing the above feature extraction on the current image to obtain the image feature of the current image and then obtaining the label corresponding to the current image according to the preset correspondence between image features and labels and the image feature of the current image, including but not limited to the following:
the first mode is as follows: structural pattern recognition
The current image is recognized to achieve feature extraction, obtaining the image features of the current image; the matching degree between the image features of the current image and each pattern is then evaluated through a preset matching-degree calculation mode, wherein each pattern includes one correspondence between image features and a label, so that the label corresponding to the current image is obtained according to the matched pattern.
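A minimal sketch of this pattern-matching mode, assuming negative squared distance as the preset matching-degree calculation (the text leaves the actual calculation open); the feature vectors and patterns are illustrative.

```python
# Structural-pattern-matching sketch: score the image's feature vector
# against each stored pattern and return the label of the best match.
# The score function (negative squared distance) is an assumption.

def match_label(features, patterns):
    """patterns: list of (feature_vector, label) correspondences."""
    def score(pattern_features):
        return -sum((a - b) ** 2 for a, b in zip(features, pattern_features))
    _, best_label = max(patterns, key=lambda p: score(p[0]))
    return best_label

# Three illustrative patterns, each holding one feature-to-label
# correspondence.
patterns = [([0.0, 0.0], 0), ([1.0, 0.0], 1), ([0.0, 1.0], 2)]
```

For instance, an image whose features fall close to `[1.0, 0.0]` is assigned label 1.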
The second mode is as follows: convolutional neural network
The current image is input into a pre-trained target convolutional neural network, so that the target convolutional neural network performs feature extraction on the current image to obtain the image features of the current image, and obtains the label corresponding to the current image according to the correspondence, contained in the target convolutional neural network, between the image features of the image samples and the labels.
Specifically, the target convolutional neural network is: and training the pre-constructed initial convolutional neural network based on the image sample and the corresponding label to obtain the convolutional neural network. The image sample is an image of a target object acquired in advance, the target object is located at various different positions in the image sample, and the forms of the target object in the image sample may be different. For example, the target object is a cup, the cup may be located at the center, the edge, etc. of the image sample, and the cup may be upright, upside down, horizontal, oblique, etc. in the image sample.
The label is used for identifying the relative position relationship between the mechanical arm and the target object. It can be understood that, when the image samples are obtained, the current position of the mechanical arm and the position of the target object are known, so that the control device can determine the label corresponding to each image sample according to the current position of the mechanical arm and the position of the target object, that is, the relative position relationship between the mechanical arm and the target object, and thus, the label can be used for identifying the relative position relationship between the mechanical arm and the target object.
Therefore, the trained target convolutional neural network contains the corresponding relation between the image characteristics of the image sample and the label, and further, the control device inputs the current image into the target convolutional neural network, so that the target convolutional neural network can obtain the label corresponding to the current image according to the corresponding relation between the image characteristics of the image sample and the label contained in the target convolutional neural network and the image characteristics of the current image.
For clarity of the scheme and clear layout, a specific training mode of the target convolutional neural network will be described later by way of example.
S203: and determining the preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
In order to determine the movement direction of the mechanical arm, after determining the label corresponding to the current image, the control device may determine a preset movement direction corresponding to the label corresponding to the current image according to a preset correspondence between the label and the movement direction.
Since the labels identify the relative position relationship between the mechanical arm and the target object, the control device may pre-establish a correspondence between labels and movement directions, each label corresponding to one movement direction. For example, assuming that there are 27 labels, numbered 0 to 26, and label 1 identifies the following relative position relationship: the target object is located 30 degrees down-left of the end of the mechanical arm, then the correspondence between label 1 and the movement direction is: label 1 corresponds to the 30-degrees-down-left direction. Further, when the label is 1, the target movement direction is 30 degrees down-left.
Therefore, the label corresponding to the current image is obtained through a characteristic extraction mode, and then the preset movement direction is determined according to the corresponding relation between the preset label and the movement direction.
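The correspondence between labels and movement directions can be held as a simple lookup table; the sketch below is hypothetical, with the direction strings and the 27-label numbering taken only from the examples in the text.

```python
# Sketch of the preset label-to-direction correspondence used in S203.
# Direction names are illustrative; the text's example maps label 1 to
# "30 degrees down-left". In the described embodiment there are 27
# labels (0..26), one movement direction per label.

LABEL_TO_DIRECTION = {
    0: "end label: within grasping range, do not move",
    1: "down-left 30 degrees",
    # ... entries for labels 2..26 would follow in a full table
}

def preset_direction(label):
    return LABEL_TO_DIRECTION[label]
```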
As an implementation manner of the embodiment of the present invention, when it is determined in step S103 in fig. 1 that there is no tag that satisfies the preset condition, the process returns to step S101.
It can be understood that, if no label meeting the preset condition exists, it indicates that during the movement the mechanical arm did not pass through a position in the optimal state, that is, no acquired image corresponds to such a label. At this time, the control device still needs to find the optimal position corresponding to the optimal state of the mechanical arm: the mechanical arm may be continuously controlled to move the preset distance in the preset movement direction, and the process of controlling the movement of the mechanical arm is repeatedly executed; it is also possible to re-acquire the preset movement direction, that is, to perform S201-S203.
Therefore, when the mechanical arm is controlled to move for a long distance and the optimal position corresponding to the optimal state of the mechanical arm is not found, the mechanical arm is continuously controlled to move for a long distance until the optimal position corresponding to the optimal state is found, and the mechanical arm is controlled to move to the found optimal position.
The above-mentioned feature extraction is performed on the target image to obtain the image features of the target image, and various ways of obtaining the label corresponding to each target image according to the preset corresponding relationship between the image features and the labels include, but are not limited to, the following:
the first mode is as follows: structural pattern recognition
The target image is recognized to achieve feature extraction, obtaining the image features of the target image; the matching degree between the image features of the target image and each pattern is then evaluated through a preset matching-degree calculation mode, wherein each pattern includes one correspondence between image features and a label, so that the label corresponding to the target image is obtained according to the matched pattern.
The second mode is as follows: convolutional neural network
The target images are input into the pre-trained target convolutional neural network, so that the target convolutional neural network performs feature extraction on the target images to obtain the image features of the target images, and obtains the label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of the image samples and the labels.
Specifically, the target convolutional neural network is: and training the pre-constructed initial convolutional neural network based on the image sample and the corresponding label to obtain the convolutional neural network. The image sample is an image of a target object acquired in advance, the target object is located at various different positions in the image sample, and the forms of the target object in the image sample may be different. For example, the target object is a cup, the cup may be located at the center, the edge, etc. of the image sample, and the cup may be upright, upside down, horizontal, oblique, etc. in the image sample.
The label is used for identifying the relative position relationship between the mechanical arm and the target object. It can be understood that, when the image samples are obtained, the current position of the mechanical arm and the position of the target object are known, so that the control device can determine the label corresponding to each image sample according to the current position of the mechanical arm and the position of the target object, that is, the relative position relationship between the mechanical arm and the target object, and thus, the label can be used for identifying the relative position relationship between the mechanical arm and the target object.
Therefore, the trained target convolutional neural network contains the corresponding relation between the image characteristics of the image sample and the label, and further, the control device inputs the target image into the target convolutional neural network, so that the target convolutional neural network can obtain the label corresponding to the target image according to the corresponding relation between the image characteristics of the image sample and the label contained in the target convolutional neural network and the image characteristics of the target image.
For clarity of the scheme and clear layout, a specific training mode of the target convolutional neural network is described below by way of example.
As an implementation manner of the embodiment of the present invention, as shown in fig. 3, the above training manner of the target convolutional neural network may include the following steps:
S301: and constructing an initial convolutional neural network.
It can be understood that the control device first needs to construct an initial convolutional neural network and then train it to obtain the target convolutional neural network. In one embodiment, the Caffe tool may be used to construct an initial convolutional neural network that includes a plurality of convolutional layers.
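As one possible illustration of such a construction, a Caffe prototxt describing an initial network with a plurality of convolutional layers might look roughly as follows; the layer names, sizes, and the 27-way output are assumptions made for this sketch, not values taken from the patent.

```protobuf
# Hypothetical minimal net definition; all dimensions are illustrative.
name: "arm_label_net"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 32 kernel_size: 3 stride: 2 }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2"
  convolution_param { num_output: 64 kernel_size: 3 stride: 2 }
}
layer {
  name: "fc_labels"
  type: "InnerProduct"
  bottom: "conv2"
  top: "fc_labels"
  inner_product_param { num_output: 27 }  # one output per label, assuming 27 labels
}
```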
S302: the target object is placed at a preset position, the motion direction of the mechanical arm is changed, and a plurality of image samples of the target object continuously collected when the camera moves in each motion direction are obtained.
The image samples are images of the target object acquired by the camera, and generally, the target object is located at various different positions in each image sample, and the form of the target object in the image sample may also be different. In this way, the image sample can represent the characteristics of the target object in various forms, and the initial convolutional neural network is convenient to train subsequently. For example, the target object is a cup, the cup may be located at the center, the edge, etc. of the image sample, and the cup may be upright, upside down, horizontal, oblique, etc. in the image sample. Conditions such as light may also be different when acquiring an image sample.
When the plurality of image samples are acquired, the target object may be placed at a preset position, and then the moving direction of the mechanical arm is changed, for example: the directions of movement are forward, left and right. Therefore, the camera arranged on the mechanical arm collects the image samples of the target object continuously collected when the mechanical arm moves along each motion direction. For example, the target object may be placed on a platform such as a console, and then the robot arm is controlled to change the movement direction, so that a plurality of image samples in each direction can be obtained.
Since the more image samples are obtained per unit time, the more accurate the trained target convolutional neural network is, image samples of the target object are continuously collected in each motion direction when the image samples are obtained.
S303: and determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and according to a preset label generation rule.
It can be understood that when each image sample is obtained, the current position of the mechanical arm and the position of the target object are known, so that the control device can determine the label corresponding to each image sample according to the current position of the mechanical arm and the position of the target object and according to a preset label generation rule.
Specifically, in one embodiment, the position information of the target object may be represented as (x1, y1, z1), the position information of the robotic arm may be represented as (x2, y2, z2), and the way to determine the corresponding label for each image sample may be:
and determining a label corresponding to each image sample according to a preset label generation rule according to the size relationship between x2 in the position information of the mechanical arm and x1 in the position information of the target object, the size relationship between y2 and y1 and the size relationship between z2 and z1 when each image sample is acquired.
In general, (x1, y1, z1) and (x2, y2, z2) may be the coordinates of the center of the target object and the end of the robot arm, respectively, in the environment coordinate system. The environment coordinate system may be a preset three-dimensional coordinate system as long as the positions of the target object and the robot arm can be represented, and is not particularly limited herein.
That is, for each image sample, the control device may determine the label corresponding to the image sample according to the magnitude relationship of three coordinate values among the coordinates of the end of the robot arm and the coordinates of the center of the target object at the present time. It can be understood that, because the coordinates of the end of the mechanical arm and the coordinates of the center of the target object represent the positions of the mechanical arm and the target object, the label generated according to the size relationship between the two identifies the relative position relationship between the mechanical arm and the target object.
S304: and inputting the image sample and the corresponding label into an initial convolutional neural network for training.
After the label corresponding to each image sample is determined, the control device may input the image sample and the label corresponding thereto into the initial convolutional neural network for training. Specifically, the initial convolutional neural network predicts the label corresponding to the image sample according to the image feature of the image sample, and for clarity of description, the label predicted by the initial convolutional neural network according to the image feature of the image sample is referred to as a prediction label in this step, and the label corresponding to the image sample determined in the above step S303 is referred to as a true label.
After the initial convolutional neural network obtains the prediction label of the image sample, the prediction label is compared with the real label of the image sample, the difference value of the two is calculated through a predefined target function, and the parameters of the initial convolutional neural network are adjusted through a back propagation method according to the difference value. In the training process, all image samples can be circularly traversed, and the parameters of the initial convolutional neural network are continuously adjusted.
The specific implementation manner of the back propagation method may adopt any back propagation manner in the related art, and is not specifically limited and described herein. The manner of defining the objective function and the specific expression of the objective function may be set according to factors such as capture precision, and are not specifically limited herein.
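The predict/compare/back-propagate cycle of steps S304-S305 can be sketched as below; a single-parameter linear model stands in for the convolutional network purely so the loop structure (prediction label vs. real label, objective function, parameter adjustment, cycling over all samples) stays visible. This is an illustration under that simplification, not the patent's network.

```python
# Illustrative training cycle: predict, compare with the real label
# through an objective function (squared error here), and adjust the
# parameter in the direction that reduces the difference.

def train(samples, lr=0.1, epochs=200):
    w = 0.0                             # single stand-in parameter
    for _ in range(epochs):
        for x, true_label in samples:   # cycle through all samples
            pred = w * x                # "prediction label"
            diff = pred - true_label    # difference via objective function
            w -= lr * diff * x          # gradient step ("back propagation")
    return w

# Samples whose true relation is label = 2 * x; training converges
# toward w = 2.0.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
```

When the objective value stops changing (here, when `w` stops moving), training ends, matching the stopping criterion of S305.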
S305: and when the value of the target function of the initial convolutional neural network is not changed any more or the accuracy of the output result corresponding to the image sample reaches a preset accuracy, finishing training to obtain the target convolutional neural network containing the corresponding relation between the image characteristics of the image sample and the label.
When the value of the target function of the initial convolutional neural network no longer changes, or the accuracy of the output result corresponding to the image samples reaches a preset accuracy, it indicates that the initial convolutional neural network can already be applied to most of the image samples and obtain an accurate result, so training can be stopped, the parameters of the initial convolutional neural network are no longer adjusted, and the target convolutional neural network is thus obtained.
The preset accuracy may be determined according to the accuracy required for capturing, and may be, for example, 85%, 90%, 95%, and the like, which is not specifically limited herein.
Therefore, the initial convolutional neural network is trained through the training mode, a target convolutional neural network containing the corresponding relation between the image characteristics of the image sample and the label can be obtained, the label corresponding to the image can be obtained through the target convolutional neural network, and then the motion direction of the mechanical arm is determined.
Referring to fig. 4, after step S105 in fig. 1, a method for controlling the movement of a robot arm according to an embodiment of the present invention may further include:
S106: and judging whether the existing label is the same as a preset end label or not, and if so, executing step S107.
After the mechanical arm is controlled to move to the position corresponding to the target position information, the mechanical arm has returned to the optimal position corresponding to the optimal state reached during the long-distance movement. For grabbing an object, however, this optimal position is not necessarily suitable for grabbing; therefore, in order to determine whether the optimal position is suitable for grabbing, the control device needs to determine whether the existing label is the same as the preset end label.
In general, the relative position relationship, identified by the preset end label, between the mechanical arm and the target object is as follows: the distance between the end of the mechanical arm and the target object is within a range suitable for grabbing, so the mechanical arm can grab the object at this moment. The suitable grabbing range may be determined according to factors such as the size of the target object, and is not specifically limited herein.
If the existing label is the same as the preset end label, indicating that grabbing is suitable at this time, the control device may then execute step S107.
S107: and controlling the mechanical arm to grab the target object.
If the existing label is the same as the preset finishing label, the mechanical arm can perform grabbing action at the moment, so that the control equipment can control the mechanical arm to grab the target object at the moment, and the grabbing of the target object is completed.
Therefore, under the condition that the existing label is the same as the preset ending label, the mechanical arm is controlled to grab the target object.
If the existing tag is different from the preset end tag, it indicates that the existing tag is not suitable for grabbing, and the robot arm needs to continue moving to grab, so on the basis of the method shown in fig. 4, when it is determined that the existing tag is different from the preset end tag, the method may further include:
acquiring a current image of a target object acquired by a camera at the current moment, performing feature extraction on the current image to obtain image features of the current image, obtaining a label corresponding to the current image according to a corresponding relation between preset image features and the label, determining a preset movement direction corresponding to the label corresponding to the current image according to a corresponding relation between the preset label and the movement direction, and returning to the step S101.
Since the existing label is different from the preset end label, it is not suitable to grab at this time, and the mechanical arm needs to continue moving in order to grab; to control the mechanical arm to continue moving, the position of the target object to be grabbed needs to be determined, so that the direction toward the target object is determined and the mechanical arm is controlled to move toward and grab the target object.
Therefore, the control device needs to acquire a current image of the target object acquired by the camera at the current time, perform feature extraction on the current image to obtain image features of the current image, obtain a tag corresponding to the current image according to a corresponding relationship between a preset image feature and the tag, determine a preset movement direction corresponding to the tag corresponding to the current image according to a corresponding relationship between the preset tag and the movement direction, and refer to steps S201 to S203 in fig. 2 for the process of determining the preset movement direction, which is not described herein again.
After the preset movement direction is determined, the control device may control the mechanical arm to move a preset distance according to the preset movement direction so as to execute subsequent control of the mechanical arm to move to a position corresponding to the target position information, and execute the steps S101 to S105 in a loop until a tag meeting the preset condition exists. It is understood that the robot arm performs one long-distance movement and one return movement per cycle of the steps S101 to S105.
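The overall loop described here (long move and return, end-label check, grab) can be sketched as follows; `long_move_and_return` and `grasp` are hypothetical callables standing in for steps S101-S105 and S107, and the end label 0 follows the 0..26 numbering used in the examples.

```python
# Sketch of the grasping loop of fig. 4 plus the return to S101: keep
# alternating "long move + return to best position" cycles until the
# best position's label equals the preset end label, then grab.

END_LABEL = 0  # assumed preset end label in the 0..26 numbering

def run_until_grasp(long_move_and_return, grasp, max_cycles=10):
    for _ in range(max_cycles):
        label = long_move_and_return()  # S101-S105: returns the label
                                        # at the position moved back to
        if label == END_LABEL:
            grasp()                     # S107: within grasping range
            return True
    return False                        # gave up after max_cycles
```

Each cycle performs one long-distance movement and one return movement, as noted above.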
As an implementation manner of the embodiment of the present invention, in order to facilitate a user to check a capture state of a target object, when a robot arm successfully captures the target object, the method may further include:
and outputting the information of successful grabbing.
When the mechanical arm successfully grabs the target object, the control device can output grabbing-success information to prompt the user that grabbing succeeded. Of course, the control device may also record the grabbing-success information, so as to subsequently calculate statistics such as the grabbing accuracy and the grabbing success rate.
As to the specific manner of outputting the grabbing success information by the control device, the embodiment of the present invention is not specifically limited herein, as long as the user can obtain the grabbing success information. For example, it is reasonable that the control device may display the information on the success of the grabbing through the display screen, or may output the information on the success of the grabbing through voice broadcast or the like.
The following describes a specific generation method of the tag:
for the case where the position information of the target object is represented as (x1, y1, z1) and the position information of the robot arm is represented as (x2, y2, z2), as an embodiment of the present invention, the preset tag generation rule includes:
when the position information of the mechanical arm and the position information of the target object meet a preset combination condition, a label corresponding to the preset combination condition is generated, wherein a preset combination condition is a combination of any one condition of a first set of preset conditions, any one condition of a second set of preset conditions and any one condition of a third set of preset conditions. The first set of preset conditions includes three conditions: |x2-x1| is not greater than a preset value; |x2-x1| is greater than the preset value and x2 > x1; |x2-x1| is greater than the preset value and x2 < x1. The second set of preset conditions includes three conditions: |y2-y1| is not greater than a preset value; |y2-y1| is greater than the preset value and y2 > y1; |y2-y1| is greater than the preset value and y2 < y1. The third set of preset conditions includes three conditions: |z2-z1| is not greater than a preset value; |z2-z1| is greater than the preset value and z2 > z1; |z2-z1| is greater than the preset value and z2 < z1.
Specifically, the coordinates of the center of the target object are (x1, y1, z1) and the coordinates of the end of the robot arm are (x2, y2, z2), then | x2-x1| represents the distance of the target object from the end of the robot arm in the x-axis direction. Similarly, | y2-y1| represents the distance of the target object from the end of the robot arm in the y-axis direction, | z2-z1| represents the distance of the target object from the end of the robot arm in the z-axis direction.
Then, when |x2-x1| is not greater than the preset value, it indicates that the target object is very close to the end of the mechanical arm in the x-axis direction; when |x2-x1| is greater than the preset value, it indicates that the target object is far from the end of the mechanical arm in the x-axis direction, and at this time, if x2 > x1, the end of the mechanical arm is at a larger x coordinate than the target object, that is, the target object is on the left side of the end of the mechanical arm in the x-axis direction, and if x2 < x1, the target object is on the right side of the end of the mechanical arm in the x-axis direction.
Similarly, when |y2-y1| is not greater than the preset value, the target object is very close to the end of the mechanical arm in the y-axis direction; when |y2-y1| is greater than the preset value, the target object is far from the end of the mechanical arm in the y-axis direction, and at this time, if y2 > y1, the target object is behind the end of the mechanical arm in the y-axis direction, and if y2 < y1, the target object is in front of it. When |z2-z1| is not greater than the preset value, the target object is very close to the end of the mechanical arm in the z-axis direction; when |z2-z1| is greater than the preset value, the target object is far from the end of the mechanical arm in the z-axis direction, and at this time, if z2 > z1, the target object is below the end of the mechanical arm in the z-axis direction, and if z2 < z1, the target object is above it.
It should be noted that, if the target object is an object to be grasped by the robot arm, the preset value may be determined according to grasping accuracy and factors such as the type and size of the target object, and if the target object is small, the preset value may be small, for example, 3cm, 5cm, 7cm, and the like; if the target object is larger, the preset value may be larger, for example, 10cm, 15cm, 18 cm, etc., and is not limited herein. Of course, the preset value may also be set to 0, and then | x2-x1| is not greater than the preset value, that is, | x2-x1| is 0, which indicates that at this time, in the x-axis direction, the position of the end of the robot arm coincides with the position of the center of the target object, and at this time, the grabbing precision is high.
If the target object is an object or a person to be tracked by the robot arm, the preset value may be determined according to the tracking compactness and factors such as the type and size of the target object, where tracking compactness refers to the distance between the robot arm and the tracked object during tracking: the smaller the distance, the higher the compactness; the larger the distance, the lower the compactness. If the target object is an inanimate object, which cannot notice that it is being tracked, the preset value may be set smaller to achieve high tracking compactness, for example, 3 cm, 5 cm, or 7 cm; if the target object is a person, in order to avoid being discovered, the preset value may be set larger, for example, 1 m, 1.5 m, or 2 m, which is not particularly limited herein.
It is understood that the first, second, and third sets of preset conditions each include three conditions, so 27 (3 × 3 × 3) preset combination conditions can be formed. The 27 preset combination conditions correspond to 27 positional relationships between the robot arm and the target object, and these positional relationships are determined by the coordinate values of the end of the robot arm and of the center of the target object. The 27 preset combination conditions correspond to 27 labels; in one embodiment, the labels may be the numbers 0 to 26, but labels in other forms are also reasonable as long as they can represent the 27 positional relationships, for example, a1, a2 … a27, and the like.
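The combination of three per-axis outcomes into 27 labels can be illustrated as follows. The outcome names and the 0–26 numbering scheme are assumptions for illustration; the patent fixes only the count (3 × 3 × 3 = 27) and allows any label form.

```python
# Illustrative sketch: enumerate the 27 preset combination conditions and
# assign each a numeric label 0-26. The per-axis outcomes and the ordering
# are assumptions; any one-to-one mapping would satisfy the description.
from itertools import product

OUTCOMES = ("near", "positive", "negative")  # possible result per axis

combinations = list(product(OUTCOMES, repeat=3))  # (x, y, z) triples
labels = {combo: idx for idx, combo in enumerate(combinations)}

print(len(labels))                       # 27 distinct labels
print(labels[("near", "near", "near")])  # label 0: all axes within the
                                         # preset value (the end label)
```

With this particular ordering, the triple where all three axes are within the preset value maps to label 0, matching the example in which label 0 serves as the preset end label.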
For example, taking the target object as an object to be grasped by the robot arm, suppose a certain preset combination condition indicates that the target object is below the end of the robot arm in the z-axis direction (for instance, |z2-z1| is greater than the preset value and z2 < z1), and that this combination condition corresponds to label 5; it can then be understood that the movement direction corresponding to label 5 is straight down. For another example, suppose a preset combination condition includes: |x2-x1| is not greater than the preset value, |y2-y1| is not greater than the preset value, and |z2-z1| is not greater than the preset value. This indicates that the distance between the target object and the end of the robot arm is very short and a grasping action can be performed; if the label corresponding to this combination condition is 0, label 0 can be understood as the preset end label.
Therefore, the labels produced by the label generation rule can identify the 27 positional relationships between the robot arm and the target object. Since the 27 labels correspond to 27 movement directions, the optimal movement direction of the robot arm at the current moment can be obtained from these labels during the process of controlling the movement of the robot arm.
Corresponding to the method embodiment, an embodiment of the present invention further provides an apparatus for controlling the motion of a robot arm.
The following describes a device for controlling the motion of a robot arm according to an embodiment of the present invention.
As shown in fig. 5, an apparatus for controlling the motion of a robot arm is applied to a control device communicatively connected to the robot arm, the control device is further communicatively connected to a camera, and the apparatus may include:
a mechanical arm moving module 401, configured to control the mechanical arm to move a preset distance in a preset movement direction, acquire a target image of a target object continuously acquired by the camera when the mechanical arm moves, and acquire position information of the mechanical arm when each target image is acquired;
a tag determination module 402, configured to perform feature extraction on the target image to obtain image features of the target image, and obtain a tag corresponding to each target image according to a preset correspondence between the image features and the tags, where the tag is used to identify a relative position relationship between the mechanical arm and the target object;
a first judging module 403, configured to judge whether a tag meeting a preset condition exists in tags corresponding to the target images, and if yes, trigger a target location information determining module 404;
the target position information determining module 404 is configured to determine target position information of the robot arm when a target image corresponding to the existing tag is acquired;
and a control module 405, configured to control the mechanical arm to move to a position corresponding to the target position information.
It can be seen that, in the scheme provided by the embodiment of the present invention, the control device controls the mechanical arm to move a preset distance in a preset movement direction, acquires target images of the target object continuously collected by the camera while the mechanical arm moves, and acquires the position information of the mechanical arm when each target image is collected. It performs feature extraction on the target images to obtain their image features, obtains the label corresponding to each target image according to the preset correspondence between image features and labels, and then determines whether a label meeting a preset condition exists among the labels corresponding to the target images. If such a label exists, it determines the target position information of the mechanical arm at the time the corresponding target image was collected, and controls the mechanical arm to move to the position corresponding to that target position information. Thus, when controlling the mechanical arm, the control device first moves the arm a long distance in the preset movement direction while continuously collecting target images, determines the optimal position reached during that movement from the images, and then controls the arm to return to that optimal position.
As an implementation manner of the embodiment of the present invention, the apparatus may further include an obtaining module, where the obtaining module is configured to obtain the preset movement direction, and the obtaining module may include:
the current image acquisition unit is used for acquiring a current image of the target object acquired by the camera at the current moment;
the label determining unit is used for extracting the features of the current image to obtain the image features of the current image, and obtaining a label corresponding to the current image according to the corresponding relation between the preset image features and the label;
and the preset movement direction determining unit is used for determining the preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
and the returning module is used for triggering the mechanical arm moving module when judging that no label meeting preset conditions exists in the labels corresponding to the target images.
As an implementation manner of the embodiment of the present invention, the tag determining module 402 may be specifically configured to:
inputting the target image into a target convolutional neural network trained in advance by a convolutional neural network training module, so that the target convolutional neural network performs feature extraction on the target image to obtain image features of the target image, and obtains a label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of image samples and labels;
wherein the target convolutional neural network is: and training a pre-constructed initial convolutional neural network based on the image sample and the corresponding label thereof to obtain a convolutional neural network, wherein the target convolutional neural network comprises the corresponding relation between the image characteristics of the image sample and the label.
As an implementation manner of the embodiment of the present invention, the convolutional neural network training module may include:
the model building unit is used for building an initial convolutional neural network;
the image sample acquisition unit is used for placing the target object at a preset position, changing the motion direction of the mechanical arm and acquiring a plurality of image samples of the target object continuously acquired by the camera when the mechanical arm moves according to each motion direction;
the label generating unit is used for determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and according to a preset label generating rule;
the model training unit is used for inputting the image sample and the corresponding label into the initial convolutional neural network for training;
and the training completion unit is used for completing training when the value of the target function of the initial convolutional neural network is not changed or the accuracy of the output result corresponding to the image sample reaches a preset accuracy, so as to obtain the target convolutional neural network containing the corresponding relation between the image characteristics of the image sample and the label.
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
the second judgment module is used for judging whether the existing label is the same as a preset end label or not after controlling the mechanical arm to move to the position corresponding to the target mechanical arm position information, and if so, the grabbing module is triggered;
the grabbing module is used for controlling the mechanical arm to grab the target object.
As an implementation manner of the embodiment of the present invention, the apparatus may further include:
and the success information output module is used for outputting the grabbing success information when the mechanical arm successfully grabs the target object.
The embodiment of the present invention further provides a control device, as shown in fig. 6, which includes a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete mutual communication through the communication bus 604,
a memory 603 for storing a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
controlling the mechanical arm to move for a preset distance according to a preset movement direction, acquiring target images of target objects continuously acquired by the camera when the mechanical arm moves, and acquiring position information of the mechanical arm when each target image is acquired;
extracting features of the target images to obtain image features of the target images, and obtaining labels corresponding to the target images according to the corresponding relation between preset image features and the labels, wherein the labels are used for identifying the relative position relation between the mechanical arm and the target object;
judging whether a label meeting a preset condition exists in labels corresponding to the target images;
if the label meeting the preset condition exists, determining the target position information of the mechanical arm when the target image corresponding to the existing label is acquired;
and controlling the mechanical arm to move to a position corresponding to the target position information.
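The selection step in the flow above — scanning the images collected during the long move for one whose label meets the preset condition, and recovering the arm position recorded at that moment — might be sketched as follows. This is a control-flow illustration only: every name here, and the stand-in label predictor, is hypothetical; a real implementation would obtain labels from the trained network and positions from the arm controller.

```python
# Minimal control-flow sketch of the steps above. The label predictor and
# the recorded (image, position) pairs are stand-ins for illustration.

END_LABEL = 0  # assumed label value meaning the preset condition is met

def find_target_position(images_with_positions, predict_label,
                         satisfies=lambda lbl: lbl == END_LABEL):
    """Scan (image, arm_position) pairs collected while the arm moved the
    preset distance; return the position recorded when a label satisfying
    the preset condition was produced, or None if no label qualifies."""
    for image, position in images_with_positions:
        if satisfies(predict_label(image)):
            return position
    return None

# Toy usage: images are stand-ins, labels come from a lookup table.
fake_labels = {"img_a": 7, "img_b": 0, "img_c": 3}
samples = [("img_a", (0.1, 0.2, 0.3)),
           ("img_b", (0.1, 0.2, 0.1)),
           ("img_c", (0.1, 0.2, 0.0))]
print(find_target_position(samples, fake_labels.get))
# (0.1, 0.2, 0.1): the position recorded when the end label appeared
```

A `None` result corresponds to the branch where no qualifying label exists, in which case the method returns to the step of moving the arm the preset distance again.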
It can be seen that, in the scheme provided by the embodiment of the present invention, the control device controls the mechanical arm to move a preset distance in a preset movement direction, acquires target images of the target object continuously collected by the camera while the mechanical arm moves, and acquires the position information of the mechanical arm when each target image is collected. It performs feature extraction on the target images to obtain their image features, obtains the label corresponding to each target image according to the preset correspondence between image features and labels, and then determines whether a label meeting a preset condition exists among the labels corresponding to the target images. If such a label exists, it determines the target position information of the mechanical arm at the time the corresponding target image was collected, and controls the mechanical arm to move to the position corresponding to that target position information. Thus, when controlling the mechanical arm, the control device first moves the arm a long distance in the preset movement direction while continuously collecting target images, determines the optimal position reached during that movement from the images, and then controls the arm to return to that optimal position.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
As an implementation manner of the embodiment of the present invention, the process of acquiring the preset movement direction may include:
acquiring a current image of the target object acquired by the camera at the current moment;
extracting features of the current image to obtain image features of the current image, and obtaining a label corresponding to the current image according to a preset corresponding relation between the image features and the label;
and determining a preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
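The label-to-direction lookup described in these steps might be implemented as a simple table. The specific direction names and label numbers below are illustrative assumptions; the patent requires only that a preset correspondence between labels and movement directions exists.

```python
# Sketch of the preset correspondence between labels and movement
# directions. Entries are hypothetical examples; label 5 follows the
# earlier example in which its movement direction is straight down.

LABEL_TO_DIRECTION = {
    5: "down",
    7: "right",
    11: "forward",
}

def preset_movement_direction(label):
    """Return the preset movement direction for the current image's label,
    or None if the label has no direction entry."""
    return LABEL_TO_DIRECTION.get(label)

print(preset_movement_direction(5))  # 'down'
```

In the full scheme the table would cover all labels that have an associated movement direction, with the end label handled separately since it triggers the grasping action rather than a move.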
As an implementation manner of the embodiment of the present invention, the method may further include:
and if the label meeting the preset condition does not exist, returning to execute the step of controlling the mechanical arm to move for the preset distance according to the preset movement direction.
As an implementation manner of the embodiment of the present invention, the step of performing feature extraction on the target image to obtain image features of the target image, and obtaining a label corresponding to each target image according to a preset correspondence between the image features and the labels may include:
inputting the target image into a pre-trained target convolutional neural network, so that the target convolutional neural network performs feature extraction on the target image to obtain image features of the target image, and obtains a label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of image samples and labels;
wherein the target convolutional neural network is: and training a pre-constructed initial convolutional neural network based on the image sample and the corresponding label thereof to obtain a convolutional neural network, wherein the target convolutional neural network comprises the corresponding relation between the image characteristics of the image sample and the label.
As an implementation manner of the embodiment of the present invention, the training manner of the target convolutional neural network may include:
constructing an initial convolutional neural network;
placing the target object at a preset position, changing the motion direction of the mechanical arm, and acquiring a plurality of image samples of the target object, which are continuously acquired by the camera when the mechanical arm moves according to each motion direction;
determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and a preset label generation rule;
inputting the image sample and the label corresponding to the image sample into the initial convolutional neural network for training;
and when the value of the target function of the initial convolutional neural network is not changed or the accuracy of the output result corresponding to the image sample reaches a preset accuracy, finishing training to obtain the target convolutional neural network containing the corresponding relation between the image characteristics of the image sample and the label.
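The label generation rule in the third step above — mapping the recorded arm-end and target-center coordinates of each image sample to one of the 27 labels — can be sketched as follows. The base-3 encoding is an assumption chosen for illustration; the patent requires only a deterministic rule from the two recorded positions to a label.

```python
# Illustrative label generation rule for training samples: combine three
# per-axis outcome codes into one label in 0..26. Names and the threshold
# are assumptions, not from the patent.

def axis_code(arm, target, thresh):
    """0: within the preset value; 1: target on the positive side;
    2: target on the negative side."""
    if abs(target - arm) <= thresh:
        return 0
    return 1 if target > arm else 2

def generate_label(arm_xyz, target_xyz, thresh=0.05):
    """Encode the three per-axis codes as base-3 digits, giving 0..26."""
    codes = [axis_code(a, t, thresh) for a, t in zip(arm_xyz, target_xyz)]
    return codes[0] * 9 + codes[1] * 3 + codes[2]

# Arm end coincides with the target center within the threshold on all
# three axes, so the sample receives label 0 (the assumed end label).
print(generate_label((0.1, 0.2, 0.3), (0.12, 0.2, 0.33)))  # 0
```

Each image sample collected while the arm moves in a given direction would be paired with the label computed this way before being fed to the initial convolutional neural network for training.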
As an implementation manner of the embodiment of the present invention, after the step of controlling the robot arm to move to the position corresponding to the target robot arm position information, the method may further include:
judging whether the existing label is the same as a preset ending label or not;
and if so, controlling the mechanical arm to grab the target object.
As an implementation manner of the embodiment of the present invention, when the robot arm successfully grasps the target object, the method may further include:
and outputting the information of successful grabbing.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when executed by a processor, the computer program implements the following steps:
controlling the mechanical arm to move for a preset distance according to a preset movement direction, acquiring target images of target objects continuously acquired by the camera when the mechanical arm moves, and acquiring position information of the mechanical arm when each target image is acquired;
extracting features of the target images to obtain image features of the target images, and obtaining labels corresponding to the target images according to the corresponding relation between preset image features and the labels, wherein the labels are used for identifying the relative position relation between the mechanical arm and the target object;
judging whether a label meeting a preset condition exists in labels corresponding to the target images;
if the label meeting the preset condition exists, determining the target position information of the mechanical arm when the target image corresponding to the existing label is acquired;
and controlling the mechanical arm to move to a position corresponding to the target position information.
As can be seen, in the solution provided by the embodiment of the present invention, when the computer program is executed by the processor, the mechanical arm is controlled to move a preset distance in a preset movement direction, target images of the target object continuously collected by the camera while the mechanical arm moves are acquired, and the position information of the mechanical arm when each target image is collected is acquired. Feature extraction is performed on the target images to obtain their image features, the label corresponding to each target image is obtained according to the preset correspondence between image features and labels, and it is then determined whether a label meeting a preset condition exists among the labels corresponding to the target images. If such a label exists, the target position information of the mechanical arm at the time the corresponding target image was collected is determined, and the mechanical arm is controlled to move to the position corresponding to that target position information. Thus, when controlling the mechanical arm, the arm is first moved a long distance in the preset movement direction while target images are continuously collected, the optimal position reached during that movement is determined from the images, and the arm is then controlled to return to that optimal position.
As an implementation manner of the embodiment of the present invention, the process of acquiring the preset movement direction may include:
acquiring a current image of the target object acquired by the camera at the current moment;
extracting features of the current image to obtain image features of the current image, and obtaining a label corresponding to the current image according to a preset corresponding relation between the image features and the label;
and determining a preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
As an implementation manner of the embodiment of the present invention, the method may further include:
and if the label meeting the preset condition does not exist, returning to execute the step of controlling the mechanical arm to move for the preset distance according to the preset movement direction.
As an implementation manner of the embodiment of the present invention, the step of performing feature extraction on the target image to obtain image features of the target image, and obtaining a label corresponding to each target image according to a preset correspondence between the image features and the labels may include:
inputting the target image into a pre-trained target convolutional neural network, so that the target convolutional neural network performs feature extraction on the target image to obtain image features of the target image, and obtains a label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of image samples and labels;
wherein the target convolutional neural network is: and training a pre-constructed initial convolutional neural network based on the image sample and the corresponding label thereof to obtain a convolutional neural network, wherein the target convolutional neural network comprises the corresponding relation between the image characteristics of the image sample and the label.
As an implementation manner of the embodiment of the present invention, the training manner of the target convolutional neural network may include:
constructing an initial convolutional neural network;
placing the target object at a preset position, changing the motion direction of the mechanical arm, and acquiring a plurality of image samples of the target object, which are continuously acquired by the camera when the mechanical arm moves according to each motion direction;
determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and a preset label generation rule;
inputting the image sample and the label corresponding to the image sample into the initial convolutional neural network for training;
and when the value of the target function of the initial convolutional neural network is not changed or the accuracy of the output result corresponding to the image sample reaches a preset accuracy, finishing training to obtain the target convolutional neural network containing the corresponding relation between the image characteristics of the image sample and the label.
As an implementation manner of the embodiment of the present invention, after the step of controlling the robot arm to move to the position corresponding to the target robot arm position information, the method may further include:
judging whether the existing label is the same as a preset ending label or not;
and if so, controlling the mechanical arm to grab the target object.
As an implementation manner of the embodiment of the present invention, when the robot arm successfully grasps the target object, the method may further include:
and outputting the information of successful grabbing.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (16)
1. A method of controlling motion of a robotic arm, for use with a control device communicatively coupled to the robotic arm, the control device further communicatively coupled to a camera, the method comprising:
the method comprises the steps of controlling the mechanical arm to move a preset distance in a preset movement direction, acquiring target images of a target object continuously collected by the camera while the mechanical arm moves, and acquiring position information of the mechanical arm when each target image is collected, wherein the preset movement direction is the direction, determined in advance, in which the target object is located;
extracting features of the target images to obtain image features of the target images, and obtaining labels corresponding to the target images according to the corresponding relation between preset image features and the labels, wherein the labels are used for identifying the relative position relation between the mechanical arm and the target object;
judging whether a label meeting a preset condition exists in labels corresponding to the target images;
if the label meeting the preset condition exists, determining the target position information of the mechanical arm when the target image corresponding to the existing label is acquired;
and controlling the mechanical arm to move to a position corresponding to the target position information.
2. The method according to claim 1, wherein the process of obtaining the preset moving direction comprises:
acquiring a current image of the target object acquired by the camera at the current moment;
extracting features of the current image to obtain image features of the current image, and obtaining a label corresponding to the current image according to a preset corresponding relation between the image features and the label;
and determining a preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
3. The method of claim 1, further comprising:
and if the label meeting the preset condition does not exist, returning to execute the step of controlling the mechanical arm to move for the preset distance according to the preset movement direction.
4. The method according to claim 1, wherein the step of extracting features of the target images to obtain image features of the target images and obtaining labels corresponding to the target images according to a preset correspondence between the image features and the labels comprises:
inputting the target image into a pre-trained target convolutional neural network, so that the target convolutional neural network performs feature extraction on the target image to obtain image features of the target image, and obtains a label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of image samples and labels;
wherein the target convolutional neural network is: and training a pre-constructed initial convolutional neural network based on the image sample and the corresponding label thereof to obtain a convolutional neural network, wherein the target convolutional neural network comprises the corresponding relation between the image characteristics of the image sample and the label.
5. The method of claim 4, wherein the training mode of the target convolutional neural network comprises:
constructing an initial convolutional neural network;
placing the target object at a preset position, changing the motion direction of the mechanical arm, and acquiring a plurality of image samples of the target object, which are continuously acquired by the camera when the mechanical arm moves according to each motion direction;
determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and a preset label generation rule;
inputting the image sample and the label corresponding to the image sample into the initial convolutional neural network for training;
and when the value of the target function of the initial convolutional neural network is not changed or the accuracy of the output result corresponding to the image sample reaches a preset accuracy, finishing training to obtain the target convolutional neural network containing the corresponding relation between the image characteristics of the image sample and the label.
6. The method according to claim 1, wherein after the step of controlling the robot arm to move to the position corresponding to the target robot arm position information, the method further comprises:
judging whether the existing label is the same as a preset ending label or not;
and if so, controlling the mechanical arm to grab the target object.
7. The method of claim 6, wherein when the robotic arm succeeds in grasping the target object, the method further comprises:
and outputting the information of successful grabbing.
8. An apparatus for controlling motion of a robotic arm, for use with a control device communicatively coupled to the robotic arm, the control device further communicatively coupled to a camera, the apparatus comprising:
the mechanical arm moving module is used for controlling the mechanical arm to move a preset distance in a preset moving direction, acquiring target images of the target object continuously collected by the camera while the mechanical arm moves, and acquiring position information of the mechanical arm at the moment each target image is collected, wherein the preset moving direction is a direction, determined in advance, in which the target object is located;
the label determining module is used for extracting the features of the target images to obtain the image features of the target images, and obtaining labels corresponding to the target images according to the corresponding relation between the preset image features and the labels, wherein the labels are used for identifying the relative position relation between the mechanical arm and the target object;
the first judging module is used for judging whether a label meeting a preset condition exists in labels corresponding to each target image, and if so, the target position information determining module is triggered;
the target position information determining module is used for determining the target position information of the mechanical arm when the target image corresponding to the existing label is acquired;
and the control module is used for controlling the mechanical arm to move to the position corresponding to the target position information.
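The module pipeline of claim 8 (move a preset distance, label each captured image, and return the arm position recorded when a qualifying image was captured) can be condensed into one lookup over the recorded frames. This is a hypothetical sketch; the frame representation and the two callables are assumptions, not the patent's data structures:

```python
def find_target_position(frames, label_of, satisfies):
    """Return the arm position recorded with the first qualifying image.

    frames: list of (image, arm_position) pairs recorded during one
            preset-distance move.
    label_of(image) -> label   (e.g. the CNN of claim 11, assumed here)
    satisfies(label) -> bool   (the preset condition of claim 8)
    """
    for image, position in frames:
        if satisfies(label_of(image)):
            return position   # target position information for the move
    return None               # no qualifying label: repeat the preset move
```

Returning `None` corresponds to the returning module of claim 10, which triggers another preset-distance move when no label satisfies the condition.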
9. The apparatus according to claim 8, further comprising an obtaining module, configured to obtain the preset moving direction, wherein the obtaining module comprises:
the current image acquisition unit is used for acquiring a current image of the target object acquired by the camera at the current moment;
the label determining unit is used for extracting the features of the current image to obtain the image features of the current image, and obtaining a label corresponding to the current image according to the corresponding relation between the preset image features and the label;
and the preset movement direction determining unit is used for determining the preset movement direction corresponding to the label corresponding to the current image according to the corresponding relation between the preset label and the movement direction.
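Claim 9's final step is a direct lookup in the preset label-to-direction correspondence. A minimal sketch, with purely hypothetical label names and direction vectors (the patent does not specify either):

```python
# Assumed correspondence between labels and movement directions; the keys
# and (x, y, z) unit vectors below are illustrative, not from the patent.
LABEL_TO_DIRECTION = {
    "target_left":  (-1, 0, 0),
    "target_right": (1, 0, 0),
    "target_ahead": (0, 1, 0),
}

def preset_moving_direction(current_label):
    """Map the label of the current image to the preset moving direction."""
    return LABEL_TO_DIRECTION[current_label]
```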
10. The apparatus of claim 8, further comprising:
and the returning module is used for triggering the mechanical arm moving module when judging that no label meeting preset conditions exists in the labels corresponding to the target images.
11. The apparatus of claim 8, wherein the tag determination module is specifically configured to:
inputting each target image into a target convolutional neural network trained in advance by a convolutional neural network training module, so that the target convolutional neural network performs feature extraction on the target image to obtain image features of the target image, and obtaining a label corresponding to each target image according to the correspondence, contained in the target convolutional neural network, between the image features of the image samples and the labels;
wherein the target convolutional neural network is a convolutional neural network obtained by training a pre-constructed initial convolutional neural network on image samples and their corresponding labels, the target convolutional neural network containing the correspondence between the image features of the image samples and the labels.
12. The apparatus of claim 11, wherein the convolutional neural network training module comprises:
the model building unit is used for building an initial convolutional neural network;
the image sample acquisition unit is used for placing the target object at a preset position, changing the motion direction of the mechanical arm, and obtaining a plurality of image samples of the target object continuously collected by the camera while the mechanical arm moves in each motion direction;
the label generating unit is used for determining a label corresponding to each image sample according to the position information of the mechanical arm and the position information of the target object when each image sample is collected and according to a preset label generating rule;
the model training unit is used for inputting the image sample and the corresponding label into the initial convolutional neural network for training;
and the training completion unit is used for completing training when the value of the objective function of the initial convolutional neural network no longer changes or the accuracy of the output results for the image samples reaches a preset accuracy, so as to obtain the target convolutional neural network containing the correspondence between the image features of the image samples and the labels.
13. The apparatus of claim 8, further comprising:
the second judgment module is used for judging, after the mechanical arm is controlled to move to the position corresponding to the target position information, whether the existing label is the same as a preset end label, and if so, triggering the grabbing module;
the grabbing module is used for controlling the mechanical arm to grab the target object.
14. The apparatus of claim 13, further comprising:
and the success information output module is used for outputting the grabbing success information when the mechanical arm successfully grabs the target object.
15. A control device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
16. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711320833.6A CN109909998B (en) | 2017-12-12 | 2017-12-12 | Method and device for controlling movement of mechanical arm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109909998A CN109909998A (en) | 2019-06-21 |
CN109909998B true CN109909998B (en) | 2020-10-02 |
Family
ID=66957787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711320833.6A Active CN109909998B (en) | 2017-12-12 | 2017-12-12 | Method and device for controlling movement of mechanical arm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109909998B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942552B (en) * | 2019-10-29 | 2022-03-18 | 宁波宇东金属箱柜有限公司 | Compact shelf quick pickup method, system and computer storage medium |
CN112775955B (en) * | 2019-11-06 | 2022-02-11 | 深圳富泰宏精密工业有限公司 | Mechanical arm coordinate determination method and computer device |
CN111230866B (en) * | 2020-01-16 | 2021-12-28 | 山西万合智能科技有限公司 | Calculation method for real-time pose of six-axis robot tail end following target object |
CN111251296B (en) * | 2020-01-17 | 2021-05-18 | 温州职业技术学院 | A Visual Inspection System for Palletizing Motor Rotors |
CN111890365B (en) * | 2020-07-31 | 2022-07-12 | 平安科技(深圳)有限公司 | Target tracking method and device, computer equipment and storage medium |
CN113184767B (en) * | 2021-04-21 | 2023-04-07 | 湖南中联重科智能高空作业机械有限公司 | Aerial work platform navigation method, device and equipment and aerial work platform |
CN113183141A (en) * | 2021-06-09 | 2021-07-30 | 乐聚(深圳)机器人技术有限公司 | Walking control method, device, equipment and storage medium for biped robot |
CN116512271A (en) * | 2023-05-24 | 2023-08-01 | 阳光新能源开发股份有限公司 | Mechanical arm control method, device, equipment and storage medium |
CN117226824A (en) * | 2023-06-20 | 2023-12-15 | 金锐 | Mechanical arm sorting control method, medium, equipment and device |
CN116699166B (en) * | 2023-08-08 | 2024-01-02 | 国网浙江省电力有限公司宁波供电公司 | Visual identification-based oil chromatography sample automatic positioning method and system |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104325268A (en) * | 2014-11-04 | 2015-02-04 | 南京赫曼机器人自动化有限公司 | Industrial robot three-dimensional space independent assembly method based on intelligent learning |
CN106020024A (en) * | 2016-05-23 | 2016-10-12 | 广东工业大学 | Mechanical arm tail end motion compensation device and compensation method thereof |
CN106094516A (en) * | 2016-06-08 | 2016-11-09 | 南京大学 | A kind of robot self-adapting grasping method based on deeply study |
CN107053168A (en) * | 2016-12-09 | 2017-08-18 | 南京理工大学 | A kind of target identification method and hot line robot based on deep learning network |
CN107220667A (en) * | 2017-05-24 | 2017-09-29 | 北京小米移动软件有限公司 | Image classification method, device and computer-readable recording medium |
CN107225571A (en) * | 2017-06-07 | 2017-10-03 | 纳恩博(北京)科技有限公司 | Motion planning and robot control method and apparatus, robot |
CN107263480A (en) * | 2017-07-21 | 2017-10-20 | 深圳市萨斯智能科技有限公司 | A kind of robot manipulation's method and robot |
CN107428004A (en) * | 2015-04-10 | 2017-12-01 | 微软技术许可有限责任公司 | The automatic collection of object data and mark |
CN107414832A (en) * | 2017-08-08 | 2017-12-01 | 华南理工大学 | A kind of mobile mechanical arm crawl control system and method based on machine vision |
CN107450376A (en) * | 2017-09-09 | 2017-12-08 | 北京工业大学 | A kind of service mechanical arm crawl attitude angle computational methods based on intelligent family moving platform |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9902071B2 (en) * | 2015-12-18 | 2018-02-27 | General Electric Company | Control system and method for brake bleeding |
Also Published As
Publication number | Publication date |
---|---|
CN109909998A (en) | 2019-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109909998B (en) | Method and device for controlling movement of mechanical arm | |
CN109407603B (en) | Method and device for controlling mechanical arm to grab object | |
CN111508066B (en) | Unordered stacking workpiece grabbing system based on 3D vision and interaction method | |
US11111785B2 (en) | Method and device for acquiring three-dimensional coordinates of ore based on mining process | |
WO2018098824A1 (en) | Photographing control method and apparatus, and control device | |
EP4102458A1 (en) | Method and apparatus for identifying scene contour, and computer-readable medium and electronic device | |
CN112119627A (en) | Target following method and device based on holder, holder and computer storage medium | |
CN114102585A (en) | Article grabbing planning method and system | |
JP2018116599A (en) | Information processor, method for processing information, and program | |
JP2013132742A (en) | Object gripping apparatus, control method for object gripping apparatus, and program | |
CN106808472A (en) | Location of workpiece posture computing device and handling system | |
CN110293553B (en) | Method and device for controlling mechanical arm to operate object and method and device for model training | |
CN109444146A (en) | A kind of defect inspection method, device and the equipment of industrial processes product | |
JP6907206B2 (en) | Exercise planning methods, exercise planning equipment and non-temporary computer-readable recording media | |
US10945888B2 (en) | Intelligent blind guide method and apparatus | |
CN112775967A (en) | Mechanical arm grabbing method, device and equipment based on machine vision | |
CN110910628B (en) | Interactive processing method and device for vehicle damage image shooting and electronic equipment | |
CN109871829A (en) | A kind of detection model training method and device based on deep learning | |
CN117890922A (en) | Target tracking and track predicting method, device, equipment and storage medium | |
CN117428779A (en) | Robot grabbing control method, device, equipment and storage medium | |
CN108121347A (en) | For the method, apparatus and electronic equipment of control device movement | |
CN110181504B (en) | Method and device for controlling mechanical arm to move and control equipment | |
CN112631333A (en) | Target tracking method and device of unmanned aerial vehicle and image processing chip | |
TWI717772B (en) | Method, device, mobile terminal and storage medium for calling target function | |
CN103901885B (en) | Information processing method and messaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||