CN120328089A - A method and device for identifying and grabbing empty luggage basket - Google Patents
A method and device for identifying and grabbing empty luggage baskets
- Publication number
- CN120328089A (application CN202510383102.4A)
- Authority
- CN
- China
- Prior art keywords
- basket
- motion
- luggage
- grabbing
- mechanical arm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G43/00—Control devices, e.g. for safety, warning or fault-correcting
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G43/00—Control devices, e.g. for safety, warning or fault-correcting
- B65G43/08—Control devices operated by article or material being fed, conveyed or discharged
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G47/00—Article or material-handling devices associated with conveyors; Methods employing such devices
- B65G47/74—Feeding, transfer, or discharging devices of particular kinds or types
- B65G47/90—Devices for picking-up and depositing articles or materials
- B65G47/905—Control arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2201/00—Indexing codes relating to handling devices, e.g. conveyors, characterised by the type of product or load being conveyed or handled
- B65G2201/02—Articles
- B65G2201/0235—Containers
- B65G2201/0258—Trays, totes or bins
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/02—Control or detection
- B65G2203/0208—Control or detection relating to the transported articles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/02—Control or detection
- B65G2203/0208—Control or detection relating to the transported articles
- B65G2203/0233—Position of the article
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/04—Detection means
- B65G2203/041—Camera
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Mechanical Engineering (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Robotics (AREA)
- Biodiversity & Conservation Biology (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a luggage empty-basket identification and grabbing method and device. The method comprises: collecting images of the luggage conveyor belt and the movement of surrounding personnel; predicting the motion trajectories and behaviour of the surrounding personnel from their movement; calculating the motion trajectory of the grabbing mechanical arm based on the prediction result, including obtaining an optimal motion trajectory by establishing a multi-objective trajectory plan and determining the grabbing sequence of a plurality of grabbing targets; and optimizing the planning algorithm. The invention realizes efficient grabbing of empty luggage baskets, can recover a plurality of empty baskets at the same time, predicts the state of a luggage basket in advance from passengers' trajectories and actions, ensures the running stability and safety of the empty-basket recovery mechanical arm while keeping time and energy consumption optimal, and, for path patterns already present in the historical data, makes path planning faster and simpler.
Description
Technical Field
The invention relates to the field of luggage empty basket recognition and grabbing, in particular to a luggage empty basket recognition and grabbing method and device.
Background
As air passenger volume continues to rise, the scale of modern hub airport terminals keeps expanding, and the high-efficiency guarantee capability of baggage systems faces serious challenges. Non-standardized luggage such as soft bags and irregularly shaped backpacks is normally loaded into a luggage basket during baggage delivery. This operation not only ensures the stability of the baggage on the high-speed conveying line but also effectively prevents flexible packages from becoming entangled during conveying.
The prior art provides an automatic luggage-basket grabbing system in which a vision camera captures images of empty baskets and transmits them in real time to a collaborative robot, which then grabs the empty baskets. However, the prior-art robot grabs only one basket at a time, so efficiency is low; it can neither perform intelligent planning and obstacle avoidance nor predict passengers' luggage-retrieval actions in advance, so overall efficiency and performance are poor.
Disclosure of Invention
(I) Technical problems to be solved
In order to solve the technical problems, the invention provides a luggage empty basket identification grabbing method and device.
(II) Technical scheme
In order to solve the technical problems and achieve the aim of the invention, the invention is realized by the following technical scheme:
a luggage empty basket recognition and grabbing method comprises the following steps:
S1, image acquisition and preprocessing, including acquiring images of the baggage conveyor belt and the movement of surrounding personnel through a camera;
S2, luggage basket identification and position tracking;
identifying the position of the basket in the image based on the specific shape of the basket, and tracking the position of the basket in combination with the running speed of the conveyor belt;
S3, predicting the motion trajectories and behaviour of surrounding personnel from their movement; if a person is identified as picking up luggage, acquiring the basket's vacating time and position at the future moment from the person's action speed and the conveyor-belt speed;
S4, calculating the motion trajectory of the grabbing mechanical arm based on the prediction result, including obtaining an optimal motion trajectory by establishing a multi-objective trajectory plan and determining the grabbing sequence of a plurality of grabbing targets;
S5, optimizing the motion-trajectory planning algorithm.
Further, step S3 further includes processing the image acquired by the camera with a YoloV target-detection algorithm to extract pedestrians from the image background, adding a 1×1 convolution module between YoloV and the CSPBlock of the recognition model, and, in the spatial pyramid pooling module, applying maximum pooling and average pooling in parallel and adding their results.
Further, the motion trail prediction for surrounding people comprises prediction based on a pedestrian history trail sequence and pedestrian directions.
Furthermore, in the behavior prediction, the action recognition model based on the graph neural network realizes action data modeling and classification by constructing a dynamic relationship topological structure.
Further, the objective function of the multi-objective trajectory planning is as follows:
L = w1L1 + w2L2 + w3L3 + ε
L1 is the time objective function, L2 is the energy-consumption objective function, L3 is the risk objective function, ε is a penalty factor, and w1, w2 and w3 are their respective weight factors.
Further, the L3 risk objective function is determined from the arm height, the distance to surrounding personnel, and the motion error, both when the mechanical arm translates after clamping an empty basket and when it rotates;
r1 is the distance-risk weight during linear motion, dis1 is the minimum distance between the mechanical arm and surrounding personnel during linear motion, σ1 is the distance-sensitivity coefficient during linear motion, h1 is the arm-end height during linear motion, δ1 is the height-adjustment coefficient during linear motion, r2 is the distance-risk weight during rotational motion, dis2 is the minimum distance between the mechanical arm and surrounding personnel during rotational motion, σ2 is the distance-sensitivity coefficient during rotational motion, h2 is the arm-end height during rotation, δ2 is the height-adjustment coefficient during rotation, r3 is the motion-error risk weight, xac is the actual position, and xde is the set position.
Furthermore, ε comprises acceleration and jerk penalties: changes in velocity and acceleration during the arm's motion cause mechanical vibration and wear, so acceleration and jerk terms are introduced;
where ka and kj are parameters adjusted according to the rigidity of the mechanical arm and the task dynamics, respectively; the acceleration is the second derivative of the position and the jerk is its third derivative; n1 and n2 are the numbers of acceleration and jerk changes, respectively.
Further, step S5 comprises path-pattern construction, similarity matching and path-pattern selection, including: taking as one complete motion trajectory the path from the mechanical arm leaving the stacking position unloaded to returning the recovered empty baskets to the stacking position; clustering the historical data by the number of empty baskets collected per complete trajectory and their relative positions; acquiring, for the task to be processed, a group of data consisting of the empty-basket positions and their predicted vacating times; performing similarity matching on the data to be processed; and, depending on the match, planning with an existing motion trajectory from the historical database.
The invention also provides a luggage empty-basket identification and grabbing device, which comprises an image acquisition and preprocessing module for acquiring images of the luggage conveyor belt and the movement of surrounding personnel through a camera;
A basket identification and position tracking module for identifying the position of the basket in the image based on the basket specific shape and tracking the basket position in combination with the running speed of the conveyor belt;
the surrounding personnel movement condition prediction module is used for predicting movement tracks and behaviors of surrounding personnel;
The grabbing mechanical arm motion track calculation module is used for obtaining an optimal motion track based on the establishment of multi-target track planning and determining grabbing sequences of a plurality of grabbing targets;
and the motion trail planning algorithm optimization module is used for optimizing the motion trail planning algorithm in a lightweight way based on the path pattern recognition.
In addition, to achieve the above object, the invention also provides a computer-readable storage medium having stored thereon program instructions of the luggage empty-basket identification and grabbing method, the program instructions being executable by one or more processors to implement the steps of the method as described above.
(III) Beneficial effects
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention realizes efficient grabbing of empty luggage baskets: in a single trip starting from and returning to the origin, a plurality of empty baskets are recovered, and the state of each luggage basket is predicted in advance from passengers' trajectories and actions, so that the grabbing mechanical arm can reach the designated position ahead of time, improving efficiency;
(2) The invention intelligently plans the grabbing trajectory and actions of the mechanical arm with an improved path-planning method that combines time, energy-consumption and risk trajectory objectives with penalty factors, ensuring the running stability and safety of the empty-basket recovery mechanical arm while keeping time and energy consumption optimal.
(3) The invention optimizes the motion trail planning algorithm, selects the existing planning trail by matching the constructed path modes, reduces the complexity of path operation, and can make path planning more quickly and simply for the existing path modes in the historical data.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic flow chart of a baggage empty basket recognition grabbing method according to an embodiment of the application.
Detailed Description
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Other advantages and effects of the present disclosure will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the present disclosure by way of specific examples. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. The disclosure may be embodied or practiced in other different specific embodiments, and details within the subject specification may be modified or changed from various points of view and applications without departing from the spirit of the disclosure. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concepts of the disclosure by way of illustration, and only the components related to the disclosure are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
Referring to fig. 1, a baggage empty basket recognition grasping method includes the steps of:
s1, image acquisition and preprocessing, including acquiring a baggage conveyor belt image and surrounding personnel movement conditions through a camera;
S2, luggage basket identification and position tracking;
identifying the position of the basket in the image based on the specific shape of the basket, and tracking the position of the basket in combination with the running speed of the conveyor belt;
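As a minimal illustration of tracking a basket between detections (assuming a constant belt speed and a one-dimensional belt axis; the function name and units are illustrative, not from the patent):

```python
def predict_basket_position(x0: float, belt_speed: float, dt: float) -> float:
    """Dead-reckon a basket's position along the belt axis.

    x0: last detected position (m); belt_speed: conveyor speed (m/s);
    dt: time elapsed since the detection (s).
    """
    return x0 + belt_speed * dt
```

A detection 2 s old on a 0.5 m/s belt is thus expected 1 m downstream of where it was last seen.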
s3, predicting the motion trail and the behavior of surrounding personnel according to the motion condition of the surrounding personnel, wherein the method comprises the following steps:
S31, extracting surrounding persons: the image acquired by the camera is processed with a YoloV target-detection algorithm, and pedestrians are extracted from the image background;
Because recognition takes place in the complex environment of an airport, person recognition is disturbed by the surroundings; in particular, human-shaped billboards and the like can cause false identifications. To further improve the recognition of surrounding persons, a 1×1 convolution module is added between YoloV and the CSPBlock of the recognition model, and in the spatial pyramid pooling module maximum pooling and average pooling are applied in parallel and their results added, replacing the traditional single pooling operation. This design better preserves feature information at different levels and enhances the expressive power of the features. Then, through the improved feature pyramid structure, the feature map output by CSPBlock is added to the feature map output by the feature pyramid, which effectively avoids the loss of shallow semantic information and improves the overall performance of the model.
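The parallel max/average pooling idea can be sketched in NumPy as follows (a hedged illustration only, not the actual network layer; real SPP modules typically pool with stride 1 over several kernel sizes):

```python
import numpy as np

def pooled_sum(feat: np.ndarray, k: int) -> np.ndarray:
    """Apply max pooling and average pooling over k x k windows in
    parallel and add the results, as the text describes.

    feat: (H, W) feature map with H and W divisible by k.
    """
    h, w = feat.shape
    blocks = feat.reshape(h // k, k, w // k, k)
    # max keeps the strongest response, mean keeps the context;
    # summing the two preserves both kinds of information.
    return blocks.max(axis=(1, 3)) + blocks.mean(axis=(1, 3))
```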
S32 motion trail prediction
Let the foot position of the i-th person at time t be expressed as FP_t^i = (x_t^i, y_t^i), where x_t^i and y_t^i are the coordinates of the foot position; the observed foot-position sequence is FP_obs = {FP_1^i, FP_2^i, …, FP_t^i}.
A person in the area close to the luggage conveyor where a basket is present is thereby indicated as about to pick up luggage or not. The invention adds the pedestrian orientation as an input on top of the historical trajectory sequence, i.e. it learns the distribution p(FP_f | FP_obs, OP_obs), where FP_f is the predicted pedestrian motion trajectory and OP_obs is the pedestrian orientation.
Alternatively, the present invention obtains the trajectory of the person's movement at each future time based on the LSTM network.
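A constant-velocity extrapolation makes a simple stand-in for the learned predictor (illustrative only; here the heading enters implicitly through the last displacement rather than as a separate orientation input):

```python
def extrapolate_track(track, horizon):
    """Constant-velocity extrapolation of a 2-D foot-position sequence.

    track: list of (x, y) observations; horizon: number of future steps.
    A lightweight stand-in for the learned p(FP_f | FP_obs, OP_obs)
    model described above.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # per-step displacement (heading and speed)
    return [(x1 + vx * s, y1 + vy * s) for s in range(1, horizon + 1)]
```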
S33, predicting the behavior, namely predicting whether surrounding personnel pick up luggage of a luggage basket or not, wherein the method is realized by the following steps:
The action-recognition algorithm based on a multi-feature-fusion graph neural network models, classifies and predicts the action data of surrounding persons by fusing multiple kinds of feature information in a graph neural network model. Assume there are N samples and M action categories, each sample having K features, including joint angles, body posture, movement speed, etc.
The action-recognition model based on the graph neural network realizes data modeling and classification by constructing a dynamic relational topology. The architecture adopts dynamic relationship modeling and a feature-propagation mechanism between nodes, using the message-passing paradigm of the graph structure to capture spatio-temporal correlations in the action sequence. In the feature-extraction stage, hierarchical graph convolutions learn local joint-movement patterns and global limb-coordination rules respectively, and adaptive edge-weight adjustment strengthens the expression of key action features. The network deeply merges node attributes (coordinates and velocities) with edge attributes (joint distances and motion phase), achieving multi-modal feature interaction through a multi-head attention mechanism. Finally, a fully connected classification layer maps the high-order graph representation to the action-label space, completing the end-to-end recognition task. Through hierarchical feature aggregation and the context-awareness of the graph structure, the framework fully mines the topological correlations in action data and shows strong classification robustness and generalization in complex scenes.
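The feature-propagation step at the heart of such a model can be reduced to a toy example (mean-neighbour message passing only; the edge attributes and multi-head attention described above are omitted):

```python
import numpy as np

def message_pass(h, adj):
    """One round of mean-neighbour message passing on a joint graph.

    h: (N, F) node features (e.g. joint coordinates and velocities);
    adj: (N, N) 0/1 adjacency matrix with self-loops included.
    Each node's new feature is the average of its neighbourhood.
    """
    deg = adj.sum(axis=1, keepdims=True)  # neighbourhood sizes
    return (adj @ h) / deg
```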
S34, predicting the corresponding luggage-basket vacating time and position from the person's intention to pick up luggage.
If a person is identified as picking up luggage, the basket's vacating time and position at the future moment are obtained from the person's action speed and the conveyor-belt speed.
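The arithmetic of step S34 might be sketched as follows (all names and the additive pick-duration model are illustrative assumptions, since the text does not give the exact formula):

```python
def predict_vacate(dist_to_person: float, pick_duration: float,
                   belt_speed: float, basket_x: float):
    """Estimate when and where a basket becomes empty.

    dist_to_person / belt_speed is the time until the basket reaches
    the passenger; pick_duration is the estimated time the lifting
    action takes; the basket keeps moving with the belt meanwhile.
    """
    t_vacate = dist_to_person / belt_speed + pick_duration
    x_vacate = basket_x + belt_speed * t_vacate
    return t_vacate, x_vacate
```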
S4, calculating motion trail of the grabbing mechanical arm based on prediction results
The invention recovers a plurality of empty baskets in a single trip that starts from and returns to the origin. An optimal motion trajectory is obtained by establishing a multi-objective trajectory plan, the grabbing sequence of the multiple targets is determined, and the mechanical arm is rotated according to the position and angle of the next empty basket to be grabbed so that the baskets already grabbed are stacked onto it and grabbed together at the same angle. The method comprises the following steps:
S41, constructing a mechanical arm motion model
A motion model of the mechanical arm is established with an improved D-H modeling method: the origin of each link coordinate system is set at the joint connecting the link to the previous one, and the Z axis of the coordinate system is the axis of the joint connecting to the next link. Homogeneous transformation matrices between adjacent coordinate systems are then obtained by the improved D-H parameter method. Because the improved D-H coordinate system places the coordinate origin at the head end of the link, parameter-interpretation ambiguity caused by coordinate-frame drift is eliminated, and the derivation of the homogeneous transformation matrices between adjacent frames is markedly simplified.
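Under the modified (Craig-style) D-H convention that the text appears to describe, the homogeneous transform between adjacent link frames takes the standard form below (a sketch assuming that convention; the patent's exact parameterization is not given):

```python
import math

def mdh_transform(alpha, a, d, theta):
    """Homogeneous transform between adjacent links under the modified
    (Craig) D-H convention, which places the frame origin at the
    proximal joint: T = Rx(alpha) * Dx(a) * Rz(theta) * Dz(d).
    """
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ]
```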
S42, establishing a multi-target track planning model
S421, establishing a model
The invention performs trajectory planning with an improved Dijkstra algorithm. Based on the future basket vacating times and positions obtained in step S3, the planning combines time, energy-consumption and risk trajectory objectives with penalty factors, ensuring the running stability and safety of the empty-basket recovery mechanical arm while keeping time and energy consumption optimal.
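As an illustrative skeleton of the planner (assuming a discretized waypoint graph; the text does not detail the "improved" modifications, so this is the textbook form, with edge weights standing in for the composite time/energy/risk cost):

```python
import heapq

def dijkstra(graph, src, dst):
    """Textbook Dijkstra shortest path.

    graph: {node: [(neighbour, edge_cost), ...]}; edge_cost stands in
    for the weighted combination of time, energy and risk.
    Returns (path, total_cost).
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```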
S422, determining an objective function
L = w1L1 + w2L2 + w3L3 + ε
L1 is the time objective function, L2 is the energy-consumption objective function, L3 is the risk objective function, ε is a penalty factor, and w1, w2 and w3 are their respective weight factors;
wherein L1 = t1 + t2 + … + tn, where ti is the time taken by the i-th sub-path and n is the number of sub-paths contained in the whole motion path;
Pi is the power of the i-th discretized sub-path's linear motion; sub-paths 1 to n1 are unloaded and sub-paths n1+1 to n carry empty baskets; k is the number of arm rotations and end-effector actions, and Pj is the power consumed by the corresponding rotation or end-effector action.
The L3 risk objective function is determined from the arm height, the distance to surrounding personnel, and the motion error, both when the mechanical arm translates after clamping an empty basket and when it rotates;
r1 is the distance-risk weight during linear motion, dis1 is the minimum distance between the mechanical arm and surrounding personnel during linear motion, σ1 is the distance-sensitivity coefficient during linear motion, h1 is the arm-end height during linear motion, δ1 is the height-adjustment coefficient during linear motion, r2 is the distance-risk weight during rotational motion, dis2 is the minimum distance between the mechanical arm and surrounding personnel during rotational motion, σ2 is the distance-sensitivity coefficient during rotational motion, h2 is the arm-end height during rotation, δ2 is the height-adjustment coefficient during rotation, r3 is the motion-error risk weight, xac is the actual position, and xde is the set position.
wherein the risk weight for rotational motion is higher than that for linear motion;
ε comprises acceleration and jerk penalties; because changes in speed and acceleration during arm motion cause mechanical vibration and wear, acceleration and jerk terms are introduced;
wherein ka and kj are parameters adjusted according to the rigidity of the mechanical arm and the task dynamics respectively, ai is the acceleration, ji is the jerk, and n1 and n2 are the numbers of acceleration and jerk changes respectively.
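A quadratic smoothness penalty over the n1 acceleration changes and n2 jerk changes is one common realization of such a term; the quadratic form and the default gains below are assumptions, not given in the patent:

```python
def penalty(accels, jerks, k_a=0.01, k_j=0.001):
    """epsilon: ka * sum(ai^2) + kj * sum(ji^2), penalizing abrupt
    acceleration and jerk changes that cause vibration and wear.
    k_a and k_j are tuned to arm stiffness and task dynamics.
    """
    return k_a * sum(a * a for a in accels) + k_j * sum(j * j for j in jerks)
```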
S423, performing dynamic track planning;
A dynamic planning method is used for the arm's motion trajectory. Specifically, the motion trajectories and behaviors of surrounding moving personnel are predicted from real-time video and matched against the planned arm trajectory; if a possible collision between the planned arm trajectory and the behavior of surrounding personnel is detected, the arm's motion trajectory is re-planned.
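The collision test that triggers re-planning can be sketched as a pointwise distance check between time-synchronized waypoints of the planned arm trajectory and the predicted person trajectory (the names and the fixed safety radius are illustrative assumptions):

```python
def needs_replan(arm_traj, person_traj, safety_radius=0.8):
    """Return True if any synchronized waypoint of the planned arm
    trajectory comes within safety_radius (metres) of the predicted
    person trajectory. Waypoints are (x, y) tuples sampled at the same
    time steps.
    """
    for (ax, ay), (px, py) in zip(arm_traj, person_traj):
        if ((ax - px) ** 2 + (ay - py) ** 2) ** 0.5 < safety_radius:
            return True
    return False
```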
S5, optimizing a motion trail planning algorithm;
Because the motion-trajectory planning algorithm has high time complexity and requires substantial computing resources, while the environment and personnel movement patterns at the baggage-claim area are relatively fixed, the invention performs a lightweight optimization of the planning algorithm based on path-pattern recognition, specifically comprising path-pattern construction, similarity matching and path-pattern selection;
S51, constructing path patterns: taking one complete motion trajectory as a unit (from the mechanical arm leaving the stacking position unloaded to returning the recovered empty basket to the stacking position), a group of data on the positions of empty baskets to be processed and their predicted emptying times is obtained from the historical data, and R path patterns are constructed by clustering on the number and relative positions of the empty baskets collected along each complete trajectory;
Optionally, the clustering method includes clustering by using a random forest.
S52, similarity matching: the number and relative positions of empty baskets in the data to be processed are matched against the R path patterns from the previous step; if the matching degree exceeds a set threshold, the two are considered to belong to the same path pattern;
S53, according to the matching result, planning is performed by reusing the existing motion trajectory in the historical database.
Therefore, the complexity of path operation is reduced, and path planning can be more quickly and simply performed on the path modes existing in the historical data.
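The S51 to S53 pipeline above can be sketched as follows. The Jaccard-style similarity over relative basket positions, the dictionary encoding of stored patterns, and all names are assumptions; the patent only requires a matching degree above a set threshold:

```python
def match_path_pattern(query, patterns, threshold=0.8):
    """Match a query (basket_count, tuple of relative basket positions)
    against stored path patterns; return the cached trajectory on a hit,
    or None to fall back to full trajectory planning.

    `patterns` maps pattern id -> (basket_count, positions, trajectory).
    """
    count_q, positions_q = query
    best_key, best_score = None, 0.0
    for key, (count_p, positions_p, _traj) in patterns.items():
        if count_p != count_q:
            continue  # basket counts must agree before comparing layouts
        inter = len(set(positions_q) & set(positions_p))
        union = len(set(positions_q) | set(positions_p))
        score = inter / union if union else 0.0
        if score > best_score:
            best_key, best_score = key, score
    if best_score >= threshold:
        return patterns[best_key][2]  # reuse the stored trajectory
    return None
```

A miss (return value None) hands the case to the full multi-objective planner, so the cache only ever short-circuits work, never changes the result set.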
The method achieves efficient grabbing of empty luggage baskets. The basket state is predicted in advance from passenger motion trajectories and actions, so the grabbing mechanical arm can reach the designated position ahead of time, improving efficiency. The grabbing trajectory and arm motion are planned intelligently with an improved path-planning method that jointly optimizes time, energy consumption and risk degree together with penalty factors, ensuring the operational stability and safety of the empty-basket recovery arm while keeping time and energy consumption near-optimal. The planning algorithm is further optimized by matching against constructed path patterns and reusing existing planned trajectories, reducing computational complexity and allowing faster, simpler planning for path patterns already present in the historical data.
An embodiment of the invention also provides a luggage empty-basket identification and grabbing device, comprising:
the image acquisition and preprocessing module is used for acquiring images of the luggage conveyor belt and the movement condition of surrounding personnel through the camera;
a basket identification and position tracking module for identifying the position of the luggage basket in the image based on the basket's specific shape and tracking the basket position in combination with the running speed of the conveyor belt;
the surrounding personnel movement condition prediction module is used for predicting movement tracks and behaviors of surrounding personnel;
the grabbing mechanical-arm motion-trajectory calculation module for obtaining an optimal motion trajectory by establishing multi-objective trajectory planning and determining the grabbing order of a plurality of grabbing targets;
and the motion trail planning algorithm optimization module is used for optimizing the motion trail planning algorithm in a lightweight way based on the path pattern recognition.
In addition, an embodiment of the invention provides a computer-readable storage medium storing program instructions of the luggage empty-basket identification and grabbing method, the program instructions being executable by one or more processors to implement the steps of the method.
The above examples are only illustrative of the preferred embodiments of the present invention and are not intended to limit the scope of the present invention, and various modifications and improvements made by those skilled in the art to the technical solution of the present invention should fall within the scope of protection defined by the claims of the present invention without departing from the spirit of the present invention.
Claims (10)
1. A luggage empty basket identification method, characterized by comprising the following steps:
S1, image acquisition and preprocessing, including acquiring baggage conveyor-belt images and the movement conditions of surrounding personnel through a camera;
S2, luggage basket identification and position tracking;
identifying the position of the basket in the image based on the specific shape of the basket, and tracking the position of the basket in combination with the running speed of the conveyor belt;
S3, predicting the motion trajectories and behaviors of surrounding personnel from their movement conditions; if a person is identified as picking up luggage, obtaining the future basket-emptying time and moving position from the person's movement speed and the conveyor-belt speed;
S4, calculating the motion trajectory of the grabbing mechanical arm based on the prediction result, comprising obtaining an optimal motion trajectory by establishing multi-objective trajectory planning and determining the grabbing order of a plurality of grabbing targets;
and S5, optimizing a motion trail planning algorithm.
2. The luggage empty basket identification method according to claim 1, wherein step S3 further comprises processing the images captured by the camera with a YoloV-based target detection algorithm to extract pedestrians from the image background, adding a 1×1 convolution module between YoloV and the CSPBlock of the identification model, and, in the spatial pyramid pooling module, applying max pooling and average pooling in parallel and summing their results.
3. The luggage empty basket identification method according to claim 2, wherein the prediction of the motion trajectories of surrounding personnel comprises prediction based on the historical trajectory sequence of pedestrians and the pedestrian direction.
4. The luggage empty basket identification method according to claim 2, wherein the graph-neural-network-based action recognition model used in behavior prediction models and classifies the behavior data by constructing a dynamic relational topology.
5. The luggage empty basket identification method according to claim 1, wherein the objective function of the multi-objective trajectory planning is as follows:
L=w1L1+w2L2+w3L3+ε
wherein L1 is the time objective function, L2 is the energy-consumption objective function, L3 is the risk-degree objective function, ε is a penalty factor, and w1, w2 and w3 are their respective weight factors.
6. The luggage empty basket identification method according to claim 5, wherein the L3 risk-degree objective function is determined by the height, the distance from surrounding personnel, and the movement error when the mechanical arm moves after clamping the empty basket and when it rotates;
wherein r1 is the distance risk weight during linear motion, dis1 is the minimum distance between the mechanical arm and surrounding personnel during linear motion, σ1 is the distance sensitivity coefficient during linear motion, h1 is the mechanical-arm end height during linear motion, δ1 is the height adjustment coefficient during linear motion, r2 is the distance risk weight during rotational motion, dis2 is the minimum distance between the mechanical arm and surrounding personnel during rotational motion, σ2 is the distance sensitivity coefficient during rotational motion, h2 is the mechanical-arm end height during rotation, δ2 is the height adjustment coefficient during rotation, r3 is the motion-error risk weight, xac is the actual position, and xde is the set position.
7. The luggage empty basket identification method according to claim 5, wherein ε comprises acceleration and jerk penalties; because changes in speed and acceleration during arm motion cause mechanical vibration and wear, acceleration and jerk terms are introduced;
wherein ka and kj are parameters adjusted according to the rigidity of the mechanical arm and the task dynamics respectively, ai is the acceleration, ji is the jerk, and n1 and n2 are the numbers of acceleration and jerk changes respectively.
8. The luggage empty basket identification method according to claim 1, wherein step S5 comprises constructing path patterns, similarity matching, and path-pattern selection: taking one complete motion trajectory as a unit (from the mechanical arm leaving the stacking position unloaded to returning the recovered empty basket to the stacking position), a group of data on the positions of empty baskets to be processed and their predicted emptying times is obtained from historical data; a plurality of path patterns are constructed by clustering on the number and relative positions of the empty baskets collected along each complete trajectory; the data to be processed are matched by similarity; and planning is performed by reusing the existing motion trajectory in the historical database according to the matching result.
9. An apparatus based on the luggage empty basket identification method according to any one of claims 1 to 8, comprising:
the image acquisition and preprocessing module is used for acquiring images of the luggage conveyor belt and the movement condition of surrounding personnel through the camera;
a basket identification and position tracking module for identifying the position of the luggage basket in the image based on the basket's specific shape and tracking the basket position in combination with the running speed of the conveyor belt;
the surrounding personnel movement condition prediction module is used for predicting movement tracks and behaviors of surrounding personnel;
the grabbing mechanical-arm motion-trajectory calculation module for obtaining an optimal motion trajectory by establishing multi-objective trajectory planning and determining the grabbing order of a plurality of grabbing targets;
and the motion trail planning algorithm optimization module is used for optimizing the motion trail planning algorithm in a lightweight way based on the path pattern recognition.
10. A computer-readable storage medium storing program instructions of the luggage empty basket identification method, the program instructions being executable by one or more processors to implement the steps of the luggage empty basket identification method according to any one of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510383102.4A CN120328089A (en) | 2025-03-28 | 2025-03-28 | A method and device for identifying and grabbing empty luggage basket |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120328089A true CN120328089A (en) | 2025-07-18 |
Family
ID=96360853
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510383102.4A Pending CN120328089A (en) | 2025-03-28 | 2025-03-28 | A method and device for identifying and grabbing empty luggage basket |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120328089A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120620230A (en) * | 2025-08-12 | 2025-09-12 | 中国东方航空设备集成有限公司 | Luggage tractor with hand-eye integrated mechanical arm |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010048146A1 (en) * | 2008-10-20 | 2010-04-29 | Carnegie Mellon University | System, method and device for predicting navigational decision-making behavior |
| CN105700530A (en) * | 2016-04-11 | 2016-06-22 | 南京埃斯顿自动化股份有限公司 | Track planning method for robot joint space conveyor belt following movement |
| CN111027473A (en) * | 2019-12-09 | 2020-04-17 | 山东省科学院自动化研究所 | Target identification method and system based on human body joint motion real-time prediction |
| US11151668B1 (en) * | 2020-12-08 | 2021-10-19 | Umm Al-Qura University | Capacity constrained and user preference based scheduler, rescheduler, simulation, and prediction for crowds in spatio-temporal environments |
| CN119106888A (en) * | 2024-09-09 | 2024-12-10 | 江苏省南京工程高等职业学校 | Robot behavior data analysis method and system based on artificial intelligence |
| CN119323905A (en) * | 2024-10-30 | 2025-01-17 | 成都双流国际机场股份有限公司 | Airport entering and exiting flight information accurate display system for passenger service |
Non-Patent Citations (1)
| Title |
|---|
| WANG Ru; WANG Liushu: "Research on Incentive Pool Allocation for IPD Project Teams under BIM Technology", 科技管理研究 (Science and Technology Management Research), no. 13, 10 July 2017 (2017-07-10) * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11559885B2 (en) | Method and system for grasping an object | |
| Jaspers et al. | Multi-modal local terrain maps from vision and lidar | |
| Yu et al. | Machine learning optimizes the efficiency of picking and packing in automated warehouse robot systems | |
| CN120328089A (en) | A method and device for identifying and grabbing empty luggage basket | |
| KR102837653B1 (en) | Method of estimating position in local area in large sapce and robot and cloud server implementing thereof | |
| CN120215514B (en) | A reinforcement learning unmanned forklift obstacle avoidance scheduling method and system for dynamic obstacles | |
| CN120116222A (en) | A method for dynamic environment adaptation and behavior optimization of humanoid robots based on artificial intelligence | |
| CN120178713A (en) | A heterogeneous multi-machine collaboration method, terminal and readable storage medium | |
| Zhou et al. | An autonomous navigation approach for unmanned vehicle in outdoor unstructured terrain with dynamic and negative obstacles | |
| Hossain et al. | Toponav: Topological navigation for efficient exploration in sparse reward environments | |
| CN117621032A (en) | An object picking method and related equipment | |
| CN119472658B (en) | Unmanned vessel cluster surface partitioning collaborative classification salvage method based on deep learning | |
| Fujiyoshi et al. | Team C2M: two cooperative robots for picking and stowing in Amazon picking challenge 2016 | |
| Sebbata et al. | An adaptive robotic grasping with a 2-finger gripper based on deep learning network | |
| CN113074737B (en) | Multi-robot distributed collaborative vision mapping method based on scene identification | |
| Vyas et al. | Robotic grasp synthesis using deep learning approaches: a survey | |
| Liu et al. | An IMM-enabled adaptive 3D multi-object tracker for autonomous driving | |
| Simenthy et al. | Hybrid deep learning object detection algorithm for autonomous vehicles with hybrid optimization technique | |
| US12517525B1 (en) | Path creation, detection and prediction using primitives | |
| Wicaksono et al. | Optimizing uav navigation through non-uniform b-spline trajectory for tracking uav enemy | |
| CN117928539B (en) | Bridge mass surveying method based on four-rotor unmanned aerial vehicle | |
| CN121386773A (en) | Real-time early warning and autonomous obstacle avoidance system for loading and unloading operations | |
| Shi | Research on robot autonomous inspection visual perception system based on improved YOLOv11 and SLAM | |
| Ferreira et al. | A visual memory system for humanoid robots | |
| Shah et al. | A Deep Learning Approach for Autonomous Navigation of UAV |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||