CN113128298B - Loading and unloading behavior analysis method and monitoring system - Google Patents
- Publication number
- CN113128298B CN113128298B CN201911423835.7A CN201911423835A CN113128298B CN 113128298 B CN113128298 B CN 113128298B CN 201911423835 A CN201911423835 A CN 201911423835A CN 113128298 B CN113128298 B CN 113128298B
- Authority
- CN
- China
- Prior art keywords
- loading
- unloading
- images
- personnel
- cargo
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Business, Economics & Management (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Economics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Marketing (AREA)
- Tourism & Hospitality (AREA)
- Human Resources & Organizations (AREA)
- Development Economics (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- General Business, Economics & Management (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
A loading and unloading behavior analysis method and monitoring system. The analysis method includes: identifying, through a first deep neural network, the body joint points of the person in each image to be analyzed, so as to obtain multiple groups of coordinate arrays of the body joint points; inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video; inferring, through a third deep neural network, the cargo type of the cargo being loaded and unloaded in the first video; and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type of the cargo being handled, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. The invention can identify violent handling by loading and unloading workers in a timely manner through video monitoring of the loading and unloading sites of a logistics park alone; the process requires no on-site supervision by staff, and has low cost and good effect.
Description
Technical Field
The invention relates to the technical field of logistics, and in particular to a loading and unloading behavior analysis method and monitoring system.
Background
Logistics comprises multiple links, such as transportation, storage, loading and unloading, handling, packaging, and distribution. Loading and unloading is an important link: improper handling easily causes damage to or even scrapping of goods, while standard and reasonable loading and unloading can effectively reduce the cargo damage rate.
In the prior art, the behavior of loading and unloading workers lacks scientific and effective monitoring, and violent handling occurs. Moreover, responsibility for goods damaged by violent handling is often difficult to trace afterwards; that is, by the time damage is discovered, it is hard to determine when and why the goods were damaged.
Therefore, a scientific and effective method for analyzing and monitoring loading and unloading behavior is needed, so that violent handling by loading and unloading workers can be identified and corrected in time, reducing damage to goods caused by violent handling.
Disclosure of Invention
The technical problem solved by the invention is: identifying violent loading and unloading behavior of loading and unloading workers in time.
In order to solve the above technical problem, an embodiment of the present invention provides a loading and unloading behavior analysis method, including:
capturing a first video;
acquiring multiple images to be analyzed from the first video, in which the person performing the loading and unloading behavior and the cargo being loaded and unloaded are visible;
identifying, through a first deep neural network, the body joint points of the person in each of the images to be analyzed, so as to obtain multiple groups of coordinate arrays of the body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video, according to the multiple groups of coordinate arrays of the body joint points and the time of each image to be analyzed in the video;
inferring, through a third deep neural network, the cargo type of the cargo being loaded and unloaded in the first video, according to one or more of the images to be analyzed;
and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type of the cargo being handled, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
Optionally, the method further comprises pre-training the first deep neural network, including:
acquiring multiple images containing persons in various postures from a video;
training the first deep neural network with the multiple images containing persons in various postures, for inferring the position area of the person in an image;
and outputting, for each image, a coordinate array of the positions of the person's joint points within the person position area.
Optionally, the postures include one or more of: standing, bending over, or half-turning.
Optionally, the method further comprises pre-training the second deep neural network, including:
training the second deep neural network with the multiple images containing persons in various postures, the coordinate arrays of the persons' joint points in the person position areas, and the times of the images in the video, so that it infers the behavior type of a person in a video from the change of the person's joint points over time.
Optionally, the behavior types include one or more of: quickly raising the hands, quickly kicking the legs, or quickly dropping the hands.
Optionally, the method further comprises pre-training the third deep neural network, including:
acquiring multiple images of cargo of various cargo types;
training the third deep neural network with the multiple images of cargo of various types, for inferring the cargo type of the cargo in an image.
Optionally, the cargo types include one or more of: carton, wooden box, sack, woven bag, or bagged goods.
Optionally, the method further comprises: presetting ranges of reasonable carrying-speed change rates corresponding to the various behavior types and cargo types.
Optionally, judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type of the cargo being handled, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type includes:
if the carrying-speed change rate is within the range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type, judging the loading and unloading to be normal;
and if the carrying-speed change rate is outside the range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type, judging the behavior to be violent loading and unloading.
Optionally, the method further comprises: triggering warning information when unreasonable loading and unloading behavior is detected in the first video.
In order to solve the above technical problem, an embodiment of the present invention further provides a loading and unloading behavior monitoring system, including:
a processor adapted to load and execute the instructions of a software program;
and a memory adapted to store the software program, the software program comprising instructions for performing the following steps:
capturing a first video;
acquiring multiple images to be analyzed from the first video, in which the person performing the loading and unloading behavior and the cargo being loaded and unloaded are visible;
identifying, through a first deep neural network, the body joint points of the person in each of the images to be analyzed, so as to obtain multiple groups of coordinate arrays of the body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video, according to the multiple groups of coordinate arrays of the body joint points and the time of each image to be analyzed in the video;
inferring, through a third deep neural network, the cargo type of the cargo being loaded and unloaded in the first video, according to one or more of the images to be analyzed;
and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type of the cargo being handled, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
Optionally, the software program further comprises instructions for pre-training the first deep neural network, including:
acquiring multiple images containing persons in various postures from a video;
training the first deep neural network with the multiple images containing persons in various postures, for inferring the position area of the person in an image;
and outputting, for each image, a coordinate array of the positions of the person's joint points within the person position area.
Optionally, the software program further comprises instructions for pre-training the second deep neural network, including:
training the second deep neural network with the multiple images containing persons in various postures, the coordinate arrays of the persons' joint points in the person position areas, and the times of the images in the video, so that it infers the behavior type of a person in a video from the change of the person's joint points over time.
Optionally, the software program further comprises instructions for pre-training the third deep neural network, including:
acquiring multiple images of cargo of various cargo types;
training the third deep neural network with the multiple images of cargo of various types, for inferring the cargo type of the cargo in an image.
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
The body joint points of the person in each image to be analyzed are identified through a first deep neural network, yielding multiple groups of coordinate arrays of the body joint points; the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video are inferred through a second deep neural network, according to the coordinate arrays of the body joint points and the time of each image to be analyzed in the video; the cargo type of the cargo being loaded and unloaded in the first video is inferred through a third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type of the person, the cargo type of the cargo, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. As a result, violent handling by loading and unloading workers can be identified in time through video monitoring of the loading and unloading sites of a logistics park alone; the process requires no on-site supervision by staff, and has low cost and good effect.
Further, the first deep neural network is pre-trained to infer the position area of a person in an image, the second deep neural network is pre-trained to infer the behavior type of a person in a video from the change of the person's joint points over time, and the third deep neural network is pre-trained to infer the cargo type of the cargo in an image; judging reasonableness from the carrying-speed change rate, the behavior type, the cargo type, and the corresponding range of reasonable carrying-speed change rates makes the judgment more accurate, and a specific way of training the three deep neural networks is disclosed.
Further, when unreasonable loading and unloading behavior is detected in the first video, warning information is triggered, for example by playing a voice prompt reminding the loading and unloading workers to operate in a standard way, or by recording the event in the system, so that violent handling is corrected in time and a basis is provided for tracing responsibility for goods damaged by violent handling.
Drawings
FIG. 1 is a flow chart of a method for analyzing loading and unloading behavior according to an embodiment of the present invention.
Detailed Description
As the Background section explains, in the prior art the behavior of loading and unloading workers lacks scientific and effective monitoring, and violent handling occurs. Moreover, responsibility for goods damaged by violent handling is often difficult to trace afterwards; that is, by the time damage is discovered, it is hard to determine when and why the goods were damaged.
Therefore, a scientific and effective method for analyzing and monitoring loading and unloading behavior is needed, so that violent handling by loading and unloading workers can be identified and corrected in time, reducing damage to goods caused by violent handling.
In the present invention, the body joint points of the person in each image to be analyzed are identified through a first deep neural network, yielding multiple groups of coordinate arrays of the body joint points; the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video are inferred through a second deep neural network, according to the coordinate arrays of the body joint points and the time of each image to be analyzed in the video; the cargo type of the cargo being loaded and unloaded in the first video is inferred through a third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type of the person, the cargo type of the cargo, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. As a result, violent handling by loading and unloading workers can be identified in time through video monitoring of the loading and unloading sites of a logistics park alone; the process requires no on-site supervision by staff, and has low cost and good effect.
In order that those skilled in the art will better understand and practice the invention, a detailed description will be given below with reference to specific embodiments thereof.
Example 1
As described below, an embodiment of the invention provides a loading and unloading behavior analysis method.
The method of this embodiment is suitable for a scenario in which cameras are installed at the dock (or another cargo handling location) of a logistics park; the video captured by the cameras is detected and analyzed to identify violent handling by loading and unloading workers in the video, and whether the loading and unloading operation is standard is determined by analyzing the workers' body behavior, the cargo type, the movement trajectories, and the like.
The analysis method is described below in detail, step by step, with reference to the flow chart in FIG. 1:
S101, pre-training the first deep neural network.
This step specifically comprises:
acquiring multiple images containing persons in various postures from a video;
training the first deep neural network with the multiple images containing persons in various postures (the images may show the whole body of a person, or a partially occluded person), for inferring the position area of the (whole) person in an image;
and outputting, for each image, a coordinate array of the positions of the person's joint points within the person position area.
In some embodiments, the person position area may be enclosed by a rectangular bounding box.
In some embodiments, the postures include one or more of: standing, bending over, or half-turning.
In some embodiments, the first deep neural network may employ a top-down (top-down) deep neural network.
The input of the first deep neural network is a single-frame image, and its output is a group of coordinate arrays of the body joint points. The multiple images to be analyzed are each input to the first deep neural network, giving multiple groups of coordinate arrays of the body joint points; the time of each image to be analyzed in the video is also recorded, and from this information the change of the person's joint points over time is obtained.
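As a minimal illustration of how the per-frame coordinate arrays and recorded times yield the change of joint points over time, the sketch below assumes the pose network outputs a list of (x, y) joint coordinates per frame; the function name and data layout are invented, not taken from the patent:

```python
from math import hypot

def joint_speeds(keypoints, timestamps):
    """Per-joint speed between consecutive frames.

    keypoints  -- list of frames; each frame is a list of (x, y) joint
                  coordinates, as a hypothetical pose network might output.
    timestamps -- time of each frame in the video, in seconds.
    Returns a list (one entry per frame interval) of per-joint
    pixel-per-second speeds.
    """
    speeds = []
    for prev, cur, t0, t1 in zip(keypoints, keypoints[1:],
                                 timestamps, timestamps[1:]):
        dt = t1 - t0
        speeds.append([hypot(x1 - x0, y1 - y0) / dt
                       for (x0, y0), (x1, y1) in zip(prev, cur)])
    return speeds

# One tracked joint: moves 30 px in 0.1 s, then 40 px in the next 0.1 s
s = joint_speeds([[(0, 0)], [(30, 0)], [(30, 40)]], [0.0, 0.1, 0.2])
```

In this sketch `s` holds the joint's speed over each interval (300 px/s, then 400 px/s), which is the kind of time-series the second network consumes.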
S102, training a second deep neural network in advance.
This step specifically comprises:
training the second deep neural network with the multiple images containing persons in various postures, the coordinate arrays of the persons' joint points in the person position areas, and the times of the images in the video, so that it infers the behavior type of a person in a video from the change of the person's joint points over time.
In some embodiments, the behavior types include one or more of: quickly raising the hands, quickly kicking the legs, or quickly dropping the hands.
Step S102 may be performed after step S101, so that the coordinate arrays of the persons' joint points in the person position areas output in step S101 can be reused.
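For illustration only, the carrying-speed change rate that the second deep neural network infers can be approximated by a simple finite difference over a hand-joint speed series; this heuristic stand-in is an assumption, not the patent's trained network:

```python
def carrying_speed_change_rate(speeds, times):
    """Largest-magnitude rate of change of carrying speed, in px/s^2
    (a crude stand-in for the quantity the second network infers).

    speeds -- carrying speed of a tracked hand joint per interval (px/s)
    times  -- midpoint time of each interval (s)
    """
    rates = [(v1 - v0) / (t1 - t0)
             for v0, v1, t0, t1 in zip(speeds, speeds[1:],
                                       times, times[1:])]
    return max(rates, key=abs, default=0.0)

# Speed jumps from 100 to 400 px/s within 0.1 s, then drops to 200 px/s:
# the largest-magnitude change rate is roughly 3000 px/s^2
r = carrying_speed_change_rate([100.0, 400.0, 200.0], [0.05, 0.15, 0.25])
```

A sudden large value of this quantity is exactly the kind of signature that distinguishes throwing or dropping cargo from carrying it smoothly.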
S103, training a third deep neural network in advance.
This step specifically comprises:
acquiring multiple images of cargo of various cargo types;
training the third deep neural network with the multiple images of cargo of various types, for inferring the cargo type of the cargo in an image.
The cargo types include one or more of: carton, wooden box, sack, woven bag, or bagged goods.
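The third network can be pictured, purely for illustration, as a classifier that maps an image feature vector to the nearest cargo-type prototype; a real implementation would be a trained convolutional network, and the feature values below are invented:

```python
def classify_cargo(feature, prototypes):
    """Nearest-prototype stand-in for the third deep neural network.

    feature    -- a feature vector extracted from an image (here,
                  hypothetically, a coarse colour descriptor).
    prototypes -- {cargo_type: prototype feature vector} fitted beforehand.
    Returns the cargo type whose prototype is closest (squared Euclidean).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda k: dist(feature, prototypes[k]))

# Invented prototype vectors for three of the cargo types named above
protos = {
    "carton": [0.8, 0.5, 0.2],
    "sack": [0.4, 0.4, 0.4],
    "wooden box": [0.6, 0.4, 0.2],
}
kind = classify_cargo([0.79, 0.52, 0.18], protos)  # nearest to "carton"
```

The point of the per-type prediction is downstream: different cargo types tolerate different handling, so the reasonable-range lookup in step S110 is keyed on the cargo type as well as the behavior type.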
Step S103 has no ordering dependence on steps S101 and S102 and can be executed in parallel with them.
S104, presetting ranges of reasonable carrying-speed change rates corresponding to the various behavior types and cargo types.
These preset ranges are used for the judgment in the subsequent step S110.
As can be seen from the above description of the technical solution: in this embodiment, the first deep neural network is pre-trained to infer the position area of a person in an image, the second deep neural network is pre-trained to infer the behavior type of a person in a video from the change of the person's joint points over time, and the third deep neural network is pre-trained to infer the cargo type of the cargo in an image. Whether the loading and unloading behavior is reasonable is then judged according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type of the cargo being handled, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type, making the judgment more accurate; a specific way of training the three deep neural networks is also disclosed.
S105, capturing a first video.
S106, acquiring multiple images to be analyzed from the first video, in which the person performing the loading and unloading behavior and the cargo being loaded and unloaded are visible.
In some embodiments, as noted above, cameras may be installed at the dock (or another cargo handling location) of a logistics park to capture video of the loading and unloading activity.
In some embodiments, the images to be analyzed may be in RGB format.
Of course, in other embodiments the images to be analyzed may be in an image format other than RGB; the invention is not limited in this respect.
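The acquisition of images to be analyzed can be sketched as sampling the first video at fixed time intervals; the 0.2 s interval below is an assumed value, since the patent does not specify a sampling rate:

```python
def sample_timestamps(duration_s, interval_s=0.2):
    """Evenly spaced timestamps (seconds) at which to grab frames from
    the first video for analysis.

    The 0.2 s default (5 frames per second) is an assumption; any
    interval fine enough to resolve a throwing motion would do.
    """
    n = round(duration_s / interval_s) + 1
    return [round(i * interval_s, 6) for i in range(n)]

ts = sample_timestamps(1.0)  # six timestamps covering a 1-second clip
```

Each sampled frame is then passed to the first deep neural network, and its timestamp is kept alongside the resulting coordinate array.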
S107, identifying, through the first deep neural network, the body joint points of the person in each of the images to be analyzed, so as to obtain multiple groups of coordinate arrays of the body joint points.
S108, inferring, through the second deep neural network, the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video, according to the multiple groups of coordinate arrays of the body joint points and the time of each image to be analyzed in the video.
S109, inferring, through the third deep neural network, the cargo type of the cargo being loaded and unloaded in the first video, according to one or more of the images to be analyzed.
S110, judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type of the cargo being handled, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
As can be seen from the above description of the technical solution: in this embodiment, the body joint points of the person in each image to be analyzed are identified through the first deep neural network, yielding multiple groups of coordinate arrays of the body joint points; the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video are inferred through the second deep neural network, according to the coordinate arrays of the body joint points and the time of each image to be analyzed in the video; the cargo type of the cargo being loaded and unloaded in the first video is inferred through the third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type of the person, the cargo type of the cargo, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. As a result, violent handling by loading and unloading workers can be identified in time through video monitoring of the loading and unloading sites of a logistics park alone; the process requires no on-site supervision by staff, and has low cost and good effect.
In some embodiments, the specific judgment method may include:
if the carrying-speed change rate is within the range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type, judging the loading and unloading to be normal;
and if the carrying-speed change rate is outside the range of reasonable carrying-speed change rates corresponding to the behavior type and cargo type, judging the behavior to be violent loading and unloading.
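Putting the preset ranges of step S104 and this judgment rule together, a minimal sketch might look as follows; the range values and label strings are invented for illustration, since the patent leaves the concrete ranges to be configured:

```python
# Hypothetical preset table: reasonable carrying-speed change-rate ranges
# (px/s^2) per (behavior type, cargo type) pair. All values are invented.
REASONABLE_RANGES = {
    ("quickly dropping the hands", "carton"): (0.0, 1500.0),
    ("quickly dropping the hands", "wooden box"): (0.0, 2500.0),
    ("quickly raising the hands", "carton"): (0.0, 2000.0),
}

def judge(rate, behavior, cargo, table=REASONABLE_RANGES):
    """Normal if the carrying-speed change rate lies inside the preset
    reasonable range for this behavior/cargo pair, else violent."""
    lo, hi = table[(behavior, cargo)]
    return "normal" if lo <= rate <= hi else "violent loading and unloading"

verdict_ok = judge(1200.0, "quickly dropping the hands", "carton")
verdict_bad = judge(3100.0, "quickly dropping the hands", "carton")
```

Keying the table on both behavior and cargo captures the idea that, for example, a wooden box tolerates a harder set-down than a carton.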
S111, triggering warning information when unreasonable loading and unloading behavior is detected in the first video.
For example, a voice prompt may be played reminding the loading and unloading workers to operate in a standard way, and the event may be recorded in the system, providing a basis for correcting violent handling in time and for tracing responsibility for goods damaged by violent handling.
Example two
As described below, an embodiment of the present invention provides a loading and unloading behavior monitoring system.
The loading and unloading behavior monitoring system comprises one or more processors and one or more memories, wherein
the processor is adapted to load and execute the instructions of a software program;
and the memory is adapted to store the software program, the software program comprising instructions for performing the following steps:
capturing a first video;
acquiring multiple images to be analyzed from the first video, in which the person performing the loading and unloading behavior and the cargo being loaded and unloaded are visible;
identifying, through a first deep neural network, the body joint points of the person in each of the images to be analyzed, so as to obtain multiple groups of coordinate arrays of the body joint points;
inferring, through a second deep neural network, the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video, according to the multiple groups of coordinate arrays of the body joint points and the time of each image to be analyzed in the video;
inferring, through a third deep neural network, the cargo type of the cargo being loaded and unloaded in the first video, according to one or more of the images to be analyzed;
and judging whether the loading and unloading behavior is reasonable according to the carrying-speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type of the cargo being handled, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type.
As can be seen from the above description of the technical solution: in this embodiment, the body joint points of the person in each image to be analyzed are identified through the first deep neural network, yielding multiple groups of coordinate arrays of the body joint points; the behavior type and the carrying-speed change rate of the person performing the loading and unloading behavior in the first video are inferred through the second deep neural network, according to the coordinate arrays of the body joint points and the time of each image to be analyzed in the video; the cargo type of the cargo being loaded and unloaded in the first video is inferred through the third deep neural network, according to one or more of the images to be analyzed; and whether the loading and unloading behavior is reasonable is judged according to the carrying-speed change rate, the behavior type of the person, the cargo type of the cargo, and the range of reasonable carrying-speed change rates corresponding to that behavior type and cargo type. As a result, violent handling by loading and unloading workers can be identified in time through video monitoring of the loading and unloading sites of a logistics park alone; the process requires no on-site supervision by staff, and has low cost and good effect.
In some embodiments, the software program further comprises instructions for performing the following steps of pre-training the first deep neural network:
acquiring, from a video, a plurality of images containing persons in various different postures;
training the first deep neural network with the plurality of images containing persons in various different postures to infer the person position area of each person in an image; and
outputting, for each of the images of persons in various different postures, a coordinate array of the joint point positions of the person within the person position area of that image.
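The patent specifies what the first deep neural network outputs, but not its architecture. As a hedged sketch, the following shows only the two-stage output format described above: a person position area (bounding box) per image, and a coordinate array of joint point positions expressed within that area. The joint list and function names are illustrative assumptions, not terms from the patent.

```python
# Sketch of the first network's output format: a person position area per
# image plus a coordinate array of joint points within that area. The joint
# set (JOINT_NAMES) and all names below are illustrative assumptions.

JOINT_NAMES = ["head", "left_shoulder", "right_shoulder",
               "left_elbow", "right_elbow", "left_wrist", "right_wrist",
               "left_knee", "right_knee"]

def joint_coordinate_array(person_region, raw_joints):
    """Convert absolute joint detections into a per-image coordinate array.

    person_region: (x, y, w, h) bounding box of the person position area.
    raw_joints:    {joint_name: (abs_x, abs_y)} in full-image pixel coordinates.
    Returns a list of (x, y) pairs ordered by JOINT_NAMES, expressed relative
    to the top-left corner of the person position area.
    """
    x0, y0, _, _ = person_region
    return [(raw_joints[name][0] - x0, raw_joints[name][1] - y0)
            for name in JOINT_NAMES]
```

One such coordinate array is produced per image to be analyzed; the sequence of arrays over time is what the second network consumes.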
In some embodiments, the software program further comprises instructions for performing the following steps of pre-training the second deep neural network:
training the second deep neural network with the plurality of images containing persons in various different postures, the coordinate arrays of the joint points within the person position areas, and the times of the images within the video, so that the behavior type of a person in a video can be inferred from how the person's joint points change over time.
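The second network also produces the carrying speed change rate. The patent does not give the computation, but the quantity can be illustrated directly from the joint coordinate arrays and image timestamps: a minimal, non-learned stand-in that tracks one wrist joint, computes its speed between consecutive images, and takes the largest change of that speed between adjacent intervals. The joint choice and the units are assumptions.

```python
# Hedged stand-in for the carrying speed change rate the second deep neural
# network infers. Speeds are computed from wrist coordinates in consecutive
# images; tracking the wrist joint (rather than a learned representation)
# is an illustrative simplification.
import math

def speeds(wrist_positions, timestamps):
    """Per-interval wrist speed (pixels per second) between consecutive images."""
    out = []
    for (x0, y0), (x1, y1), t0, t1 in zip(
            wrist_positions, wrist_positions[1:], timestamps, timestamps[1:]):
        out.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    return out

def speed_change_rate(wrist_positions, timestamps):
    """Largest change of carrying speed between adjacent intervals."""
    v = speeds(wrist_positions, timestamps)
    return max(abs(b - a) for a, b in zip(v, v[1:])) if len(v) > 1 else 0.0
```

A sudden downward throw shows up as a large jump between adjacent interval speeds, which is exactly the signal the reasonableness check below consumes.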
In some embodiments, the software program further comprises instructions for performing the following steps of pre-training the third deep neural network:
acquiring a plurality of images of cargo covering various different cargo types; and
training the third deep neural network with the plurality of cargo images to infer the type of the cargo in an image.
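The patent only states that the third network is trained on labeled cargo images and then infers the cargo type of a new image. To make the train/infer split concrete without inventing an architecture, the following hedged sketch substitutes a nearest-centroid classifier over simple per-image feature vectors (for example color histograms); a convolutional network would play this role in practice, and every name below is an assumption.

```python
# Hedged illustration of the third network's train/infer split, with a
# nearest-centroid classifier standing in for the learned model. Feature
# vectors (e.g. color histograms per image) are an assumed representation.

def train_cargo_classifier(labeled_features):
    """labeled_features: {cargo_type: [feature_vector, ...]}.
    Returns one mean feature vector (centroid) per cargo type."""
    centroids = {}
    for cargo_type, vectors in labeled_features.items():
        n = len(vectors)
        centroids[cargo_type] = [sum(col) / n for col in zip(*vectors)]
    return centroids

def infer_cargo_type(centroids, feature_vector):
    """Return the cargo type whose centroid is nearest to the feature vector."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, feature_vector))
    return min(centroids, key=lambda t: dist2(centroids[t]))
```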
Further, the first deep neural network is trained in advance to infer the person position area of a person in an image; the second deep neural network is trained in advance to infer the behavior type of a person in a video from the change of the person's joint points over time; and the third deep neural network is trained in advance to infer the type of the cargo in an image. Whether the loading and unloading behavior is reasonable is then judged according to the carrying speed change rate, the behavior type of the person performing the loading and unloading behavior, the cargo type, and the range of reasonable carrying speed change rates corresponding to that behavior type and cargo type, making the judgment more accurate; a specific way of training the three deep neural networks is also disclosed.
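The final judgment step described above reduces to a table lookup followed by a range check: the measured carrying speed change rate is compared against the preset reasonable range for the combination of behavior type and cargo type. A minimal sketch follows; the range values in the table are illustrative assumptions, since the patent only states that such ranges are preset.

```python
# Minimal sketch of the judgment step: compare the measured carrying speed
# change rate against the preset range of reasonable rates for the given
# (behavior type, cargo type) combination. The numeric ranges below are
# illustrative assumptions, not values from the patent.

REASONABLE_RANGES = {
    ("quick_hand_drop", "carton"):  (0.0, 2.0),
    ("quick_hand_drop", "sack"):    (0.0, 3.5),
    ("quick_hand_raise", "carton"): (0.0, 2.5),
}

def judge_loading_behavior(change_rate, behavior_type, cargo_type):
    """Return 'normal' if the change rate lies inside the preset reasonable
    range for this behavior/cargo combination, otherwise 'violent'."""
    low, high = REASONABLE_RANGES[(behavior_type, cargo_type)]
    return "normal" if low <= change_rate <= high else "violent"
```

A "violent" result is what would trigger the warning information mentioned in claim 10.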
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be completed by hardware under the control of program instructions, and that the program may be stored in a computer-readable storage medium, which may include ROM, RAM, magnetic disks, optical discs, and the like.
Although the present invention is disclosed above, it is not limited thereto. Those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention; the scope of protection of the invention shall therefore be that defined by the appended claims.
Claims (14)
1. A method of analyzing loading and unloading behavior, comprising:
Capturing a first video;
acquiring a plurality of images to be analyzed from the first video, wherein the person performing the loading and unloading behavior and the cargo being loaded and unloaded are visible in the images to be analyzed;
identifying, through a first deep neural network and according to the multiple images to be analyzed, the body joint points of the person in each image to be analyzed, to obtain multiple groups of coordinate arrays of body joint points;
inferring, through a second deep neural network and according to the coordinate arrays of body joint points and the time of each image to be analyzed within the video, the behavior type and the carrying speed change rate of the person performing loading and unloading behavior in the first video;
inferring, through a third deep neural network and according to one or more of the images to be analyzed, the type of the cargo being loaded and unloaded in the first video; and
judging whether the loading and unloading behavior is reasonable according to the carrying speed change rate, the behavior type of the person performing the loading and unloading behavior, the type of the cargo being loaded and unloaded, and the range of reasonable carrying speed change rates corresponding to that behavior type and cargo type.
2. The method for analyzing loading and unloading behavior according to claim 1, further comprising pre-training the first deep neural network, which comprises:
acquiring, from a video, a plurality of images containing persons in various different postures;
training the first deep neural network with the plurality of images containing persons in various different postures to infer the person position area of each person in an image; and
outputting, for each of the images of persons in various different postures, a coordinate array of the joint point positions of the person within the person position area of that image.
3. The method for analyzing loading and unloading behavior according to claim 2, wherein the postures comprise one or more of standing, bending over, or half turning.
4. The method for analyzing loading and unloading behavior according to claim 2, further comprising pre-training the second deep neural network, which comprises:
training the second deep neural network with the plurality of images containing persons in various different postures, the coordinate arrays of the joint points within the person position areas, and the times of the images within the video, so that the behavior type of a person in a video can be inferred from how the person's joint points change over time.
5. The method for analyzing loading and unloading behavior according to claim 4, wherein the behavior types comprise one or more of quickly raising the hands, quickly kicking a leg, or quickly dropping the hands.
6. The method for analyzing loading and unloading behavior according to claim 1, further comprising pre-training the third deep neural network, which comprises:
acquiring a plurality of images of cargo covering various different cargo types; and
training the third deep neural network with the plurality of cargo images to infer the type of the cargo in an image.
7. The method for analyzing loading and unloading behavior according to claim 6, wherein the cargo types comprise one or more of a carton, a wooden box, a sack, a woven bag, or bagged goods.
8. The method for analyzing loading and unloading behavior according to claim 1, further comprising: presetting the ranges of reasonable carrying speed change rates corresponding to the various behavior types and cargo types.
9. The method for analyzing loading and unloading behavior according to claim 1, wherein judging whether the loading and unloading behavior is reasonable according to the carrying speed change rate, the behavior type of the person performing the loading and unloading behavior, the type of the cargo being loaded and unloaded, and the range of reasonable carrying speed change rates corresponding to that behavior type and cargo type comprises:
if the carrying speed change rate is within the range of reasonable carrying speed change rates corresponding to the behavior type and the cargo type, judging the behavior to be normal loading and unloading; and
if the carrying speed change rate is outside the range of reasonable carrying speed change rates corresponding to the behavior type and the cargo type, judging the behavior to be violent loading and unloading.
10. The method for analyzing loading and unloading behavior according to claim 1, further comprising: triggering warning information when unreasonable loading and unloading behavior is detected in the first video.
11. A monitoring system for loading and unloading activities, comprising:
A processor adapted to load and execute instructions of a software program;
a memory adapted to store a software program comprising instructions for performing the steps of:
Capturing a first video;
acquiring a plurality of images to be analyzed from the first video, wherein the person performing the loading and unloading behavior and the cargo being loaded and unloaded are visible in the images to be analyzed;
identifying, through a first deep neural network and according to the multiple images to be analyzed, the body joint points of the person in each image to be analyzed, to obtain multiple groups of coordinate arrays of body joint points;
inferring, through a second deep neural network and according to the coordinate arrays of body joint points and the time of each image to be analyzed within the video, the behavior type and the carrying speed change rate of the person performing loading and unloading behavior in the first video;
inferring, through a third deep neural network and according to one or more of the images to be analyzed, the type of the cargo being loaded and unloaded in the first video; and
judging whether the loading and unloading behavior is reasonable according to the carrying speed change rate, the behavior type of the person performing the loading and unloading behavior, the type of the cargo being loaded and unloaded, and the range of reasonable carrying speed change rates corresponding to that behavior type and cargo type.
12. The loading and unloading behavior monitoring system of claim 11, wherein the software program further comprises instructions for performing the following steps of pre-training the first deep neural network:
acquiring, from a video, a plurality of images containing persons in various different postures;
training the first deep neural network with the plurality of images containing persons in various different postures to infer the person position area of each person in an image; and
outputting, for each of the images of persons in various different postures, a coordinate array of the joint point positions of the person within the person position area of that image.
13. The loading and unloading behavior monitoring system of claim 12, wherein the software program further comprises instructions for performing the following steps of pre-training the second deep neural network:
training the second deep neural network with the plurality of images containing persons in various different postures, the coordinate arrays of the joint points within the person position areas, and the times of the images within the video, so that the behavior type of a person in a video can be inferred from how the person's joint points change over time.
14. The loading and unloading behavior monitoring system of claim 11, wherein the software program further comprises instructions for performing the following steps of pre-training the third deep neural network:
acquiring a plurality of images of cargo covering various different cargo types; and
training the third deep neural network with the plurality of cargo images to infer the type of the cargo in an image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911423835.7A CN113128298B (en) | 2019-12-30 | 2019-12-30 | Loading and unloading behavior analysis method and monitoring system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113128298A CN113128298A (en) | 2021-07-16 |
| CN113128298B true CN113128298B (en) | 2024-07-02 |
Family
ID=76769992
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911423835.7A Active CN113128298B (en) | 2019-12-30 | 2019-12-30 | Loading and unloading behavior analysis method and monitoring system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113128298B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116682055A (en) * | 2023-05-16 | 2023-09-01 | 哈尔滨工业大学 | A multi-source information fusion loading and unloading behavior monitoring and analysis system for stevedores |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105245828A (en) * | 2015-09-02 | 2016-01-13 | 北京旷视科技有限公司 | Item analysis method and equipment |
| CN109598229A (en) * | 2018-11-30 | 2019-04-09 | 李刚毅 | Monitoring system and its method based on action recognition |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108431863B (en) * | 2015-12-29 | 2022-07-26 | 乐天集团股份有限公司 | Logistics system, package delivery method and recording medium |
| MA50387A (en) * | 2017-10-20 | 2020-08-26 | Bxb Digital Pty Ltd | FREIGHT CARRIER TRACKING SYSTEMS AND METHODS |
| CN109051306A (en) * | 2018-05-09 | 2018-12-21 | 刘云帆 | The intelligent movable storage box of locking son |
| CN110033027A (en) * | 2019-03-15 | 2019-07-19 | 深兰科技(上海)有限公司 | A kind of item identification method, device, terminal and readable storage medium storing program for executing |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113128298A (en) | 2021-07-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111310645B (en) | Method, device, equipment and storage medium for warning overflow bin of goods accumulation | |
| US10269108B2 (en) | Methods and systems for improved quality inspection of products using a robot | |
| US20220270229A1 (en) | Automated detection of carton damage | |
| US20220051175A1 (en) | System and Method for Mapping Risks in a Warehouse Environment | |
| CN110910355A (en) | Package blocking detection method and device and computer storage medium | |
| CN114819821A (en) | Goods warehouse-out checking method and device, computer equipment and storage medium | |
| US11697558B2 (en) | Automated detection of carton damage | |
| CN111008561A (en) | Livestock quantity determination method, terminal and computer storage medium | |
| CN114140684A (en) | Coal plugging and leakage detection method, device, equipment and storage medium | |
| KR102243039B1 (en) | Smart factory system for automated product packaging and delivery service | |
| CN113128298B (en) | Loading and unloading behavior analysis method and monitoring system | |
| CN111597857A (en) | Logistics package detection method, device and equipment and readable storage medium | |
| EP3647236B1 (en) | Projection instruction device, parcel sorting system, and projection instruction method | |
| CN111890343B (en) | Robot object collision detection method and device | |
| CN114441021A (en) | Vehicle weighing method, device, storage medium and processor based on video recognition | |
| CN117440180B (en) | Video processing method, device, equipment, readable storage medium and product | |
| CN111523522A (en) | Intelligent operation and maintenance management method and management system for equipment | |
| CN114463690A (en) | Abnormity detection method and device, electronic equipment and storage medium | |
| CN111582778A (en) | Operation site cargo accumulation measuring method, device, equipment and storage medium | |
| CN115082841A (en) | Method for monitoring abnormity of working area of warehouse logistics robot | |
| CN114596239B (en) | Method, device, computer equipment and storage medium for detecting loading and unloading events | |
| CN112686930A (en) | Package sorting detection method and device, computer equipment and storage medium | |
| CN116189093A (en) | Slag soil vehicle dust on-line monitoring method and system | |
| CN113449617A (en) | Track safety detection method, system, device and storage medium | |
| CN120875586B (en) | A port cargo handling early warning method, system, device and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant |