
CN119461057A - Driving crane monitoring method, system and medium based on multi-paradigm visual fusion - Google Patents


Info

Publication number
CN119461057A
CN119461057A (application CN202510026221.4A)
Authority
CN
China
Prior art keywords
dangerous
area
target
tracking
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510026221.4A
Other languages
Chinese (zh)
Inventor
陈东东
丁靖洋
李志存
仇浩浩
马雪
王兵
何幸珊
郭桦宜
赵庆浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Paidong Technology Co ltd
Original Assignee
Yunnan Paidong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Paidong Technology Co ltd filed Critical Yunnan Paidong Technology Co ltd
Priority to CN202510026221.4A priority Critical patent/CN119461057A/en
Publication of CN119461057A publication Critical patent/CN119461057A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66C: CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00: Other constructional features or details
    • B66C13/16: Applications of indicating, registering, or weighing devices
    • B66C13/18: Control systems or devices
    • B66C15/00: Safety gear
    • B66C15/06: Arrangements or use of warning devices
    • B66C15/065: Arrangements or use of warning devices electrical
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a crane lifting monitoring method, system and medium based on multi-paradigm vision fusion, and relates to the technical field of computer vision. The method comprises the following steps: collecting images of a target collection area through image collection equipment to obtain a real-time image collection video stream; preprocessing the real-time video stream to obtain real-time image frame data; identifying the real-time image frame data with a visual detection model to generate detection information; inputting the detection information into a visual tracking module and tracking the identified objects to obtain tracking information; obtaining a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information; identifying dangerous conditions in the video stream according to the dynamic or fixed dangerous area to obtain dangerous-condition accumulated data; and triggering an alarm accordingly. The application solves the technical problems of the prior art that the monitoring means are single and dynamic dangers are difficult to identify accurately, and achieves the technical effect of accurately identifying potential dangers of hoisting operations in real time.

Description

Driving hoisting monitoring method, system and medium based on multi-paradigm vision fusion
Technical Field
The application relates to the technical field of computer vision, in particular to a crane lifting monitoring method, a crane lifting monitoring system and a crane lifting monitoring medium based on multi-paradigm vision fusion.
Background
With the rapid development of industrial automation technology, the efficiency of loading and unloading operations has improved remarkably, but this has also brought more complex safety challenges. Problems such as collisions and blocked operating views caused by unexpected unhooking, moving or swinging of heavy loads all pose a serious threat to the safety of operators. Traditional monitoring techniques rely primarily on manual monitoring and simple sensor alarms, such as closed-circuit television systems. However, manual monitoring is prone to blind spots caused by human negligence, and a simple sensor alarm can only react to specific environmental changes; it cannot monitor and predict complex, changeable working environments in real time.
In recent years, progress in computer vision, especially the application of deep learning, has brought new possibilities to the field of industrial engineering safety. By combining cameras, sensors and advanced image processing algorithms, real-time monitoring of engineering scenes can be realized. However, the variability of the engineering site, the dynamic complexity of handling equipment, and the high accuracy requirements of the monitoring system all present significant challenges to the prior art. The prior art often adopts either a single traditional visual algorithm or a single modern deep-learning method, relies on simple motion detection or a fixed warning line to trigger an alarm, and is difficult to adapt to different working environments and complex lifting-operation scenes.
Disclosure of Invention
The application provides a crane lifting monitoring method, system and medium based on multi-paradigm vision fusion. It solves the technical problem in the prior art that dynamic dangers in complex lifting-operation scenes cannot be accurately identified and predicted because the monitoring means are single. The crane lifting operation process is monitored in real time through multi-paradigm vision fusion, dynamic and static dangers in the operation process are intelligently identified and alarms are triggered, thereby improving the accuracy and reliability of monitoring and identifying dangerous crane-lifting conditions and guaranteeing the safety of the lifting operation process.
In view of the above problems, in one aspect the application provides a crane lifting monitoring method based on multi-paradigm vision fusion, which comprises: carrying out image acquisition of a target acquisition area with image acquisition equipment to acquire a real-time image acquisition video stream; preprocessing the real-time video stream to acquire real-time image frame data; carrying out real-time image frame data identification with a vision detection model to generate detection information; inputting the detection information into a vision tracking module and tracking the identified objects to acquire tracking information; acquiring a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information; carrying out dangerous-condition identification in the real-time video stream according to the dynamic or fixed dangerous area to acquire dangerous-condition accumulated data; and carrying out alarm triggering according to the dangerous-condition accumulated data.
In another aspect, the application also provides a crane lifting monitoring system based on multi-paradigm vision fusion, which comprises an image acquisition unit, a preprocessing unit, a detection information generation unit, a tracking identification unit, a dangerous area acquisition unit, a dangerous situation identification unit and an alarm triggering unit. The image acquisition unit is used for acquiring images of a target acquisition area with image acquisition equipment to acquire a real-time image acquisition video stream, the target acquisition area comprising a lifting operation area. The preprocessing unit is used for preprocessing the real-time video stream to acquire real-time image frame data. The detection information generation unit is used for carrying out real-time image frame data identification with a vision detection model to generate detection information. The tracking identification unit is used for inputting the detection information into a vision tracking module and tracking the identified objects to acquire tracking information. The dangerous area acquisition unit is used for acquiring a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information. The dangerous situation identification unit is used for carrying out dangerous-situation identification in the real-time video stream according to the dynamic or fixed dangerous area to acquire dangerous-situation accumulated data. The alarm triggering unit is used for triggering an alarm according to the accumulated dangerous-situation data.
In a third aspect, the application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of any of the methods described above.
One or more technical schemes provided by the application have at least the following technical effects or advantages:
The image acquisition device acquires images of the target acquisition area to obtain a real-time image acquisition video stream, the target acquisition area comprising a hoisting operation area, so that continuous real-time monitoring of the operation area is realized and original visual information is provided for subsequent processing. Preprocessing the real-time video stream to obtain real-time image frame data improves the usability and quality of the image data. Carrying out real-time image frame data identification with the visual detection model generates detection information and intelligently identifies key elements in the monitored scene. Inputting the detection information into the visual tracking module and tracking the identified objects yields tracking information and the real-time state of the identified objects in the scene, providing a basis for predicting their future positions and activity. Acquiring a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information, and carrying out dangerous-condition identification in the real-time video stream according to the dynamic or fixed dangerous area to obtain dangerous-condition accumulated data, provides quantitative data support for safety management, helps to identify accident modes and trends, and provides a basis for preventive measures. Triggering an alarm according to the accumulated dangerous-condition data improves the response speed to sudden danger, prevents the occurrence of accidents, and ensures the safety of staff.
Therefore, the application monitors the hoisting operation area in real time, divides the fixed dangerous area and the dynamic dangerous area, and carries out intelligent recognition and tracking and data analysis through the visual detection model and the visual tracking model, thereby accurately predicting the dangerous condition and giving an early warning. The accuracy and the reliability of monitoring and identifying the dangerous condition of the crane are improved, the crane lifting operation is ensured to be carried out safely, and the safety management level of the crane lifting operation is further remarkably improved.
The foregoing description is only an overview of the technical solution of the present application. In order that the technical means of the application may be more clearly understood and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the application more readily apparent, specific embodiments are set forth below.
Drawings
Fig. 1 is a schematic flow chart of a crane lifting monitoring method based on multi-paradigm vision fusion provided by an embodiment of the application;
Fig. 2 is a schematic flow chart of drawing a dangerous area in a crane lifting monitoring method based on multi-paradigm vision fusion according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a crane lifting monitoring system based on multi-paradigm vision fusion according to an embodiment of the present application;
Reference numerals describe the image acquisition unit 10, the preprocessing unit 20, the detection information generation unit 30, the tracking recognition unit 40, the dangerous area acquisition unit 50, the dangerous situation recognition unit 60, and the alarm triggering unit 70.
Detailed Description
The embodiment of the application solves the technical problem that dynamic dangers in complex lifting-operation scenes cannot be accurately identified and predicted due to single monitoring means in the prior art by providing the driving lifting monitoring method, system and medium based on multi-paradigm vision fusion. It monitors the driving lifting operation process in real time with multi-paradigm vision fusion technology, intelligently identifies dynamic and static dangers in the operation process and triggers alarms, thereby achieving the technical effects of improving the accuracy and reliability of monitoring and identifying dangerous driving-lifting conditions and guaranteeing the safety of the driving lifting operation process.
Embodiment 1 As shown in FIG. 1, the embodiment of the application provides a crane lifting monitoring method based on multi-paradigm vision fusion, which comprises the following steps:
And step S1, image acquisition is carried out on the basis of image acquisition equipment in a target acquisition area, and a real-time image acquisition video stream is acquired, wherein the target acquisition area comprises a hoisting operation area.
In particular, an image acquisition device refers to means for capturing still or moving images, such as cameras, video cameras, etc., which are capable of converting real world scenes into electronic signals forming digital or analog video streams. The target acquisition area refers to a specific physical space range monitored by the image acquisition equipment, and in this embodiment, refers to a hoisting operation area. The hoisting operation area refers to an area where an object is hoisted and moved when hoisting operation is performed, which is a high risk operation area, and special attention is required to safety. The real-time image acquisition video stream refers to a sequence of images that are continuously captured and transmitted by the image acquisition device to form a real-time video stream that can be used to observe and analyze activity within the monitored area.
The hoisting operation area is monitored in real time in all directions through image acquisition equipment arranged around the hoisting operation area, dynamic pictures of the hoisting operation area are captured, and the pictures are converted into digital signals to form continuous real-time video streams. In order to ensure the effectiveness of monitoring, the deployment of the image acquisition equipment needs to consider the factors such as coverage, view angle, light condition and the like so as to be capable of clearly capturing all activities in the hoisting operation area.
Step S1 establishes a basic framework for monitoring the hoisting operation area in real time by using image acquisition equipment, and provides necessary data sources for subsequent image processing and safety analysis.
And step S2, preprocessing the real-time image acquisition video stream to acquire real-time image frame data.
In particular, preprocessing refers to the preliminary processing steps performed on an image prior to its primary analysis in computer vision and image processing. These steps include denoising, enhancement, scaling, cropping, etc., in order to improve the image quality, making it more suitable for subsequent analysis and processing. Real-time image frame data refers to information of a single image frame extracted from a real-time image acquisition video stream, which typically contains all pixel values of the image and other related information.
First, the real-time image capture video stream acquired in step S1 is decoded using a specific video codec, such as H.264 or H.265, and converted back from the compressed format to the original image data. The video stream is then subjected to preprocessing operations such as denoising, enhancement and standardization to improve image quality. Denoising removes noise from the image and can be achieved through various algorithms such as mean filtering, median filtering or a deep-learning-based denoising model. Enhancement adjusts the image in terms of brightness, contrast, saturation, etc., to strengthen certain features and make it easier to analyze. Standardization normalizes the size and color space of the image for subsequent processing; for example, the image is scaled to a uniform resolution and the color space is converted to grayscale or RGB. Finally, individual image frames are extracted from the processed video stream by reading it frame by frame or at a specific time interval. The above preprocessing of the real-time image capture video stream can be accomplished with existing image processing software and libraries such as OpenCV and FFmpeg, which provide rich functions and interfaces for decoding, denoising, enhancement and frame extraction. Through these preprocessing steps, the real-time image frame data are prepared for further visual analysis and hazard-condition identification.
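Under the assumptions that plain Python stands in for OpenCV/FFmpeg and that a tiny synthetic nested-list frame replaces a decoded video frame, the preprocessing steps above (grayscale conversion, denoising, normalization) can be sketched as follows; the function names are illustrative, not from the patent:

```python
def to_grayscale(frame):
    """RGB frame (rows of (r, g, b) tuples) -> grayscale, ITU-R BT.601 weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in frame]

def median_denoise_row(row, k=3):
    """1-D median filter per row, a compact stand-in for 2-D median denoising."""
    half = k // 2
    padded = [row[0]] * half + list(row) + [row[-1]] * half
    return [sorted(padded[i:i + k])[half] for i in range(len(row))]

def normalize(gray):
    """Min-max normalization of pixel values to [0, 1] (the standardization step)."""
    flat = [p for r in gray for p in r]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(p - lo) / span for p in r] for r in gray]

# One synthetic 2x2 frame standing in for a decoded video frame.
frame = [[(10, 20, 30), (200, 210, 220)],
         [(0, 0, 0), (255, 255, 255)]]
processed = normalize([median_denoise_row(r) for r in to_grayscale(frame)])
```

A production pipeline would apply the same three stages per extracted frame, only with 2-D filters and hardware-decoded input.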
And step S3, carrying out the real-time image frame data identification based on the visual detection model to generate detection information.
Further, step S3 of the embodiment of the present application further includes:
And S31, collecting historical operation video stream data.
And S32, performing frame extraction processing on the historical operation video stream data to obtain frame extraction image data, and performing data identification on the frame extraction image data to obtain a data identification result.
And step S33, constructing an image database based on the data identification result and the frame-drawing image data.
And step S34, training the visual detection model based on the image database.
Preferably, the detection information comprises three target categories of lifting hooks, cargoes and workers and corresponding position and pixel area parameters.
In particular, the visual inspection model is a specially designed algorithmic model that is capable of analyzing image data and identifying specific objects, features, or patterns therein. The model is typically based on deep learning techniques such as Convolutional Neural Networks (CNNs) that enable automatic learning and extraction of key information in the image through training of large amounts of image data. The detection information refers to a result obtained by analyzing the real-time image frame data by the visual detection model, and comprises three target categories of a lifting hook, goods and workers, and corresponding position and pixel area parameters. The pixel area parameter refers to the number of pixels occupied by the target object in the image, and is used for quantifying the size of the target.
Historical operation video stream data refers to video data recorded during past lifting operations; it reflects the actual condition of the operation area and is used for training and verifying the visual inspection model. A visual detection model is trained in advance on historical operation video stream data. The specific training process is as follows. First, video data recorded during past hoisting operations are collected through the image acquisition equipment; these data are raw initial video streams that contain rich scene information. These initial video streams are then subjected to frame extraction using video processing software such as FFmpeg to obtain frame-extracted image data. Frame extraction selects image frames at specific time points from a video stream; these frames represent key moments or representative scenes of the video. The frame-extracted image data are manually cleaned and marked, i.e., the positions of the lifting hook, the goods and the workers in each image are labelled and a category label is assigned to each identified object, yielding the data identification result. The identified frame-extracted image data are organized into an original database. To further enhance data diversity and the generalization capability of the model, data enhancement is applied to the original database so that the various target samples are balanced. The data enhancement locally crops a large number of sample maps of cargoes with different appearances and collection angles from data outside the original database and pastes them at random positions, sizes and angles into random images of the original database to generate the image database.
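The copy-paste data enhancement described above can be sketched as follows; the function name, the use of nested lists as images, and the omission of random scaling and rotation are simplifications, not details from the patent:

```python
import random

def paste_augment(base, patch, rng):
    """Copy-paste augmentation sketch: paste a cargo sample `patch` at a random
    position of `base` and return the new image plus its box label (x, y, w, h).
    The random size/angle variation mentioned in the text is omitted for brevity."""
    H, W = len(base), len(base[0])
    ph, pw = len(patch), len(patch[0])
    y, x = rng.randrange(H - ph + 1), rng.randrange(W - pw + 1)
    out = [row[:] for row in base]          # copy so the original image is kept
    for dy in range(ph):
        out[y + dy][x:x + pw] = patch[dy][:]
    return out, (x, y, pw, ph)

rng = random.Random(7)                      # seeded for reproducibility
image, box = paste_augment([[0] * 6 for _ in range(6)], [[1, 1], [1, 1]], rng)
```

The returned box doubles as the new training label, which is what keeps the augmented database consistent with the manually marked data.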
The image database is a set of identified image data used to train and test the visual inspection model so as to improve its accuracy and generalization ability. This database contains not only the images themselves but also the identification information associated with them, such as object category and location. Model training is performed using a deep learning framework such as TensorFlow or PyTorch and related libraries and tools such as Keras and OpenCV. With recall rate and mAP@0.5 as evaluation indexes, multi-card parallel tuning and iterative training are realized by adjusting the number of training samples in the image database, the configuration and number of GPUs, the batch size, the initial learning rate, the SGD momentum, the target-frame loss weight and the category loss weight, generating the visual detection model. Through this series of steps, a visual detection model dedicated to the lifting operation area is trained; the model can identify and report the positions and pixel areas of the lifting hook, cargoes and workers, providing powerful technical support for operation safety.
The trained visual inspection model is loaded. The preprocessed real-time image frame data are prepared in the format required by the model, including adjusting the image size and normalizing pixel values. The prepared image data are input into the visual detection model for inference, which identifies the objects in the image and their positions and outputs one or more detection frames together with the class label and pixel area parameter of each frame. Post-processing of the detection information output by the model may include filtering out detection results with low confidence, merging overlapping detection frames and applying non-maximum suppression (NMS), to obtain the final detection information.
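The post-processing step can be illustrated with a minimal greedy NMS; the thresholds and the class-aware variant (suppressing overlaps only within the same class) are assumptions, since the patent does not specify them:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(dets, iou_thr=0.5, score_thr=0.25):
    """dets: list of (box, score, label). Drop low-confidence detections, then
    keep only the highest-scoring box among mutually overlapping same-class boxes."""
    dets = [d for d in dets if d[1] >= score_thr]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score, label in dets:
        if all(iou(box, k[0]) < iou_thr for k in kept if k[2] == label):
            kept.append((box, score, label))
    return kept
```

Applied to raw model output, this yields one box per physical hook, cargo or worker, which is the form the tracking module expects.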
And step S3, accurate and effective detection information can be extracted from the real-time image frame data by training a visual detection model, so that basis is provided for further safety analysis and decision.
And S4, inputting the detection information into a visual tracking module, and tracking the identification object to obtain tracking information.
Further, step S4 of the embodiment of the present application further includes:
Step S41, constructing a visual tracking module comprising two types of trackers.
Step S42, constructing the first tracker with a second-order matching method and the second tracker with a Top-K matching method.
Step S43, the first tracker carries out worker target tracking and outputs worker target tracking information; the second tracker carries out tracking of the lifting hook target and the cargo target and outputs lifting hook target tracking information and cargo target tracking information.
In particular, the visual tracking module is a specially designed system or algorithm for tracking moving objects in a sequence of images. It receives the detection information and analyzes it further to determine the motion profile of each object across successive frames. Tracking information refers to the results generated by the visual tracking module after analyzing the detection information, and generally includes the movement direction, speed, path and other relevant dynamic information of the object. The second-order matching method is a tracking algorithm that may take more than one observation into account when matching objects, i.e., it relies not only on the information of the current frame but may also combine the information of one or more previous frames to perform a more stable object matching. The Top-K matching method is a selection strategy in which K denotes a number, indicating that the top K candidates among all possible matches are selected as the final result; this approach is often used to address many-to-many matching problems and can improve tracking robustness. A worker target refers to a person in the video picture, i.e., a worker. The hook target refers to the hook used in the lifting operation. Cargo targets refer to the goods being lifted or transported.
Constructing a visual tracking module comprising two types of trackers: the first tracker for tracking workers and the second tracker for tracking the lifting hook and the cargo. The first tracker adopts a second-order matching method with the following calculation formula:

(T_match, T_unmatch, D^low_(t,unmatch)) = Hungarian( C(T_t, D^low_t) );

T_t = {τ_1, …, τ_m},  D^low_t = {d_1, …, d_n};

wherein the superscript low denotes detection information of low confidence, the subscript t denotes the t-th frame, the subscripts match and unmatch denote whether the information is matched or not, τ denotes a tracking track, D denotes the detection information of all targets of one frame, T denotes the tracking information of all targets of one frame, d denotes the detection information of a single target, C(·,·) denotes the IOU-distance cost-matrix calculation function, and Hungarian(·) denotes the Hungarian algorithm used for information assignment.

The second tracker adopts the Top-K matching method on the premise that only one lifting hook and only one cargo exist, and a removal-condition judging method is designed to process unmatched detection information or tracking information. The removal-condition judging method first creates a motion state x for every tracking track and initializes the state x_0 and the gain matrix K_0, with the specific formulas:

x = [u, v, a, h, u̇, v̇, ȧ, ḣ]^T;

x_0 = [u_0, v_0, a_0, h_0, 0, 0, 0, 0]^T;

x_t = F·x_(t−1),  P_t = F·P_(t−1)·F^T;

K_0 = 0;

wherein u, v, a, h denote the center coordinates, aspect ratio and height of the initial detection information; u̇, v̇, ȧ, ḣ denote the variation (velocity) values of the center coordinates, aspect ratio and height, with initial value 0; P_0 is an 8 × 8 diagonal matrix representing the initial state covariance; H is a 4 × 8 matrix representing the observation matrix; F represents the state transition matrix; K_0 is an all-zero matrix; and the measurement noise depends on the size of the detection. When a tracking track τ matches detection information d, the filter parameters K_t and P_t are updated with the specific update formula:

K_t = P_t·H^T·(H·P_t·H^T + R)^(−1),  x_t ← x_t + K_t·(d − H·x_t),  P_t ← (I − K_t·H)·P_t;

wherein R represents the measurement noise covariance. When a tracking track τ does not match any detection information d, motion estimation is carried out on the target in the lost state with the Kalman filter; the target is considered to have disappeared once the motion estimate reaches the field-of-view boundary or remains 0 for a long time. The motion information is calculated from the tracking information with the specific formula:

v^x_t = λ·ṽ^x_t + (1 − λ)·v^x_(t−1),  v^y_t = λ·ṽ^y_t + (1 − λ)·v^y_(t−1);

wherein v^x_t and v^y_t represent the magnitudes of the current-frame motion velocity in the x and y directions, ṽ^x_t and ṽ^y_t the velocities in the x and y directions output in the tracking information of the current frame, v^x_(t−1) and v^y_(t−1) those output in the tracking information of the previous frame, and λ the smoothing coefficient.
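The velocity smoothing and the lost-track removal condition described above can be sketched in plain Python; the function names, the default smoothing coefficient and the frame cap are assumed values:

```python
def smooth_velocity(v_now, v_prev, lam=0.3):
    """Exponential smoothing of the per-frame velocity; lam is the smoothing
    coefficient (default assumed, not from the patent)."""
    return tuple(lam * a + (1 - lam) * b for a, b in zip(v_now, v_prev))

def frames_until_gone(pos, vel, bounds, max_frames=1000):
    """Removal-condition sketch: extrapolate a lost track at constant velocity
    and report after how many frames it leaves the field of view. Returns None
    if it never does (e.g. zero velocity), which the text covers with a time limit."""
    (x, y), (vx, vy), (W, H) = pos, vel, bounds
    for frame in range(1, max_frames + 1):
        x, y = x + vx, y + vy
        if not (0 <= x <= W and 0 <= y <= H):
            return frame
    return None
```

A full implementation would drive the extrapolation with the Kalman prediction step rather than raw constant velocity, but the removal decision is the same.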
The detection information obtained from the visual detection model is passed to the visual tracking module. The visual tracking module uses these detection information to match objects in adjacent frames and predicts their likely positions in subsequent frames. By analyzing the object position in the continuous frames, the visual tracking module can calculate the motion trail of the object, including speed, acceleration, motion direction and other information, and integrate the motion trail obtained by analysis and other dynamic information into tracking information, which can be used for further data analysis or real-time monitoring.
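A minimal sketch of matching tracks to detections on an IoU-distance cost matrix, as described above; a greedy assignment stands in for the Hungarian algorithm, and the distance threshold is an assumed value:

```python
from itertools import product

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def greedy_match(tracks, dets, max_dist=0.7):
    """Repeatedly take the lowest-cost (track, detection) pair; cost is the
    IoU distance 1 - IoU. Returns matches plus the unmatched leftovers, the
    three outputs the second-order matching step operates on."""
    pairs = sorted((1 - iou(t, d), i, j)
                   for (i, t), (j, d) in product(enumerate(tracks), enumerate(dets)))
    used_t, used_d, matches = set(), set(), []
    for cost, i, j in pairs:
        if cost <= max_dist and i not in used_t and j not in used_d:
            used_t.add(i); used_d.add(j); matches.append((i, j))
    unmatched_tracks = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_dets = [j for j in range(len(dets)) if j not in used_d]
    return matches, unmatched_tracks, unmatched_dets
```

The greedy pass gives the same result as the Hungarian algorithm on well-separated targets; a production tracker would substitute an optimal assignment solver.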
Step S4, by constructing the visual tracking module, key objects in the operation area can be identified, the motion states of the key objects can be tracked in real time, and more comprehensive information support is provided for operation monitoring and safety management.
And S5, acquiring a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information.
Further, as shown in fig. 2, step S5 of the embodiment of the present application further includes:
Step one, judging whether the detection information or the tracking information contains a cargo target; if so, obtaining a range coefficient and executing step three, otherwise executing step two;
Step two, judging whether the detection information or the tracking information contains a lifting hook target; if so, obtaining a range coefficient and executing step three;
Step three, judging whether the lifting hook target is in a low attitude and whether its swing amplitude is greater than a threshold value; if the lifting hook target is in the low attitude and the swing amplitude is less than the threshold value, not drawing the fixed dangerous area and executing step four, otherwise drawing the fixed dangerous area based on the range coefficient and executing step four;
Step four, judging the moving state of the travelling crane; if the travelling crane is in a moving state, drawing a dynamic dangerous area based on the range coefficient, otherwise not drawing the dynamic dangerous area.
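The four-step judgment above can be summarized in code (function and parameter names are assumptions; the range-coefficient computation itself is elided):

```python
def decide_danger_zones(has_cargo, has_hook, hook_low,
                        swing_amplitude, swing_threshold, crane_moving):
    """Return (draw_fixed, draw_dynamic) following steps one to four:
    cargo or hook present -> evaluate hook attitude (step three),
    then evaluate crane movement (step four)."""
    draw_fixed = draw_dynamic = False
    if has_cargo or has_hook:                              # steps one and two
        # Step three: a stable low hook needs no fixed zone.
        if not (hook_low and swing_amplitude < swing_threshold):
            draw_fixed = True
        # Step four: a moving crane additionally needs a dynamic zone.
        if crane_moving:
            draw_dynamic = True
    return draw_fixed, draw_dynamic
```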
Preferably, step S5 of the embodiment of the present application further includes:
Acquiring the pixel area of the lifting hook based on the detection information of the lifting hook target or the tracking information of the lifting hook target, setting a reference plane height and the pixel area of the lifting hook on reaching the reference plane, and calculating the height of the lifting hook from these quantities; setting a height threshold, and judging that the lifting hook is in the low attitude when the height of the lifting hook is less than or equal to the height threshold.
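The height estimate can be illustrated under one plausible monocular model, assuming an overhead camera at a known height (a parameter the excerpt does not state) and an apparent pixel area that falls with the square of the camera-to-hook distance; the original formula is rendered as an image and is not reproduced in this text, so the relation below, like all names here, is an assumption:

```python
import math

def hook_height(pixel_area, ref_plane_height, ref_pixel_area, camera_height):
    """Estimate hook height above the ground.

    Assumed model: apparent area S scales as 1/d^2 with camera-to-hook
    distance d, so d = d_ref * sqrt(S_ref / S), where d_ref is the
    camera-to-reference-plane distance."""
    d_ref = camera_height - ref_plane_height
    d = d_ref * math.sqrt(ref_pixel_area / pixel_area)
    return camera_height - d

def is_low_attitude(height, height_threshold):
    """Low attitude when the estimated height is at or below the threshold."""
    return height <= height_threshold
```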
Specifically, a fixed hazard zone refers to a potentially hazardous zone formed during a lifting operation by the static position of the load or hook. This area is fixed: it is determined by the static position of the load or hook and does not change with the movement of the travelling crane. The fixed hazard zone is drawn by calculating the range coefficient. Dynamic hazard areas refer to potentially hazardous areas that change as the work environment and object motion states change. These areas may be created by the movement of hooks, cargo, or workers, and require real-time monitoring to prevent accidents. The detection information from the visual detection model and the tracking information from the visual tracking module are integrated, and whether a dynamic dangerous area needs to be drawn is determined through logical judgment; if so, the length of the dynamic dangerous area is calculated by the visual optical flow module, completing the drawing of the dynamic dangerous area.
First, the detection information and the tracking information are checked to judge whether a cargo target or a lifting hook target is contained. If a cargo target is contained, the range coefficient is calculated and step three is performed. If no cargo target is contained, step two is executed to judge whether the monitoring information contains a lifting hook target; when a lifting hook target is detected, the range coefficient is calculated and step three is executed. In this process, corresponding safety measures can be adopted for different operation objects. The state of the lifting hook target is then further analyzed. If the lifting hook is in a low attitude, that is, its height is less than or equal to the height threshold, and its swing amplitude is less than the threshold, the lifting hook is considered relatively stable and no fixed dangerous area is drawn. Otherwise, if the swing amplitude of the lifting hook is greater than the threshold or the lifting hook is not in a low attitude, a fixed dangerous area is drawn according to the calculated range coefficient to warn of the potential risk. After the fixed dangerous area is drawn, step four is executed.
In step four, the moving state of the travelling crane is first judged; if the travelling crane is in a moving state, a dynamic dangerous area is drawn on the basis of the fixed dangerous area. If the travelling crane is not in a moving state, no dynamic dangerous area is drawn. The dynamic hazard zone is, in effect, the fixed hazard zone moving with the travelling crane, so it must be updated and marked in real time to ensure the safety of personnel. The moving state of the crane can be judged by various sensors mounted on it, such as a speed sensor, a gyroscope, and an accelerometer; these sensors monitor the moving state in real time, and when the speed, angular rate, or acceleration of the crane exceeds a set threshold, the system judges that the crane is in a moving state. Visual images can also be captured by the image acquisition device, and the positions and motion trajectories of objects in the picture analyzed by image processing techniques; if the position of an object changes across consecutive images, the crane can be judged to be in a moving state.
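The sensor-threshold judgment of the moving state can be sketched as follows (function name and threshold values are illustrative assumptions):

```python
def crane_is_moving(speed, angular_rate, acceleration,
                    speed_thr=0.05, gyro_thr=0.02, accel_thr=0.1):
    """Declare the crane 'moving' when any sensor reading exceeds its
    set threshold, as described above. Units and thresholds would be
    calibrated per installation; the defaults here are placeholders."""
    return (abs(speed) > speed_thr
            or abs(angular_rate) > gyro_thr
            or abs(acceleration) > accel_thr)
```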
A visual optical flow module is then constructed for calculating the length of the dynamic dangerous area. The visual optical flow module is a technical module for estimating the motion of objects in an image sequence, inferring their motion trajectories by analyzing the movement of pixels between adjacent frames. The visual optical flow module first initializes the optical flow parameters on the first frame of the video sequence, where optical flow parameters refer to the quantities used in optical flow computation, including but not limited to pixel gradients and temporal gradients. Pixel gradients describe the direction and rate of change of pixel intensity within an image and are typically used to detect edges and textures. Temporal gradients describe the change in pixel intensity at the same location between adjacent frames of a video sequence. The module then selects a fixed region of the video frame containing no moving objects as the reference region for optical flow calculation. This region generally comprises a plurality of pixel points and serves as a stable reference for the optical flow computation: for each pixel point, the pixel gradients and temporal gradients of the surrounding pixels are obtained, and the moving speed at each pixel coordinate is finally generated by least-squares fitting, with the following calculation formula:
(u, v)^T = (A^T A)^{-1} A^T b, where A stacks the pixel gradients (I_x, I_y) of the pixels surrounding the coordinate, b stacks the negatives of their temporal gradients I_t, and (u, v) is the least-squares estimate of the moving speed at that pixel coordinate;
The moving speed calculated at each pixel coordinate is used to generate the motion inertia of the current frame, and the length of the dynamic dangerous area is then calculated from the motion inertia of the current frame and a risk coefficient. The risk coefficient is a predefined parameter that can be set in advance by safety specialists or engineers according to factors such as the type and size of the travelling crane and the working environment.
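Consistent with the least-squares description above, a pure-Python sketch of a single-patch, Lucas-Kanade-style flow estimate (function name and gradient scheme are illustrative; a production system would use an optimized library routine):

```python
def lk_flow(prev, curr):
    """Estimate one (u, v) flow vector for a patch by least squares over
    per-pixel spatial gradients (Ix, Iy) and temporal gradients It,
    i.e. solve (A^T A) v = -A^T b with A = [Ix, Iy] and b = [It].
    `prev` and `curr` are equal-sized 2-D lists of pixel intensities."""
    sxx = sxy = syy = sxt = syt = 0.0
    h, w = len(prev), len(prev[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (prev[y][x + 1] - prev[y][x - 1]) / 2.0  # central difference
            iy = (prev[y + 1][x] - prev[y - 1][x]) / 2.0
            it = curr[y][x] - prev[y][x]
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        return 0.0, 0.0   # degenerate (textureless or 1-D gradient) patch
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```

For a static reference region the resulting (u, v) should stay near zero; a sustained nonzero vector indicates apparent motion attributable to the crane itself.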
Finally, according to the calculated length of the dynamic dangerous area and the preset range coefficient, the dynamic dangerous area is drawn in the moving direction of the travelling crane. This area moves with the movement of the travelling crane so as to mark the potentially dangerous region in real time.
Step S5 can automatically draw corresponding dangerous areas aiming at different operation objects, so that the dangerous conditions can be conveniently identified later and corresponding safety measures can be adopted.
In this embodiment, a preferred method for judging whether the lifting hook is in a low attitude is provided: the pixel area of the lifting hook is acquired based on the detection information or tracking information of the lifting hook target; a reference plane height and the pixel area of the lifting hook at the reference plane are set; the height of the lifting hook is calculated from these quantities; and a height threshold is set, the lifting hook being judged to be in the low attitude when its height is less than or equal to the height threshold.
And S5, judging the target type and the lifting hook state, and intelligently determining and updating the dynamic dangerous area according to the real-time working environment and the object state, so that accidents are effectively prevented, and the working safety is ensured.
And S6, identifying dangerous conditions in the real-time image acquisition video stream according to the dynamic dangerous area or the fixed dangerous area, updating a dangerous accumulation list and acquiring dangerous condition accumulated data.
Further, step S6 of the embodiment of the present application further includes:
and step A, carrying out worker target recognition based on the visual detection model and the visual tracking module, if the detection information or the tracking information contains a worker target, executing step B, otherwise, updating the dangerous accumulation list, inserting a safety state element into the head of the dangerous accumulation list, removing an end element, and executing step E.
And B, judging whether the dynamic dangerous area exists, if so, judging the intrusion of the dynamic dangerous area based on the target position of the worker in the detection information or the tracking information, and executing the step C, otherwise, executing the step D.
And C, judging whether the area invasion of the worker target pixel is larger than a dangerous threshold, updating the dangerous accumulation list when the invasion is larger than the dangerous threshold, inserting a dangerous state element into the head of the dangerous accumulation list, removing an end element, otherwise, updating the dangerous accumulation list, inserting a safe state element into the head of the dangerous accumulation list, removing an end element, and executing the step E.
And D, carrying out intrusion judgment on the fixed dangerous area based on the detection information or the tracking information, judging whether the intrusion of the pixel area of the worker target is larger than a dangerous threshold value or not, updating the dangerous accumulation list when the intrusion of the pixel area of the worker target is larger than the dangerous threshold value, inserting a dangerous state element into the head of the dangerous accumulation list, removing an end element, otherwise, updating the dangerous accumulation list, inserting a safe state element into the head of the dangerous accumulation list, removing an end element, and executing the step E.
And E, counting the number of the dangerous state elements in the dangerous accumulation list, and obtaining dangerous condition accumulated data.
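Steps A through E maintain a fixed-length list of per-frame states, inserting at the head and dropping the tail; a minimal sketch using a bounded deque (class and label names are assumptions):

```python
from collections import deque

class DangerAccumulator:
    """Fixed-length sliding window of per-frame states.
    New states go in at the head; the oldest element is removed."""
    def __init__(self, window):
        self.states = deque(['safe'] * window, maxlen=window)

    def push(self, dangerous):
        """Insert a danger- or safe-state element at the head; the
        bounded deque discards the end element automatically."""
        self.states.appendleft('danger' if dangerous else 'safe')

    def danger_count(self):
        """Step E: cumulative danger data for the current window."""
        return sum(1 for s in self.states if s == 'danger')
```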
Specifically, dangerous situation identification refers to automatically detecting and identifying dangerous situations in the video by analyzing the real-time image acquisition video stream, including an object entering a dangerous area, abnormal movement patterns, and signs of equipment failure. The dangerous condition accumulated data refers to a recorded data set of all identified dangerous conditions; such data may include the type, time of occurrence, duration, and frequency of each hazardous event.
The real-time image acquisition video stream captured by the image acquisition device is received and processed in real time. Each frame is analyzed to detect whether any object, including a hook, a load, or a worker, has entered a dynamic hazard area or a fixed hazard area. The occurrence of a dangerous condition is detected automatically by comparing object positions in the image with the known hazard area boundaries. Whenever a dangerous condition is detected, relevant data are recorded, including but not limited to the type of dangerous event, time of occurrence, duration, and objects involved. These data are then accumulated to form a history of dangerous conditions. The specific dangerous condition identification process is as follows:
First, the real-time image acquisition video stream is analyzed with the visual detection model to identify worker targets in the image. If a worker target is detected, step B is executed; if not, monitoring continues with the next frame: the danger accumulation list is updated, a safe-state element is inserted at its head, and the end element is removed. When a worker target is detected, it is further judged whether a dynamic hazard zone exists; if no dynamic hazard zone exists, fixed hazard zone intrusion judgment is performed. Dynamic hazard zone intrusion judgment means judging, from real-time data and a specific algorithm, whether a worker has entered the dynamic hazard zone; fixed hazard zone intrusion judgment means judging, from the preset zone and real-time data, whether a worker has entered the fixed hazard zone.
For either the dynamic or the fixed hazard zone, it is judged whether the intrusion of the worker target's pixel area exceeds the preset danger threshold. If the intrusion degree exceeds the danger threshold, a danger-state element is inserted at the head of the danger accumulation list and the end element is removed; if it does not reach the danger threshold, a safe-state element is inserted and the end element is removed. The danger threshold is a preset value used to judge whether the degree of intrusion of the worker target's pixel area has reached a dangerous level; it is customized in advance by experts or technicians according to actual operation requirements and analysis of historical cases. The danger accumulation list is a data structure for storing and managing the detected state elements, supporting dynamic insertion and deletion of elements. A danger-state element is a record in the danger accumulation list representing the occurrence of a dangerous condition; a safe-state element is a record representing a safe state, that is, a record that no dangerous condition occurred.
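The intrusion-degree comparison can be sketched for the simple case of axis-aligned rectangular zones (the text does not fix the zone geometry, so the rectangle assumption and all names here are illustrative; a polygonal zone would need a polygon-clipping routine instead):

```python
def intrusion_ratio(worker_box, zone_box):
    """Fraction of the worker's pixel area lying inside the hazard zone.
    Boxes are (x1, y1, x2, y2) axis-aligned rectangles."""
    wx1, wy1, wx2, wy2 = worker_box
    zx1, zy1, zx2, zy2 = zone_box
    ox = max(0.0, min(wx2, zx2) - max(wx1, zx1))   # overlap width
    oy = max(0.0, min(wy2, zy2) - max(wy1, zy1))   # overlap height
    worker_area = (wx2 - wx1) * (wy2 - wy1)
    return (ox * oy) / worker_area if worker_area > 0 else 0.0

def is_dangerous(worker_box, zone_box, danger_threshold):
    """Danger-state when the intrusion degree exceeds the threshold."""
    return intrusion_ratio(worker_box, zone_box) > danger_threshold
```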
And counting the number of dangerous state elements in the dangerous accumulation list, and taking the number of dangerous state elements as accumulated data of dangerous conditions. These data may be used to analyze the safety performance of workers during the operation and to evaluate the safety of the job site.
And step S6, through dangerous condition identification, dangerous behaviors of workers in the operation process are monitored and recorded in real time, accumulated dangerous condition data are obtained, real-time monitoring of the safety condition of an operation site is realized, the dangerous conditions are found and recorded in time, and the safety of the driving operation process is ensured.
And S7, triggering an alarm according to the dangerous condition accumulated data.
Specifically, the dangerous condition cumulative data collected in step S6 is compared with a preset alarm threshold value. This threshold may be set based on historical data, industry standards, or expert advice for deciding when to activate an alarm. If the analysis results show that the cumulative data of the dangerous condition meets or exceeds the alarm threshold, the system determines that an alarm needs to be triggered. And activates a preset alarm mechanism including lighting an indicator light, sounding an alarm, sending a message or email notification to the relevant personnel, etc. And continuously monitoring the working process according to the steps S1-S6, and stopping alarming when the dangerous condition accumulated data is smaller than the alarm threshold value.
The preset alarm threshold is, for example, half the length of the danger accumulation list. The acquired dangerous condition accumulated data are compared with the preset alarm threshold: if the number of danger-state elements in the danger accumulation list reaches half the list length, the alarm is started, surrounding personnel are prompted by a flashing red light and a beeping sound, relevant information at the alarm time is recorded, and a short message is sent to the administrator. It is then judged whether the number of safe-state elements in the danger accumulation list reaches half the list length; if so, the alarm is cancelled after a delay of N frames.
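The half-window trigger and delayed cancellation described above can be sketched as follows (class and parameter names, and the concrete values, are illustrative assumptions):

```python
class AlarmController:
    """Trigger an alarm when danger-state elements fill half the window;
    cancel only after `cancel_delay` consecutive safe-majority frames."""
    def __init__(self, window, cancel_delay):
        self.window = window
        self.cancel_delay = cancel_delay
        self.alarm_on = False
        self._safe_frames = 0

    def step(self, danger_count):
        """Advance one frame given the current danger-element count and
        return whether the alarm is active."""
        safe_count = self.window - danger_count
        if danger_count >= self.window / 2:
            self.alarm_on = True
            self._safe_frames = 0
        elif self.alarm_on and safe_count >= self.window / 2:
            self._safe_frames += 1
            if self._safe_frames >= self.cancel_delay:
                self.alarm_on = False
        return self.alarm_on
```

The N-frame delay prevents the alarm from chattering when a worker lingers at the zone boundary.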
Step S7, through analyzing the dangerous condition accumulated data, the potential safety threat can be responded in real time, an alarm is triggered timely, and operators are reminded to take necessary preventive measures, so that the overall safety of the operation site is improved.
In addition, after triggering the alarm, the system may also record detailed information of the alarm, including trigger time, cause, duration, etc., for subsequent analysis and reporting. At the same time, some instructional information or suggested course of action is provided to the operator to help them quickly and effectively cope with dangerous situations.
In summary, the crane lifting monitoring method based on multi-paradigm vision fusion provided by the embodiment of the application has the following technical effects:
The image acquisition device is used for acquiring the image of the hoisting operation area to acquire a real-time image acquisition video stream, so that continuous real-time monitoring of the operation area is realized, the continuity and the comprehensiveness of monitoring are ensured, and original visual information is provided for subsequent data processing analysis. And preprocessing the real-time image acquisition video stream to acquire real-time image frame data, thereby improving the usability and quality of the image data. Training a visual detection model by using a deep learning model according to historical operation video stream data, and identifying the real-time image frame data based on the visual detection model to generate detection information, wherein key elements in a monitoring scene are intelligently identified. And a visual tracking module is constructed by adopting a second-order matching method and a Top-K matching method, and a worker target, a lifting hook and a cargo target are respectively tracked and identified, so that tracking information is obtained, the real-time state of the object in a scene is accurately identified, and a basis is provided for predicting the future position and activity of the object. And carrying out real-time image acquisition and video stream dangerous condition identification according to the dynamic dangerous region or the fixed dangerous region to acquire dangerous condition accumulated data so as to realize real-time monitoring of the safety condition of the operation site, discover and record the dangerous condition in time, provide quantized data support for safety management and ensure the safety of the driving operation process. And triggering an alarm according to the accumulated data of the dangerous condition, responding to the potential safety threat in real time, preventing accidents, and guaranteeing the safety of staff.
Therefore, the method and the device realize comprehensive continuous real-time monitoring of the hoisting operation area, divide the fixed dangerous area and the dynamic dangerous area by utilizing the range coefficient and the visual optical flow model, intelligently identify and track the targets in the area through the visual detection model and the visual tracking model, accurately predict the dangerous condition and early warn. The accuracy and the reliability of monitoring and identifying the dangerous condition of the crane are improved, so that the safety management level of crane lifting operation is remarkably improved, and the safety of crane lifting operation process is ensured.
Embodiment 2 As shown in FIG. 3, the embodiment of the application provides a crane lifting monitoring system based on multi-paradigm vision fusion, which comprises:
the image acquisition unit 10, the image acquisition unit 10 is used for carrying out image acquisition of a target acquisition area based on image acquisition equipment to acquire a real-time image acquisition video stream, and the target acquisition area comprises a hoisting operation area.
And the preprocessing unit 20 is used for preprocessing the real-time image acquisition video stream to acquire real-time image frame data.
And a detection information generation unit 30, wherein the detection information generation unit 30 is used for performing the real-time image frame data identification based on a visual detection model, and generating detection information.
And the tracking identification unit 40 is used for inputting the detection information into the visual tracking module, and tracking the identification object to acquire tracking information.
And a dangerous area acquiring unit 50, wherein the dangerous area acquiring unit 50 is used for acquiring a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information.
The dangerous situation identification unit 60 is configured to identify a dangerous situation in a real-time image acquisition video stream according to the dynamic dangerous area or the fixed dangerous area, and acquire dangerous situation accumulated data by using the dangerous situation identification unit 60.
And an alarm triggering unit 70, wherein the alarm triggering unit 70 is used for triggering an alarm according to the dangerous condition accumulation data.
Further, the detection information generating unit 30 according to the embodiment of the present application is further configured to perform the following steps:
Historical job video stream data is collected.
And performing frame extraction processing on the historical operation video stream data to obtain frame extraction image data, and performing data identification on the frame extraction image data to obtain a data identification result.
And constructing an image database based on the data identification result and the frame-drawing image data.
Training the visual inspection model based on the image database.
Preferably, the detection information comprises three target categories of lifting hooks, cargoes and workers and corresponding position and pixel area parameters.
Further, the tracking and identifying unit 40 according to the embodiment of the present application is further configured to perform the following steps:
A visual tracking module comprising two classes of trackers is built.
The first tracker is constructed by a second-order matching method, and the second tracker is constructed by a Top-K matching method.
The first tracker performs worker target tracking and outputs worker target tracking information; the second tracker performs tracking of the lifting hook target and the cargo target, and outputs lifting hook target tracking information and cargo target tracking information.
Further, the dangerous area acquiring unit 50 according to the embodiment of the present application is further configured to perform the following steps:
Step one, judging whether the detection information or the tracking information contains a cargo target; if so, obtaining a range coefficient and executing step three, otherwise executing step two;
Step two, judging whether the detection information or the tracking information contains a lifting hook target; if so, obtaining a range coefficient and executing step three;
Step three, judging whether the lifting hook target is in a low attitude and whether its swing amplitude is greater than a threshold value; if the lifting hook target is in the low attitude and the swing amplitude is less than the threshold value, not drawing the fixed dangerous area and executing step four, otherwise drawing the fixed dangerous area based on the range coefficient and executing step four;
Step four, judging the moving state of the travelling crane; if the travelling crane is in a moving state, drawing a dynamic dangerous area based on the range coefficient, otherwise not drawing the dynamic dangerous area.
Preferably, the dangerous area acquiring unit 50 according to the embodiment of the present application is further configured to perform the following steps:
Acquiring the pixel area of the lifting hook based on the detection information of the lifting hook target or the tracking information of the lifting hook target, setting a reference plane height and the pixel area of the lifting hook on reaching the reference plane, and calculating the height of the lifting hook from these quantities; setting a height threshold, and judging that the lifting hook is in the low attitude when the height of the lifting hook is less than or equal to the height threshold.
Further, the dangerous situation identification unit 60 according to the embodiment of the present application is further configured to perform the following steps:
And step A, carrying out worker target recognition based on the visual detection model and the visual tracking module, if the detection information or the tracking information contains a worker target, executing step B, otherwise, updating the dangerous accumulation list, inserting a safety state element into the head of the dangerous accumulation list, removing an end element, and executing step E.
And B, judging whether the dynamic dangerous area exists, if so, judging the intrusion of the dynamic dangerous area based on the target position of the worker in the detection information or the tracking information, and executing the step C, otherwise, executing the step D.
And C, judging whether the area invasion of the worker target pixel is larger than a dangerous threshold, updating the dangerous accumulation list when the invasion is larger than the dangerous threshold, inserting a dangerous state element into the head of the dangerous accumulation list, removing an end element, otherwise, updating the dangerous accumulation list, inserting a safe state element into the head of the dangerous accumulation list, removing an end element, and executing the step E.
And D, carrying out intrusion judgment on the fixed dangerous area based on the detection information or the tracking information, judging whether the intrusion of the pixel area of the worker target is larger than a dangerous threshold value or not, updating the dangerous accumulation list when the intrusion of the pixel area of the worker target is larger than the dangerous threshold value, inserting a dangerous state element into the head of the dangerous accumulation list, removing an end element, otherwise, updating the dangerous accumulation list, inserting a safe state element into the head of the dangerous accumulation list, removing an end element, and executing the step E.
And E, counting the number of the dangerous state elements in the dangerous accumulation list, and obtaining dangerous condition accumulated data.
The foregoing detailed description of the crane lifting monitoring method based on multi-paradigm vision fusion will clearly enable those skilled in the art to understand the crane lifting monitoring system based on multi-paradigm vision fusion of this embodiment. The system disclosed in the second embodiment corresponds to the method disclosed in the first embodiment and has corresponding functional modules and beneficial effects; for relevant points, refer to the description of the method section.
In addition, the application also provides a computer readable storage medium, on which a computer program is stored, the computer program realizes each process of the above-mentioned crane lifting monitoring method embodiment based on multi-paradigm vision fusion when being executed by a processor, and the same technical effects can be achieved, and for avoiding repetition, the description is omitted here.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. The crane lifting monitoring method based on multi-paradigm vision fusion is characterized by comprising the following steps of:
acquiring a real-time image acquisition video stream based on image acquisition equipment in a target acquisition area, wherein the target acquisition area comprises a hoisting operation area;
preprocessing the real-time image acquisition video stream to obtain real-time image frame data;
Performing the real-time image frame data identification based on a visual detection model to generate detection information;
inputting the detection information into a visual tracking module, and tracking an identification object to obtain tracking information;
Acquiring a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information;
carrying out real-time image acquisition and video stream dangerous condition identification according to the dynamic dangerous area or the fixed dangerous area to acquire dangerous condition accumulated data;
and triggering an alarm according to the dangerous condition accumulated data.
2. The crane lifting monitoring method based on multi-paradigm vision fusion of claim 1, wherein the real-time image frame data identification based on the vision detection model is further included before generating the detection information:
Collecting historical operation video stream data;
Performing frame extraction processing on historical operation video stream data to obtain frame extraction image data, and performing data identification on the frame extraction image data to obtain a data identification result;
constructing an image database based on the data identification result and the frame-extracted image data;
training the visual inspection model based on the image database.
3. The crane lifting monitoring method based on multi-paradigm vision fusion of claim 1, wherein the detection information comprises three target categories of lifting hooks, cargoes and workers, and corresponding position and pixel area parameters.
4. The crane lifting monitoring method based on multi-paradigm vision fusion of claim 1, wherein the detecting information is input into a vision tracking module, and before tracking the identified object to obtain the tracking information, the method further comprises:
building a visual tracking module comprising two classes of trackers;
the first tracker being constructed by a second-order matching method, and the second tracker being constructed by a Top-K matching method;
the first tracker performing worker target tracking and outputting worker target tracking information, and the second tracker performing tracking of the lifting hook target and the cargo target and outputting lifting hook target tracking information and cargo target tracking information.
5. The method for monitoring crane lifting based on multi-paradigm vision fusion of claim 1, wherein a fixed dangerous area or a dynamic dangerous area is obtained based on the detection information and the tracking information, the method further comprising:
Step one, judging whether the detection information or the tracking information contains a cargo target; if so, obtaining a range coefficient and executing step three, otherwise executing step two;
Step two, judging whether the detection information or the tracking information contains a lifting hook target; if so, obtaining a range coefficient and executing step three;
Step three, judging whether the lifting hook target is in a low attitude and whether its swing amplitude is greater than a threshold value; if the lifting hook target is in the low attitude and the swing amplitude is less than the threshold value, not drawing the fixed dangerous area and executing step four, otherwise drawing the fixed dangerous area based on the range coefficient and executing step four;
Step four, judging the moving state of the travelling crane; if the travelling crane is in a moving state, drawing a dynamic dangerous area based on the range coefficient, otherwise not drawing the dynamic dangerous area.
6. The method for monitoring crane lifting based on multi-paradigm vision fusion of claim 5, further comprising:
acquiring the pixel area of the lifting hook based on the detection information of the lifting hook target or the tracking information of the lifting hook target, setting a reference plane height and the pixel area of the lifting hook on reaching the reference plane, and calculating the height of the lifting hook from these quantities; setting a height threshold, and judging that the lifting hook is in the low attitude when the height of the lifting hook is less than or equal to the height threshold.
7. The method of claim 1, wherein identifying dangerous conditions in the real-time image acquisition video stream based on the dynamic dangerous area or the fixed dangerous area, updating a dangerous accumulation list, and obtaining dangerous-condition accumulated data comprises:
Step A, performing worker target recognition based on the visual detection model and the visual tracking module; if the detection information or the tracking information contains a worker target, executing step B; otherwise, updating the dangerous accumulation list by inserting a safe-state element at the head of the list and removing the tail element, and executing step E;
Step B, judging whether the dynamic dangerous area exists; if so, performing intrusion judgment on the dynamic dangerous area based on the worker target position in the detection information or the tracking information and executing step C; otherwise, executing step D;
Step C, judging whether the intrusion of the worker target pixel area is larger than a danger threshold; when it is, updating the dangerous accumulation list by inserting a dangerous-state element at the head of the list and removing the tail element; otherwise, updating the dangerous accumulation list by inserting a safe-state element at the head of the list and removing the tail element; then executing step E;
Step D, performing intrusion judgment on the fixed dangerous area based on the detection information or the tracking information, and judging whether the intrusion of the worker target pixel area is larger than the danger threshold; when it is, updating the dangerous accumulation list by inserting a dangerous-state element at the head of the list and removing the tail element; otherwise, updating the dangerous accumulation list by inserting a safe-state element at the head of the list and removing the tail element; then executing step E;
Step E, counting the number of dangerous-state elements in the dangerous accumulation list to obtain the dangerous-condition accumulated data.
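The dangerous accumulation list of claim 7 behaves like a fixed-length sliding window over per-frame safe/dangerous states, and the pixel-area intrusion test reduces to an overlap ratio. A minimal sketch, not part of the claims — the window length, state encoding, and axis-aligned boxes are all assumptions:

```python
from collections import deque

SAFE, DANGER = 0, 1  # assumed encoding of the safe/dangerous state elements

class DangerAccumulator:
    """Fixed-length list: newest state at the head, oldest dropped from the tail."""
    def __init__(self, window=30):
        # pre-filled with safe states so the list always holds `window` elements
        self.states = deque([SAFE] * window, maxlen=window)

    def push(self, state):
        self.states.appendleft(state)  # maxlen silently removes the tail element

    def danger_count(self):
        # step E: count dangerous-state elements to get the accumulated data
        return sum(1 for s in self.states if s == DANGER)

def intrusion_fraction(worker_box, zone_box):
    """Overlap of the worker's axis-aligned box (x1, y1, x2, y2) with a
    danger-zone box, as a fraction of the worker's pixel area."""
    x1 = max(worker_box[0], zone_box[0])
    y1 = max(worker_box[1], zone_box[1])
    x2 = min(worker_box[2], zone_box[2])
    y2 = min(worker_box[3], zone_box[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (worker_box[2] - worker_box[0]) * (worker_box[3] - worker_box[1])
    return inter / area if area else 0.0
```

An alarm unit can then trigger when `danger_count()` exceeds some fraction of the window, smoothing out single-frame detection flicker.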
8. A crane lifting monitoring system based on multi-paradigm vision fusion, characterized by being configured to execute the method for monitoring crane lifting based on multi-paradigm vision fusion according to any one of claims 1 to 7, the system comprising:
an image acquisition unit, configured to acquire images of a target acquisition area based on image acquisition equipment and obtain a real-time image acquisition video stream, wherein the target acquisition area comprises a hoisting operation area;
a preprocessing unit, configured to preprocess the real-time image acquisition video stream to obtain real-time image frame data;
a detection information generation unit, configured to identify the real-time image frame data based on a visual detection model and generate detection information;
a tracking identification unit, configured to input the detection information into the visual tracking module and track the identified objects to obtain tracking information;
a dangerous area acquisition unit, configured to obtain a fixed dangerous area or a dynamic dangerous area based on the detection information and the tracking information;
a dangerous condition identification unit, configured to identify dangerous conditions in the real-time image acquisition video stream according to the dynamic dangerous area or the fixed dangerous area and obtain dangerous-condition accumulated data;
and an alarm triggering unit, configured to trigger an alarm according to the dangerous-condition accumulated data.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for crane lifting monitoring based on multi-paradigm vision fusion according to any one of claims 1 to 7.
CN202510026221.4A 2025-01-08 2025-01-08 Driving crane monitoring method, system and medium based on multi-paradigm visual fusion Pending CN119461057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510026221.4A CN119461057A (en) 2025-01-08 2025-01-08 Driving crane monitoring method, system and medium based on multi-paradigm visual fusion

Publications (1)

Publication Number Publication Date
CN119461057A true CN119461057A (en) 2025-02-18

Family

ID=94593759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510026221.4A Pending CN119461057A (en) 2025-01-08 2025-01-08 Driving crane monitoring method, system and medium based on multi-paradigm visual fusion

Country Status (1)

Country Link
CN (1) CN119461057A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119741663A (en) * 2025-03-05 2025-04-01 中国矿业大学(北京) Dangerous area identification method and device based on machine vision
CN119888629A (en) * 2025-03-26 2025-04-25 苏州飞搜科技有限公司 Computer vision-based dynamic detection method for falling area under crane load
CN120783437A (en) * 2025-08-26 2025-10-14 法兰泰克重工股份有限公司 Personnel intrusion detection method and system for bridge crane operation dangerous area

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07179290A (en) * 1993-12-24 1995-07-18 Komatsu Ltd Crane load shake detection device
JPH10258987A (en) * 1997-03-17 1998-09-29 Mitsubishi Heavy Ind Ltd Anti-swinging device for slung load
CN106006417A (en) * 2016-08-17 2016-10-12 徐州重型机械有限公司 Crane hook swing monitoring system and method
CN111392619A (en) * 2020-03-25 2020-07-10 广东博智林机器人有限公司 Tower crane early warning method, device and system and storage medium
CN118470626A (en) * 2024-04-23 2024-08-09 鞍钢集团自动化有限公司 Production site dangerous behavior detection method based on machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO YUYANG; XU WEIMIN; ZHANG MENGJIE; LIU YUQIANG: "Spatial Positioning Method for Bridge Crane Loads Using Monocular Vision", Journal of Computer Applications, no. 04, 10 April 2016 (2016-04-10), pages 280 - 286 *


Similar Documents

Publication Publication Date Title
CN119461057A (en) Driving crane monitoring method, system and medium based on multi-paradigm visual fusion
US11776274B2 (en) Information processing apparatus, control method, and program
KR102315371B1 (en) Smart cctv control and warning system
KR101735365B1 (en) The robust object tracking method for environment change and detecting an object of interest in images based on learning
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN105306892B (en) A kind of generation of ship video of chain of evidence form and display methods
CN111178424A (en) Petrochemical production site safety compliance real-time detection system and method
Lee et al. Real-time fire detection using camera sequence image in tunnel environment
CN114663479B (en) An intelligent monitoring and early warning method and system based on computer vision
CN111046797A (en) Oil pipeline warning method based on personnel and vehicle behavior analysis
CN116403162B (en) Airport scene target behavior recognition method and system and electronic equipment
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
KR102761723B1 (en) Monitoring system for accident worker
JP6140436B2 (en) Shooting system
JP3361399B2 (en) Obstacle detection method and device
CN117809234A (en) Security detection method and device, equipment and storage medium
JP6978986B2 (en) Warning system, warning control device and warning method
CN115082850A (en) Template support safety risk identification method based on computer vision
JP7241011B2 (en) Information processing device, information processing method and program
CN119722610A (en) A method and system for identifying abnormality of chain bucket of continuous ship unloader
CN119206626A (en) Abnormal alarm method, device, equipment and storage medium for traffic infrastructure
KR102874570B1 (en) Method for judgment of safety gear wearing state based on deep learning and computer-readable recording medium including the same
CN118135490A (en) Potential safety hazard investigation and accident early warning method and system based on graphic configuration
Foresti et al. Vehicle detection and tracking for traffic monitoring
CN120071259B (en) A method and system for detecting abnormal connection of containers on the ship side based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination