
CN120689840A - Real-time safety hazard identification system for robots based on multimodal sensor fusion - Google Patents

Real-time safety hazard identification system for robots based on multimodal sensor fusion

Info

Publication number
CN120689840A
CN120689840A
Authority
CN
China
Prior art keywords
time
information
value
abnormal
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510775800.9A
Other languages
Chinese (zh)
Inventor
饶平 (Rao Ping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Heyn Security Technology Co ltd
Original Assignee
Shenzhen Heyn Security Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Heyn Security Technology Co ltd filed Critical Shenzhen Heyn Security Technology Co ltd
Priority to CN202510775800.9A priority Critical patent/CN120689840A/en
Publication of CN120689840A publication Critical patent/CN120689840A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Tourism & Hospitality (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Development Economics (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a real-time safety hazard identification system for robots based on multimodal sensor fusion, and relates in particular to the technical field of intelligent inspection and safety monitoring. The system comprises five parts: data acquisition, information fusion, behavior response, trajectory analysis and risk output. Image, thermal imaging, gas concentration and temperature/humidity information is acquired, numerically normalized and feature-extracted to identify potential anomalies and generate early warnings; enhanced data acquisition and trajectory recording are then triggered in the target area, a spatio-temporal path model is constructed to analyze the anomaly's evolution trend, and finally the enhanced information and trajectory features are combined to output a safety hazard assessment result according to risk-level classification rules.

Description

Real-time safety hazard identification system for robots based on multimodal sensor fusion
Technical Field
The invention relates to the technical field of intelligent inspection and safety monitoring, in particular to a robot real-time potential safety hazard identification system based on multi-mode sensor fusion.
Background
As industrial infrastructure systems grow ever larger, equipment operation becomes increasingly complex and the pressure on safety management rises markedly. In places with changeable environmental conditions and complex spatial structures, such as power substations, petrochemical pipe-gallery areas, underground distribution wells, closed tunnels and high-temperature, high-humidity industrial cabins, the traditional mode of relying on manual inspection is inefficient, carries its own safety risks, and is plainly limited in its ability to identify early fault characteristics.
Currently, automated inspection robots are gradually being applied in the above scenarios to take over part of the daily monitoring work. Some commercial solutions can perceive image changes, thermal anomalies or single environmental factors, but because most of them integrate only single-mode sensors or lack an effective data fusion mechanism, their recognition accuracy remains insufficient when facing problems common under real, complex working conditions, such as faint early abnormal signals, cross-superposition of multiple types of change, or hidden dangers whose distribution is expanding spatially. The invention therefore provides a real-time safety hazard identification system for robots based on multimodal sensor fusion to solve these problems.
Disclosure of Invention
In order to achieve the above purpose, the present invention provides the following technical solutions:
The real-time potential safety hazard identification system of the robot based on the multi-mode sensor fusion comprises a multi-source data acquisition module, an information fusion processing module, an identification behavior response module, a track joint analysis module and a risk output control module, wherein the composition and the cooperative relationship are as follows:
Respectively acquiring image information, thermal imaging information, environmental gas concentration and temperature and humidity change signals through a multi-source data acquisition module, and synchronously marking an acquisition position and a time stamp in a moving state;
The information fusion processing module carries out numerical normalization on various sensing information according to a preset sequence, builds a correlation modeling structure among signals in a cross feature extraction mode, recognizes potential abnormal change features in continuous signals in a rule driving mode, and forms early warning initial values based on behavior correlation logic;
The recognition behavior response module receives the early warning initial value and judges, according to a threshold comparison rule, whether to start the enhanced acquisition mechanism; if the conditions are met, it correspondingly increases the sampling frequency of the related information sources, synchronously activates the trajectory recording behavior of the target area, and takes the characteristic parameters newly generated during the response process as the enhancement result;
The target area is determined by the information fusion processing module based on the spatial position of the abnormal signal when the early warning initial value is generated, and serves as the starting position for triggering trajectory recording in the recognition behavior response module; the area is mapped onto the spatial division grid in the trajectory joint analysis module, and the corresponding grid cells are taken as the basis for judging active units;
The track joint analysis module is accessed to real-time position data of a target area, responds to the acquired updated information and time sequence change thereof, constructs a space-time evolution path model, and identifies abnormal continuous growth, movement distribution change or aggregation diffusion modes existing in the space-time evolution path model for tracking the abnormal dynamic development process;
And the risk output control module generates a corresponding potential safety hazard assessment conclusion based on the track joint analysis result and the enhanced acquisition information obtained in the recognition behavior response and in combination with a preset risk level classification condition.
In a preferred embodiment, image information is acquired as a continuous frame sequence; inter-frame differencing identifies image target changes via background subtraction, and a joint feature is formed by combining this with the gradient change of pixel values in the thermal imaging, so as to locate the image change region;
Ambient gas concentration and temperature/humidity change signals are collected at fixed time intervals, and each acquisition result is stamped with a time mark and the robot's current position, the position being output by inertial-visual fusion. Before entering the information fusion processing module, all sensing data undergo numerical normalization: a first extremum and a second extremum are determined from the historical operation record, the standardized calculation (current value minus first extremum) divided by (second extremum minus first extremum) is performed, and the normalization results participate in subsequent modeling as features.
In a preferred embodiment, the cross feature extraction mode in the information fusion processing module constructs a multi-feature correlation model by calculating the linear correlation between the image change frequency and the thermal imaging average heating value;
The image change frequency is the average value of the occurrence times of a change area in unit time, the average temperature rise value is the average value of the temperature change of all pixels in the unit area, the average temperature rise value and the average value are fused in a rule driving framework by taking a first coefficient and a second coefficient as weights, a multi-feature association model is updated regularly, an updating result is used for early warning initial value generation, and the correlation weights are derived from prior analysis and a system operation feedback mechanism and stored in an adjustable structure cache for subsequent reference.
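The correlation-and-fusion step above can be sketched as follows. This is a minimal illustration, not the patented implementation: the Pearson correlation stands in for the "linear correlation" between image change frequency and average temperature rise, and the first/second coefficient values are placeholders.

```python
import math

def pearson_corr(xs, ys):
    """Linear (Pearson) correlation between two equal-length feature series,
    e.g. image change frequency vs. thermal average temperature rise."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def fused_warning_value(change_freq, avg_temp_rise, w1=0.6, w2=0.4):
    """Weighted fusion of the two normalized features; w1/w2 play the role
    of the first and second coefficients (values here are illustrative)."""
    return w1 * change_freq + w2 * avg_temp_rise
```

A strongly positive correlation between the two series would support treating synchronous image and thermal change as a single joint anomaly feature.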
In a preferred embodiment, the recognition behavior response module increases the sampling frequency of the image, thermal imaging and gas concentration information from the initial frequency to a first frequency value after receiving the early warning initial value, and activates a data focusing acquisition mechanism of the target area, the mechanism executes area focusing processing in the image acquisition process to enhance the local image resolution, and sets the duration time limit of the response behavior as a first time period value, after the time limit expires, the sampling frequency and the acquisition strategy are restored to the initial state, all parameter changes of the sampling strategy are controlled by the recognition behavior response function through a task instruction stack, and an execution time tag and a strategy priority are added in the instruction stack.
In a preferred embodiment, the robot position data in the track recording behavior is obtained by fusion calculation of the feature point matching result between the visual image frames and the inertial measurement acceleration value, and the specific calculation process is as follows:
extracting a first preset number of feature point sets between two adjacent frames of images by using a scale-invariant feature transformation mode, matching the feature point sets by a matching algorithm, and calculating pixel coordinate difference vectors between each pair of matching points, wherein the difference vectors are converted into three-dimensional relative displacement vectors through a camera internal reference matrix;
Acquiring an acceleration signal from an inertial measurement device, performing integral operation on the acceleration signal to obtain a speed vector, and performing integral again to obtain a second relative displacement vector;
The two groups of relative displacement vectors respectively take a matching confidence coefficient value and a preset fusion coefficient as weights to carry out linear weighting, wherein the matching confidence coefficient value is the reciprocal of the ratio of the average Euclidean distance to the maximum distance of all matching points in image matching, the higher the confidence coefficient is, the higher the registration accuracy is, and the range is between zero and one;
The fusion coefficient is set to be a fixed proportion according to the system deployment stage and is used for balancing the image matching precision and the inertial signal noise, the final fusion result is subjected to position offset correction by taking the geometric center of the robot body as a reference coordinate system, and a complete track sequence is formed by combining a time stamp, so that a track joint analysis module can construct a space-time evolution path model and track the dynamic change of an abnormal path.
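The displacement-fusion steps above can be sketched as follows. Two caveats are interpretive assumptions, not patent text: the "maximum distance" in the confidence formula is read as a preset allowable matching distance (a literal max over the observed pairs would make the clipped reciprocal always 1), and the two weights are renormalized so the fused vector keeps the scale of its inputs.

```python
def match_confidence(pair_distances, max_allowed):
    """Reciprocal of (mean pair distance / maximum distance), clipped to
    [0, 1] per the stated range; `max_allowed` interpretation assumed."""
    mean_d = sum(pair_distances) / len(pair_distances)
    if mean_d == 0:
        return 1.0
    return min(1.0, max_allowed / mean_d)

def double_integrate(accels, dt):
    """Integrate acceleration samples twice (simple Euler) to obtain a
    relative displacement over the sampling window."""
    v = s = 0.0
    for a in accels:
        v += a * dt
        s += v * dt
    return s

def fuse_displacement(visual_d, inertial_d, conf, fusion_coeff=0.5):
    """Linear weighting of the two displacement vectors: visual weighted
    by match confidence, inertial by the preset fusion coefficient."""
    w = conf + fusion_coeff
    return tuple((conf * v + fusion_coeff * i) / w
                 for v, i in zip(visual_d, inertial_d))
```

The fused vector would then be offset-corrected to the robot body's geometric center and appended, with its timestamp, to the trajectory sequence.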
In a preferred embodiment, the space-time evolution path model is constructed by using a region segmentation mode, the target region is divided into a plurality of equal cells and marked as an active unit and an inactive unit, the active units are connected in sequence according to the time stamps to form an abnormal path chain, and the abnormal trend is judged by calculating the following three parameters:
Firstly, path growth: the difference in total Euclidean length of the active-unit coordinate set in the path chain is calculated between two consecutive time windows and recorded as a difference sequence; if the difference is positive for three consecutive time windows, the path is judged to be growing;
Secondly, contiguous density: defined as the number of consecutive adjacent active-unit pairs in each path chain divided by the total length of the path chain; if the density value is greater than a first preset density threshold, the distribution is judged to be concentrated;
Thirdly, direction consistency: the unit direction of each segment vector of the path chain is calculated along with the included angle between adjacent vectors; if all included angles are smaller than a first angle threshold, the path direction is judged to be stable;
And when at least two of the three parameters meet the preset judging condition, triggering the risk output control module to execute the high-level potential safety hazard identification process.
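The three path-chain parameters can be sketched as follows on grid-cell coordinates. This is an illustrative reading of the claim: 4-neighbour adjacency for "consecutive adjacent" cells and a 30° default angle threshold are assumptions, not values from the patent.

```python
import math

def path_length(points):
    """Total Euclidean length of a path chain of (x, y) cell coordinates."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def is_growing(window_lengths):
    """Growth: length difference positive over three consecutive windows."""
    diffs = [b - a for a, b in zip(window_lengths, window_lengths[1:])]
    return len(diffs) >= 3 and all(d > 0 for d in diffs[-3:])

def contiguous_density(cells):
    """Adjacent active-cell pairs divided by the chain length in cells."""
    pairs = sum(
        1 for a, b in zip(cells, cells[1:])
        if abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1  # 4-neighbour adjacency (assumed)
    )
    return pairs / len(cells)

def direction_stable(points, max_angle_deg=30.0):
    """True if every angle between consecutive segment vectors is below
    the (assumed) first angle threshold."""
    vecs = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]
    for (ux, uy), (vx, vy) in zip(vecs, vecs[1:]):
        nu, nv = math.hypot(ux, uy), math.hypot(vx, vy)
        if nu == 0 or nv == 0:
            continue
        cos_a = max(-1.0, min(1.0, (ux * vx + uy * vy) / (nu * nv)))
        if math.degrees(math.acos(cos_a)) >= max_angle_deg:
            return False
    return True
```

A caller would trigger the high-level hazard identification process when at least two of `is_growing`, density-above-threshold, and `direction_stable` hold.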
In a preferred embodiment, when generating the potential safety hazard assessment conclusion, the risk output control function not only is based on the enhanced acquired information obtained in the recognition behavior response function, but also integrates the spatial path change characteristics output in the trajectory joint analysis function, and combines the following three judging elements, namely, the total length of the first abnormal path chain, the enhancement amplitude of the abnormal signal, namely, the difference value between the enhanced acquired information and the previous data of the same area, and the third, the average moving rate of the target area in the appointed time.
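A coarse combination of the three judging elements might look like the sketch below. The thresholds and the count-based scoring rule are illustrative assumptions; the patent specifies only that the three elements are combined under preset risk-level classification conditions.

```python
def risk_level(chain_length, signal_gain, avg_move_rate,
               len_th=10.0, gain_th=0.3, rate_th=0.5):
    """Map the three judging elements -- total abnormal-path-chain length,
    enhancement amplitude of the abnormal signal, and average moving rate
    of the target area -- to a coarse risk level (thresholds assumed)."""
    score = sum([
        chain_length > len_th,
        signal_gain > gain_th,
        avg_move_rate > rate_th,
    ])
    return ("low", "medium", "high", "critical")[score]
```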
In a preferred embodiment, after generating a safety hidden trouble assessment conclusion, the risk output control function automatically starts an intervention execution mechanism, wherein the mechanism comprises a step of sending a path adjustment command to a robot body scheduling control system and simultaneously generating a risk information packet;
The path adjustment command comprises a current position withdrawal instruction and a coordinate positioning rule of a next detection point, wherein the rule performs angle adjustment based on the current abnormal path chain direction and positions the next target point to be a vertical direction safety area;
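The "vertical direction safety area" rule above amounts to offsetting the next waypoint along a normal to the abnormal path chain. A minimal 2-D sketch, assuming the left-hand normal is the safe side (the patent does not specify which perpendicular direction is chosen):

```python
import math

def next_safe_point(current_xy, chain_dir_xy, offset):
    """Place the next detection point `offset` metres along a direction
    perpendicular to the abnormal path chain (left-hand normal assumed)."""
    dx, dy = chain_dir_xy
    norm = math.hypot(dx, dy)
    if norm == 0:
        return current_xy  # degenerate chain direction: stay put
    # Rotate the unit chain direction by 90 degrees to get the normal.
    nx, ny = -dy / norm, dx / norm
    return (current_xy[0] + offset * nx, current_xy[1] + offset * ny)
```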
The generated risk information package comprises an abnormal signal type, position information, an evaluation grade and a time tag.
The invention has the technical effects and advantages that:
According to the invention, the multi-source data acquisition module acquires image information, thermal imaging information, ambient gas concentration and temperature and humidity change signals at the same time in a moving state, and accurately marks the acquisition position and the time stamp, so that the acquired data has synchronism and a space reference basis. The information fusion processing module performs normalization processing on different types of sensing data according to a preset sequence, and builds modeling structures among various signals by adopting a cross feature extraction mode, so that response association among multidimensional data is enhanced. The processing mode not only improves the identification capability of the fine anomalies, but also realizes the dynamic extraction of potential anomaly characteristics in the continuous change signals through a rule driving mode, and forms an early warning initial value with a behavior logic basis, thereby providing a reliable basis for the follow-up response and judgment flow.
After receiving the early warning initial value, the recognition behavior response module determines whether to start the enhanced acquisition mechanism or not based on the threshold comparison result. When the starting condition is met, the system increases the sampling frequency of the corresponding information source, and synchronously activates the track recording behavior in the target area, so that the centralized monitoring of the suspicious area is realized. Meanwhile, the newly added characteristic parameters generated in the response process are used as enhancement results to participate in subsequent analysis. The behavior response process forms a closed loop logic from early warning judgment to target focusing and then to feature enhancement, so that the data density of a high-risk area is improved, the redundant consumption of resources in a low-risk area is avoided, and the response efficiency and the recognition accuracy of the system are remarkably improved.
The method and the system establish a tracking mechanism for the evolution process of the abnormal behavior by accessing the real-time position data of the target area and updating time sequence information thereof through the track joint analysis module, constructing a space-time evolution path model, and identifying the possible trends of continuous abnormal growth, movement distribution change or aggregation diffusion and the like in the space-time evolution path model. The risk output control module further generates a potential safety hazard assessment conclusion based on the analysis result and the enhanced acquisition information obtained in the recognition behavior response and in combination with a preset risk level classification condition, so that the system has a complete operation link of recognition, response, tracking and assessment. The process is favorable for realizing dynamic positioning, grading judgment and accurate output of risks, and the coping capability of hidden safety problems in complex inspection scenes is remarkably improved.
Drawings
For the convenience of those skilled in the art, the present invention will be further described with reference to the accompanying drawings;
Fig. 1 is a schematic diagram of a real-time potential safety hazard identification system of a robot based on multi-mode sensor fusion in the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The following examples are obtained with reference to fig. 1:
Example 1:
The real-time potential safety hazard identification system of the robot based on the multi-mode sensor fusion comprises a multi-source data acquisition module, an information fusion processing module, an identification behavior response module, a track joint analysis module and a risk output control module, wherein the composition and the cooperative relationship are as follows:
the multi-source data acquisition module provides various basic perception data sources including image information, thermal imaging information, ambient gas concentration and temperature and humidity change signals for the system, and the current environment state can be more comprehensively and three-dimensionally depicted by fusing the original data obtained by the plurality of sensors of different types. In the acquisition process, the module not only acquires the data signal, but also synchronously marks the space position and the time stamp corresponding to the data, so that all acquired information is ensured to have space-time attributes. The data acquisition mode with the position information and the time tag in the moving state provides a high-quality input basis for subsequent abnormal recognition and evolution modeling based on space-time characteristics, and effectively enhances the response capability and tracking precision of the system to hidden danger changes in a dynamic scene.
The information fusion processing module performs unified processing on various sensing information from the multi-source data acquisition module, and the sensing data of different dimensions and different units have comparability and fusibility through normalization operation according to a preset sequence. The module analyzes the coupling relation between various perception signals in terms of numerical trend, change frequency and time evolution in a cross feature extraction mode, and establishes a correlation modeling structure between the signals, so that potential abnormal behaviors hidden under surface layer change are excavated. By adopting a rule driving mode, the module can identify the change fragments which do not accord with the normal mode from the continuous change signals, extract the change fragments as abnormal change characteristics, and further carry out causality judgment on the abnormal characteristics by combining with behavior association logic so as to generate early warning initial values. The early warning initial value is used as the preliminary quantitative description of the abnormal event and is the key judgment basis for the whole system to advance to the response and decision stage.
The recognition behavior response module receives and judges the early warning initial value generated by the information fusion processing module in real time, and determines whether to activate the subsequent enhanced acquisition process by matching with a threshold comparison rule set in the system. When the early warning initial value meets the preset abnormal judgment condition, the module immediately promotes the sampling frequency of the related information source so as to acquire data with higher density and higher resolution, and the detail observation of the potential risk area is ensured to be clearer and more accurate. Meanwhile, the module also synchronously activates track recording behaviors of the target area, takes new characteristic parameters generated in the sampling response process as an enhancement result, and provides support for subsequent space-time evolution analysis. The recognition behavior response module plays a turning role from static recognition to dynamic tracking in the system, so that the system has the capability of deep observation and continuous monitoring for a target area.
The track joint analysis module acquires real-time position data, updated information after response acquisition and evolution process of the updated information along with time based on target area track recording behaviors activated by the recognition behavior response module, and models the data to construct a space-time evolution path model. The model can reflect the dynamic expansion, migration, aggregation and other change trends of the abnormal region in the time axis and the space dimension. The target area is determined by an information fusion processing module according to the space position of the abnormal signal in the early warning initial value generation stage, and then mapped to a space division grid structure in the module to form a series of basic units for judging abnormal activities, namely activity units. By comprehensively analyzing the evolution modes of the units along with time, the track joint analysis module can identify typical abnormal evolution forms such as abnormal continuous growth, movement distribution change or aggregation diffusion and the like, and provides basic support of a space dynamic layer for risk level judgment of the system.
The risk output control module receives and integrates the path analysis result output by the track joint analysis module and the enhanced acquisition information formed in the recognition behavior response module, and comprehensively evaluates the overall influence of the current abnormal event according to preset risk grade classification conditions in the system. The module forms a judging model of risk identification by comparing the data characteristics in the enhanced result with the evolution trend of the abnormal path in space, and generates a corresponding potential safety hazard assessment conclusion according to the judging model. The risk output control module not only bears the responsibility of final grading and quantification of the abnormal event in the system, but also is a trigger point for executing actions such as an intervention strategy, pushing alarm information, triggering a path adjustment command and the like, and the output result has direct influence on the adjustment of the operation strategy of the whole system, so that the risk output control module is a key terminal link for realizing 'recognition-judgment-response' closed-loop logic.
The image information refers to an image frame sequence acquired by a visible light camera device mounted on a robot, and the image sequence is continuously acquired at fixed time intervals to form a continuous frame sequence. In the process, in order to identify dynamic change areas in the image, namely safety risk factors such as appearance change of equipment, foreign matter entering, personnel approaching and the like, an inter-frame difference analysis mode is adopted by the system. The calculation of the inter-frame difference is completed through a background subtraction mode, wherein the background subtraction mode is to compare the current frame with a static background model at the pixel level, and identify the positions where the pixel value changes significantly, and the positions are regarded as the image target change areas.
The thermal imaging information refers to an infrared image acquired by a thermal infrared sensor, wherein the gray value of each pixel corresponds to a temperature value. In order to enhance the judgment precision of the image target change area, the pixel temperature value of the corresponding area in the thermal imaging image is extracted, and the gradient change of the temperature in the area is calculated. The gradient change of the pixel value refers to the change rate of the temperature value difference between adjacent pixels in space, and can be used for judging whether abnormal temperature rise, hot spot aggregation and the like exist in the area. The image target change area and the thermal imaging area are matched through spatial alignment, so that a joint characteristic is formed, namely, the image change identified based on background subtraction and the thermal change obtained based on gradient calculation are combined and analyzed. The joint features are used to further locate regions of image variation, thereby enhancing the robustness of recognition of abnormal objects.
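The joint feature described above, changed pixels that also lie on a strong thermal gradient, can be sketched with NumPy as follows. The difference and gradient thresholds are illustrative, and the two images are assumed to be already spatially aligned, as stated above.

```python
import numpy as np

def change_mask(frame, background, thresh=25):
    """Background subtraction: mark pixels whose absolute difference from
    the static background model exceeds a threshold."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def thermal_gradient(temp):
    """Per-pixel spatial gradient magnitude of the thermal image."""
    gy, gx = np.gradient(temp.astype(float))
    return np.hypot(gx, gy)

def joint_feature(frame, background, temp, grad_thresh=2.0):
    """Joint feature: image change AND strong thermal gradient at the
    same (aligned) pixel location."""
    return change_mask(frame, background) & (thermal_gradient(temp) > grad_thresh)
```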
The environmental gas concentration signal refers to the concentration value of combustible gas, toxic gas or other industrial abnormal gas collected by a gas detection device arranged on the robot body, the temperature and humidity change signal refers to the air temperature and relative humidity value in the environment, and the temperature and humidity change signal is collected in real time by an integrated temperature and humidity sensor. The two types of signals are collected at fixed time intervals, and the time intervals are set to be a preset value in the deployment stage. For example, it may be set to collect every 5 seconds. After each acquisition is completed, the system takes the current time point as a time mark and binds the current position of the robot body during the acquisition.
The current position is obtained by adopting an inertial and visual fusion mode. The inertial mode is to obtain displacement estimation in a short time by utilizing acceleration and angular velocity information measured by a gyroscope and an accelerometer through integral operation, and the visual mode is to calculate image displacement by utilizing characteristic point matching among continuous image frames and obtain relative position change by combining camera parameters. The positioning result obtained after the fusion of the two is superior to the single source in pose estimation precision. The time marks and the position data recorded in the acquisition result together form a space-time label of the sampling data, and space-time dimension support is provided for subsequent path modeling and dynamic analysis.
Before all sensing data are sent to the information fusion processing module, numerical normalization is required. The sensing data include the pixel count of the image change area, the thermal imaging temperature difference, the gas concentration, the temperature, the humidity and the like; their dimensions and numerical ranges differ, so joint analysis cannot be performed directly. A unified normalization strategy is therefore adopted. For each type of perceived signal, the minimum observed value within a given period (for example, the past 24 hours), defined as a first extremum, and the maximum observed value, defined as a second extremum, are extracted from the historical operating record, and the currently collected value is normalized as (current value minus first extremum) divided by (second extremum minus first extremum). The result is a dimensionless value between zero and one representing the position of the current value within its historical range, eliminating magnitude deviations caused by differing physical units and sensor types. The normalization results form a unified feature vector that participates in the subsequent modeling flow.
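The min-max normalization just described can be sketched as follows. Clipping out-of-range values to [0, 1] and the guard against a zero-span history are added assumptions; the disclosure only defines the basic formula.

```python
def normalize(current, first_extremum, second_extremum):
    """Min-max normalization: (current - min) / (max - min).
    Clipping to [0, 1] and the zero-span guard are assumptions of this
    sketch, covering values outside the historical window."""
    span = second_extremum - first_extremum
    if span == 0:  # degenerate history: the signal never varied
        return 0.0
    value = (current - first_extremum) / span
    return min(1.0, max(0.0, value))
```

For example, a gas reading of 30 ppm against a 24-hour history of [20, 40] ppm normalizes to 0.5, regardless of the signal's physical unit.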
After numerical normalization is completed, the information fusion processing module constructs a correlation modeling structure between signals by cross feature extraction. Cross feature extraction means that the change trends of the various sensing signals are not analyzed independently; instead, the cross-correlations between different signal types are examined simultaneously. For example, whether the frequency of change of a region in the image information is synchronized with the degree of temperature rise of that region in the thermographic image constitutes a joint feature of a potential anomaly.
Specifically, cross feature extraction is realized by calculating the linear correlation between the image change frequency and the thermal imaging average heating value. The image change frequency is defined as the average number of times a change region is identified in the same spatial region per unit time, for example the frequency at which a moving object or structural change is identified in video sampled every second. The thermal imaging average temperature rise value is defined as the average rise in temperature of all pixel points in a unit region over a certain period, calculated by recording the temperature change of each pixel in the region and dividing the sum of the temperature rises by the number of pixels.
The two features, image change frequency and average heating value, are combined into a multi-feature correlation model in the information fusion processing module. The model is constructed by linear weighting: the image change frequency is multiplied by a first coefficient, the thermal imaging average heating value is multiplied by a second coefficient, and the two products are added to form a comprehensive evaluation factor. The first and second coefficients are weight parameters set by the system in the initialization stage, reflecting the relative importance of the two features in anomaly identification according to empirical rules or a data-driven optimization result. For example, if image change generally occurs earlier than temperature rise for a certain class of hazard, the first coefficient may be set higher than the second to strengthen the influence of the image signal. The multi-feature correlation model is not fixed after construction but is updated at fixed time intervals, called timing update; for example, every ten minutes the correlation between the image change frequency and the thermal imaging temperature rise value is recalculated from the latest perceived data and the weight coefficients are corrected. The updated model is used to generate the early-warning initial value: under a rule-driven framework, whether an abnormal event has formed is judged against a threshold on the comprehensive factor. The rule-driven mode generates the abnormal early-warning initial value through a preset numerical rule, for example when the comprehensive factor exceeds a certain early-warning threshold.
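The linear weighting and the rule-driven threshold check can be sketched as below. The coefficient values 0.6 and 0.4 and the threshold 0.70 match the worked inspection example given later in this description; in practice they would come from the adjustable structure cache and be corrected by the timing update.

```python
def fusion_factor(image_change_freq, avg_heating, w1=0.6, w2=0.4):
    """Comprehensive evaluation factor: image change frequency times the
    first coefficient plus average heating value times the second."""
    return image_change_freq * w1 + avg_heating * w2

def pre_alarm(factor, threshold=0.70):
    """Rule-driven early-warning initial value: established when the
    comprehensive factor exceeds the early-warning threshold."""
    return factor > threshold

factor = fusion_factor(0.8, 0.75)   # inputs from the worked example
fired = pre_alarm(factor)
```

With these inputs the factor evaluates to 0.78, which exceeds 0.70, so the early-warning initial value is established.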
The weight, model update parameters and execution logic used in the above process are all derived from a priori analysis and system operation feedback mechanisms. The prior analysis refers to offline model training or statistical rule extraction based on historical inspection data at the initial stage of system deployment, and the system operation feedback mechanism refers to automatic correction of original model parameters according to the accuracy and response effect of the identification result in actual operation. All generated weight information and model structures are stored in an adjustable structure buffer memory, and the buffer memory supports quick reading and writing of parameters and online updating so as to call subsequent processing flow or dynamically adjust.
The abnormal early-warning initial value is not an input but an output variable generated by the information fusion processing module from the perceived data features. It is used to trigger the judgment flow of the downstream recognition behavior response module, and is the first datum with logical triggering significance in the anomaly recognition process. The abnormal early-warning initial value is a quantitative judgment basis indicating whether the environmental state of a specific spatial position at the current moment exhibits potential safety-hazard characteristics. The initial value is not used directly for the final judgment of abnormality, but serves as the decision input for whether the recognition behavior response module executes enhanced acquisition and track recording, acting as a pre-signal of early-warning triggering.
The generation of the abnormal early warning initial value comprises the following three parts:
The first part is the feature fusion factor, formed by the weighted sum of the image change frequency and the thermal imaging average heating value in the multi-feature correlation model, representing the joint abnormality degree of the visual and thermal signals in the current area.
The second part is the rule-driven judging threshold: the system sets an early-warning trigger threshold, for example a value above which the comprehensive factor is regarded as showing an abnormal tendency, and this threshold is dynamically updated by the prior analysis and operation feedback mechanisms.
The third part is the time stamp and current position coordinates provided by the data acquisition module, ensuring that each early-warning initial value is uniquely localized in time and space and supporting subsequent behavior response and track analysis.
For example, when the robot is inspecting a certain area, the image information shows high-frequency structural change between consecutive frames (image change frequency 0.8), and the thermal imaging data shows that the temperature of the area has risen by 2.5 °C in the past 30 seconds (normalized average temperature rise value 0.75). The multi-feature correlation model weights the two with a first coefficient of 0.6 and a second coefficient of 0.4, generating a fusion factor of (0.8 × 0.6) + (0.75 × 0.4) = 0.48 + 0.30 = 0.78. The early-warning threshold set by the system is 0.70, so the early-warning initial value is judged to be established. The data are simultaneously marked with the time label "2025-05-28 14:21:36" and the spatial coordinates (X = 12.6 m, Y = 3.2 m), forming the abnormal early-warning initial value of that moment.
After the recognition behavior response module receives the early warning initial value, whether the current early warning initial value reaches the condition of triggering response is judged according to a threshold comparison rule set in the system. The threshold comparison rule is set by a system design stage, and refers to a numerical critical standard predefined by the system in the running process for judging whether to execute the reinforced acquisition behavior, and the numerical critical standard is usually compared with a threshold value based on the numerical value of a fusion factor in the early warning initial value. And when the early warning initial value is larger than the threshold value, the potential abnormal state is considered to exist currently, and the next processing flow is entered.
If the comparison condition is met, the recognition behavior response module starts the enhanced acquisition mechanism. The enhanced acquisition mechanism is to perform sampling frequency promotion and data refinement processing on a sensing channel related to a target area indicated by an early warning initial value so as to acquire sensing information with higher density and higher resolution. The relevant information sources comprise image information, thermal imaging information and gas concentration information, and concretely refer to three sensor channels provided by the multi-source data acquisition module, namely visible light image acquisition equipment, an infrared thermal imaging sensor and a gas detection sensor. The recognition behavior response module increases the sampling frequency of the three channels from an initial frequency set by the system to a first frequency value, wherein the first frequency value is set according to the equipment processing capacity and the field response timeliness when being deployed, and can be two to five times of the original frequency.
The system also activates the data focus acquisition mechanism of the target area while performing the frequency boost. The target area is an area determined according to the space position of the abnormal signal when the information fusion processing module generates the early warning initial value, and the recognition behavior response module uses the area as a reference to execute the area focusing processing in the image acquisition. The regional focusing processing refers to controlling the image acquisition device to increase the frame rate, improve the image resolution or execute digital zooming and other operations within the range of the early warning region so as to improve the definition and detail feature recognition capability of the image within the region, thereby facilitating the subsequent modeling and path tracking processing.
The whole reinforced acquisition behavior has time limit control. The recognition behavior response module sets the duration time limit of the response behavior to be a first time period value, namely, the execution of the acquisition strategy does not last infinitely, but automatically exits the reinforcement mode after exceeding a preset time period. The first time period value is set by the system running environment, for example, 60 seconds or 300 seconds, and the specific value is stored in the system parameter configuration table.
When the time limit expires, the recognition behavior response module automatically restores the sampling frequency of the image, the thermal imaging and the gas concentration to the initial frequency, and simultaneously cancels the focusing mechanism and returns to the conventional inspection state. All parameter changes of the sampling strategy (including sampling frequency promotion and restoration, focusing mechanism activation and cancellation) are controlled by the recognition behavior response module through the task instruction stack. The task instruction stack refers to an operation sequence scheduling structure arranged in the system, wherein each control instruction is added with two elements, namely an execution time tag and a strategy priority when being generated. The execution time tag is used for accurately scheduling the start and stop time of sampling behaviors, and the strategy priority is used for reasonably scheduling sampling resources when a plurality of early warning areas are triggered simultaneously, so that preferential response to high-risk areas is ensured.
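The task instruction stack, with its execution-time tag and policy priority, could be realized as a priority queue. The sketch below uses Python's `heapq` and assumes priority is compared before execution time (so a high-risk zone pre-empts an earlier-scheduled low-risk instruction); that ordering, and modelling the "stack" as a heap at all, are assumptions of this sketch.

```python
import heapq

class TaskInstructionStack:
    """Operation-sequence scheduler: each control instruction carries an
    execution-time tag and a policy priority (lower number = higher
    priority), so high-risk areas are responded to first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving insertion order

    def push(self, exec_time, priority, instruction):
        heapq.heappush(self._heap, (priority, exec_time, self._seq, instruction))
        self._seq += 1

    def pop(self):
        """Return the next instruction to execute."""
        return heapq.heappop(self._heap)[-1]

stack = TaskInstructionStack()
stack.push(exec_time=10.0, priority=2, instruction="restore_base_rate")
stack.push(exec_time=0.0, priority=1, instruction="raise_rate_zone_A")
stack.push(exec_time=0.0, priority=2, instruction="focus_zone_B")
```

Here `raise_rate_zone_A` (priority 1) is dispatched before both priority-2 instructions, matching the requirement that sampling resources go to the high-risk area when several early-warning areas trigger simultaneously.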
In the response process, the system gathers newly added feature data acquired by data sources such as images, thermal imaging, gas concentration and the like, and generates an 'enhancement result' by combining the parameter states of the sampling strategies in the response behaviors. The enhancement result is used as one of the outputs of the recognition behavior response module, contains the current multidimensional perception enhancement performance of the target area, and is important input data of the follow-up track joint analysis module and the risk output control module.
The temperature and humidity change signals do not participate in starting the enhanced acquisition mechanism, and although the temperature and humidity change signals participate in information fusion and formation of abnormal early warning initial values in initial multi-source data acquisition, the recognition behavior response module does not contain sampling frequency improvement of the temperature and humidity signals when starting the enhanced acquisition mechanism, and the specific reasons are as follows:
First, temperature and humidity change slowly over short periods; this sensing characteristic means their rate of change is far lower than that of the image, thermal imaging and gas concentration signals. Even when a potential safety hazard exists, temperature and humidity signals tend not to fluctuate severely within tens of seconds, so raising their sampling frequency yields limited benefit. Second, data bandwidth and processing priority are limited: raising the sampling frequency of image and thermal imaging data significantly increases computational load, and on an embedded platform with limited processing resources the high-frequency, time-critical data channels should be guaranteed first. Third, temperature and humidity data usually participate in information fusion as a background environmental factor rather than as the main event-triggering factor, so they need not participate in dynamic response control and remain on periodic acquisition throughout. Focusing response optimization on the three data sources of image, thermal imaging and gas concentration in the recognition behavior response module is therefore a deliberate, resource-efficient design choice.
The robot position data in the track recording behavior is obtained by fusion calculation of characteristic point matching results among visual image frames and inertial measurement acceleration values. The fusion calculation mode is used for improving the accuracy and the robustness of the position information, and is basic data input required by the track joint analysis module for constructing the space-time evolution path model.
In the image feature-point matching process, a first preset number of feature points is extracted, via scale-invariant feature transformation, from each of two adjacent image frames, i.e. image data of two consecutively acquired time points. Scale-invariant feature transformation refers to a feature extraction technique that stably extracts local key points of an image under changes in scale, rotation or illumination; for example the SIFT algorithm or the ORB algorithm may be adopted, ensuring a highly robust feature extraction effect in complex inspection environments. The first preset number may be set according to image resolution and processing power, such as 200 high-response key points.
And matching the feature point sets of the two frames of images through a matching algorithm, wherein the matching algorithm identifies an optimal matching result according to similarity measurement of feature descriptors, for example, euclidean distance or Hamming distance. The pixel coordinate differences between each pair of matching points form a two-dimensional vector representing the relative displacement of the feature point on the image plane. The difference vector is converted into a three-dimensional relative displacement vector via a camera internal reference matrix. The camera internal reference matrix refers to a matrix containing parameters inside the image capturing apparatus such as a focal length, a principal point position, a pixel size, and the like, and is used to map pixel coordinates to three-dimensional displacement values of an actual space unit (for example, meters), thereby obtaining a first set of three-dimensional relative displacement vectors.
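A toy sketch of descriptor matching and displacement-vector extraction follows. The four-bit string descriptors and the greedy nearest-neighbour search stand in for real SIFT/ORB descriptors and matchers (as provided by, e.g., OpenCV) and are purely illustrative; conversion through the camera intrinsic matrix to metric displacement is omitted.

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors
    (the similarity measure mentioned above for ORB-style features)."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def match_features(set_a, set_b):
    """Greedy nearest-neighbour matching on descriptor similarity.
    Each item is ((x, y), descriptor); returns per-pair pixel
    displacement vectors between the two frames."""
    displacements = []
    for (xa, ya), da in set_a:
        (xb, yb), _ = min(set_b, key=lambda item: hamming(da, item[1]))
        displacements.append((xb - xa, yb - ya))
    return displacements

# Two key points tracked across consecutive frames (toy descriptors).
frame1 = [((10, 10), "1010"), ((40, 12), "0110")]
frame2 = [((13, 11), "1010"), ((43, 13), "0110")]
vecs = match_features(frame1, frame2)
```

Both points shift by (3, 1) pixels; a consistent set of such vectors is what the camera intrinsic matrix would then map into the first set of three-dimensional relative displacements.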
The data processing of the inertial measurement part is as follows, the acceleration signal is obtained from the inertial measurement device, the acceleration signal is derived from an accelerometer inside the robot, and the linear acceleration values on three spatial axes are provided. And carrying out one-time integration operation on the acceleration value to obtain a speed vector in unit time, and then carrying out the second-time integration to obtain a second relative displacement vector of the robot in the time period. This vector can represent the rough path of motion of the robot in space without external reference, but because of the accumulated error of the inertial system, the accuracy needs to be corrected with the aid of the visual matching results. The process of displacement vector fusion is as follows:
The two groups of relative displacement vectors, namely the visual result and the inertial result, are fused by linear weighting, with the matching confidence value and the preset fusion coefficient serving as weights. The matching confidence value is derived from the ratio of the average Euclidean distance of all matched point pairs to the maximum distance; the value lies between zero and one, and a more concentrated set of matching points with smaller deviations indicates better matching quality and a more reliable displacement estimate. For example, if the average Euclidean distance is 5 pixels and the maximum distance is 20 pixels, the ratio is 0.25.
The preset fusion coefficient is a fixed proportion parameter set by the system in the deployment stage and is used for balancing the trust degree of the visual data and the inertial data according to the application scene. For example, inertial weights may be increased appropriately in strongly vibrating or visually occluded scenes, while visual data weights may be increased in clearly imaged, more inertial drift scenes.
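The double integration and the weighted blend might be sketched as below. The particular weighting scheme (visual weight = confidence × fusion coefficient, inertial weight = the remainder) is one plausible reading of the linear weighted fusion described above, not a definitive implementation, and the single-axis constant-timestep integration is a simplification.

```python
def inertial_displacement(accels, dt):
    """Double-integrate a sampled acceleration series along one axis:
    the first pass accumulates velocity, the second accumulates
    displacement. Real IMU pipelines must also handle drift."""
    v, s = 0.0, 0.0
    for a in accels:
        v += a * dt
        s += v * dt
    return s

def fuse(visual, inertial, confidence, fusion_coeff=0.5):
    """Linear weighted fusion of the visual and inertial displacement
    estimates. The weight split by confidence * fusion_coeff is an
    assumption of this sketch."""
    w_visual = confidence * fusion_coeff
    return w_visual * visual + (1.0 - w_visual) * inertial

s_inertial = inertial_displacement([1.0, 1.0], dt=1.0)  # two 1 m/s^2 samples
fused = fuse(visual=2.0, inertial=s_inertial, confidence=0.8)
```

As the paragraph above notes, the fusion coefficient would be tuned per scene: raised toward the inertial side under vibration or occlusion, toward the visual side when imaging is clear and drift dominates.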
And (3) carrying out position offset correction on the final fusion displacement result by taking the geometric center of the robot body as a reference coordinate system. The geometric center is a predefined center reference point in the structural design of the robot, usually a structural body center or a symmetry axis center of the sensor arrangement, and is used for unifying various data reference systems and reducing accumulated errors.
The fused position data and the time stamp are bound to form track points, and the process is continuously carried out to form a complete track data sequence. The sequence comprises position coordinates, time labels, calculation precision weights and the like, and is used as a core data source for constructing a space-time evolution path model by a subsequent track joint analysis module, and the sequence is used for identifying dynamic change behaviors of abnormal paths, including continuous growth, movement distribution change or aggregation diffusion modes and the like.
And after receiving the target region position data, the enhanced acquisition information and the time stamp output by the identification behavior response module, the track joint analysis module starts to execute the construction operation of the space-time evolution path model. The model is constructed in a region segmentation mode, specifically, an early warning target region is divided into a plurality of grid units with the same size on a two-dimensional plane, the shape of the model is generally rectangular or square, and the coverage area of the model is equal to the boundary of the early warning target region.
The construction of the unit cell is used for mapping the continuous track points into discrete grid units in space, so that the construction of a path chain structure and the analysis of evolution trend are realized. In the continuous input process of the track data sequence, the system judges the cell number of the space coordinate corresponding to each time point based on the actual moving path of the robot and the timestamp record, and divides the grid into an active unit and an inactive unit according to the number and the density of the track points.
An "active cell" refers to a cell that has a trace point hit for a selected period of time, meaning that the robot has performed a sample or an event has occurred at that location, and an "inactive cell" is a cell that has not been hit. The system connects the active units corresponding to the continuous time stamps according to the time sequence to form an abnormal path chain, namely a space discrete expression form representing a potential risk evolution path.
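Mapping track points into grid cells and chaining the hit (active) cells in time order could look like the following sketch. The cell size, grid origin and the collapsing of repeated hits in the same cell into a single chain node are assumptions of this illustration.

```python
def cell_of(point, origin, cell_size):
    """Map a continuous (x, y) coordinate to a discrete grid-cell index."""
    return (int((point[0] - origin[0]) // cell_size),
            int((point[1] - origin[1]) // cell_size))

def build_path_chain(track_points, origin=(0.0, 0.0), cell_size=1.0):
    """Connect the active cells hit by timestamped track points in time
    order to form the abnormal path chain; consecutive hits in the same
    cell collapse into one node (an assumption of this sketch)."""
    chain = []
    for _t, x, y in sorted(track_points):  # sort by timestamp
        cell = cell_of((x, y), origin, cell_size)
        if not chain or chain[-1] != cell:
            chain.append(cell)
    return chain

# (timestamp, x, y) track points crossing three 1 m grid cells.
points = [(0, 0.2, 0.3), (1, 0.8, 0.4), (2, 1.6, 0.5), (3, 2.4, 1.2)]
chain = build_path_chain(points)
```

The resulting chain of cell indices is the discrete spatial expression of the potential risk-evolution path that the trend analysis below operates on.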
The anomaly trend identification is computationally analyzed based on three parameters:
The first parameter is the path-length variation trend, used to judge whether the abnormal path chain continuously expands over time. The analysis period is divided into several consecutive time windows, for example 5 minutes each. In each window, the coordinate set of all active units in the path chain during that period is extracted, the coordinate points are connected in time order, and the total Euclidean length, i.e. the sum of the distances between consecutive points, is calculated, yielding a length sequence over the windows. The lengths of consecutive windows are differenced; if the differences over three consecutive windows are all positive (the path length keeps increasing), the path is judged to be continuously growing. This parameter reflects the spatial spreading trend of the abnormal track and suits risk modes such as a gradually expanding high-temperature zone or gas leakage source.
The second parameter is the continuous density of active units, used to identify whether the track path is spatially concentrated. It is defined as follows: within each path chain, the number of continuously adjacent active-unit pairs, i.e. cell pairs hit consecutively in space, is counted and divided by the total length of the path chain to obtain the continuous hit density per unit length. This density is compared with a first preset density threshold; if it exceeds the threshold, the distribution is judged to be concentrated. A higher continuous density indicates that the abnormal activity is not sporadically distributed but focused on a particular region, which may be a "hot spot" of safety risk, such as a wall corner where an image anomaly occurs several times in succession.
The third parameter is direction consistency, used to evaluate whether the abnormal path chain has a stable movement trend, i.e. whether it keeps advancing along one main direction. The displacement vector of each segment between consecutive units in the path chain is extracted and converted into a unit vector, so that direction is unaffected by displacement length; the included angle between each pair of adjacent unit vectors is then calculated one by one, and if all angles are smaller than a first angle threshold, for example 30 degrees, the path direction is judged stable. This index can identify typical phenomena such as gas leakage spreading in a single direction along a ventilation duct, or cable overheating spreading along the cabling route.
When any two or more of the three trend parameters meet the preset judging conditions set in the system, the current target area is considered to exhibit a structural risk-evolution trend, and the system automatically triggers the risk output control module to enter the high-level potential safety hazard identification process and execute further intervention and result output. For example, while the robot inspects a certain channel, the abnormal path lengths in three consecutive time windows are 8 m, 10 m and 12.5 m, giving a difference sequence of +2 m and +2.5 m; 8 of the 10 active-unit pairs in the path chain are continuously adjacent, giving a density of 0.8, which exceeds the system's density threshold of 0.6; and the included angle between all path segments is smaller than 20 degrees, so the direction is stable. Under the condition "any two of the three indexes are met", the system triggers the high-level potential safety hazard judgment and transmits the result to the risk output control module.
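The three trend parameters and the two-of-three decision can be sketched as follows, reusing the numbers from the inspection example above. Note that, following that worked example, the density here is normalized by the pair count rather than by metric path length; segments of zero length are not guarded against in this sketch.

```python
import math

def length_growing(lengths):
    """Trend 1: path length strictly increases across consecutive windows."""
    return all(b - a > 0 for a, b in zip(lengths, lengths[1:]))

def density_concentrated(adjacent_pairs, total_pairs, threshold=0.6):
    """Trend 2: share of continuously adjacent active-unit pairs
    exceeds the first preset density threshold."""
    return adjacent_pairs / total_pairs > threshold

def direction_stable(segments, max_angle_deg=30.0):
    """Trend 3: every pair of adjacent unit vectors subtends an angle
    below the first angle threshold."""
    units = [(dx / math.hypot(dx, dy), dy / math.hypot(dx, dy))
             for dx, dy in segments]
    for (ux, uy), (vx, vy) in zip(units, units[1:]):
        dot = max(-1.0, min(1.0, ux * vx + uy * vy))
        if math.degrees(math.acos(dot)) >= max_angle_deg:
            return False
    return True

def structural_risk(lengths, adjacent_pairs, total_pairs, segments):
    """Two-of-three rule triggering high-level hazard identification."""
    hits = sum([length_growing(lengths),
                density_concentrated(adjacent_pairs, total_pairs),
                direction_stable(segments)])
    return hits >= 2

# Worked-example inputs: lengths 8, 10, 12.5 m; 8 of 10 adjacent pairs;
# two segments roughly 11 degrees apart (well under 20 degrees).
risk = structural_risk([8.0, 10.0, 12.5], 8, 10, [(1.0, 0.0), (1.0, 0.2)])
```

With these inputs all three parameters are satisfied, so the two-of-three condition holds and the risk output control module would be triggered.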
When the risk output control function generates a safety hidden danger assessment conclusion, the input data according to the risk output control function not only comprises the enhanced acquisition information obtained in the recognition behavior response function, but also synthesizes the spatial path change characteristics output in the track joint analysis function to form quantitative and hierarchical judgment on the current safety risk, so as to determine whether to execute a further intervention instruction. The evaluation mechanism makes a comprehensive decision around three explicit decision elements, each element having been generated by a structural processing chain in a pre-module.
The first type of data is enhanced acquisition information, wherein the enhanced acquisition information refers to image information, thermal imaging information and gas concentration information acquired by an identification behavior response module in a target area at an elevated sampling frequency after an enhanced acquisition mechanism is started. In the acquisition process, the module executes regional focusing processing and strategy adjustment, and the generated new data is endowed with time labels and strategy context information, which is a local refinement and enhancement result of the original inspection data. This information, as one of the risk assessment inputs, provides a high-precision observation data base for the current state of the target area.
The second type of data is spatial path change characteristics, wherein the spatial path change characteristics come from a track joint analysis module and refer to a modeling result of the activity trend of a target area in space, and the modeling result comprises the geometric structure, the time evolution characteristics, the directivity characteristics and the density distribution of an abnormal path chain. The feature is formed by dynamic evolution of an 'active unit' in a path chain, and is a core basis for identifying abnormal development trend of the system. On the basis, the risk output control function takes the following three judging elements as calculation input for generating an evaluation conclusion, wherein the judging element I is the total length of an abnormal path chain, and the abnormal path chain is a space structure formed by sequentially connecting movable units according to time stamps and is used for representing an abnormal behavior path of the robot in a target area. The "total length" is defined as the sum of the Euclidean distances between all adjacent active cells in the path, in meters or relative space. The longer the path, the greater the range over which the abnormal activity may be swept, and the higher the risk level. The parameter is output by the track joint analysis module, and has direct calculation basis and verifiability.
The second judging element is the enhancement amplitude of the abnormal signal. An abnormal signal is a data item in the enhanced acquisition information that deviates significantly from the normal state, obtained by computing the difference against standard data in the historical record of the same target area. The magnitude of deviation is expressed, for example, by the increase in the area of the change region for image changes, the rise of the average temperature above the historical mean for thermal imaging, and the difference between the current value and the stable background concentration for gas concentration. The difference is set as a standardized quantitative index in the system and compared with a preset amplitude threshold to determine whether the signal is a high-intensity anomaly.
It should be noted that the "anomaly signal" here is not directly equivalent to the "active unit"; rather, its spatial location necessarily maps into the aforementioned "target area" grid structure. That is, the spatial projection of an anomaly signal determines its contribution to the path chain, the path chain being a connected representation of the spatial evolution behavior of multiple anomaly signals.
And the third judging element is the average moving speed of the target area in the appointed time, wherein the average moving speed of the target area in the appointed time is defined as the Euclidean distance between the center points of the head and tail two movable units of the abnormal path chain divided by the time difference between the two points, and the unit is meter per second or lattice number per second. The parameter reflects whether the abnormal signal is in a continuous motion trend or not, and can be used for identifying dynamic hidden dangers (such as gas leakage moving along with the airflow, cable overload area expansion and the like). The system can set a speed threshold value, and if the abnormal signal moving speed in a certain area continuously exceeds the speed threshold value, the current risk can be considered to be in a rapid evolution state, and the evaluation level should be improved.
And the risk output control function generates a corresponding potential safety hazard assessment conclusion according to the combined result of the three judging elements and the grading rule set in the system. The evaluation level is generally divided into a plurality of levels, such as general hidden danger, moderate hidden danger, high risk hidden danger and the like, and different intervention strategies can be corresponding. After the evaluation conclusion is generated, the system decides whether to invoke a subsequent intervention execution mechanism, such as path adjustment, risk broadcasting, monitoring and warning, etc., and the relevant contents are fully described in the subsequent dependent claims.
Assume that the total length of an abnormal path chain in a certain area is 12 m, the corresponding average density is 0.7, the continuous expansion time is 15 minutes, the thermal-imaging enhancement amplitude is 1.6 times the standard value, the gas-concentration enhancement value is 30 ppm (20 ppm above the background mean), the head-to-tail distance of the abnormal path is 6 m, and the time span is 120 seconds, giving an average moving speed of 0.05 m/s. After comparing the three indicators against their thresholds, the system triggers a high-risk-level judgment, generates an alarm and starts the intervention mechanism.
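One way the three-element grading could be combined is sketched below. The thresholds and the two-of-three/three-of-three mapping are illustrative assumptions; the disclosure specifies only that the elements are compared against preset grading rules:

```python
# Assumed thresholds; the patent leaves the concrete values to the implementer.
LENGTH_THRESHOLD = 10.0      # total abnormal path-chain length, metres
ENHANCEMENT_THRESHOLD = 1.5  # enhancement amplitude vs. the standard value
SPEED_THRESHOLD = 0.03       # average moving speed, m/s

def assess_risk(chain_length, enhancement_ratio, avg_speed):
    """Combine the three judging elements into an evaluation level."""
    hits = sum([
        chain_length > LENGTH_THRESHOLD,
        enhancement_ratio > ENHANCEMENT_THRESHOLD,
        avg_speed > SPEED_THRESHOLD,
    ])
    return {3: "high", 2: "medium"}.get(hits, "low")

# Figures from the worked example: 12 m chain, 1.6x enhancement, 0.05 m/s.
assert assess_risk(12.0, 1.6, 0.05) == "high"
```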
The risk output control function automatically starts the intervention execution mechanism immediately after the potential-safety-hazard assessment conclusion is generated. The mechanism is a comprehensive response flow covering on-site execution and information reporting; its aim is to rapidly intervene in the dynamic scheduling of the inspection task based on the risk identification result, avoid potential safety risks, and ensure the integrity of recording and transmission of abnormal events.
Sending a path adjustment command to the robot scheduling control system: the path adjustment command is the first execution action generated by the risk output control module. It is directed at the scheduling control system of the robot body and redefines the current task path so that the robot does not continue to approach or stay in the area where the risk exists. The path adjustment command comprises two parts. The first is a current-position withdrawal instruction, commanding the robot to immediately stop the inspection behaviour near the current track point, move a certain distance in a preset safety direction, and withdraw from the current target area. This direction may be the reverse of the path, or may be calculated from the system's predefined emergency evacuation vector, and is typically 180 degrees opposite to the direction of the current abnormal path chain so as to maximise the distance from the risk source. The second is the coordinate positioning rule for the next detection point: after the withdrawal action is completed, the system re-plans the next detection point on the inspection path for the robot. The positioning rule performs an angle adjustment based on the current abnormal path chain direction, so that the next target point falls within a safety area in the perpendicular direction. Here, "perpendicular direction" refers to a direction forming an angle of 90 degrees with the main direction of the current path chain, chosen to avoid the risk diffusion direction. After calculating the main direction from the path chain vector, the system rotates it to obtain the perpendicular direction and, combining this with positions in the spatial grid that are marked as inactive units and are far from the set of abnormal points, generates the coordinates of the next target point.
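A minimal sketch of this geometry, assuming a 2-D coordinate frame, a hypothetical retreat distance of 2 m and step of 3 m, and omitting the final filtering against inactive units far from the abnormal point set:

```python
import math

def withdrawal_and_next_point(pos, chain_dir, retreat_dist=2.0, step=3.0):
    """Given the robot position and the main direction vector of the abnormal
    path chain, compute (a) a retreat point 180 degrees opposite the chain
    direction and (b) a candidate next detection point rotated 90 degrees
    from the chain direction (the 'perpendicular direction' safety area)."""
    dx, dy = chain_dir
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # (a) retreat opposite the chain's main direction
    retreat = (pos[0] - retreat_dist * ux, pos[1] - retreat_dist * uy)
    # (b) rotate the main direction by +90 degrees for the next point
    px, py = -uy, ux
    next_point = (retreat[0] + step * px, retreat[1] + step * py)
    return retreat, next_point

# Chain running along +x from a robot at (10, 5):
# retreat 2 m back along -x, then step 3 m perpendicular (+y).
retreat, nxt = withdrawal_and_next_point((10.0, 5.0), (1.0, 0.0))
assert retreat == (8.0, 5.0)
assert nxt == (8.0, 8.0)
```

In a full implementation the candidate `next_point` would additionally be snapped to a grid cell marked inactive and distant from the abnormal point set, per the rule above.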
Generating a risk information packet: after the path adjustment command is issued, the system synchronously generates a risk information packet, which records the core characteristics of the current risk event and is submitted to the monitoring platform or storage module. The information packet consists of the following fields:
The abnormal signal type identifies the signal source that triggered the current risk judgment and is derived from the identification field added by the information fusion processing module when the early-warning initial value was generated. Specific values may include, but are not limited to: image change anomalies caused by inter-frame change areas or excessive change frequency in the image; thermal imaging anomalies caused by the average pixel temperature-rise value in the target area exceeding a preset range; and gas concentration anomalies caused by an abnormal rise or abrupt change of one or more gas concentrations in the target area relative to the background value. If multiple signals are triggered simultaneously, a combined identifier (e.g. "image + gas") is recorded in the information packet.
The position information is the spatial coordinate of the latest track point of the current abnormal path chain, determined with reference to a coordinate system centred on the geometric centre of the robot body; its precision is supported by the track-recording behaviour that fuses image and inertial data.
The evaluation grade is the grade mark generated by the risk output control module from the combined result of the three judging elements (total length of the abnormal path chain, enhancement amplitude, and average moving speed); it may be divided into three grades, low, medium and high, each corresponding to a different system response strategy. The time tag is the absolute timestamp generated for the information packet; it is kept consistent with the enhanced acquisition information and the track-point data and is used for system-wide log association and event-recurrence tracking.
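The four fields can be sketched as a simple record type (the class and field names are illustrative; the values below are taken from the inspection example that follows):

```python
from dataclasses import dataclass, asdict

@dataclass
class RiskInfoPacket:
    """The four fields named in the description; names are assumptions."""
    signal_type: str  # e.g. "image", "thermal", "gas", or "image+gas"
    position: tuple   # latest track point of the abnormal path chain
    level: str        # "low" / "medium" / "high"
    timestamp: str    # absolute timestamp, shared with the track-point data

packet = RiskInfoPacket(
    signal_type="image+gas",
    position=(34.2, 17.6),
    level="high",
    timestamp="2025-05-28 14:13:12",
)
# asdict() gives a plain dict suitable for serialisation to the
# monitoring platform or storage module.
assert asdict(packet)["level"] == "high"
```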
In a certain inspection process, after the robot enters a ventilation-duct intersection area, the recognition behaviour response module triggers enhanced image and gas-concentration acquisition. The image change frequency increases from 1 time per second to 5 times per second, the gas concentration rises instantaneously by 40 ppm, the path chain length reaches 15 metres, the direction-stability judgment passes, and the enhancement amplitude is 2.1 times the standard value. The risk output control module accordingly generates a high-risk assessment conclusion, then sends a withdrawal instruction that moves the robot 2 metres in the direction opposite to the abnormal path chain at a speed of 0.5 metres per second, and positions the next detection point in a grid area 90 degrees to the left of the path direction. The system simultaneously generates a risk information packet with the type marked as "image + gas", the position coordinates (x=34.2, y=17.6), the grade "high", and the time tag "2025-05-28 14:13:12". This information is sent by the communication system to the upper monitoring platform to trigger a subsequent remote intervention command or start an automatic lockout operation.
The above model formulas are all dimensionless formulas used for numerical calculation; they are fitted from a large amount of collected data through software simulation so as to reflect the latest real situation, and the preset parameters in the formulas are set by a person skilled in the art according to the actual situation.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not be construed as limiting the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such modules are implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. The skilled person may use different methods to implement the described modules for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing is merely a specific embodiment of the present application, and the present application is not limited thereto; any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed herein, and such variations or substitutions shall fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. The robot real-time potential safety hazard identification system based on multi-mode sensor fusion is characterized by comprising a multi-source data acquisition module, an information fusion processing module, an identification behavior response module, a track joint analysis module and a risk output control module, wherein the composition and the cooperative relationship are as follows:
Respectively acquiring image information, thermal imaging information, environmental gas concentration and temperature and humidity change signals through a multi-source data acquisition module, and synchronously marking an acquisition position and a time stamp in a moving state;
The information fusion processing module carries out numerical normalization on various sensing information according to a preset sequence, builds a correlation modeling structure among signals in a cross feature extraction mode, recognizes potential abnormal change features in continuous signals in a rule driving mode, and forms early warning initial values based on behavior correlation logic;
the recognition behavior response module receives the early warning initial value and judges whether to start the enhanced acquisition mechanism according to a threshold comparison rule; if the conditions are met, it correspondingly increases the sampling frequency of the relevant information sources, synchronously activates the track recording behavior of the target area, and takes the newly generated characteristic parameters in the response process as an enhanced result;
The target area is determined by the information fusion processing module based on the spatial position of the abnormal signal when the early warning initial value is generated and serves as the starting position of the trigger track record in the recognition behavior response module; the area is mapped onto the spatial division grids in the track joint analysis module, and the corresponding grid units are taken as the basis for the judgment of active units;
The track joint analysis module is accessed to real-time position data of a target area, responds to the acquired updated information and time sequence change thereof, constructs a space-time evolution path model, and identifies abnormal continuous growth, movement distribution change or aggregation diffusion modes existing in the space-time evolution path model for tracking the abnormal dynamic development process;
And the risk output control module generates a corresponding potential safety hazard assessment conclusion based on the track joint analysis result and the enhanced acquisition information obtained in the recognition behavior response and in combination with a preset risk level classification condition.
2. The system for identifying real-time potential safety hazards of a robot based on multi-mode sensor fusion according to claim 1, wherein image information is acquired as a continuous frame sequence; inter-frame differencing identifies image target changes by background subtraction, and a joint feature is formed in combination with the gradient changes of pixel values in the thermal imaging for locating the image change region;
The acquisition of the environmental gas concentration and temperature and humidity change signals is carried out at fixed time intervals, a time mark and the current position of the robot are added to each acquisition result, the current position is output in an inertial and visual fusion mode, and numerical normalization processing is carried out on all sensing data before the sensing data enter an information fusion processing module.
3. The multi-mode sensor fusion-based robot real-time potential safety hazard identification system according to claim 2, wherein the cross feature extraction mode in the information fusion processing module constructs a multi-feature association model by calculating the linear correlation degree between the image change frequency and the thermal imaging average heating value;
The image change frequency is the average number of occurrences of the change area per unit time, and the average temperature rise value is the average temperature change of all pixels within the unit area; the two are fused within a rule-driven framework using a first coefficient and a second coefficient as weights, the multi-feature association model is updated at regular intervals, and the updated result is used for early warning initial value generation.
4. The system for identifying real-time potential safety hazards of a robot based on multi-mode sensor fusion according to claim 3, wherein the identification behavior response module increases the sampling frequency of image, thermal imaging and gas concentration information from the initial frequency to a first frequency value after receiving an early warning initial value, and activates a data focusing acquisition mechanism of a target area, the mechanism executes area focusing processing in the image acquisition process to enhance the resolution of local images, and sets the duration time limit of response behavior as a first time period value, and the sampling frequency and the acquisition strategy are restored to the initial state after the time limit expires.
5. The system for identifying real-time potential safety hazards of a robot based on multi-modal sensor fusion according to claim 4, wherein the robot position data in track recording behavior is obtained by fusion calculation of characteristic point matching results among visual image frames and inertial measurement acceleration values, and the specific calculation process is as follows:
extracting a first preset number of feature point sets between two adjacent frames of images by using a scale-invariant feature transformation mode, matching the feature point sets by a matching algorithm, and calculating pixel coordinate difference vectors between each pair of matching points, wherein the difference vectors are converted into three-dimensional relative displacement vectors through a camera internal reference matrix;
Acquiring an acceleration signal from an inertial measurement device, performing integral operation on the acceleration signal to obtain a speed vector, and performing integral again to obtain a second relative displacement vector;
The two groups of relative displacement vectors are linearly weighted using a matching confidence value and a preset fusion coefficient as weights respectively, wherein the matching confidence value is the reciprocal of the ratio of the average Euclidean distance of all matched points to the maximum distance;
The fusion coefficient is set to be a fixed proportion according to the system deployment stage and is used for balancing the image matching precision and the inertial signal noise, the final fusion result is subjected to position offset correction by taking the geometric center of the robot body as a reference coordinate system, and a complete track sequence is formed by combining a time stamp.
6. The multi-modal sensor fusion-based real-time potential safety hazard identification system for the robot of claim 5, wherein the space-time evolution path model is constructed by region segmentation: the target region is divided into a plurality of equal cells, each marked as an active or inactive unit; the active units are connected in time-stamp order to form an abnormal path chain, and abnormal trend judgment is carried out by calculating the following three parameters:
firstly, calculating the difference in total Euclidean length of the active-unit coordinate set in the path chain between two consecutive time windows and recording the difference sequence; if the differences in three consecutive time windows are all positive, the path is judged to be growing;
Secondly, the continuous density of the movable units is defined as the number of continuous adjacent movable unit pairs in each path chain divided by the total length of the path chain, and if the density value is greater than a first preset density threshold value, the distribution is judged to be concentrated;
thirdly, direction consistency: the unit direction of each segment vector of the path chain is calculated, along with the included angle between adjacent vectors; if all included angles are smaller than a first angle threshold, the path direction is judged to be stable;
And when at least two of the three parameters meet the preset judging condition, triggering the risk output control module to execute the high-level potential safety hazard identification process.
7. The system for identifying real-time potential safety hazards of a robot based on multi-modal sensor fusion according to claim 6, wherein the risk output control function, when generating the potential safety hazard assessment conclusion, is based not only on the enhanced acquisition information obtained in the identification behavior response function but also combines the spatial path change characteristics output by the trajectory joint analysis function, using the following three judging elements: first, the total length of the abnormal path chain; second, the enhancement amplitude of the abnormal signal, namely the difference between the enhanced acquisition information and the previous data of the same area; and third, the average moving rate of the target area within a specified time.
8. The system for identifying real-time potential safety hazards of a robot based on multi-mode sensor fusion according to claim 7, wherein the risk output control function automatically starts an intervention execution mechanism after generating a potential safety hazard assessment conclusion, and the mechanism comprises sending a path adjustment command to a robot body scheduling control system and simultaneously generating a risk information packet;
The path adjustment command comprises a current position withdrawal instruction and a coordinate positioning rule of a next detection point, wherein the rule performs angle adjustment based on the current abnormal path chain direction and positions the next target point to be a vertical direction safety area;
The generated risk information package comprises an abnormal signal type, position information, an evaluation grade and a time tag.
CN202510775800.9A 2025-06-11 2025-06-11 Real-time safety hazard identification system for robots based on multimodal sensor fusion Pending CN120689840A (en)


Publications (1)

Publication Number Publication Date
CN120689840A true CN120689840A (en) 2025-09-23

Family

ID=97076016



Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120873927A (en) * 2025-09-25 2025-10-31 瓦房店第二防爆电器制造有限公司 Main board control method and system based on multi-source signals
CN120873927B (en) * 2025-09-25 2026-02-03 瓦房店第二防爆电器制造有限公司 Main board control method and system based on multi-source signals
CN120997802A (en) * 2025-10-23 2025-11-21 山东高速千方国际科技有限公司 High-speed hazard warning method and system based on machine vision
CN120997802B (en) * 2025-10-23 2026-01-20 山东高速千方国际科技有限公司 High-speed hazard warning method and system based on machine vision
CN121026243A (en) * 2025-10-29 2025-11-28 陕西金合信息科技股份有限公司 A sensor-based intelligent inspection and monitoring method and system for unmanned aerial vehicles (UAVs)
CN121026243B (en) * 2025-10-29 2026-01-23 陕西金合信息科技股份有限公司 Sensor-based intelligent inspection monitoring method and system for unmanned aerial vehicle


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination