CN118857308B - An outdoor visual positioning system and method for an intelligent fault elimination robot for power transmission lines - Google Patents


Info

Publication number: CN118857308B
Application number: CN202411346052.4A
Authority: CN (China)
Prior art keywords: data, vibration, visual, robot, module
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN118857308A (en)
Inventors: 花国祥, 闫纪源, 黄兴, 李伟伟, 王升旭
Current assignee: Wuxi Guangying Group Co Ltd; Wuxi Power Supply Co of State Grid Jiangsu Electric Power Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Wuxi Guangying Group Co Ltd; Wuxi Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Application filed by Wuxi Guangying Group Co Ltd and Wuxi Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority to: CN202411346052.4A
Publications: CN118857308A (application), CN118857308B (grant)

Classifications

    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C 21/1656 Dead reckoning by integrating acceleration or speed (i.e. inertial navigation) combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C 21/183 Compensation of inertial measurements, e.g. for temperature effects
    • G06F 18/25 Pattern recognition; Analysing; Fusion techniques
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/254 Analysis of motion involving subtraction of images
    • G06V 10/993 Detection or correction of errors; Evaluation of the quality of the acquired pattern
    • H04N 23/682 Control of cameras or camera modules for stable pick-up of the scene; Vibration or motion blur correction
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an outdoor visual positioning system and method for an intelligent defect-eliminating robot for power transmission lines. The system comprises an environment sensing module, an illumination compensation module, a visual-inertial fusion module, a vibration compensation module, an adaptive vision algorithm module, a multi-sensor fusion module, and a path planning and navigation module. The environment sensing module collects surrounding environment data; the illumination compensation module dynamically adjusts the camera parameters; the visual-inertial fusion module compensates for the instability of the robot's visual data during movement or vibration; the vibration compensation module corrects the visual positioning deviation caused by line vibration; the adaptive vision algorithm module dynamically adjusts the parameter settings of the vision system; the multi-sensor fusion module fuses sensor data; and the path planning and navigation module performs accurate path planning. The invention dynamically adjusts the camera parameters, reduces positioning errors, and ensures the positioning accuracy and stability of the robot in complex environments.

Description

Outdoor visual positioning system and method for an intelligent defect-eliminating robot for power transmission lines
Technical Field
The invention relates to the technical field of defect-eliminating robots, and in particular to an outdoor visual positioning system and method for an intelligent defect-eliminating robot for power transmission lines.
Background
In the power industry, power transmission lines are the main channels for power delivery, and their safety and stability are directly related to the reliable operation of the entire power grid. However, transmission lines often span wide regions and are exposed to complex outdoor environments; long-term exposure to wind and rain erosion, temperature changes, illumination differences, and vibration caused by natural and human factors can damage the lines or create defects, threatening the security of the grid.
Traditional transmission line inspection and maintenance relies mainly on manual patrols, which are labor-intensive, inefficient, difficult to carry out under severe environmental conditions, and carry significant safety risks. With technological progress, particularly the development of robotics and computer vision, intelligent inspection robots for transmission lines have gradually become a research hotspot. However, when existing inspection robots perform visual positioning in complex outdoor environments, they often face challenges such as illumination changes, unstable visual data, and line vibration, which seriously affect their positioning accuracy and working efficiency.
Specifically, changes in illumination conditions degrade the quality of the images captured by the camera and affect the accuracy of visual positioning, while the data collected by the visual sensor can become unstable and erroneous when the robot moves or the line vibrates, further reducing positioning accuracy.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the application and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, as well as in the description and the title of the application; these must not be used to limit the scope of the application.
In order to solve the above technical problems, the invention provides the following technical solution: an outdoor visual positioning system for an intelligent defect-eliminating robot for power transmission lines, which mainly comprises:
an environment sensing module, which collects surrounding environment data through sensors;
an illumination compensation module, which dynamically adjusts the camera parameters according to the illumination data provided by the environment sensing module;
a visual-inertial fusion module, which fuses the data of the visual sensor and the inertial sensor and uses visual-inertial odometry to compensate for the instability of the robot's visual data during movement or vibration;
a vibration compensation module, which analyzes the vibration frequency and amplitude of the power transmission line monitored by the environment sensing module and corrects the visual positioning deviation caused by line vibration;
an adaptive vision algorithm module, which can dynamically adjust the parameter settings of the vision system according to environmental changes;
a multi-sensor fusion module, which fuses sensor data using a multi-sensor fusion algorithm; and
a path planning and navigation module, which performs accurate path planning by analyzing the robot's current position on the power transmission line and the surrounding environment information according to the multi-sensor data.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, the sensors include an illumination sensor, a wind speed sensor, a temperature and humidity sensor, and a vibration sensor. The illumination sensor monitors the illumination intensity and its changes in real time; the wind speed sensor collects wind speed information for the environment of the power transmission line; the temperature and humidity sensor collects temperature and humidity information for the environment of the power transmission line; and the vibration sensor detects the vibration frequency and amplitude of the power transmission line and identifies line changes caused by wind or mechanical motion.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, the camera parameters include exposure, gain, and contrast.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, the system further comprises a feedback control module, a remote monitoring module, and a fault detection module;
the feedback control module adjusts the motion parameters of the robot in real time according to the feedback data of each module;
the remote monitoring module allows the running state and positioning information of the robot on the power transmission line to be checked in real time through a remote monitoring system, and works with AR technology to help ground operators adjust the robot remotely;
the fault detection module detects faults and promptly reports sensor anomalies or data failures.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, dynamically adjusting the camera parameters comprises the following steps:
S11, the system sets a threshold for illumination change; when the change in illumination intensity exceeds the set threshold, the dynamic adjustment flow is triggered;
S12, according to the monitored illumination data, the current illumination conditions are classified, the type and amplitude of the illumination change are evaluated by a preset algorithm, and it is judged whether parameter adjustment is needed;
S13, the required compensation amount is calculated from the deviation between the current illumination data and the normal working illumination range, and the compensation amount is then converted into adjustment values of exposure time, gain, and contrast according to a mapping formula;
S14, the control interface of the camera is called, and the exposure time, gain, and contrast are adjusted according to the calculated compensation values;
S15, the camera feeds the adjusted image data back to the system, and the vision system checks the brightness, sharpness, and noise level of the picture with an image analysis algorithm to judge whether the adjustment effect meets expectations;
S16, when the illumination conditions are stable, the system locks the current parameter settings to avoid unnecessary adjustments and to ensure that the vision system continues to provide high-quality images during the stable-illumination stage.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, the formula for calculating the required compensation amount is:

$$\Phi(t)=\int_{t_0}^{t}\frac{\bigl(B_{\mathrm{ideal}}-B(\tau)\bigr)^{2}}{\bigl(L(\tau)+N(\tau)\bigr)+E(\tau)\,L(\tau)+\sum_{t_k\le\tau}e^{-\lambda(\tau-t_k)}\,G(t_k)\,C(t_k)\,\Delta t}\,\mathrm{d}\tau$$

wherein:
Φ(t) denotes the illumination compensation amount at time t;
t₀ denotes the initial time;
t denotes the current time;
B_ideal denotes the ideal image brightness expected by the camera, set as the standard brightness required by the system;
B(t) denotes the brightness of the actual image captured by the camera at the current time t;
L(t) denotes the ambient illumination intensity at time t, acquired in real time by the illumination sensor;
N(t) denotes the ambient noise at the current time t, measured by an ambient noise detector;
G(t) denotes the gain of the camera at the current time t, a parameter the system can adjust dynamically;
E(t) denotes the exposure time of the camera at the current time t, a parameter the system can adjust dynamically;
C(t) denotes the contrast of the camera image at the current time t, a measure of the difference in image brightness;
e^{−λ(t−t_k)} is an exponential decay function used to control the influence of gain and contrast adjustments made at past times t_k on the current compensation amount; λ is the attenuation coefficient, a positive value that determines how strongly past adjustments influence the present (the larger the value, the smaller the influence of past adjustments); Δt is the small step used for the discrete summation of adjustments in the time dimension;
(B_ideal − B(τ))² is the squared difference between the ideal and the actual brightness; squaring amplifies the deviation so that the system also reacts to small brightness deviations;
L(τ) + N(τ) represents the superposition of the ambient illumination intensity and the noise, two key factors affecting image quality; the larger their sum, the stronger the environmental influence and the less compensation is required for a given brightness difference;
e^{−λ(τ−t_k)} G(t_k) C(t_k) is the product of the exponential decay with the gain and the contrast, reflecting their combined effect in the time dimension; the exponential decay controls the effect of past times on the current compensation, while G and C determine how strongly the gain and the contrast adjust the image;
E(τ) L(τ) is the product of the exposure time and the ambient light intensity, which keeps the balance between exposure time and illumination conditions in the compensation calculation and avoids excessive adjustment;
the summation term is the cumulative adjustment in the time dimension; the exponential decay ensures that past gain and contrast have progressively less impact on the present, and the summation expresses the time-accumulation effect in camera parameter adjustment, ensuring that the system can respond dynamically to the environment over time;
the integral from the initial time t₀ to the current time t accumulates the brightness-difference compensation over time, so that the system can track illumination changes and adjust dynamically.
Value range:
Φ(t) takes values in [0, +∞);
when Φ(t) is close to 0, the illumination conditions are close to ideal, the image quality is near the expected value, and no large adjustment is needed;
when Φ(t) increases, the illumination conditions deviate from expectation, and the system needs to increase the exposure or the gain, or adjust the contrast.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, compensating for the instability of the robot's visual data during movement or vibration comprises the following steps:
Step 21, initialize the visual sensor and the inertial sensor and ensure that their data timestamps are synchronized; then acquire visual frame sequence data from the visual sensor and acceleration and angular velocity from the inertial sensor;
Step 22, detect and extract feature points in the current image frame, track the matching relationship of the feature points between adjacent frames, and record the displacement information of the feature points;
Step 23, integrate the acquired acceleration and angular velocity data and calculate the change in the robot's position and attitude at each moment;
Step 24, fuse the inertial data and the visual data with a Kalman filter or an extended Kalman filter, filter out noise and errors, and update the robot's current position information from the fused result;
Step 25, perform visual-inertial odometry on each consecutive frame, calculate the pose changes of the robot at consecutive moments from the image data provided by the visual sensor and the motion data of the inertial sensor, and accumulate the pose result of each frame with the results of previous frames to obtain the complete trajectory of the robot;
Step 26, analyze the high-frequency vibration information in the inertial sensor, identify the motion characteristics of the robot at the moments of vibration, dynamically correct the visual positioning data according to the vibration characteristics, and smooth the noise caused by high-frequency vibration;
Step 27, update the robot's position on the power transmission line in real time from the corrected visual and inertial data and feed the current position back to the path planning and navigation module for the next movement decision;
Step 28, monitor the positioning accuracy in real time, feed it back to the control system, and adjust the compensation algorithm and the sensor parameters according to environmental changes.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, correcting the visual positioning deviation caused by line vibration comprises the following steps:
Step 31, monitoring vibration frequency and vibration amplitude of the power transmission line in real time by a sensor, and analyzing the vibration frequency and the vibration amplitude in each time period to determine the vibration characteristics of the line;
step 32, acquiring inertial data of the robot, ensuring time synchronization with vibration data, synchronizing the inertial data with vibration sensor data, and facilitating subsequent fusion calculation;
Step 33, performing spectrum analysis on the vibration data, determining high-frequency vibration components, and identifying main vibration components affecting visual positioning accuracy according to analysis results;
step 34, extracting and tracking characteristic points in the image, detecting the offset of the characteristic points between adjacent frames, comparing the data of the visual sensor and the inertial sensor, and calculating a visual positioning error;
Step 35, establishing a vibration compensation model based on frequency and amplitude, calculating compensation quantity, correcting sensor data and filtering high-frequency interference;
And 36, adjusting the positions of the visual characteristic points in real time according to the vibration compensation model, correcting visual data deviation, updating corrected position information and ensuring the visual positioning stability of the robot.
As a preferred solution of the outdoor visual positioning system for the intelligent defect-eliminating robot for power transmission lines, establishing the frequency- and amplitude-based vibration compensation model and calculating the compensation amount comprises the following steps:
step 351, preprocessing data acquired by a vibration sensor arranged on a power transmission line by using a filtering algorithm, and extracting key information of frequency and amplitude from the filtered data;
Step 352, performing Fourier transform on the acquired vibration data to obtain a spectrogram of vibration frequency, classifying high-frequency and low-frequency vibration components according to the spectrogram, and analyzing the change of amplitude;
Step 353, define a vibration compensation function C_comp and construct the compensation model;
the vibration compensation function is:

$$C_{\mathrm{comp}}=\alpha\,A_{h}\,g\!\left(f_{h};k_{1}\right)+\beta\,A_{l}\,g\!\left(f_{l};k_{2}\right)$$

wherein:
A_h and A_l denote the amplitudes of the high-frequency and low-frequency vibration components, respectively;
f_h and f_l denote the frequencies of the high-frequency and low-frequency vibration components, respectively;
α and β are influence coefficients that adjust the contributions of the high-frequency and low-frequency vibration to the compensation amount;
k₁ and k₂ are adjustment parameters that control the response of the compensation function to frequency, through the frequency-response term g(·; k);
Step 354, substitute the acquired frequency and amplitude data into the compensation function C_comp to calculate the real-time compensation amount, and use the compensation amount to adjust the feature points acquired by the visual sensor so as to reduce the positioning error;
Step 355, apply the calculated compensation amount to the visual positioning system in real time to correct the positions of the feature points; the feedback system automatically adjusts the compensation model parameters according to the correction result, ensuring that the visual positioning system adapts to different vibration conditions.
A positioning method using the outdoor visual positioning system of the intelligent defect-eliminating robot for power transmission lines comprises the following steps:
Collecting environmental data around the power transmission line by various sensors in the environmental perception module, wherein the environmental data comprise illumination intensity, wind speed, temperature and humidity and line vibration data;
Starting a visual sensor, acquiring a visual image sequence, and synchronously starting an inertial sensor to ensure that the time stamps of visual data and inertial data are synchronous;
Triggering an illumination compensation module according to illumination data acquired by the environment sensing module, and dynamically adjusting camera parameters to adapt to current illumination conditions so as to ensure stable image quality;
extracting visual characteristic points from a current image frame, tracking the displacement of the characteristic points in adjacent frames, and recording the displacement information of the characteristic points to perform preliminary visual positioning;
integrating acceleration and angular velocity data of the inertial sensor, calculating pose variation of the robot at each moment, and estimating displacement and angle variation of the robot;
The visual inertial fusion module is used for fusing visual data with inertial sensor data, and a Kalman filter or an extended Kalman filter is used for carrying out data filtering, so that noise and errors are eliminated, and the position information of the robot is updated in real time;
Monitoring the vibration frequency and amplitude of the power transmission line through a vibration sensor, performing spectrum analysis on vibration data, and extracting high-frequency vibration components affecting visual positioning;
Calculating vibration compensation quantity according to a frequency and amplitude-based compensation model constructed by the vibration compensation module, correcting visual positioning deviation caused by line vibration, and correcting the position of a characteristic point in visual data;
the multi-sensor fusion module is used for fusing the environment data, the visual data, the inertia data and the vibration compensation data to acquire more accurate robot position information;
According to the fused positioning data, combining with the geographical position information of the power transmission line, planning a moving path of the robot on the power transmission line through a path planning and navigation module, so as to ensure that the robot can stably and accurately travel;
the positioning accuracy of the robot is monitored in real time through a feedback control module, the motion parameters of the robot are adjusted if necessary, and the compensation algorithm and the sensor parameters are dynamically adjusted according to the environmental change;
The ground operator can check the positioning information and the running state of the robot in real time through the remote monitoring module to carry out remote adjustment, and meanwhile, the fault detection module is used for monitoring system abnormality and alarming in time.
The invention has the beneficial effects that:
1. Through the combined use of the environment sensing module, the illumination compensation module, the visual-inertial fusion module, and the vibration compensation module, the system can effectively handle illumination changes, unstable visual data, and the disturbance caused by line vibration. The illumination compensation module dynamically adjusts the camera parameters to adapt to different illumination conditions, while the visual-inertial fusion module and the vibration compensation module reduce the positioning errors caused by vibration and movement. Together, these measures ensure the positioning accuracy and stability of the robot in complex environments and improve the accuracy and efficiency of the defect-elimination task.
2. The adaptive vision algorithm module in the system can automatically adjust the parameter settings of the vision system, such as exposure time and contrast, according to changes in the external environment, so as to adapt to different visual environments and ensure the validity and accuracy of the visual information. This adaptive capability greatly enhances the robot's adaptability to complex and changeable outdoor environments. Meanwhile, the multi-sensor fusion module integrates data from different sensors, improving the reliability and redundancy of the data, enabling the system to perceive and understand the surrounding environment more comprehensively, and providing more accurate and richer information for subsequent path planning and navigation. The path planning and navigation module, based on the multi-source fused data and combined with intelligent algorithms, achieves accurate path planning and navigation, making the robot's operation on the power transmission line more efficient and safer. Together, these features raise the intelligence level of the system and provide strong technical support for intelligent inspection and maintenance of power transmission lines.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort. In the drawings:
Fig. 1 is a block diagram of the outdoor visual positioning system of the intelligent defect-eliminating robot for power transmission lines.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; however, the present invention may be practiced in ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Further, in describing the embodiments of the present invention in detail, the cross-sectional views of the device structure are not partially enlarged to a uniform scale for convenience of description, and the schematic drawings are only examples, which should not limit the scope of protection of the present invention. In addition, the three dimensions of length, width, and depth should be included in actual fabrication.
Referring to fig. 1, for one embodiment of the present invention, an outdoor visual positioning system of an intelligent defect eliminating robot for a power transmission line is provided, which mainly includes:
the environment sensing module is used for collecting surrounding environment data including illumination intensity, wind speed, temperature, humidity, vibration frequency of the power transmission line and the like through the sensor. The module provides environmental information input for parameter adjustment of the vision system and optimization of the overall positioning system.
The sensor comprises:
The illumination sensor is used for monitoring illumination intensity and change in real time;
the wind speed sensor is used for collecting wind speed information of the environment where the power transmission line is located;
The temperature and humidity sensor is used for collecting temperature and humidity information of the environment where the power transmission line is located;
And the vibration sensor is used for detecting the vibration frequency and amplitude of the power transmission line and judging line variation caused by wind power or mechanical motion.
The illumination compensation module dynamically adjusts the camera parameters according to the illumination data provided by the environment sensing module; the parameters include exposure, gain, contrast, and the like, so that the sharpness and stability of the vision system can be maintained under severe illumination changes (such as strong light, shadow, and night). The module cooperates with the environment sensing module to adjust the image acquisition quality in real time.
The visual-inertial fusion module fuses the data of the visual sensor and the inertial sensor and uses visual-inertial odometry to compensate for the instability of the robot's visual data during movement or vibration; in particular, position correction is performed from the inertial data when the vibration is large.
And the vibration compensation module is used for analyzing the vibration frequency and amplitude of the power transmission line monitored by the environment sensing module and correcting the visual positioning deviation caused by line vibration. The module can effectively improve the positioning accuracy of the robot in the windy weather or under the condition of mechanical movement of the line.
The adaptive vision algorithm module can dynamically adjust the parameter setting of the vision system according to environmental changes (such as wind speed, temperature, humidity and the like). The module acquires real-time environmental data through the environmental perception module, and adjusts algorithm parameters to enhance the anti-interference capability of the system, so that higher visual positioning accuracy can be maintained in severe weather.
The multi-sensor fusion module adopts a multi-sensor fusion algorithm to fuse sensor data, so that the overall positioning accuracy of the robot is improved, and other sensors can provide effective compensation especially when a single sensor fails or the data is unreliable.
And the path planning and navigation module performs accurate path planning by analyzing the position information and the surrounding environment information of the current robot on the power transmission line according to the data of the multiple sensors. The module ensures that the robot can avoid obstacles and travel according to a planned path on the basis of high-precision positioning.
The system also comprises a feedback control module, a remote monitoring module and a fault detection module;
The feedback control module adjusts the motion parameters of the robot in real time according to the feedback data of each module, including speed, gesture and position control, so as to ensure that the robot can stably and accurately travel and operate in a complex environment;
The remote monitoring module allows the running state and positioning information of the robot on the power transmission line to be checked in real time through a remote monitoring system, and works with AR technology to help ground operators adjust the robot remotely;
the fault detection module is used for detecting faults and timely reporting sensor abnormality or data failure.
These modules cooperate with each other to enhance the outdoor visual positioning accuracy and stability of the intelligent defect-eliminating robot for power transmission lines.
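To make this cooperation concrete, the following minimal Python sketch wires the modules described above into a single positioning step. It is only an illustration of the module boundaries and data flow; every class, method, and numeric rule here is an assumption of this sketch, not a definition taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentData:
    """Readings gathered by the environment sensing module (units assumed)."""
    illumination: float   # lux
    wind_speed: float     # m/s
    temperature: float    # degrees Celsius
    humidity: float       # percent
    vib_frequency: float  # Hz
    vib_amplitude: float  # mm

class IlluminationCompensation:
    def adjust_camera(self, env: EnvironmentData) -> dict:
        # Placeholder rule: darker scenes get longer exposure and more gain.
        dark = max(0.0, 500.0 - env.illumination) / 500.0
        return {"exposure": 1.0 + dark, "gain": 1.0 + 0.5 * dark, "contrast": 1.0}

class VisualInertialFusion:
    def fuse(self, visual_pose, inertial_pose):
        # Placeholder for the Kalman-filter fusion: average the two estimates.
        return [(v + i) / 2.0 for v, i in zip(visual_pose, inertial_pose)]

class VibrationCompensation:
    def correct(self, pose, env: EnvironmentData):
        # Placeholder correction: scale the pose components down as amplitude grows.
        damping = 1.0 / (1.0 + 0.1 * env.vib_amplitude)
        return [p * damping for p in pose]

class PositioningSystem:
    """Calls the modules in the order described in the embodiment."""
    def __init__(self):
        self.illum = IlluminationCompensation()
        self.vio = VisualInertialFusion()
        self.vib = VibrationCompensation()

    def step(self, env: EnvironmentData, visual_pose, inertial_pose):
        camera_params = self.illum.adjust_camera(env)       # illumination compensation
        fused = self.vio.fuse(visual_pose, inertial_pose)   # visual-inertial fusion
        corrected = self.vib.correct(fused, env)            # vibration compensation
        return camera_params, corrected

if __name__ == "__main__":
    env = EnvironmentData(300.0, 4.2, 21.0, 55.0, 12.0, 2.5)
    print(PositioningSystem().step(env, [1.0, 2.0, 0.5], [1.1, 1.9, 0.6]))
```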
Specifically, dynamically adjusting the camera parameters includes the following steps:
S11, setting a threshold value of illumination change by the system, and triggering a dynamic adjustment flow when the illumination intensity change exceeds the set threshold value;
s12, classifying current illumination conditions, such as strong light, weak light, backlight or shadow environments, according to the monitored illumination data, evaluating the type and the amplitude of illumination change through a preset algorithm, and judging whether parameter adjustment is needed;
s13, calculating a required compensation amount according to the deviation between the current illumination data and the normal working illumination range, and then converting the compensation amount into adjustment values of exposure time, gain and contrast according to a mapping formula;
The mapping formulas are:

$$\Delta E = w_{E}\,f_{E}\!\left(\Phi(t);k_{E}\right),\qquad \Delta G = w_{G}\,f_{G}\!\left(\Phi(t);k_{G}\right),\qquad \Delta C = w_{C}\,f_{C}\!\left(\Phi(t);k_{C}\right)$$

wherein:
ΔE is the adjustment amount of the exposure time; w_E is the weight coefficient of the exposure-time adjustment and controls the sensitivity of the exposure to the illumination compensation; f_E is a nonlinear mapping function that maps the illumination compensation amount Φ(t) to an appropriate exposure-time adjustment value; k_E is a sensitivity coefficient that controls the influence of the compensation amount on the exposure adjustment;
ΔG is the adjustment amount of the gain; w_G is the weight coefficient of the gain adjustment; f_G is a nonlinear mapping function that converts the illumination compensation amount into a gain adjustment value; k_G controls the amplitude of the gain change, so that the gain is increased when the light is insufficient and reduced when the light is too strong;
ΔC is the adjustment amount of the contrast; w_C is the weight coefficient of the contrast adjustment; f_C is a mapping function that converts the illumination compensation amount into a contrast adjustment value; k_C controls the frequency and amplitude of the contrast adjustment, ensuring that the contrast fluctuates within an appropriate range.
Combining all adjustments, the final exposure time, gain, and contrast values can be expressed as:

$$E_{\mathrm{new}} = E_{0} + \Delta E,\qquad G_{\mathrm{new}} = G_{0} + \Delta G,\qquad C_{\mathrm{new}} = C_{0} + \Delta C$$

where E₀, G₀, and C₀ are the camera's initial exposure time, gain, and contrast settings, respectively; the adjustment amounts of the three parameters are obtained from the illumination compensation amount, ensuring the adaptability of the camera in different illumination environments.
The weights w_E, w_G, and w_C may be set according to the priority of the actual scene. In general, the exposure time is adjusted first in a low-light environment, and the gain and contrast are fine-tuned on top of the exposure adjustment. The weights can be changed in real time according to the environmental conditions to ensure the accuracy of the overall compensation.
S14, calling a control interface of the camera, and adjusting exposure time, gain and contrast according to the calculated compensation value;
S15, the camera feeds back the adjusted image data to the system, and the vision system checks the brightness, definition and noise level of the picture through an image analysis algorithm to judge whether the adjustment effect reaches the expected value;
s16, when the illumination condition is stable, the system locks the current parameter setting, unnecessary adjustment is avoided, and the vision system is ensured to continuously provide high-quality images in the illumination stable stage.
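The S11-S16 flow above amounts to a threshold-triggered control loop. The Python sketch below is a simplified, self-contained illustration of that loop under assumed constants; the trigger threshold, target brightness, and mapping gains are placeholders of this sketch and are not values taken from the patent.

```python
import math

LIGHT_CHANGE_THRESHOLD = 50.0    # lux change that triggers re-adjustment (S11, assumed value)
TARGET_BRIGHTNESS = 128.0        # desired mean image brightness on an 8-bit scale (assumed)

def compensation_amount(actual_brightness: float, ambient_lux: float, noise: float) -> float:
    """Simplified stand-in for the compensation formula of step S13."""
    return (TARGET_BRIGHTNESS - actual_brightness) ** 2 / (ambient_lux + noise + 1e-6)

def adjust_parameters(exposure_ms: float, gain_db: float, contrast: float,
                      brightness: float, lux: float, noise: float, last_lux: float):
    """Threshold-triggered adjustment loop body covering steps S11-S14."""
    if abs(lux - last_lux) < LIGHT_CHANGE_THRESHOLD:      # S11: below threshold, keep settings
        return exposure_ms, gain_db, contrast
    phi = compensation_amount(brightness, lux, noise)     # S13: compensation amount
    # S13: nonlinear mapping of the compensation amount to the three adjustment values
    delta = math.tanh(0.001 * phi)
    sign = 1.0 if brightness < TARGET_BRIGHTNESS else -1.0
    exposure_ms += sign * 5.0 * delta                     # exposure adjusted first
    gain_db += sign * 2.0 * delta                         # gain fine-tuned on top
    contrast += sign * 0.1 * delta
    return exposure_ms, gain_db, contrast                 # S14: values sent to the camera

# Example: a sudden drop from bright daylight to shade darkens the image,
# so the loop raises exposure, gain and contrast slightly.
print(adjust_parameters(10.0, 0.0, 1.0, brightness=60.0, lux=800.0, noise=5.0, last_lux=20000.0))
```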
Further, the formula for calculating the required compensation amount is:

$$\Phi(t)=\int_{t_0}^{t}\frac{\bigl(B_{\mathrm{ideal}}-B(\tau)\bigr)^{2}}{\bigl(L(\tau)+N(\tau)\bigr)+E(\tau)\,L(\tau)+\sum_{t_k\le\tau}e^{-\lambda(\tau-t_k)}\,G(t_k)\,C(t_k)\,\Delta t}\,\mathrm{d}\tau$$

wherein:
Φ(t) denotes the illumination compensation amount at time t;
t₀ denotes the initial time;
t denotes the current time;
B_ideal denotes the ideal image brightness expected by the camera, set as the standard brightness required by the system;
B(t) denotes the brightness of the actual image captured by the camera at the current time t;
L(t) denotes the ambient illumination intensity at time t, acquired in real time by the illumination sensor;
N(t) denotes the ambient noise at the current time t, measured by an ambient noise detector;
G(t) denotes the gain of the camera at the current time t, a parameter the system can adjust dynamically;
E(t) denotes the exposure time of the camera at the current time t, a parameter the system can adjust dynamically;
C(t) denotes the contrast of the camera image at the current time t, a measure of the difference in image brightness;
e^{−λ(t−t_k)} is an exponential decay function used to control the influence of gain and contrast adjustments made at past times t_k on the current compensation amount; λ is the attenuation coefficient, typically a positive value, which determines how strongly past adjustments influence the present (the larger the value, the smaller the influence of past adjustments); Δt is the small step used for the discrete summation of adjustments in the time dimension;
(B_ideal − B(τ))² is the squared difference between the ideal and the actual brightness; squaring amplifies the deviation so that the system also reacts to small brightness deviations;
L(τ) + N(τ) represents the superposition of the ambient illumination intensity and the noise, two key factors affecting image quality; the larger their sum, the stronger the environmental influence and the less compensation is required for a given brightness difference;
e^{−λ(τ−t_k)} G(t_k) C(t_k) is the product of the exponential decay with the gain and the contrast, reflecting their combined effect in the time dimension; the exponential decay controls the effect of past times on the current compensation, while G and C determine how strongly the gain and the contrast adjust the image;
E(τ) L(τ) is the product of the exposure time and the ambient light intensity, which keeps the balance between exposure time and illumination conditions in the compensation calculation and avoids excessive adjustment;
the summation term is the cumulative adjustment in the time dimension; the exponential decay ensures that past gain and contrast have progressively less impact on the present, and the summation expresses the time-accumulation effect in camera parameter adjustment, ensuring that the system can respond dynamically to the environment over time;
the integral from the initial time t₀ to the current time t accumulates the brightness-difference compensation over time, so that the system can track illumination changes and adjust dynamically.
Value range:
Φ(t) takes values in [0, +∞);
when Φ(t) is close to 0, the illumination conditions are close to ideal, the image quality is near the expected value, and no large adjustment is needed;
when Φ(t) increases, the illumination conditions deviate from expectation, and the system needs to increase the exposure or the gain, or adjust the contrast.
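For reference, the compensation integral written above can be evaluated numerically from logged sensor samples. The sketch below assumes uniformly sampled logs of brightness, illumination, noise, gain, contrast, and exposure, and implements the denominator form given above; the sample values and the decay constant are illustrative only.

```python
import math

def illumination_compensation(b_ideal, b, l, n, g, c, e, dt, lam=0.5):
    """Numerically accumulate the compensation amount Phi(t) from sampled data.

    b, l, n, g, c, e are equal-length lists sampled every dt seconds:
    actual brightness, ambient illumination, noise, gain, contrast, exposure time.
    """
    phi = 0.0
    for i in range(len(b)):
        tau = i * dt
        # Exponentially decayed accumulation of past gain/contrast adjustments.
        history = sum(math.exp(-lam * (tau - k * dt)) * g[k] * c[k] * dt
                      for k in range(i + 1))
        denom = (l[i] + n[i]) + e[i] * l[i] + history
        phi += (b_ideal - b[i]) ** 2 / denom * dt   # integrate the brightness deviation
    return phi

# Example with three samples taken 0.1 s apart (all values illustrative).
print(illumination_compensation(
    b_ideal=128.0,
    b=[100.0, 110.0, 120.0],
    l=[300.0, 320.0, 350.0],
    n=[5.0, 5.0, 4.0],
    g=[1.0, 1.1, 1.1],
    c=[1.0, 1.0, 1.05],
    e=[10.0, 10.0, 9.5],
    dt=0.1,
))
```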
Specifically, the method for compensating the instability of visual data of the robot in the moving or vibrating process comprises the following steps:
step 21, initializing a visual sensor and an inertial sensor, ensuring the data time stamps of the visual sensor and the inertial sensor to be synchronous, then acquiring visual frame sequence data through the visual sensor, and acquiring acceleration and angular velocity through the inertial sensor;
Step 22, detecting and extracting characteristic points in the current image frame by using image data in a visual sensor and adopting a characteristic extraction algorithm (such as SIFT, ORB and the like), tracking the characteristic point matching relation between adjacent frames, and recording the displacement information of the characteristic points;
step 23, acquiring acceleration and angular velocity data by using an inertial sensor, integrating the acquired acceleration and angular velocity data, and calculating the position and posture change of the robot at each moment;
step 24, fusing the inertial data and the visual data through a Kalman filter or an extended Kalman filter, filtering noise and errors, and updating the current position information of the robot according to the fused result;
Step 25, performing visual inertial odometer calculation on each continuous frame, calculating pose changes of the robot at continuous moments according to image data provided by a visual sensor and motion data of the inertial sensor, and accumulating pose results of each frame with results of previous frames to obtain a complete moving track of the robot;
Step 26, analyzing high-frequency vibration information in the inertial sensor, identifying the motion characteristic of the robot at the vibration moment, dynamically correcting the visual positioning data according to the vibration characteristic, and smoothing noise caused by high-frequency vibration;
step 27, updating the position information of the robot on the power transmission line in real time according to the corrected vision and inertia data, and feeding back the current position information to a path planning and navigation module so as to carry out the next movement decision;
and 28, monitoring the positioning accuracy in real time, feeding back to the control system, and adjusting the compensation algorithm and the sensor parameters according to the environmental change.
Through these steps, the system can effectively fuse the visual and inertial data and accurately calculate the robot's position on the power transmission line using visual-inertial odometry (VIO); in particular, when the robot is subjected to external vibration or movement, errors in the visual data can be compensated in real time so that positioning accuracy is maintained.
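As a heavily simplified illustration of the predict/update pattern in step 24, the scalar Kalman filter below fuses a one-dimensional inertial prediction with a visual position fix. Real visual-inertial odometry estimates full 6-DoF poses from feature tracks; the state, noise variances, and sample values here are assumptions of this sketch.

```python
def kalman_fuse_1d(x, p, accel, dt, visual_pos, q=0.05, r=0.2):
    """One predict/update cycle of a scalar Kalman filter (step 24, simplified).

    x, p        : current position estimate and its variance
    accel, dt   : inertial acceleration sample and time step (prediction)
    visual_pos  : position implied by visual feature tracking (measurement)
    q, r        : assumed process and measurement noise variances
    """
    # Predict from inertial data (double integration collapsed to one step here).
    x_pred = x + 0.5 * accel * dt * dt
    p_pred = p + q
    # Update with the visual measurement.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (visual_pos - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Accumulate the fused estimate over a few frames (step 25), values illustrative.
x, p = 0.0, 1.0
for accel, vis in [(0.2, 0.01), (0.1, 0.03), (-0.1, 0.04)]:
    x, p = kalman_fuse_1d(x, p, accel, dt=0.1, visual_pos=vis)
print(x, p)
```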
Specifically, correcting the visual positioning deviation due to line vibration includes the steps of:
Step 31, installing vibration sensors (such as an accelerometer and a gyroscope) on a power transmission line, monitoring the vibration frequency and the vibration amplitude of the power transmission line in real time through the vibration sensors, acquiring real-time data of the line vibration through an environment sensing module, and analyzing the vibration frequency and the vibration amplitude in each time period to determine the characteristics of the line vibration;
step 32, acquiring inertial data of the robot through an inertial measurement unit of the robot, ensuring time synchronization with vibration data, synchronizing the inertial data with vibration sensor data, and facilitating subsequent fusion calculation;
Step 33, carrying out frequency spectrum analysis on the vibration frequency and amplitude of the circuit, determining high-frequency vibration components, and identifying main vibration components affecting the visual positioning accuracy according to analysis results;
Step 34, acquiring real-time visual data from a robot camera, including image frames of environmental feature points, extracting and tracking feature points in the images, detecting the offset of the feature points between adjacent frames, comparing the data of a visual sensor and an inertial sensor, and calculating a visual positioning error;
step 35, establishing a vibration compensation model according to the vibration frequency, amplitude and visual positioning error, calculating compensation quantity, correcting sensor data by using the technology such as Kalman filtering, and filtering high-frequency interference;
And 36, adjusting the positions of the vision characteristic points in real time according to the correction amount output by the vibration compensation model, correcting vision data deviation, updating corrected position information and ensuring the vision positioning stability of the robot.
Through these steps, the system can effectively analyze the vibration characteristics of the power transmission line and correct the visual positioning deviation they cause. The vibration compensation module combines the data of multiple sensors and dynamically corrects the instability of the visual data through the compensation model and filtering algorithm, ensuring accurate positioning of the robot in complex environments.
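Steps 31 to 33 amount to a spectral decomposition of the vibration signal. The NumPy sketch below extracts the dominant low-frequency and high-frequency components from a synthetic accelerometer trace; the 10 Hz boundary between the two bands and the synthetic signal are assumptions of this sketch.

```python
import numpy as np

def dominant_components(signal, sample_rate, split_hz=10.0):
    """Return (frequency, amplitude) of the dominant low- and high-frequency components."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n      # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    low = freqs < split_hz
    low_idx = np.argmax(np.where(low, spectrum, 0.0))     # strongest bin below the split
    high_idx = np.argmax(np.where(~low, spectrum, 0.0))   # strongest bin above the split
    return (freqs[low_idx], spectrum[low_idx]), (freqs[high_idx], spectrum[high_idx])

# Synthetic line vibration: a 2 Hz galloping component plus a 25 Hz aeolian component.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = 1.5 * np.sin(2 * np.pi * 2.0 * t) + 0.4 * np.sin(2 * np.pi * 25.0 * t)
(low_f, low_a), (high_f, high_a) = dominant_components(trace, fs)
print(f"low: {low_f:.1f} Hz, A={low_a:.2f}; high: {high_f:.1f} Hz, A={high_a:.2f}")
```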
Further, a vibration compensation model based on frequency and amplitude is established, and the compensation amount is calculated by the following steps:
step 351, preprocessing data acquired by a vibration sensor arranged on a power transmission line by using a filtering algorithm, and extracting key information of frequency and amplitude from the filtered data;
Step 352, performing Fourier transform on the acquired vibration data to obtain a spectrogram of vibration frequency, classifying high-frequency and low-frequency vibration components according to the spectrogram, and analyzing the change of amplitude;
Step 353, define a vibration compensation function C_comp and construct the compensation model;
the vibration compensation function is:

$$C_{\mathrm{comp}}=\alpha\,A_{h}\,g\!\left(f_{h};k_{1}\right)+\beta\,A_{l}\,g\!\left(f_{l};k_{2}\right)$$

wherein:
A_h and A_l denote the amplitudes of the high-frequency and low-frequency vibration components, respectively;
f_h and f_l denote the frequencies of the high-frequency and low-frequency vibration components, respectively;
α and β are influence coefficients that adjust the contributions of the high-frequency and low-frequency vibration to the compensation amount;
k₁ and k₂ are adjustment parameters that control the response of the compensation function to frequency, through the frequency-response term g(·; k);
Step 354, substitute the acquired frequency and amplitude data into the compensation function C_comp to calculate the real-time compensation amount, and use the compensation amount to adjust the feature points acquired by the visual sensor so as to reduce the positioning error;
Step 355, apply the calculated compensation amount to the visual positioning system in real time to correct the positions of the feature points; the feedback system automatically adjusts the compensation model parameters according to the correction result, ensuring that the visual positioning system adapts to different vibration conditions.
Through the steps, the vibration compensation model can dynamically calculate and apply the compensation amount according to the frequency and amplitude information. Therefore, visual positioning deviation caused by vibration of the power transmission line can be effectively reduced, and stable positioning and operation of the robot are ensured.
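Continuing the spectrum-analysis sketch above, steps 353 to 355 can be illustrated as follows. Since the exact frequency-response term of the patented model is not reproduced here, an exponential roll-off exp(-k*f) is assumed for g(f; k), and all coefficients and feature-point coordinates are illustrative.

```python
import math

def vibration_compensation(a_high, f_high, a_low, f_low,
                           alpha=0.6, beta=0.4, k1=0.02, k2=0.1):
    """Compensation amount from high/low-frequency amplitude and frequency (steps 353-354).

    g(f; k) = exp(-k * f) is an assumed frequency-response shape.
    """
    return alpha * a_high * math.exp(-k1 * f_high) + beta * a_low * math.exp(-k2 * f_low)

def correct_feature_points(points, comp, direction=(0.0, 1.0)):
    """Shift tracked feature points against the estimated vibration offset (step 355)."""
    dx, dy = direction[0] * comp, direction[1] * comp
    return [(x - dx, y - dy) for (x, y) in points]

comp = vibration_compensation(a_high=0.4, f_high=25.0, a_low=1.5, f_low=2.0)
print(correct_feature_points([(120.0, 85.0), (300.5, 92.3)], comp))
```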
In summary, through the combined use of the environment sensing module, the illumination compensation module, the visual-inertial fusion module, and the vibration compensation module, the system can effectively handle the disturbances caused by illumination changes, unstable visual data, and line vibration. The camera parameters are dynamically adjusted by the illumination compensation module to adapt to different illumination conditions, while the visual-inertial fusion module and the vibration compensation module reduce the positioning errors caused by vibration and movement; these measures ensure the positioning accuracy and stability of the robot in complex environments and improve the accuracy and efficiency of the defect-elimination task. The adaptive vision algorithm module can automatically adjust the parameter settings of the vision system, such as exposure time and contrast, according to changes in the external environment, adapting to different visual environments and ensuring the validity and accuracy of the visual information; this adaptive capability greatly enhances the robot's adaptability to complex and changeable outdoor environments. Meanwhile, the multi-sensor fusion module integrates data from different sensors, improving the reliability and redundancy of the data, enabling the system to perceive and understand the surrounding environment more comprehensively, and providing more accurate and richer information for subsequent path planning and navigation. The path planning and navigation module, based on the multi-source fused data and combined with intelligent algorithms, achieves accurate path planning and navigation, making the robot's operation on the power transmission line more efficient and safer. Together, these features raise the intelligence level of the system and provide strong technical support for intelligent inspection and maintenance of power transmission lines.
A positioning method using the outdoor visual positioning system of the intelligent defect-eliminating robot for power transmission lines comprises the following steps:
Collecting environmental data around the power transmission line by various sensors in the environmental perception module, wherein the environmental data comprise illumination intensity, wind speed, temperature and humidity and line vibration data;
Starting a visual sensor, acquiring a visual image sequence, and synchronously starting an inertial sensor to ensure that the time stamps of visual data and inertial data are synchronous;
Triggering an illumination compensation module according to illumination data acquired by the environment sensing module, and dynamically adjusting camera parameters to adapt to current illumination conditions so as to ensure stable image quality;
extracting visual characteristic points from a current image frame, tracking the displacement of the characteristic points in adjacent frames, and recording the displacement information of the characteristic points to perform preliminary visual positioning;
integrating acceleration and angular velocity data of the inertial sensor, calculating pose variation of the robot at each moment, and estimating displacement and angle variation of the robot;
The visual inertial fusion module is used for fusing visual data with inertial sensor data, and a Kalman filter or an extended Kalman filter is used for carrying out data filtering, so that noise and errors are eliminated, and the position information of the robot is updated in real time;
Monitoring the vibration frequency and amplitude of the power transmission line through a vibration sensor, performing spectrum analysis on vibration data, and extracting high-frequency vibration components affecting visual positioning;
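A short FFT-based sketch of this spectrum analysis; the 5 Hz cutoff used to separate the high-frequency components is an assumed value:

```python
import numpy as np

def high_frequency_components(vibration, fs, cutoff_hz=5.0):
    """Return the frequencies and amplitudes of the one-sided spectrum above
    cutoff_hz, i.e. the components treated as disturbing visual positioning."""
    n = len(vibration)
    spectrum = np.abs(np.fft.rfft(vibration - np.mean(vibration))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = freqs >= cutoff_hz
    return freqs[mask], spectrum[mask]

# Example: 2 Hz conductor sway plus an 18 Hz component, sampled at 200 Hz
t = np.arange(0.0, 2.0, 1.0 / 200)
record = 0.8 * np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 18 * t)
freqs, amps = high_frequency_components(record, fs=200)
```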
Calculating the vibration compensation amount according to the frequency- and amplitude-based compensation model constructed by the vibration compensation module, correcting the visual positioning deviation caused by line vibration, and correcting the positions of the feature points in the visual data;
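The claims define a compensation function C(f, A) in terms of the band amplitudes, frequencies and the coefficients α1, α2, β1, β2; the exact expression is not reproduced here, so the sketch below uses an assumed exponential form purely to illustrate how a computed compensation amount could shift the tracked feature points:

```python
import numpy as np

def compensation_amount(a_high, f_high, a_low, f_low,
                        alpha1=0.6, alpha2=0.3, beta1=0.05, beta2=0.01):
    """Assumed exponential form only: weight each band's amplitude by its
    influence coefficient and damp it with frequency, mirroring the roles
    given to alpha1/alpha2 and beta1/beta2."""
    return (alpha1 * a_high * np.exp(-beta1 * f_high) +
            alpha2 * a_low * np.exp(-beta2 * f_low))

def correct_feature_points(points, comp, image_direction):
    """Shift the tracked feature points (an (N, 2) pixel array) against the
    estimated vibration-induced image motion along image_direction."""
    return np.asarray(points, dtype=float) - comp * np.asarray(image_direction)

comp = compensation_amount(a_high=0.2, f_high=18.0, a_low=0.8, f_low=2.0)
corrected = correct_feature_points([[320.0, 241.5], [415.2, 198.7]],
                                   comp, image_direction=(0.0, 1.0))
```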
Fusing the environment data, visual data, inertial data and vibration compensation data through the multi-sensor fusion module to obtain more accurate robot position information;
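One common way to realise such a fusion is an inverse-variance weighted average of the individual position estimates, sketched below with illustrative numbers:

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted average: each source (visual, inertial,
    vibration-compensated) contributes in proportion to its confidence."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights[:, None] * estimates, axis=0) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused, fused_variance

# Example: three 2-D estimates (along-line x, lateral y) with their variances
fused, var = fuse_estimates(
    estimates=[[12.40, 0.02], [12.55, 0.05], [12.47, 0.03]],
    variances=[0.04, 0.09, 0.06])
```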
According to the fused positioning data and combined with the geographical position information of the power transmission line, planning the moving path of the robot on the power transmission line through the path planning and navigation module, so that the robot travels stably and accurately;
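A hypothetical straight-span sketch of this planning step, which treats the path as waypoints along the line segment between two towers; the tower coordinates and step length are illustrative assumptions:

```python
import numpy as np

def plan_waypoints(tower_a, tower_b, robot_pos, step=1.0):
    """Project the fused robot position onto the span between two towers and
    emit waypoints every `step` metres from that point toward the far tower."""
    a = np.asarray(tower_a, dtype=float)
    b = np.asarray(tower_b, dtype=float)
    p = np.asarray(robot_pos, dtype=float)
    span = np.linalg.norm(b - a)
    direction = (b - a) / span
    s = float(np.clip(np.dot(p - a, direction), 0.0, span))  # progress along span
    return [a + d * direction for d in np.arange(s, span + 1e-9, step)]

# Example: a 120 m span with the robot currently about 35 m from tower A
waypoints = plan_waypoints(tower_a=(0.0, 0.0), tower_b=(120.0, 0.0),
                           robot_pos=(35.2, 0.4), step=2.0)
```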
Monitoring the positioning accuracy of the robot in real time through the feedback control module, adjusting the motion parameters of the robot when necessary, and dynamically adjusting the compensation algorithm and sensor parameters according to environmental changes;
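An illustrative rule for one such motion-parameter adjustment, which slows the robot when the estimated positioning error exceeds an assumed error budget:

```python
def adjust_speed(nominal_speed, position_error, error_budget=0.05, min_speed=0.1):
    """Slow the robot proportionally when the estimated positioning error
    exceeds the budget, giving the vision pipeline more stable frames; resume
    the nominal speed once the error falls back within the budget."""
    if position_error <= error_budget:
        return nominal_speed
    return max(nominal_speed * error_budget / position_error, min_speed)

# Example: 0.5 m/s nominal crawl, throttled while the error estimate is 0.12 m
speed = adjust_speed(nominal_speed=0.5, position_error=0.12)
```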
Ground operators can view the positioning information and running state of the robot in real time through the remote monitoring module and make remote adjustments; meanwhile, the fault detection module monitors system abnormalities and raises alarms in time.
It should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope, and all such modifications and substitutions are intended to be covered by the scope of the claims of the present invention.

Claims (7)

1. An outdoor visual positioning system for an intelligent fault elimination robot for power transmission lines, characterized by comprising:
an environment perception module, which is used to collect surrounding environment data through sensors;
an illumination compensation module, which dynamically adjusts camera parameters according to the illumination data provided by the environment perception module;
the dynamic adjustment of the camera parameters comprises the following steps:
S11: the system sets a threshold for illumination change; when the change in illumination intensity exceeds the set threshold, the dynamic adjustment process is triggered;
S12: according to the monitored illumination data, the current lighting conditions are classified, the type and magnitude of the illumination change are evaluated through a preset algorithm, and it is determined whether parameter adjustment is required;
S13: the required compensation amount is calculated according to the deviation between the current illumination data and the normal working illumination range, and the compensation amount is then converted into adjustment values of exposure time, gain and contrast according to a mapping formula;
S14: the control interface of the camera is called to adjust the exposure time, gain and contrast according to the calculated compensation values;
S15: the camera feeds the adjusted image data back to the system; the vision system checks the brightness, clarity and noise level of the image through an image analysis algorithm to judge whether the adjustment achieves the expected effect; if the image feedback is not ideal, the system continues to fine-tune, repeatedly adjusting the exposure, gain and contrast parameters until the image quality reaches the best result;
S16: when the lighting conditions are stable, the system locks the current parameter settings to avoid unnecessary adjustments and ensure that the vision system continues to provide high-quality images during the stable-lighting stage;
a visual-inertial fusion module, which is used to fuse the data of the visual sensor and the inertial sensor and uses visual-inertial odometry to compensate for the instability of the visual data while the robot is moving or vibrating;
the compensation for the instability of the visual data while the robot is moving or vibrating comprises the following steps:
Step 21: initialize the visual sensor and the inertial sensor and ensure that their data timestamps are synchronized; then obtain the visual frame sequence through the visual sensor and obtain acceleration and angular velocity through the inertial sensor;
Step 22: detect and extract feature points in the current image frame, track the matching relationship of feature points between adjacent frames, and record the displacement information of the feature points;
Step 23: integrate the collected acceleration and angular velocity data to calculate the position and attitude change of the robot at each moment;
Step 24: fuse the inertial data and the visual data through a Kalman filter or an extended Kalman filter to filter out noise and errors, and update the current position information of the robot according to the fusion result;
Step 25: perform the visual-inertial odometry calculation for each consecutive frame; according to the image data provided by the visual sensor and the motion data of the inertial sensor, calculate the pose change of the robot at consecutive moments, and accumulate the pose result of each frame with the results of previous frames to obtain the complete movement trajectory of the robot;
Step 26: analyze the high-frequency vibration information in the inertial sensor, identify the motion characteristics of the robot at the moments of vibration, dynamically correct the visual positioning data according to the vibration characteristics, and smooth the noise caused by high-frequency vibration;
Step 27: based on the corrected visual and inertial data, update the position information of the robot on the power transmission line in real time, and feed the current position information back to the path planning and navigation module for the next movement decision;
Step 28: monitor the positioning accuracy in real time, feed it back to the control system, and adjust the compensation algorithm and sensor parameters according to environmental changes;
a vibration compensation module, which is used to analyze the vibration frequency and amplitude of the power transmission line monitored by the environment perception module and to correct the visual positioning deviation caused by line vibration;
the correction of the visual positioning deviation caused by line vibration comprises the following steps:
Step 31: the sensor monitors the vibration frequency and amplitude of the power transmission line in real time and analyzes the vibration frequency and amplitude in each time period to determine the characteristics of the line vibration;
Step 32: obtain the inertial data of the robot, ensure time synchronization with the vibration data, and synchronize the inertial data with the vibration sensor data to facilitate subsequent fusion calculation;
Step 33: perform spectrum analysis on the vibration data to determine the high-frequency vibration components and, based on the analysis results, identify the main vibration components that affect the visual positioning accuracy;
Step 34: extract feature points in the image and track them, detect the feature point offsets between adjacent frames, compare the data of the visual sensor and the inertial sensor, and calculate the visual positioning error;
Step 35: establish a vibration compensation model based on frequency and amplitude, calculate the compensation amount, correct the sensor data, and filter out high-frequency interference;
Step 36: according to the vibration compensation model, adjust the positions of the visual feature points in real time, correct the deviation of the visual data, and update the corrected position information to ensure the visual positioning stability of the robot;
a multi-sensor fusion module, which adopts a multi-sensor fusion algorithm to fuse the sensor data;
a path planning and navigation module, which performs accurate path planning based on the multi-sensor data by analyzing the current position information of the robot on the power transmission line and the surrounding environment information.
2. The outdoor visual positioning system for an intelligent fault elimination robot for power transmission lines according to claim 1, characterized in that: the sensors include a light sensor, a wind speed sensor, a temperature and humidity sensor and a vibration sensor; the light sensor is used to monitor illumination intensity and its changes in real time, the wind speed sensor is used to collect wind speed information of the environment in which the power transmission line is located, the temperature and humidity sensor is used to collect temperature and humidity information of that environment, and the vibration sensor is used to detect the vibration frequency and amplitude of the power transmission line to judge line changes caused by wind or mechanical movement.
3. The outdoor visual positioning system for an intelligent fault elimination robot for power transmission lines according to claim 1, characterized in that: the camera parameters include exposure, gain and contrast.
4. The outdoor visual positioning system for an intelligent fault elimination robot for power transmission lines according to claim 1, characterized in that it further comprises a feedback control module, a remote monitoring module and a fault detection module;
the feedback control module adjusts the motion parameters of the robot in real time according to the feedback data of each module;
the remote monitoring module can view the running state and positioning information of the robot on the power transmission line in real time through a remote monitoring system and, in cooperation with AR technology, helps ground operators adjust the robot remotely;
the fault detection module is used to perform fault detection and report sensor abnormalities or data failures in a timely manner.
5. The outdoor visual positioning system for an intelligent fault elimination robot for power transmission lines according to claim 1, characterized in that the formula for calculating the required compensation amount is as follows, where:
ΔCompensation(t) represents the illumination compensation amount at time t;
t_0 represents the initial time;
t represents time;
I_ref is the ideal image brightness expected by the camera, set to the standard brightness required by the system;
I(t) represents the actual image brightness captured by the camera at the current time t;
L(t) represents the ambient illumination intensity at time t, collected in real time by the light sensor;
N(t) represents the ambient noise level at the current time t, measured by the ambient noise detector;
G(t) represents the gain of the camera at the current time t, a parameter that can be dynamically adjusted by the system;
E(t) represents the exposure time of the camera at the current time t, a parameter that can be dynamically adjusted by the system;
C(t) represents the contrast of the camera image at the current time t, a measure of the brightness differences in the image;
e^(−αn) is an exponential decay function used to control the influence of past gain and contrast adjustments on the current compensation amount; α is the decay coefficient, a positive value that determines how strongly past adjustments influence the present (the larger the value, the smaller the influence of past adjustments); n is the index of the summation term, used for the discrete adjustment calculation in the time dimension;
(I_ref − I(t))^2 represents the squared difference between the ideal brightness and the actual brightness; squaring amplifies the difference and ensures that the system responds even to small deviations in brightness;
L(t) + N(t) represents the superposition of ambient illumination intensity and noise; illumination intensity L(t) and noise N(t) are two key factors affecting image quality; the larger their sum, the stronger the environmental influence and the less compensation the brightness difference requires;
e^(−αn)·G(t)C(t) represents the product of the exponential function e^(−αn) with the gain G(t) and the contrast C(t), reflecting their joint influence in the time dimension; the exponential decay controls the influence of past moments on the current compensation, while G(t) and C(t) determine how strongly gain and contrast adjust the image;
the square-root term combining the exposure time E(t) and the ambient illumination intensity L(t) ensures a balance between exposure time and lighting conditions in the compensation calculation and avoids over-adjustment;
the summation over n, weighted by the exponential decay function e^(−αn), represents the cumulative adjustment in the time dimension and ensures that the influence of past gain and contrast on the current state gradually decreases; this summation expresses the time-cumulative effect of the camera parameter adjustment and ensures that the system responds dynamically to the environment over time;
the integral from the initial time t_0 to the current time t represents the cumulative compensation amount over that interval; it computes the cumulative effect of the brightness difference over time, enabling the system to track illumination changes and adjust dynamically;
Value range:
the value range of ΔCompensation(t) is [0, ∞);
when ΔCompensation(t) is close to 0, the lighting conditions are close to ideal, the image quality is close to expectations, and no major adjustment is needed;
when ΔCompensation(t) increases, the lighting conditions deviate from expectations and the system needs to increase the exposure or gain, or adjust the contrast.
6. The outdoor visual positioning system for an intelligent fault elimination robot for power transmission lines according to claim 1, characterized in that establishing the vibration compensation model based on frequency and amplitude and calculating the compensation amount comprise the following steps:
Step 351: pre-process the data collected by the vibration sensor installed on the power transmission line with a filtering algorithm, and extract the key frequency and amplitude information from the filtered data;
Step 352: perform a Fourier transform on the collected vibration data to obtain the vibration frequency spectrum, classify the high-frequency and low-frequency vibration components according to the spectrum, and analyze the changes in amplitude;
Step 353: define the vibration compensation function C(f, A) and construct the compensation model, where:
A_high and A_low represent the amplitudes of the high-frequency and low-frequency vibrations, respectively;
f_high and f_low are the frequencies of the high-frequency and low-frequency vibrations, respectively;
α_1 and α_2 are influence coefficients used to adjust the influence of the high-frequency and low-frequency vibrations on the compensation amount;
β_1 and β_2 are adjustment parameters that control the response of the compensation function to frequency;
Step 354: substitute the collected frequency and amplitude data into the compensation function C(f, A), calculate the real-time compensation amount, and use the compensation amount to adjust the feature points collected by the visual sensor so as to reduce the positioning error;
Step 355: apply the calculated compensation amount to the visual positioning system in real time to correct the positions of the feature points; based on the correction results, the feedback system automatically adjusts the parameters of the compensation model to ensure adaptation to different vibration conditions.
7. The positioning method of the outdoor visual positioning system for an intelligent fault elimination robot for power transmission lines according to any one of claims 1 to 6, characterized by comprising the following steps:
collecting environmental data around the power transmission line through the various sensors in the environment perception module, including illumination intensity, wind speed, temperature, humidity and line vibration data;
starting the visual sensor to acquire a visual image sequence, and synchronously starting the inertial sensor to ensure that the timestamps of the visual data and the inertial data are synchronized;
triggering the illumination compensation module according to the illumination data collected by the environment perception module, and dynamically adjusting the camera parameters to adapt to the current lighting conditions and ensure stable image quality;
extracting visual feature points from the current image frame, tracking the displacement of the feature points in adjacent frames, and recording the displacement information of the feature points for preliminary visual positioning;
integrating the acceleration and angular velocity data of the inertial sensor, calculating the pose change of the robot at each moment, and estimating the displacement and angle change of the robot;
fusing the visual data with the inertial sensor data through the visual-inertial fusion module, filtering the data with a Kalman filter or an extended Kalman filter to eliminate noise and errors, and updating the position information of the robot in real time;
monitoring the vibration frequency and amplitude of the power transmission line through the vibration sensor, performing spectrum analysis on the vibration data, and extracting the high-frequency vibration components that affect visual positioning;
calculating the vibration compensation amount according to the frequency- and amplitude-based compensation model constructed by the vibration compensation module, correcting the visual positioning deviation caused by line vibration, and correcting the positions of the feature points in the visual data;
fusing the environment data, visual data, inertial data and vibration compensation data through the multi-sensor fusion module to obtain more accurate robot position information;
according to the fused positioning data and combined with the geographical position information of the power transmission line, planning the moving path of the robot on the power transmission line through the path planning and navigation module to ensure that the robot travels stably and accurately;
monitoring the positioning accuracy of the robot in real time through the feedback control module, adjusting the motion parameters of the robot when necessary, and dynamically adjusting the compensation algorithm and sensor parameters according to environmental changes;
through the remote monitoring module, ground operators can view the positioning information and running state of the robot in real time and make remote adjustments; meanwhile, the fault detection module is used to monitor system abnormalities and raise alarms in time.
CN202411346052.4A 2024-09-26 2024-09-26 An outdoor visual positioning system and method for an intelligent fault elimination robot for power transmission lines Active CN118857308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411346052.4A CN118857308B (en) 2024-09-26 2024-09-26 An outdoor visual positioning system and method for an intelligent fault elimination robot for power transmission lines

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411346052.4A CN118857308B (en) 2024-09-26 2024-09-26 An outdoor visual positioning system and method for an intelligent fault elimination robot for power transmission lines

Publications (2)

Publication Number Publication Date
CN118857308A CN118857308A (en) 2024-10-29
CN118857308B true CN118857308B (en) 2025-03-25

Family

ID=93177673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411346052.4A Active CN118857308B (en) 2024-09-26 2024-09-26 An outdoor visual positioning system and method for an intelligent fault elimination robot for power transmission lines

Country Status (1)

Country Link
CN (1) CN118857308B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119282412B (en) * 2024-12-12 2025-03-04 北京金橙子科技股份有限公司 Equipment processing method and system based on multi-speed production line
CN119478032A (en) * 2025-01-10 2025-02-18 融梦跃视(上海)体育科技有限公司 Table tennis ball landing point analysis and calculation system, method and device based on computer vision
CN119964283A (en) * 2025-01-23 2025-05-09 泰智达(北京)网络科技有限公司 An intelligent verification and access control system and method based on face recognition
CN119897873A (en) * 2025-03-31 2025-04-29 浙江强脑科技有限公司 Dynamic adaptive calibration method and device for robot dual-arm hand-eye

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118376237A (en) * 2024-04-26 2024-07-23 重庆邮电大学 Three-dimensional scene positioning method and device based on visual inertial odometry based on self-attention

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100568926C (en) * 2006-04-30 2009-12-09 华为技术有限公司 Obtaining method and control method of automatic exposure control parameters and imaging device
EP2942941B1 (en) * 2014-05-08 2016-04-27 Axis AB Method and apparatus for determining a need for a change in a pixel density requirement due to changing light conditions
CN112204946A (en) * 2019-10-28 2021-01-08 深圳市大疆创新科技有限公司 Data processing method, device, movable platform and computer readable storage medium
CN114723811A (en) * 2022-02-25 2022-07-08 江苏云幕智造科技有限公司 Stereo vision positioning and mapping method for quadruped robot in unstructured environment
CN117972457A (en) * 2024-02-01 2024-05-03 光子(深圳)精密科技有限公司 Adaptive compensation photoelectric sensor optimization method, system and storage medium
CN118192638A (en) * 2024-04-18 2024-06-14 天津现代职业技术学院 A UAV control platform capable of three-dimensional holographic inspection

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118376237A (en) * 2024-04-26 2024-07-23 重庆邮电大学 Three-dimensional scene positioning method and device based on visual inertial odometry based on self-attention

Also Published As

Publication number Publication date
CN118857308A (en) 2024-10-29

Similar Documents

Publication Publication Date Title
CN118857308B (en) An outdoor visual positioning system and method for an intelligent fault elimination robot for power transmission lines
CN104470139B (en) A Closed-loop Feedback Control Method for Tunnel Lighting
CN117423225B (en) A disaster remote sensing early warning system based on high-speed railway operation
CN112693985A (en) Non-invasive elevator state monitoring method fusing sensor data
CN114577325B (en) An online monitoring and early warning system and method for contact suspension operating status in strong wind areas
CN112326039A (en) Photovoltaic power plant patrols and examines auxiliary system
CN110395398A (en) A kind of ground connection assembly system and its earthing method based on multi-rotor unmanned aerial vehicle
CN119760489B (en) Multi-sensor altitude prediction system integrating LSTM and Kalman filtering
CN114527294A (en) Target speed measuring method based on single camera
CN119803822A (en) A structural health monitoring system for long-span bridge beams based on deformation analysis
CN119723421B (en) A method for low-altitude target recognition and real-time tracking in AI video based on deep learning
JP2011223580A (en) Correction of camera attitude
CN115272892A (en) A UAV positioning deviation monitoring and control system based on data analysis
KR101886510B1 (en) System and method for measuring tension of cable bridge
CN119472804A (en) A method and system for intelligently adjusting adaptive blinds based on light intensity
CN119107510A (en) A structural intelligent monitoring device based on deep learning
CN114543680A (en) On-site monitoring and ranging method for construction vehicles of overhead transmission line passages
CN119967395A (en) A high-precision positioning and navigation remote emergency takeover system
CN118921443B (en) An industrial monitoring system based on image data
CN119719807B (en) Elevator digital twin virtual and physical sensor data calibration method
CN119460918B (en) An intelligent control system for a tethered drone's retractable wire device
KR102737459B1 (en) System having Drone Equip Structure for Measuring Altitude as way of Revision
CN120298449A (en) Water conservancy construction safety supervision system and method based on artificial intelligence
CN120125947A (en) A UAV power inspection multimodal data fusion system and UAV
CN118816866A (en) A high-precision inertial measurement device and a measurement method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20250305

Address after: 214000 No.12 Liangxi Road, Wuxi City, Jiangsu Province

Applicant after: STATE GRID JIANGSU ELECTRIC POWER CO., LTD. WUXI POWER SUPPLY Co.

Country or region after: China

Applicant after: Wuxi Guangying Group Co.,Ltd.

Address before: No. 333 Xishan Avenue, Anzhen Street, Xishan District, Wuxi City, Jiangsu Province, China 214105

Applicant before: Wuxi University

Country or region before: China

GR01 Patent grant