CN117670309A - Inspection method and system based on robot - Google Patents
Inspection method and system based on robot
- Publication number
- CN117670309A (application number CN202311792605.4A)
- Authority
- CN
- China
- Prior art keywords
- robot
- task
- inspection
- target
- position point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/20—Administration of product repair or maintenance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
- G06F18/256—Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
- G06K17/0022—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
- G06K17/0029—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/20—Checking timed patrols, e.g. of watchman
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The application discloses a robot-based inspection method and system. In the method, the robot continuously shoots pictures carrying its positioning information while running on a track; the robot performs target recognition on the pictures using a target detection model, a trained YOLO neural network algorithm, to obtain the confidence coefficients of a plurality of target objects; the robot obtains a specific position point location set according to the picture with the highest confidence coefficient for each target object and the corresponding positioning information; finally, the robot detects the running states of the plurality of target objects according to the specific position point location set. In this process, the robot acquires the specific position point location set for inspection using the target detection model and then performs the inspection task according to that set; the whole process requires no manual participation, so the deployment efficiency of the inspection robot is improved.
Description
Technical Field
The application relates to the technical field of inspection robots, in particular to an inspection method and system based on robots.
Background
In factory production, reasonable daily maintenance of equipment is essential to keep production running normally. This is especially true in coking plants, where hazards such as combustion, explosion, poisoning, electric shock, high temperature and dust are present, so periodic inspection of equipment is particularly important. With the progress of inspection robot technology, more and more industrial sites use inspection robots instead of manual labor to carry out inspection tasks, which greatly reduces the labor intensity of front-line staff, lowers the safety risk of manual inspection, and avoids the harm that the harsh site environment of a coking plant causes to inspection personnel.
However, the inspection sites of current inspection robots are set by manual measurement and calibration. With manual calibration, the robot's inspection points must be set by hand according to the target objects to be inspected before the robot can carry out targeted equipment inspection tasks during later automatic inspection. This wastes human resources and results in low deployment efficiency of the inspection robot.
Disclosure of Invention
The application provides a robot-based inspection method and system, in which the robot obtains a specific position point location set through a target detection model and derives an inspection route map and inspection points from that set, thereby improving the deployment efficiency of the inspection robot.
In a first aspect, the present application provides a robot-based inspection method, the method comprising:
when the robot runs on the track, continuously shooting pictures, wherein the pictures are provided with positioning information of the robot, and the positioning information is position information when the robot shoots the pictures;
the robot performs target recognition on the pictures using a target detection model to obtain confidence coefficients of a plurality of target objects, wherein the target detection model is a trained YOLO neural network algorithm;
the robot obtains a specific position point location set according to the picture with the highest confidence coefficient of the plurality of target objects and the corresponding positioning information;
the robot detects the running states of a plurality of target objects according to the specific position point location set, wherein the running states comprise a normal running state and an abnormal state.
Optionally, after the robot records the position information of the plurality of electronic tags on the track, the process of obtaining the positioning information of the robot includes:
the robot reads data in the nearby electronic tags by means of the RF reader-writer;
the robot determines the position information of the electronic tag on the track according to the data;
the robot obtains a positioning distance, wherein the positioning distance is the distance between the robot and the electronic tag on the track;
and the robot obtains the positioning information of the robot according to the position information and the positioning distance.
Optionally, the robot obtains a specific position point location set according to the pictures with the highest confidence coefficients of the plurality of target objects and the corresponding positioning information, including:
the robot acquires a preliminary specific position point location set according to the picture with the highest confidence coefficient of the plurality of target objects and the corresponding positioning information;
and the robot eliminates falsely reported pictures and the corresponding positioning information from the preliminary specific position point location set to obtain the specific position point location set.
Optionally, the robot detects the running states of the plurality of target objects according to the specific position point location set, including:
the robot collects digital information according to the specific position point location set, wherein the digital information comprises images, sound, infrared thermal images, temperature data and various gas concentration parameter information;
the robot analyzes and processes the digital information by utilizing an intelligent perception key technology algorithm to obtain the running states of a plurality of target objects.
Optionally, the method further comprises:
the robot receives instructions transmitted by the monitoring center over wireless communication, wherein the instructions are used for instructing the robot to execute various tasks, and the various tasks comprise an inspection task, a charging task, a manual task, an emergency braking task and a stopping task;
the robot transmits its own state and task execution status to the monitoring center by wireless communication.
Optionally, when the instruction is a normal inspection instruction or a high-speed inspection instruction, the robot performs various tasks including:
the robot detects the running state of a target object according to a preset speed;
if the inspection is completed, the monitoring center sends a charging instruction to the robot so as to enable the robot to carry out a charging task;
if the inspection is not completed, the robot judges whether the current battery electric quantity is lower than a preset value;
if the current battery power is lower than a preset value, the monitoring center sends a charging instruction to the robot so as to enable the robot to carry out a charging task;
if the current battery power is higher than the preset value, the robot continues to detect the running state of the target object according to the preset speed.
Optionally, when the command is a manual command or an emergency braking command or a stopping command, the robot performs various tasks including:
after the robot executes a manual task or an emergency braking task or a stopping task, judging whether the current battery electric quantity is lower than a preset value or not;
if the current battery power is lower than the preset value, the monitoring center sends a charging instruction to the robot so as to enable the robot to carry out a charging task;
and if the current battery power is higher than the preset value, the robot waits for the monitoring center to transmit an instruction.
In a second aspect, an embodiment of the present application provides a robot-based inspection system, applied to a robot, the system comprising: a driving subsystem, a sensor subsystem, a pan-tilt subsystem and a control subsystem;
the driving subsystem is used for controlling the robot to run on the track;
the pan-tilt subsystem is used for continuously shooting pictures while the robot runs on the track, wherein the pictures are marked with positioning information of the robot, and the positioning information is the position information of the robot when the pictures are shot;
the control subsystem is used for carrying out target recognition on the picture by utilizing a target detection model to obtain the confidence coefficients of a plurality of target objects, wherein the target detection model is a YOLO neural network algorithm after training; the method comprises the steps of obtaining a specific position point location set according to pictures with highest confidence coefficients of a plurality of objects and corresponding positioning information; detecting the running states of a plurality of target objects according to the specific position point location set, wherein the running states comprise a normal running state and an abnormal state.
Optionally, the system further comprises:
the sensor subsystem is used for collecting the sound, infrared thermal image, temperature data, smoke and various gas concentration parameter information in the digital information according to the specific position point location set;
the pan-tilt subsystem is also used for collecting the pictures in the digital information according to the specific position point location set;
when detecting the running states of the plurality of target objects according to the specific position point location set, the control subsystem is specifically used for: analyzing and processing the digital information using intelligent perception key technology algorithms to obtain the running states of the plurality of target objects.
The system further comprises:
the communication subsystem is used for receiving instructions transmitted by the monitoring center, the instructions being used for instructing the robot to execute various tasks, wherein the various tasks comprise an inspection task, a charging task, a manual task, an emergency braking task and a stopping task; and for transmitting the robot state and task execution status to the monitoring center.
Optionally, after the robot records the position information of the plurality of electronic tags on the track, the system further includes:
the sensor subsystem is also used for reading data in the nearby electronic tags by means of the RF reader-writer;
the control subsystem is also used for determining the position information of the electronic tag on the track according to the data; for obtaining a positioning distance, the positioning distance being the distance between the robot and the electronic tag on the track; and for obtaining the positioning information of the robot according to the position information and the positioning distance.
Optionally, when obtaining the specific position point location set according to the pictures with the highest confidence coefficients of the plurality of target objects and the corresponding positioning information, the control subsystem is specifically used for:
acquiring a preliminary specific position point location set according to the pictures with the highest confidence coefficients of the plurality of target objects and the corresponding positioning information;
and eliminating falsely reported pictures and the corresponding positioning information from the preliminary specific position point location set to obtain the specific position point location set.
Accordingly, the present application has the following beneficial effects:
the application provides a robot-based inspection method, which comprises the steps of firstly, continuously shooting pictures when a robot runs on a track, wherein the pictures are provided with positioning information of the robot, and the positioning information is the position information when the robot shoots the pictures; the robot performs target recognition on the pictures by using a target detection model to obtain confidence coefficients of a plurality of target objects, wherein the target detection model is a YOLO neural network algorithm after training; the robot obtains a specific position point location set according to the picture with the highest confidence coefficient of the plurality of target objects and the corresponding positioning information; and finally, the robot detects the running states of a plurality of target objects according to the specific position point location set, wherein the running states comprise a normal running state and an abnormal state. In the process, the robot can acquire a specific position point location set for inspection by utilizing the target detection model, and then performs an inspection task according to the specific position point location set, and the whole process does not need manual participation, so that the deployment efficiency of the inspection robot is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that a person of ordinary skill in the art may obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a robot-based inspection method in an embodiment of the present application;
fig. 2 is a functional flow diagram of a robot-based inspection method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a robot-based inspection system according to an embodiment of the present application;
fig. 4 is a schematic diagram of the states of a coke oven inspection target object in an embodiment of the present application;
FIG. 5 is a flow chart of an embodiment of a robot-based inspection method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a target detection result for a coke oven inspection target object in an embodiment of the present application;
fig. 7 is a schematic view of the inspection route and inspection points for coke oven inspection target objects in an embodiment of the present application.
Detailed Description
The following clearly and completely describes the embodiments of the present invention with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Technical terms in the embodiments of the present application are described in detail below:
YOLO target detection algorithm: a target detection method based on a deep neural network, whose full name is You Only Look Once. It is a one-stage target detection algorithm. The core idea of YOLO is to turn the target detection task into a regression problem: it divides the input image into grids, predicts a fixed number of bounding boxes per grid cell, and removes redundant bounding boxes using non-maximum suppression (NMS) to obtain the final detection result. Specifically, YOLO divides the input image into S×S grid cells, and each cell predicts B bounding boxes together with their classes. Each bounding box consists of four coordinates (x, y, w, h), a confidence score and a class label, where (x, y) is the center of the bounding box, w and h are its width and height, the object-presence part of the confidence indicates whether an object exists within the bounding box, and the class part indicates whether the class of the object within the bounding box is predicted correctly. During training, YOLO optimizes the network's predictions with a loss function composed of two parts: a classification loss, which optimizes the class label of each bounding box, and a bounding-box loss, which optimizes the location and size of each bounding box.
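As an illustration of the NMS step described above, the following is a minimal sketch in Python; the corner-coordinate box layout and the IoU threshold are illustrative assumptions rather than part of the patent:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes.
    Boxes are (x1, y1, x2, y2) corner coordinates."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring boxes, dropping redundant overlapping ones."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Retain only boxes that do not overlap box i too strongly.
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```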
At present, because the target objects to be inspected differ between production plants, the inspection sites of current inspection robots are set by manual measurement and calibration. With manual calibration, the robot's inspection points must be set by hand according to the target objects to be inspected before the robot can carry out targeted equipment inspection tasks during later automatic inspection. However, this method not only requires manually measuring and setting the inspection sites, but also requires many trial runs to form an effective inspection roadmap for the robot.
In the embodiments of the present application, the robot continuously shoots pictures while running on the track, and the pictures are marked with the robot's positioning information. Target recognition is performed on the pictures using a target detection model to obtain the confidence coefficients of a plurality of target objects, and a specific position point location set is then obtained according to the picture with the highest confidence coefficient for each target object and the corresponding positioning information; this amounts to setting the robot's inspection route map. Finally, the running states of the target objects, comprising a normal running state and an abnormal state, are detected according to the specific position point location set.
Thus, with the method provided by the embodiments of the present application, the robot can acquire a suitable specific position point location set using the target detection model and derive the inspection route map and inspection points from that set, so inspection tasks can be performed directly on the basis of the route map without manual setting, which improves the deployment efficiency of the inspection robot.
In order to facilitate understanding of the specific implementation of the robot-based inspection method provided in the embodiments of the present application, the following description will be given with reference to the accompanying drawings.
It should be noted that the main body implementing the robot-based inspection method may be the robot-based inspection system provided in the embodiments of the present application, and this system may be carried in an electronic device or in a functional module of an electronic device.
Fig. 1 is a schematic flow chart of a robot-based inspection method according to an embodiment of the present application. The method may be applied to a robot-based inspection system, such as the robot-based inspection system 300 shown in fig. 3.
As shown in fig. 1, the method includes the following S101 to S104:
s101: when the robot runs on the track, pictures are continuously shot, the pictures are provided with positioning information of the robot, and the positioning information is position information when the robot shoots the pictures.
To detect the running states of the target objects, the robot first shoots pictures continuously while running on the track, then performs target recognition on the pictures using the target detection model to obtain the confidence coefficients of a plurality of target objects, then obtains a specific position point location set according to the picture with the highest confidence coefficient for each target object and the corresponding positioning information, and finally detects the running states of the target objects, comprising a normal running state and an abnormal state, according to the specific position point location set. The pictures taken in S101 therefore serve as preparation for the subsequent target recognition.
As one example, S101 may include: while the robot runs on the track, pictures are continuously shot, and each picture is marked with the robot's position information at the moment of shooting. A plurality of electronic tags have been arranged on the track before S101 is performed, and the robot has recorded the position information of these tags on the track, so the process by which the robot obtains its positioning information when taking a picture includes: first, reading the data in a nearby electronic tag by means of the RF reader-writer; determining the position information of that electronic tag on the track from the data; obtaining a positioning distance, i.e. the distance between the robot and the electronic tag on the track; and finally, obtaining the robot's positioning information from the position information and the positioning distance.
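A minimal sketch of this tag-based positioning might look as follows; the tag identifiers, the tag-position map and the use of meters along the track are hypothetical, since the patent does not specify data formats:

```python
# Hypothetical map from tag ID (read via the RF reader-writer) to the
# tag's position along the track, in meters from the track origin.
TAG_POSITIONS = {"tag_01": 0.0, "tag_02": 25.0, "tag_03": 50.0}

def robot_position(tag_id: str, distance_from_tag: float) -> float:
    """Combine the tag's known track position with the measured distance
    between the robot and that tag (e.g. from the drive encoder)."""
    return TAG_POSITIONS[tag_id] + distance_from_tag

# Example: the robot read tag_02 and has traveled 3.2 m past it.
print(robot_position("tag_02", 3.2))  # 28.2 m along the track
```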
S102: the robot performs target recognition on the pictures using a target detection model to obtain the confidence coefficients of a plurality of target objects, the target detection model being a trained YOLO neural network algorithm.
As one example, S102 may include: the robot uses the trained YOLO neural network algorithm as the target detection model. Compared with two-stage target detection algorithms, YOLO directly regresses the position and shape of the target on the picture, and therefore has the advantages of high speed, high accuracy and good detection of small targets.
It should be noted that, by performing target recognition on the picture with the target detection model, the robot can obtain the position of the target object in the picture and its confidence coefficient. The confidence coefficient combines an object-presence score and a class score: the object-presence score indicates whether a target object exists within the bounding box, and the class score indicates whether the class of the target object within the bounding box is predicted correctly.
In this process, the confidence coefficients of the target objects in the pictures are obtained directly from the target detection model, so the position point location set for subsequent robot inspection is determined according to these confidence coefficients; the inspection position point location set is thus acquired automatically.
S103: the robot obtains a specific position point location set according to the pictures with the highest confidence coefficients of the plurality of target objects and the corresponding positioning information.
As one example, S103 may include: the robot acquires a preliminary specific position point location set according to the pictures with the highest confidence coefficients of the plurality of target objects and the corresponding positioning information; the robot then eliminates falsely reported pictures and the corresponding positioning information from the preliminary set to obtain the specific position point location set.
Because a very large number of pictures are shot initially, or the shooting environment is poor, false alarms may occur during target recognition with the target detection model: no target object appears in a picture but one is recognized, or the wrong class of target object is recognized. Therefore, the preliminary specific position point location set is acquired first, and the falsely reported pictures and their corresponding positioning information are then removed from it, finally yielding a specific position point location set with high accuracy.
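One plausible reading of this selection-and-rejection step, sketched in Python; the record fields and the confidence floor used to reject false alarms are assumptions, since the patent does not state its rejection criterion:

```python
def build_point_set(detections, min_confidence=0.6):
    """detections: list of dicts such as
    {"target_id": "swing_arm_12", "confidence": 0.92, "position": 28.2}.
    Keep, per target object, the detection with the highest confidence,
    then drop entries below min_confidence as suspected false alarms."""
    best = {}
    for det in detections:
        tid = det["target_id"]
        if tid not in best or det["confidence"] > best[tid]["confidence"]:
            best[tid] = det  # preliminary specific position point location set
    # Rejecting low-confidence entries yields the specific set.
    return {tid: d["position"] for tid, d in best.items()
            if d["confidence"] >= min_confidence}
```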
S104: the robot detects the running states of a plurality of target objects according to the specific position point location set, wherein the running states comprise a normal running state and an abnormal state.
As one example, S104 may include: the robot collects digital information according to the specific position point location set, the digital information comprising images, sound, infrared thermal images, temperature data and various gas concentration parameter information; the robot then analyzes and processes the digital information using intelligent perception key technology algorithms to obtain the running states of the plurality of target objects.
An equipment fault develops gradually. The vibration of a target object can be analyzed one or several months before failure to predict when the object will fail; this is generally done with an accelerometer or similar device on the target object. Once a target object is considered likely to fail, the robot can analyze and judge it at regular intervals by different means. For example, one or several weeks before failure, the robot collects and analyzes the sound of the target object at its location to judge the damage to the equipment; one or several days before failure, the robot collects the target object's infrared thermal image and the surrounding temperature data for analysis, to judge the heating of the target equipment; within a day of failure, the robot collects and analyzes the surrounding gas concentration parameters at the target object, for example within a distance of about 2 m, to judge whether the target object is smoking. When the target object is in an abnormal state, its appearance may be broken or deformed, and the robot captures an image of the target object at its location for analysis.
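As a toy illustration of how such multi-modal readings could be fused into a running state, under assumed thresholds that are not from the patent:

```python
def assess_target(temperature_c, gas_ppm, sound_rms,
                  max_temp=80.0, max_gas=50.0, max_sound=2.0):
    """Flag an abnormal state when any single modality exceeds its
    threshold; all three thresholds here are illustrative only."""
    if temperature_c > max_temp:
        return "abnormal: overheating"
    if gas_ppm > max_gas:
        return "abnormal: suspected gas leak or smoking"
    if sound_rms > max_sound:
        return "abnormal: unusual vibration or noise"
    return "normal"
```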
In the embodiment of the application, besides performing the inspection task, the robot can also perform corresponding operations according to various instructions sent by the monitoring center, and the functional flow of the robot is described with reference to fig. 2:
the method comprises the steps that a robot is initialized firstly, namely after the robot is started, self-checking is firstly carried out, whether hardware and software are in a normal working state or not is checked, namely the robot is in a standby state, instructions transmitted by a monitoring center are received according to wireless communication, the instructions are used for indicating the robot to execute various tasks, and the various tasks comprise a patrol task, a charging task, a manual task, an emergency braking task and a stopping task. After the robot executes the task, the state of the robot and the task execution condition are transmitted to a monitoring center by utilizing wireless communication.
(1) When the instruction is a normal inspection instruction or a high-speed inspection instruction, the robot detects the running states of the target objects at a preset speed; the preset speed may be about 0.2 m/s to 0.3 m/s for normal inspection and 0.5 m/s or more for high-speed inspection.
If the inspection is completed, the robot sends an inspection-completion message to the monitoring center, and the monitoring center sends a charging instruction to the robot so that the robot carries out a charging task. If the inspection is not completed, the robot judges whether the current battery level is below a preset value, which may be 30%;
if the current battery level is below the preset value, the monitoring center sends a charging instruction to the robot so that the robot carries out a charging task; if the current battery level is above the preset value, the robot continues to detect the running states of the target objects at the preset speed.
(2) When the instruction is a manual instruction, an emergency braking instruction or a stopping instruction, the robot executes the corresponding manual, emergency braking or stopping task and then judges whether the current battery level is below the preset value, which may be 30%;
if the current battery level is below the preset value, the monitoring center sends a charging instruction to the robot so that the robot carries out a charging task; if the current battery level is above the preset value, the robot waits for the monitoring center to transmit the next instruction.
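The battery logic shared by flows (1) and (2) can be summarized in a few lines; this is a sketch of the decision rule only, with the 30% preset value taken from the text above:

```python
def next_action(task_done: bool, battery_percent: float,
                preset: float = 30.0) -> str:
    """Charging decision used after either flow above: charge when the
    task is finished or the battery is below the preset value (30% in
    this embodiment); otherwise keep working or await the next command."""
    if task_done or battery_percent < preset:
        return "charge"
    return "continue_or_wait"
```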
It can be seen that, besides completing its own inspection tasks, the robot in the embodiments of the present application can also execute the instructions transmitted by the monitoring center to complete the corresponding tasks; it is thus an intelligent robot.
Thus, in the embodiments of the present application, the robot can acquire the specific position point location set for inspection using the target detection model and then perform the inspection task according to that set; the whole process requires no manual participation, so the deployment efficiency of the inspection robot is improved.
In the implementation of the above robot function, the following factors need to be considered:
(1) Hardware interfaces and drivers: these must be developed to implement functions such as robot motion control and sensor data acquisition.
(2) Intelligent algorithms and decision-making: these must be designed to implement functions such as autonomous navigation, environment detection and anomaly detection.
(3) Data processing and analysis: the acquired data must be processed and analyzed to extract useful information and provide decision support for managers.
(4) User interface and interaction: a user interface and interaction mode must be designed so that users can conveniently assign tasks and control operations.
(5) Safety and reliability: the operational safety and reliability of the robot must be considered, such as collision avoidance and compliance with traffic rules.
(6) Maintenance and fault diagnosis: a maintenance scheme and fault-diagnosis mechanism must be considered to extend the robot's service life and reduce its failure rate.
Therefore, to meet the above requirements, a corresponding robot-based inspection system needs to be designed for the robot in the embodiments of the present application, as shown in the hardware diagram of the inspection system in fig. 3:
(1) Driving subsystem: the robot is driven along the track by a motor and reducer; the driver and controller are the electrical parts that control the transmission subsystem, and the encoder is the sensor that measures the motor speed.
(2) Sensor subsystem: mainly comprises a temperature and humidity sensor, an inertial measurement unit (IMU), gas sensors and an RF reader-writer. The temperature and humidity sensor detects changes in ambient temperature, for example over a range of -45° to 125°. The inertial measurement unit measures the vibration of the vehicle body with its accelerometer. The gas sensors measure the concentration of flammable and explosive gases within a radius of about 2 m around the robot. The RF reader-writer reads the data in the electronic tags on the track for robot positioning. The data obtained by the sensor subsystem are aggregated in the control subsystem for analysis and processing.
(3) Pan-tilt subsystem: mainly controls the camera pan-tilt, which carries the camera mounting equipment; a high-definition camera and a thermal imaging camera are mounted on it, so that photos and videos along the track can be shot and used to recognize the states of the target objects.
(4) Control subsystem: mainly a single-board computer, which receives the data of the sensor subsystem, the image data of the pan-tilt subsystem and the instructions forwarded by the communication subsystem for processing and analysis; it can also control the driving subsystem so that the robot runs on the track along the planned route.
(5) Communication subsystem: used for receiving the instructions transmitted by the monitoring center and for transmitting the robot's state and task execution status to the monitoring center, mainly communicating with the monitoring center over WIFI/5G.
(6) Power subsystem: includes a battery management system (BMS) and a battery; the embodiment of the present application uses a lithium-ion battery, although other batteries may also be used, to provide a stable power supply and ensure the robot's normal operation.
Besides these hardware subsystems, the robot also comprises a robot body, which is the frame and foundation of the robot; it hangs on the rail and supports the robot's various actions. A wireless charging coil may also be mounted on the body to charge the battery.
A corresponding software function module diagram can be derived from the hardware diagram of fig. 3. In a real environment, the sensor module first acquires information about the robot's surroundings and generates state information such as the robot's position and speed. The sensor module passes this state information to the perception module for analysis, so as to identify the objects, places and events occurring in the environment or concerning the robot; specifically, a model of the robot's surroundings is created to obtain an environment map, and the robot's position relative to the environment map is estimated from the robot's state information. Based on the environment map and the robot's position on it, the actions required to reach the target in the shortest time or distance without collision are planned. Finally, based on the planned trajectory and the robot's position on the environment map, the driver controls the robot so that it can react and avoid obstacles when unexpected interference occurs during plan execution, preventing the robot from being damaged by collision with a moving object.
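Path planning on the environment map could, in the simplest case, be a breadth-first search over an occupancy grid; the grid representation below is an illustrative assumption, not the patent's planner:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle):
    one simple way to plan a shortest collision-free route on the
    environment map described above."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable
```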
In order to make the method provided by the embodiments of the present application clearer and easier to understand, a specific example of the method applied to the robot-based inspection system 300 shown in fig. 3 is described below with reference to fig. 5.
In the embodiments of the present application, the inspection task is described using coke oven inspection target objects. As shown in fig. 4, the stamp-charged coke oven has 65 air ports, divided between an above-ground level and a below-ground level. The above-ground level holds the air and waste-gas exchange equipment, distributed on both sides; every half hour, the guide chains and cover plates are operated on site through oil cylinders and pull rods to switch between the air-intake and exhaust states. A guide chain and a cover plate form a group; 86 groups in total are arranged on the two sides of one oven, each group controls the intake and exhaust of one air port, and each air port's state is opposite to that of the adjacent port. The robot's inspection work here mainly consists of analyzing the position states of the equipment's cover plates, guide chains and chain push rods. The below-ground level holds the gas pipelines and gas-valve reversing devices and is linked with the above-ground level: every half hour, following the exchange program, the oil-cylinder pull rods drive the swing arms to rotate ninety degrees to the exchange limit position. One coke oven has 67 swing arms in total, and the robot must detect whether each swing arm has swung into place. Note that an inspection target object has several states; fig. 4 shows three states of a swing arm: swung left, centered, and swung right.
As shown in fig. 5, this embodiment may include:
s501: the drive subsystem controls the robot to run on the track.
An environment map is established from the data acquired by the robot's sensor subsystem; the actions required for the robot to reach the target in the shortest time or distance without collision are then planned based on the robot's position on the environment map; finally, based on the planned trajectory and the robot's position on the map, the driving subsystem controls the robot to run on the track, at a speed of 0.5 m/s on straight sections.
S502: the pan-tilt subsystem continuously takes pictures while the robot runs on the track.
For the robot to acquire the inspection points automatically, the camera in the pan-tilt subsystem captures a video frame every 0.5 seconds while the robot runs on the track.
S503: the control subsystem marks the positioning information of the robot on the picture.
To acquire the inspection points, the robot's positioning information at the moment of shooting must be marked on each picture. The RF reader-writer in the sensor subsystem therefore reads the data in the electronic tag on the track near the robot; the control subsystem determines the tag's position on the track from the data; finally, the control subsystem derives the robot's positioning information from the tag position and the positioning distance and marks it on the corresponding picture.
S504: the control subsystem performs target recognition on the picture using a target detection model to obtain the confidence coefficients of a plurality of target objects, the target detection model being a trained YOLO neural network algorithm.
By performing target recognition on the picture with the target detection model, the control subsystem can obtain the classes and confidence coefficients of a plurality of target objects. As one example, as shown in fig. 6, two right-swung swing arms are detected with the target detection model, with a confidence of 0.92.
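For concreteness, detection on a captured frame could be run as below; this assumes a YOLOv8-style model via the ultralytics package and hypothetical weight and image file names, since the patent names only "YOLO" without a version:

```python
from ultralytics import YOLO  # assumed YOLOv8-style tooling, not specified by the patent

model = YOLO("swing_arm.pt")       # hypothetical weights trained on swing-arm images
results = model("frame_0042.jpg")  # hypothetical frame shot on the track

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    print(label, float(box.conf))  # e.g. "right_swing_arm 0.92"
```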
S505: and the control subsystem acquires a preliminary specific position point location set according to the pictures with the highest confidence coefficients of the target objects and the corresponding positioning information.
The control subsystem obtains the preliminary specific position point location set according to the picture with the highest confidence coefficient for each of the plurality of target objects and the corresponding positioning information.
S506: the control subsystem eliminates falsely reported pictures and the corresponding positioning information from the preliminary specific position point location set to obtain the specific position point location set.
To avoid wrongly recognized pictures, the falsely reported pictures and their corresponding positioning information must be removed from the preliminary set, yielding the specific position point location set.
S507: the control subsystem acquires the inspection route and inspection points according to the specific position point location set.
The points on the inspection route in fig. 7 include the robot's start point, end point, ascending point and descending point, and, above all, the inspection points for inspecting the target objects; there are 67 target objects, as shown in fig. 6. When the robot runs to a video detection point it captures video frames, and when it runs to a gas monitoring point it collects gas data, so as to recognize the state of the target object.
S508: and the communication subsystem receives the inspection instruction transmitted by the monitoring center.
After the robot automatically acquires the inspection route and the inspection point, the robot can wait for the inspection instruction transmitted by the monitoring center to carry out the inspection task.
S509: when the robot is at an inspection point on the inspection route, the sensor subsystem and the pan-tilt subsystem acquire digital information.
When acquiring pictures of the target object, the robot must perform camera anti-shake processing in advance as it reaches the inspection point, and set the camera's lighting and focal length, so as to improve picture resolution and ultimately the accuracy and stability of the robot's inspection.
S510: the control subsystem analyzes and processes the digital information by utilizing an intelligent perception key technology algorithm to obtain the running states of a plurality of target objects.
In this way, the robot can acquire the specific position point location set for inspection using the target detection model, obtain the inspection route and inspection points from that set, perform the inspection task along them, and analyze the collected digital information to obtain the running states of the target objects. The whole deployment process requires no manual participation; that is, the inspection route and inspection points need not be set manually in advance, so the deployment efficiency of the inspection robot is improved.
From the above description of the embodiments, it will be apparent to those skilled in the art that all or part of the steps of the example methods above may be implemented by software plus a general hardware platform. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product stored in a storage medium, such as a read-only memory (ROM)/RAM, a magnetic disk or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, or a network communication device such as a router) to perform the methods described in the embodiments of the present application or in some parts thereof.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points. The apparatus embodiments described above are merely illustrative, in which the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, may be located in one place, or may be distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objective of the embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application.
Claims (10)
1. A robot-based inspection method, characterized by comprising the following steps:
when the robot runs on the track, continuously shooting pictures, wherein the pictures are provided with positioning information of the robot, and the positioning information is position information when the robot shoots the pictures;
the robot performs target recognition on the picture by using a target detection model to obtain confidence coefficients of a plurality of target objects, wherein the target detection model is a trained YOLO neural network algorithm;
the robot obtains a specific position point location set according to the picture with the highest confidence coefficient of the plurality of target objects and the corresponding positioning information;
the robot detects the running states of the plurality of target objects according to the specific position point location set, wherein the running states comprise a normal running state and an abnormal state.
2. The method of claim 1, wherein the process of obtaining the positioning information of the robot after the robot records the position information of the plurality of electronic tags on the track comprises:
the robot reads data in the nearby electronic tags by means of the RF reader-writer;
the robot determines the position information of the electronic tag on the track according to the data;
the robot obtains a positioning distance, wherein the positioning distance is the distance between the robot and the electronic tag on a track;
and the robot obtains the positioning information of the robot according to the position information and the positioning distance.
3. The method of claim 1, wherein the robot obtains a specific position point location set according to the pictures with the highest confidence coefficients of the plurality of target objects and the corresponding positioning information, comprising:
the robot acquires a preliminary specific position point location set according to the picture with the highest confidence coefficient of the plurality of target objects and the corresponding positioning information;
and the robot eliminates falsely reported pictures and the corresponding positioning information from the preliminary specific position point location set to obtain the specific position point location set.
4. The method of claim 1, wherein the robot detecting the operational status of the plurality of target objects from the set of specific location points comprises:
the robot collects digital information according to the specific position point location set, wherein the digital information comprises images, sound, infrared thermal images, temperature data and various gas concentration parameter information;
and the robot analyzes and processes the digital information by utilizing an intelligent perception key technology algorithm to obtain the running states of the plurality of target objects.
5. The method according to claim 1, wherein the method further comprises:
the robot receives an instruction transmitted by a monitoring center over wireless communication, wherein the instruction is used for instructing the robot to execute various tasks, and the various tasks comprise an inspection task, a charging task, a manual task, an emergency braking task and a stopping task;
the robot transmits the robot state and the task execution condition to the monitoring center by utilizing the wireless communication.
6. The method of claim 5, wherein when the command is a normal inspection command or a high-speed inspection command, the robot performs various tasks including:
the robot detects the running state of the target object according to a preset speed;
if the inspection is completed, the monitoring center sends a charging instruction to the robot so as to enable the robot to carry out a charging task;
if the inspection is not completed, the robot judges whether the current battery electric quantity is lower than a preset value;
if the current battery power is lower than a preset value, the monitoring center sends a charging instruction to the robot so as to enable the robot to carry out a charging task;
and if the current battery power is higher than a preset value, the robot continues to detect the running state of the target object according to the preset speed.
7. The method of claim 5, wherein when the command is a manual command or an emergency braking command or a stop command, the robot performs various tasks, including:
after the robot executes the manual task or the emergency braking task or the stopping task, judging whether the current battery electric quantity is lower than a preset value or not;
if the current battery power is lower than a preset value, the monitoring center sends a charging instruction to the robot so as to enable the robot to carry out a charging task;
and if the current battery power is higher than a preset value, the robot waits for the monitoring center to transmit an instruction.
8. A robot-based inspection system, applied to a robot, characterized in that the system comprises: a driving subsystem, a sensor subsystem, a pan-tilt subsystem and a control subsystem;
the driving subsystem is used for controlling the robot to run on a track;
the pan-tilt subsystem is used for continuously shooting pictures while the robot runs on a track, wherein the pictures are marked with positioning information of the robot, and the positioning information is the position information of the robot when the pictures are shot;
the control subsystem is used for performing target recognition on the pictures using a target detection model to obtain confidence coefficients of a plurality of target objects, the target detection model being a trained YOLO neural network algorithm; obtaining a specific position point location set according to the pictures with the highest confidence coefficients of the plurality of target objects and the corresponding positioning information; and detecting the running states of the plurality of target objects according to the specific position point location set, wherein the running states comprise a normal running state and an abnormal state.
9. The inspection system of claim 8, further comprising:
the sensor subsystem is used for collecting the sound, infrared thermal image, temperature data, smoke and various gas concentration parameter information in the digital information according to the specific position point location set;
the pan-tilt subsystem is also used for collecting the pictures in the digital information according to the specific position point location set;
the control subsystem is specifically configured to, when detecting the running states of the plurality of target objects according to the specific position point location set: analyze and process the digital information using intelligent perception key technology algorithms to obtain the running states of the plurality of target objects.
10. The inspection system of claim 8, further comprising:
the communication subsystem is used for receiving instructions transmitted by the monitoring center, the instructions being used for instructing the robot to execute various tasks, wherein the various tasks comprise an inspection task, a charging task, a manual task, an emergency braking task and a stopping task; and for transmitting the robot state and task execution status to the monitoring center.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311792605.4A CN117670309A (en) | 2023-12-25 | 2023-12-25 | Inspection method and system based on robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311792605.4A CN117670309A (en) | 2023-12-25 | 2023-12-25 | Inspection method and system based on robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117670309A true CN117670309A (en) | 2024-03-08 |
Family
ID=90077012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311792605.4A Pending CN117670309A (en) | 2023-12-25 | 2023-12-25 | Inspection method and system based on robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117670309A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118397723A (en) * | 2024-03-13 | 2024-07-26 | 北京地铁信息发展有限公司 | Computer vision-based inspection operation compliance detection method for machine room inspection personnel |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 