CN118952253B - Intelligent inspection robot system based on Internet of things - Google Patents
- Publication number
- CN118952253B (application CN202411448973.1A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- real
- video image
- time video
- inspection robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/04—Viewing devices
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04L67/125—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Image Analysis (AREA)
Abstract
The application relates to the field of inspection robot systems and provides an intelligent inspection robot system based on the Internet of things, comprising an inspection robot and a cloud platform. The inspection robot comprises a vision acquisition module, a visual feature reconstruction module, a communication module, a control module, and a power module. The vision acquisition module acquires real-time video images shot by the inspection robot; the visual feature reconstruction module extracts final feature maps of the real-time video images based on a neural network; the communication module transmits the final feature maps to the cloud platform in real time; the cloud platform identifies abnormal conditions in the final feature maps and issues abnormal instructions; the control module receives the abnormal instructions from the cloud platform and drives the inspection robot to execute the corresponding actions; and the power module supplies power to the inspection robot. The application constructs an intelligent inspection robot system based on the Internet of things, realizing anomaly identification and handling of real-time video images.
Description
Technical Field
The invention belongs to the field of inspection robot systems, and particularly relates to an intelligent inspection robot system based on the Internet of things.
Background
An Internet-of-things intelligent inspection robot system combines Internet-of-things technology with robotics for automated inspection. It is mainly used to monitor and maintain specific environments, equipment, or infrastructure, such as factories, substations, transmission lines, pipelines, and other sites requiring periodic inspection.
A traditional intelligent inspection robot based on the Internet of things performs anomaly analysis on the real-time video images it acquires. These images contain many details and carry a large amount of data; transmitting and processing them directly occupies network resources and increases processing complexity. Moreover, the images contain many irrelevant details, such as background and lighting changes, which interfere with subsequent anomaly identification. A typical inspection robot therefore has only basic image processing functions, generally limited to simple algorithms such as edge detection and color analysis. Such algorithms can find obvious anomalies but cannot effectively detect subtle, latent faults such as fine cracks on an equipment surface or gradually wearing parts.
In addition, it is difficult for a traditional intelligent inspection robot based on the Internet of things to extract deep information from image data in complex environments. For example, under uneven illumination, against a complex background, or with noise on the equipment surface, simple image processing methods struggle to recognize and separate the important visual information.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the present invention proposes an intelligent inspection robot system based on the Internet of things, whose technical scheme comprises:
an inspection robot and a cloud platform;
the inspection robot comprises a vision acquisition module, a vision characteristic reconstruction module, a communication module, a control module and a power supply module;
The vision acquisition module is used for acquiring real-time video images shot by the inspection robot on a monitoring object in the inspection process;
The visual feature reconstruction module is used for extracting a final feature map of the real-time video image and comprises a residual unit, an up-sampling layer, and a final feature map generation layer, wherein the residual unit is used for down-sampling the input real-time video image to generate a primary feature map, the up-sampling layer is used for restoring the primary feature map, and the final feature map generation layer is used for convolving the restored primary feature map and outputting the final feature map of the real-time video image;
The communication module is used for transmitting the final feature map of the real-time video image to the cloud platform in real time;
the cloud platform is used for identifying abnormal conditions of a final feature map of the real-time video image and issuing an abnormal instruction;
The control module is used for receiving an abnormal instruction of the cloud platform to drive the inspection robot to execute an action corresponding to the abnormal instruction;
The power module is used for providing power for the inspection robot.
Preferably, the final feature map generating layer is configured to convolve the restored primary feature map and output a final feature map of the real-time video image, and includes:
The final feature map generation layer comprises a convolution filter and a 1×1 convolution operation;
Each pixel value in the restored primary feature map is mapped to a corresponding element in a matrix, the convolution filter performs sliding calculation on the matrix, and the final feature map of the real-time video image is output through a 1×1 convolution operation.
Preferably, the convolution filter performs sliding calculation on the matrix according to the following formula:

Z(i, j) = Σ_{m=0}^{A−1} Σ_{n=0}^{B−1} W(m, n) · Q(i·s + m + δ(i, j), j·s + n + δ(i, j))

where Z(i, j) is the final feature map of the real-time video image, δ(i, j) is the offset function, Q(i, j) is the corresponding element in the matrix, W is the convolution kernel, A and B are the height and width of the convolution kernel, s is the step size of the convolution kernel, and m and n are the internal indexes of the convolution kernel.
Preferably, the residual unit is configured to downsample the input real-time video image to generate a primary feature map, according to the following formula:

C = G(v) + v

where C is the primary feature map, G is the operation function of the residual unit, and v is the input real-time video image.
Preferably, the inspection robot further comprises a feature map detection module;
The feature map detection module is used for storing the final feature map of the history non-abnormal video image, calculating the repeated values of the final feature map of the real-time video image and the final feature map of the history non-abnormal video image, and judging whether the final feature map of the currently acquired real-time video image is the repeated feature map or not based on the repeated values so as to exclude the repeated feature map.
Preferably, the repeated value of the final feature map of the real-time video image and the final feature map of the historical anomaly-free video image is calculated as follows:

Duplicate(t) = ( Σ_{k=1}^{d} W_k · W̄_k ) / ( √(Σ_{k=1}^{d} W_k²) · √(Σ_{k=1}^{d} W̄_k²) )

where Duplicate(t) is the repeated value, W_k is the corner position of the final feature map of the real-time video image, W̄_k is the corner position of the final feature map of the historical anomaly-free video image, and d is the number of corner points.
Preferably, the determining whether the final feature map of the currently acquired real-time video image is a repeated feature map based on the repeated value includes:
Judging whether the repeated value of the final feature map of the currently acquired real-time video image is in a preset repeated range, if so, marking the final feature map of the currently acquired real-time video image as the repeated feature map, and if not, outputting normally.
Preferably, the cloud platform is configured to identify an abnormal condition of a final feature map of the real-time video image and issue an abnormal instruction, including:
The cloud platform receives, through the communication module, the final feature map of the real-time video image transmitted by the inspection robot, compares it with a stored library of feature maps under normal conditions, identifies anomalies in the final feature map based on a classification model, and, when an anomaly is detected, generates a corresponding abnormal instruction according to the anomaly condition.
The beneficial effects are that:
1. The cloud platform compares the real-time feature map transmitted by the inspection robot with the library of historical normal feature maps and accurately identifies anomalies based on a classification model, generating corresponding abnormal instructions. This makes anomaly detection more intelligent and accurate, reduces false alarms and missed detections, and improves the speed and pertinence of the anomaly response;
2. Through an advanced convolutional neural network and deep learning techniques, the visual feature reconstruction module greatly improves the inspection robot's capacity to process visual data and enhances its ability to detect subtle anomalies and faults. The introduction of this module effectively remedies the deficiencies of traditional inspection robots in visual processing, particularly in automated, high-precision anomaly detection;
3. The feature map detection module of the invention analyzes image features and compares the feature map of the current image with feature maps in the historical record to judge whether the current image duplicates a previously recorded one. If duplicate images are detected, the system excludes them, avoiding redundant processing and storage.
Drawings
FIG. 1 is a schematic view of a preferred embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an inspection robot according to a preferred embodiment of the present invention.
Detailed Description
The following examples of the present invention are described in detail and are provided by way of illustration; the scope of protection of the present invention is not limited to these examples.
The invention designs an intelligent inspection robot system based on the Internet of things, as shown in fig. 1. The technical scheme specifically comprises:
an inspection robot and a cloud platform;
the inspection robot comprises a vision acquisition module, a vision characteristic reconstruction module, a communication module, a control module and a power module;
The vision acquisition module is used for acquiring real-time video images shot by the inspection robot on a monitoring object in the inspection process;
The visual feature reconstruction module is used for extracting a final feature map of the real-time video image and comprises a residual unit, an up-sampling layer, and a final feature map generation layer, wherein the residual unit is used for down-sampling the input real-time video image to generate a primary feature map, the up-sampling layer is used for restoring the primary feature map, and the final feature map generation layer is used for convolving the restored primary feature map and outputting the final feature map of the real-time video image;
The communication module is used for transmitting the final feature map of the real-time video image to the cloud platform in real time;
the cloud platform is used for identifying the abnormal condition of the final feature map of the real-time video image and issuing an abnormal instruction;
The control module is used for receiving an abnormal instruction of the cloud platform to drive the inspection robot to execute an action corresponding to the abnormal instruction;
The power module is used for providing power for the inspection robot.
Preferably, the final feature map generating layer is configured to convolve the restored primary feature map and output a final feature map of the real-time video image, and includes:
the final feature map generation layer contains a convolution filter and a 1×1 convolution operation;
Each pixel value in the restored primary feature map is mapped to a corresponding element in a matrix, the convolution filter performs sliding calculation on the matrix, and the final feature map of the real-time video image is output through a 1×1 convolution operation.
Preferably, the convolution filter performs sliding calculation on the matrix according to the following formula:

Z(i, j) = Σ_{m=0}^{A−1} Σ_{n=0}^{B−1} W(m, n) · Q(i·s + m + δ(i, j), j·s + n + δ(i, j))

where Z(i, j) is the final feature map of the real-time video image, δ(i, j) is the offset function, Q(i, j) is the corresponding element in the matrix, W is the convolution kernel, A and B are the height and width of the convolution kernel, s is the step size of the convolution kernel, and m and n are the internal indexes of the convolution kernel.
Preferably, the residual unit is configured to downsample the input real-time video image to generate a primary feature map, with the following formula:

C = G(v) + v

where C is the primary feature map, G is the operation function of the residual unit, and v is the input real-time video image.
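For orientation, the following is a minimal PyTorch-style sketch of a residual downsampling unit in the spirit of C = G(v) + v. The stride-2 convolutional branch standing in for G, and the strided 1×1 shortcut that keeps the two branches shape-compatible, are illustrative assumptions: the patent fixes only the residual form, not the layer composition.

```python
import torch
import torch.nn as nn

class ResidualDownsample(nn.Module):
    """Residual unit sketch: C = G(v) + v with spatial downsampling."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # G(v): an assumed two-layer downsampling branch
        self.g = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # strided 1x1 projection so the shortcut v matches G(v) in shape
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride=2, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        return self.relu(self.g(v) + self.shortcut(v))  # C = G(v) + v

# usage: one RGB video frame, spatial resolution halved
frame = torch.randn(1, 3, 224, 224)
print(ResidualDownsample(3, 64)(frame).shape)  # torch.Size([1, 64, 112, 112])
```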
Specifically, the visual feature reconstruction module uses a convolutional neural network (CNN). A CNN extracts spatial position features well when processing a real-time video image: its convolutional layers scan local areas of the image and extract local features such as edges and corner points, capturing information from different positions in the image. Corner points are regions of drastic change in the image, located at the junction of two edges; they play a key role in object structure, have high stability, and are well suited for comparison. The repeated values calculated by the subsequent feature map detection module are therefore compared by corner points.
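As a side illustration of the corner extraction this comparison relies on, the sketch below pulls corner positions from a single-channel feature map with OpenCV's Shi-Tomasi detector. The patent does not name a particular corner detector, so the detector choice and every parameter value here are assumptions.

```python
import cv2
import numpy as np

def corner_positions(feature_map: np.ndarray, d: int = 16) -> np.ndarray:
    """Return up to d corner positions (x, y) from a single-channel map."""
    # scale to 8-bit, as goodFeaturesToTrack expects an 8-bit or float32 image
    img = cv2.normalize(feature_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    corners = cv2.goodFeaturesToTrack(img, maxCorners=d,
                                      qualityLevel=0.01, minDistance=5)
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))

# usage: extract up to 8 corners from a toy 64x64 feature map
fmap = np.random.rand(64, 64).astype(np.float32)
print(corner_positions(fmap, d=8).shape)  # (n, 2) with n <= 8
```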
Furthermore, in the sliding calculation step performed on the matrix by the convolution filter, the corresponding element Q(i, j) in the input matrix is obtained from a preceding network layer and contains the preliminary features extracted from the original video image. For each position (i, j) in the feature map, the offset is calculated and the sampling points of the convolution kernel are adjusted accordingly: the point that would originally be sampled at (i·s+m, j·s+n) is adjusted to (i·s+m+δ(i, j), j·s+n+δ(i, j)). The convolution operation is then performed on the offset-adjusted sampling points: the adjusted input points are convolved with the dynamically generated convolution kernel W via a dot product, and the final feature map is calculated through the adjusted convolution operation. Conventional convolution samples on a fixed grid of the input feature map, with the step size sliding over the feature map according to a fixed rule; by adding the offset function, the receptive field of the convolution kernel is no longer limited to a regular grid. The convolution kernel can adjust its sampling positions according to the specific input features and flexibly cope with irregular deformation, bending, and other structures in the image, which suits the irregular conditions the inspection robot faces when handling abnormalities; the offset function helps the convolution kernel better capture these changes and so improves the robot's ability to inspect and collect image pictures. For example, owing to the adaptive capability of the offset function, the convolution kernel can more accurately capture important corner points in the images collected by the inspection robot, avoiding corner deviations caused by image deformation, blurring, and similar problems, thereby improving the subsequent acquisition of the corner positions W_k and W̄_k. In addition, the shape and angle of an inspected object may change across viewing angles or frames; the offset function adjusts the sampling positions of the convolution kernel according to these changes and helps identify new corner points.
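A minimal NumPy sketch of this offset-adjusted sliding calculation follows, implementing Z(i, j) = Σ_m Σ_n W(m, n) · Q(i·s+m+δ(i, j), j·s+n+δ(i, j)). The offsets are taken as a precomputed array and rounded to whole pixels for brevity; a full deformable convolution would instead learn δ from a preceding layer and sample with bilinear interpolation.

```python
import numpy as np

def offset_conv(Q, K, delta, s=1):
    """Sliding convolution with offset-adjusted sampling positions.

    Q     : (H, W) input feature map          -- the patent's Q(i, j)
    K     : (A, B) convolution kernel         -- the patent's W(m, n)
    delta : (outH, outW, 2) offsets (dy, dx)  -- the offset function delta(i, j),
            here a precomputed array rather than a learned layer
    s     : stride of the kernel
    """
    H, W = Q.shape
    A, B = K.shape
    out_h = (H - A) // s + 1
    out_w = (W - B) // s + 1
    Z = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            dy, dx = np.rint(delta[i, j]).astype(int)  # nearest-pixel offset
            for m in range(A):
                for n in range(B):
                    y = np.clip(i * s + m + dy, 0, H - 1)  # shifted sample row
                    x = np.clip(j * s + n + dx, 0, W - 1)  # shifted sample col
                    Z[i, j] += K[m, n] * Q[y, x]
    return Z

# usage: with zero offsets this reduces to a plain 3x3 convolution
Q = np.random.rand(8, 8)
K = np.ones((3, 3)) / 9.0
print(offset_conv(Q, K, np.zeros((6, 6, 2))).shape)  # (6, 6)
```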
In addition, the CNN extracts local features through convolution operations, and the resulting final feature map is smaller than the original video image while retaining the key information. This efficient feature map both reduces the cloud platform's data processing burden and improves processing speed: the cloud platform need not comprehensively analyze every frame of video, only the simplified feature maps generated by the visual feature reconstruction module. In this way, the cloud platform can quickly make judgments based on the classification model without consuming large amounts of computing resources on redundant information.
Preferably, as shown in fig. 2, the inspection robot further includes a feature map detection module;
The feature map detection module is used for storing the final feature map of the history non-abnormal video image, calculating the repeated values of the final feature map of the real-time video image and the final feature map of the history non-abnormal video image, judging whether the final feature map of the currently acquired real-time video image is the repeated feature map or not based on the repeated values, and eliminating the repeated feature map.
Preferably, the repeated value of the final feature map of the real-time video image and the final feature map of the historical anomaly-free video image is calculated as follows:

Duplicate(t) = ( Σ_{k=1}^{d} W_k · W̄_k ) / ( √(Σ_{k=1}^{d} W_k²) · √(Σ_{k=1}^{d} W̄_k²) )

where Duplicate(t) is the repeated value, W_k is the corner position of the final feature map of the real-time video image, W̄_k is the corner position of the final feature map of the historical anomaly-free video image, and d is the number of corner points.
Preferably, determining whether the final feature map of the currently acquired real-time video image is a repeated feature map based on the repetition value includes:
Judging whether the repeated value of the final feature map of the currently acquired real-time video image is in a preset repeated range, if so, marking the final feature map of the currently acquired real-time video image as the repeated feature map, and if not, outputting normally.
Specifically, compared with traditional video image matching methods (pixel-level or histogram comparison), the repeated value formula captures detail changes in an image more accurately through the corner positions and the cosine calculation, with clear advantages when processing feature maps whose corners have changed (e.g., by rotation or scaling). Although cosine calculation is not a brand-new concept in image feature matching, combining it with the corner positions of the final feature maps output by the CNN, and making a quick judgment through a simplified formula structure, maintains accuracy while improving computational efficiency, which makes it particularly suitable for real-time video processing. In addition, for a region that must be inspected, several robots may acquire it synchronously from different angles within a certain time, in which case the image features may undergo mirror reversal or partial reversal. Under mirror reversal, the repeated value formula outputs a value close to half of the original repeated value; under partial reversal, the formula likewise keeps the error as small as possible, improving the accuracy of the repeated value and hence of the repeated feature map judgment.
In addition, the term t in Duplicate(t) represents time: the repeated value calculated at time t characterizes the final feature map of the real-time video image at that moment. The judgment of whether the final feature map of the currently acquired real-time video image duplicates a historical feature map is therefore also a judgment with a temporal attribute. Specifically, each time a new real-time video image is acquired (at time point t), its repeated value Duplicate(t) is calculated and compared with the preset repeated range, and a time threshold is added. The threshold may be a fixed period (e.g., seconds or minutes) or a time interval relative to some reference point; the calculated time difference Δt is compared with the preset time threshold, which defines whether images within that window may be considered repetitions. If Δt is less than or equal to the time threshold, the current image is considered temporally close to the historical image and the repeatability judgment proceeds; otherwise the two are considered not to fall within a reasonable time range and no further comparison is required.
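The sketch below shows one way the repeated value and the time threshold could combine, assuming the cosine form of Duplicate(t) given above; the repetition range, time threshold, and corner count are placeholder values, not figures fixed by the patent.

```python
import numpy as np

def duplicate_value(w_live, w_hist):
    """Cosine-style repetition value over d corner positions.

    w_live, w_hist : (d, 2) arrays of corner coordinates from the final
    feature maps of the live and historical images.
    """
    a, b = w_live.ravel(), w_hist.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_repeat(w_live, w_hist, t_live, t_hist,
              rep_range=(0.98, 1.0), time_threshold=60.0):
    """Flag a repeated feature map only when the repetition value falls in
    the preset range AND the capture times are within the threshold (s)."""
    if abs(t_live - t_hist) > time_threshold:  # outside reasonable time range
        return False
    lo, hi = rep_range
    return lo <= duplicate_value(w_live, w_hist) <= hi

# usage: identical corner sets captured 5 s apart are flagged as repeats
corners = np.array([[12, 40], [87, 13], [55, 55], [3, 90]], dtype=float)
print(is_repeat(corners, corners, t_live=10.0, t_hist=5.0))  # True
```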
Preferably, the cloud platform is configured to identify an abnormal condition of a final feature map of the real-time video image and issue an abnormal instruction, including:
The cloud platform receives, through the communication module, the final feature map of the real-time video image transmitted by the inspection robot, compares it with a stored library of feature maps under normal conditions, identifies anomalies in the final feature map based on a classification model, and, when an anomaly is detected, generates a corresponding abnormal instruction according to the anomaly condition.
Specifically, once the cloud platform detects an anomaly, it generates a corresponding abnormal instruction. Anomaly conditions include equipment faults, temperature anomalies, surface damage, and the like. The severity of the anomaly is evaluated and graded (e.g., slight or severe), and different response strategies are generated for different grades; the anomaly feature map and related information are stored in a database for subsequent analysis or historical data tracking. Generated abnormal instructions include stopping the robot, rechecking, alarming, capturing images from more angles, and further monitoring the abnormal part. The cloud platform sends the generated abnormal instruction back to the inspection robot through the communication module to direct its actions, such as returning to a designated position, alarming, adjusting the inspection path, or collecting more detailed images.
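As a rough sketch of this instruction-generation step, the mapping below grades an anomaly and assembles the corresponding actions; the anomaly kinds, the two severity grades, and the instruction names are examples drawn from the description rather than a fixed set.

```python
from dataclasses import dataclass

@dataclass
class AnomalyReport:
    kind: str      # e.g. "equipment_fault", "temperature", "surface_damage"
    severity: str  # "slight" or "severe"

def build_instructions(report: AnomalyReport) -> list[str]:
    """Map a detected anomaly to robot-side instructions."""
    actions = ["store_feature_map"]  # always archive for historical tracking
    if report.severity == "slight":
        actions += ["recheck", "capture_more_angles"]
    else:  # severe: stop the robot, alarm, keep monitoring the abnormal part
        actions += ["stop_robot", "alarm", "monitor_abnormal_part"]
    return actions

print(build_instructions(AnomalyReport("surface_damage", "severe")))
# ['store_feature_map', 'stop_robot', 'alarm', 'monitor_abnormal_part']
```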
The foregoing describes preferred embodiments of the present invention in detail. It should be understood that a person of ordinary skill in the art can make numerous modifications and variations according to the concept of the invention without creative effort. Therefore, all technical solutions that a person skilled in the art can obtain through logical analysis, reasoning, or limited experiments based on the prior art and the inventive concept shall fall within the scope of protection defined by the claims.
Claims (5)
1. An intelligent inspection robot system based on the Internet of things, characterized in that it comprises:
an inspection robot and a cloud platform;
the inspection robot comprises a vision acquisition module, a vision characteristic reconstruction module, a communication module, a control module and a power supply module;
The vision acquisition module is used for acquiring real-time video images shot by the inspection robot on a monitoring object in the inspection process;
The visual feature reconstruction module is used for extracting a final feature map of the real-time video image and comprises a residual unit, an up-sampling layer, and a final feature map generation layer, wherein the residual unit is used for down-sampling the input real-time video image to generate a primary feature map, the up-sampling layer is used for restoring the primary feature map, and the final feature map generation layer is used for convolving the restored primary feature map and outputting the final feature map of the real-time video image;
The communication module is used for transmitting the final feature map of the real-time video image to the cloud platform in real time;
the cloud platform is used for identifying abnormal conditions of a final feature map of the real-time video image and issuing an abnormal instruction;
The control module is used for receiving an abnormal instruction of the cloud platform to drive the inspection robot to execute an action corresponding to the abnormal instruction;
The power supply module is used for providing power for the inspection robot;
the inspection robot further comprises a feature map detection module;
The feature map detection module is used for storing the final feature map of the historical non-abnormal video image, calculating the repeated values of the final feature map of the real-time video image and the final feature map of the historical non-abnormal video image, and judging whether the final feature map of the currently acquired real-time video image is the repeated feature map or not based on the repeated values so as to eliminate the repeated feature map;
The repeated value of the final feature map of the real-time video image and the final feature map of the historical anomaly-free video image is calculated according to the following formula:

Duplicate(t) = ( Σ_{k=1}^{d} W_k · W̄_k ) / ( √(Σ_{k=1}^{d} W_k²) · √(Σ_{k=1}^{d} W̄_k²) )

wherein Duplicate(t) is the repeated value, W_k is the corner position of the final feature map of the real-time video image, W̄_k is the corner position of the final feature map of the historical anomaly-free video image, and d is the number of corner points;
the step of judging whether the final feature map of the currently acquired real-time video image is a repeated feature map based on the repeated value comprises the following steps:
Judging whether the repeated value of the final feature map of the currently acquired real-time video image is in a preset repeated range, if so, marking the final feature map of the currently acquired real-time video image as the repeated feature map, and if not, outputting normally.
2. The intelligent inspection robot system based on the internet of things according to claim 1, wherein the final feature map generating layer is configured to perform convolution processing on the recovered primary feature map and output a final feature map of a real-time video image, and the final feature map generating layer includes:
The final feature map generation layer comprises a convolution filter and a 1×1 convolution operation;
Each pixel value in the restored primary feature map is mapped to a corresponding element in a matrix, the convolution filter performs sliding calculation on the matrix, and the final feature map of the real-time video image is output through a 1×1 convolution operation.
3. The intelligent inspection robot system based on the Internet of things according to claim 2, wherein the convolution filter performs sliding calculation on the matrix according to the following formula:

Z(i, j) = Σ_{m=0}^{A−1} Σ_{n=0}^{B−1} W(m, n) · Q(i·s + m + δ(i, j), j·s + n + δ(i, j))

wherein Z(i, j) is the final feature map of the real-time video image, δ(i, j) is the offset function, Q(i, j) is the corresponding element in the matrix, W is the convolution kernel, A and B are the height and width of the convolution kernel, s is the step size of the convolution kernel, and m and n are the internal indexes of the convolution kernel.
4. The intelligent inspection robot system based on the internet of things according to claim 1, wherein the residual error unit is configured to downsample an input real-time video image to generate a primary feature map, and the formula is as follows:
C=G(v)+v
wherein C is a primary feature map, G is an operation function of a residual unit, and v is an input real-time video image.
5. The intelligent inspection robot system based on the internet of things according to claim 1, wherein the cloud platform is configured to identify an abnormal condition of a final feature map of a real-time video image and issue an abnormal instruction comprises:
The cloud platform receives a final feature map of the real-time video image transmitted by the inspection robot through the communication module, compares the final feature map with a feature map library in a normal state stored in history, identifies an abnormality in the final feature map of the real-time video image based on the classification model, and generates a corresponding abnormality instruction according to the abnormality condition when the abnormality is detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411448973.1A CN118952253B (en) | 2024-10-17 | 2024-10-17 | Intelligent inspection robot system based on Internet of things |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411448973.1A CN118952253B (en) | 2024-10-17 | 2024-10-17 | Intelligent inspection robot system based on Internet of things |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118952253A CN118952253A (en) | 2024-11-15 |
CN118952253B true CN118952253B (en) | 2025-02-11 |
Family
ID=93396458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411448973.1A Active CN118952253B (en) | 2024-10-17 | 2024-10-17 | Intelligent inspection robot system based on Internet of things |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118952253B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119417199A (en) * | 2025-01-08 | 2025-02-11 | 爱动超越人工智能科技(北京)有限责任公司 | Logistics dispatching system and dispatching method based on forklift inspection data collection within the factory |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110969166A (en) * | 2019-12-04 | 2020-04-07 | 国网智能科技股份有限公司 | A small target recognition method and system in an inspection scene |
CN220408745U (en) * | 2023-07-28 | 2024-01-30 | 西安文理学院 | Abnormal condition alarm system is patrolled and examined to robot is patrolled and examined in transformer substation |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10891537B2 (en) * | 2019-03-20 | 2021-01-12 | Huawei Technologies Co., Ltd. | Convolutional neural network-based image processing method and image processing apparatus |
WO2022133814A1 (en) * | 2020-12-23 | 2022-06-30 | Intel Corporation | Omni-scale convolution for convolutional neural networks |
CN114220126A (en) * | 2021-12-17 | 2022-03-22 | 杭州晨鹰军泰科技有限公司 | Target detection system and acquisition method |
CN116846059A (en) * | 2023-03-07 | 2023-10-03 | 云南电网有限责任公司玉溪供电局 | Edge detection system for power grid inspection and monitoring |
CN117876329A (en) * | 2024-01-11 | 2024-04-12 | 云南航天工程物探检测股份有限公司 | Real-time road disease detection method based on radar, video and data analysis |
- 2024-10-17: CN application CN202411448973.1A, granted as patent CN118952253B, legal status active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110969166A (en) * | 2019-12-04 | 2020-04-07 | 国网智能科技股份有限公司 | A small target recognition method and system in an inspection scene |
CN220408745U (en) * | 2023-07-28 | 2024-01-30 | 西安文理学院 | Abnormal condition alarm system is patrolled and examined to robot is patrolled and examined in transformer substation |
Also Published As
Publication number | Publication date |
---|---|
CN118952253A (en) | 2024-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110084165B (en) | Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation | |
CN118952253B (en) | Intelligent inspection robot system based on Internet of things | |
CN113283344A (en) | Mining conveying belt deviation detection method based on semantic segmentation network | |
CN116168019B (en) | Power grid fault detection method and system based on machine vision technology | |
CN115861210A (en) | Transformer substation equipment abnormity detection method and system based on twin network | |
Daogang et al. | Anomaly identification of critical power plant facilities based on YOLOX-CBAM | |
CN118799924A (en) | A bird-repelling control method based on environmental factor optimization | |
CN118053111A (en) | Wind generating set image recognition fault detection method | |
CN117274091A (en) | Video data restoration method and device based on beta-divergence tensor decomposition | |
CN112529881B (en) | Power control cabinet cable anomaly identification method and device | |
Wang | Electrical Control Equipment Patrol Inspection Method Based on High Quality Image Recognition Technology. | |
Wang et al. | ClearSight: Deep Learning-Based Image Dehazing for Enhanced UAV Road Patrol | |
CN118101899B (en) | A security monitoring storage information intelligent analysis management method and system | |
CN119151525B (en) | Photovoltaic module temperature monitoring method and system | |
CN119026088B (en) | Power equipment diagnosis method and system based on artificial intelligence | |
CN119004377B (en) | A harmful gas leakage detection method and system based on industrial multimodal data fusion | |
CN118470577B (en) | Inspection scene identification method and system based on big data | |
CN118549923B (en) | Video radar monitoring method and related equipment | |
He et al. | Fabric defect detection based on improved object as point | |
CN118229037B (en) | High-speed illegal monitoring facility management method and system based on physical model | |
CN119299332A (en) | A safety monitoring system and monitoring method based on machine vision | |
CN119272104A (en) | An intelligent inspection and monitoring method and system based on electronic vision | |
CN118938770A (en) | A power plant safety monitoring system based on artificial intelligence | |
Ren et al. | A Novel Detection Method for Three Date Marks of Industrial Product based on Machine Vision | |
CN119478627A (en) | Swin transducer-CNN-based contact net static geometric parameter detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |