CN109376580B - A deep learning-based identification method for power tower components - Google Patents
- Publication number: CN109376580B (application CN201811002575.1A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V20/13 — Scenes; terrestrial scenes; satellite images
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/24 — Pattern recognition; classification techniques
- G06N3/045 — Neural networks; architecture; combinations of networks
- G06N3/08 — Neural networks; learning methods
- Y04S10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Description
Technical Field

The invention belongs to the field of deep-learning target recognition and relates to a method for identifying power tower components, in particular to a real-time, deep-learning-based method for identifying power tower components.
Background

With the rapid development of the UAV industry, the application of UAVs to power-line inspection has received extensive attention. UAV power inspection generates large amounts of image data containing power tower components, and relying on manual interpretation alone is very time-consuming. Automatically detecting and identifying power tower components in these images with a target recognition algorithm is therefore of great significance. Owing to the special operating mode and complex environment of UAV power inspection, the acquired images have more complex backgrounds than those of common targets such as pedestrians and vehicles; the targets have low contrast against the background and are often accompanied by strong interference. Traditional electrical-equipment recognition algorithms rely on hand-crafted features such as SIFT (Scale-Invariant Feature Transform), transmission-line edge detection, and HOG (Histogram of Oriented Gradients), combining the extracted features with classifiers such as support vector machines or random forests. Image segmentation algorithms such as adaptive thresholding and watershed have also been used to segment the outer contours of electrical equipment. However, because electrical equipment has complex, irregular structures, these methods achieve only mediocre results, with low accuracy and poor generalization.
The introduction of AlexNet in 2012 drew wide attention to deep learning for image recognition and object detection. Current object detection frameworks that use convolutional neural networks for feature extraction fall roughly into two categories. The first is two-stage networks based on region proposals, represented by R-CNN (Region-based Convolutional Neural Network), Fast R-CNN, and Faster R-CNN. The second is one-stage networks, represented by YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector). Redmon et al. proposed the YOLO algorithm, which sacrifices some accuracy in pursuit of speed. The subsequent YOLOv2 mitigated this accuracy loss and achieved good results in both recognition speed and accuracy, making it suitable for real-time recognition of power tower components from UAVs.
Summary of the Invention

To identify power tower components quickly and accurately during UAV power inspection, the present invention proposes a deep-learning-based method for identifying power tower components. First, a large number of images containing power tower components are acquired with image-capture equipment mounted on a UAV. Images containing the three classes of targets to be recognized (power towers, signboards, and tower bases) are then selected, preprocessed, and divided in a fixed ratio into training, validation, and test sets. The improved YOLOv2 algorithm is then trained on this data set, and finally the trained model is run on the test set and the results are evaluated.
The method of the present invention mainly comprises the following steps:

(1) A UAV equipped with image-capture equipment collects images of power tower components during power inspection.

(2) From the images collected in step (1), three classes are chosen as recognition targets: towers, signboards, and tower bases. Images containing these three classes are selected, preprocessed, and made into training, validation, and test sets for subsequent training and testing.

(3) The improved YOLOv2 algorithm is used to train on the training set prepared in step (2).

The improved YOLOv2 algorithm is as follows:
The improved YOLOv2 network structure contains 24 convolutional layers, 5 pooling layers, and two transfer layers. The convolutional layers use 3×3 and 1×1 kernels; layers with 3×3 kernels alternate with layers with 1×1 kernels, exploiting the feature-compression effect of 1×1 kernels, and the number of kernels doubles after each pooling layer. The improved structure removes the two redundant 3×3×1024 convolutional layers at the end of the YOLOv2 network and halves the number of kernels in convolutional layers Conv_3 to Conv_6.
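The sizing pattern described above (stride-2 pooling halving the spatial resolution while the kernel count doubles after each pooling layer) can be sketched as follows. The input size of 416, the base filter count of 32, and the function names are illustrative assumptions, not values stated in the patent:

```python
def feature_map_size(input_size, num_poolings):
    """'Same'-padded convolutions keep the spatial size; each 2x2,
    stride-2 pooling layer halves it."""
    size = input_size
    for _ in range(num_poolings):
        size //= 2
    return size

def filter_schedule(base_filters, num_poolings):
    """Kernel count doubles after every pooling layer."""
    filters = [base_filters]
    for _ in range(num_poolings):
        filters.append(filters[-1] * 2)
    return filters

print(feature_map_size(416, 5))  # 13 -- matches the 13x13 map in the text
print(filter_schedule(32, 5))    # [32, 64, 128, 256, 512, 1024]
```

With a hypothetical 416×416 input, five poolings yield the 13×13 output grid the text fuses features into.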
A transfer layer consists of two parts, a route layer and a reorg layer: the route layer concatenates feature maps from different layers, and the reorg layer adjusts feature-map size. Together they resize the feature maps of other layers to match the size of the current layer's feature maps and then concatenate them, fusing features across scales. The improved structure uses two transfer layers at the end of the network to fuse the 26×26 and 52×52 feature maps with the 13×13 feature map. The transfer layers are expressed as follows:
X_n = fp_1 + fp_2 + ... + fp_j (1)

X_m = fp_1 + fp_2 + ... + fp_k (2)

X_n denotes all feature maps from the n-th convolutional layer, with fp_1 to fp_j the corresponding j feature maps; X_m denotes all feature maps from the m-th convolutional layer, with fp_1 to fp_k the corresponding k feature maps.
X'_n = reorg(X_n), X_merge = X'_n + X_m

Here reorg denotes interlaced row-and-column sampling, which adjusts the feature maps of the n-th convolutional layer to the size of those of the m-th layer. S_n indicates that the n-th layer's feature maps have size S_n × S_n, and S_m that the m-th layer's feature maps have size S_m × S_m. The number of feature maps after adjustment is 2λ times the number before adjustment, and X_merge denotes the result of fusing the feature maps of the m-th and n-th convolutional layers.
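A minimal NumPy sketch of the two transfer-layer operations: `reorg` performs the interlaced row-and-column sampling that shrinks a map's spatial size while multiplying its channel count, and `route` concatenates maps of equal spatial size. The function names and channel counts are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def reorg(x, stride):
    """Interlaced row/column sampling (space-to-depth): a (C, S, S) map
    becomes (C*stride*stride, S//stride, S//stride)."""
    c, h, w = x.shape
    assert h % stride == 0 and w % stride == 0
    out = x.reshape(c, h // stride, stride, w // stride, stride)
    out = out.transpose(2, 4, 0, 1, 3)
    return out.reshape(c * stride * stride, h // stride, w // stride)

def route(*feature_maps):
    """Concatenate feature maps of identical spatial size along channels."""
    return np.concatenate(feature_maps, axis=0)

# Fuse hypothetical 52x52 and 26x26 maps with a 13x13 map, as in the text.
x52 = np.zeros((32, 52, 52))
x26 = np.zeros((64, 26, 26))
x13 = np.zeros((1024, 13, 13))
merged = route(reorg(x52, 4), reorg(x26, 2), x13)
print(merged.shape)  # (1792, 13, 13): 32*16 + 64*4 + 1024 channels
```

Note how no information is discarded: every pixel of the larger maps is rearranged into extra channels before concatenation.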
The improved network structure is trained until the model converges.

(4) The trained model is used to test the test set, and the results are evaluated with mAP (mean Average Precision), P-R (Precision-Recall) curves, and the average test time per image.
Precision is defined as:

Precision = TP / (TP + FP)

TP (true positive) counts positive instances predicted as positive; FP (false positive) counts negative instances predicted as positive. Precision reflects the model's ability to predict positive examples correctly.
Recall is defined as:

Recall = TP / (TP + FN)

FN (false negative) counts positive instances predicted as negative. Recall reflects the model's detection ability. The P-R curve reflects the trade-off between the classifier's precision on positive examples and its coverage of them.
AP is the area enclosed by the P-R curve and the recall axis, defined as:

AP = ∫_0^1 P(R) dR

mAP is defined as:

mAP = (1/Q) Σ_{q=1}^{Q} AP_q

where Q is the number of classes; that is, mAP averages the AP over all classes and reflects the model's overall ability to recognize multi-class targets.
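The four metrics above can be sketched in a few lines of Python. The trapezoidal approximation of the P-R-curve area and the function names are assumptions for illustration, not the patent's evaluation code:

```python
def precision(tp, fp):
    """Fraction of predicted positives that are true positives."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that are detected."""
    return tp / (tp + fn)

def average_precision(recalls, precisions):
    """Area under the P-R curve (trapezoidal rule); `recalls` must be
    sorted ascending, integrating from the (R=0, P=1) corner."""
    ap, prev_r, prev_p = 0.0, 0.0, 1.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * (p + prev_p) / 2.0
        prev_r, prev_p = r, p
    return ap

def mean_average_precision(per_class_ap):
    """mAP: average AP over the Q classes."""
    return sum(per_class_ap) / len(per_class_ap)

print(precision(80, 20))                        # 0.8
print(mean_average_precision([0.9, 0.8, 0.7]))  # ~0.8
```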
Compared with existing methods for identifying power tower components, the present invention has the following features:

The deep-learning object detection algorithm exploits the deep features of the image data more fully, and the resulting model generalizes better. The improved YOLOv2 algorithm fuses features from feature maps of different scales, improving recognition accuracy, and the simplified network structure speeds up recognition.
Description of the Drawings

Fig. 1 is a flow chart of an implementation of the present invention;

Fig. 2 is a schematic diagram of the improved YOLOv2 network structure;

Fig. 3 shows the P-R curves of the test results for the three target classes.
Detailed Description

An embodiment of the present invention is described in detail below with reference to the accompanying drawings. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and a specific operating procedure.

As shown in Fig. 1, this embodiment comprises the following steps:
Step 1: A UAV equipped with a high-definition camera takes a large number of pictures containing the three object classes (power towers, signboards, and tower bases) during power inspection of high-voltage transmission lines.
Step 2: From the pictures acquired in step 1, select 1200 pictures for each of the three object classes (power towers, signboards, and tower bases), and divide each class into 1000 training, 100 validation, and 100 test pictures, with the three sets mutually independent. Resize all pictures to 600×450 and annotate the objects to be recognized in every picture for subsequent training and testing.
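The per-class 1000/100/100 split into mutually independent sets might be sketched as follows; the file-name pattern, seed, and function name are illustrative assumptions:

```python
import random

def split_dataset(image_paths, n_train=1000, n_val=100, n_test=100, seed=0):
    """Shuffle and split one class's images into mutually exclusive
    train/validation/test sets, mirroring the 1000/100/100 split."""
    assert len(image_paths) >= n_train + n_val + n_test
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

images = [f"tower_{i:04d}.jpg" for i in range(1200)]  # hypothetical file names
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 1000 100 100
```

Slicing a single shuffled list guarantees the independence of the three sets that the text requires.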
Step 3: The improved YOLOv2 network structure is shown in Fig. 2. Compared with the original YOLOv2 structure, the two 3×3×1024 convolutional layers at the end of the network are removed and the number of kernels in the middle convolutional layers is halved; in addition, two transfer layers are added at the end of the network to fuse feature maps of three sizes: 52×52, 26×26, and 13×13. Train on the prepared training set with both the original YOLOv2 algorithm and the improved YOLOv2, monitoring the validation accuracy and the training loss during training. Once both have stabilized, the model has converged; save the model and stop training.
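The stopping criterion described ("stop once validation accuracy and training loss have stabilized") might be sketched as a simple plateau check; the window size, tolerance, and function names are illustrative assumptions:

```python
def has_stabilized(values, window=5, tol=1e-3):
    """True once the last `window` values vary by less than `tol`."""
    if len(values) < window:
        return False
    recent = values[-window:]
    return max(recent) - min(recent) < tol

def should_stop(val_accuracy_history, train_loss_history):
    """Stop training (and save the model) when both curves have plateaued."""
    return (has_stabilized(val_accuracy_history)
            and has_stabilized(train_loss_history))

# A still-falling loss has not stabilized, so training continues.
print(should_stop([0.90, 0.91, 0.92, 0.91, 0.92],
                  [1.0, 0.8, 0.6, 0.4, 0.2]))  # False
```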
Step 4: Test the test set with both the YOLOv2 algorithm and the improved YOLOv2 algorithm, and evaluate and compare the results.

Table 1. Comparison of test results between the improved YOLOv2 and YOLOv2
Table 1 records the mAP, recall, and average test time per image on the test set for the models trained with the YOLOv2 algorithm and with the improved YOLOv2 algorithm. The improved YOLOv2 achieves both higher recall and higher recognition accuracy, and its average test time per image is about 35% shorter. Fig. 3 shows the P-R curves drawn from the test results of the two models on the three target classes in the test set (signboards, towers, and tower bases). In the curves for YOLOv2, recall falls off quickly as precision rises, indicating poor robustness, whereas the curves for the improved YOLOv2 indicate better robustness.

Therefore, the improved YOLOv2 algorithm not only improves the accuracy and speed of power tower component recognition but also yields a more robust model, showing that the improved algorithm holds a greater advantage over YOLOv2 for power tower component recognition.
Claims (1)
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811002575.1A (CN109376580B) | 2018-08-30 | 2018-08-30 | A deep learning-based identification method for power tower components
Publications (2)

Publication Number | Publication Date
---|---
CN109376580A | 2019-02-22
CN109376580B | 2022-05-20
Family ID: 65404862

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201811002575.1A | A deep learning-based identification method for power tower components | 2018-08-30 | 2018-08-30

Country Status (1)

Country | Link
---|---
CN | CN109376580B (en)
Families Citing this family (6)

Publication number | Priority date | Publication date | Title
---|---|---|---
CN110245644A | 2019-06-22 | 2019-09-17 | A deep-learning-based method for identifying toppled transmission towers in UAV images
CN110647977B | 2019-08-26 | 2023-02-03 | An optimization method of the Tiny-YOLO network for on-board ship target detection
CN110992307A | 2019-11-04 | 2020-04-10 | Insulator positioning and identification method and device based on YOLO
CN112634129A | 2020-11-27 | 2021-04-09 | Image sensitive-information desensitization method and device
CN112528318A | 2020-11-27 | 2021-03-19 | Image desensitization method and device, and electronic equipment
CN112598054B | 2020-12-21 | 2023-09-22 | Deep-learning-based method for preventing and detecting common quality defects in power transmission and transformation projects
Citations (4)

Publication number | Priority date | Publication date | Title
---|---|---|---
CN105930906A | 2016-04-15 | 2016-09-07 | Trip detection method based on feature weighting and an improved Bayesian algorithm
CN107563412A | 2017-08-09 | 2018-01-09 | A real-time detection method for power equipment in infrared images based on deep learning
CN108256634A | 2018-02-08 | 2018-07-06 | A ship target detection method based on a lightweight deep neural network
CN108389197A | 2018-02-26 | 2018-08-10 | Transmission line defect inspection method based on deep learning
Family Cites Families (1)

Publication number | Priority date | Publication date | Title
---|---|---|---
US20160171622A1 | 2014-12-15 | 2016-06-16 | Insurance Asset Verification and Claims Processing System
Non-Patent Citations (3)

Title
---
Pedestrian Detection Based on YOLO Network Model; Wenbo Lan et al.; IEEE Xplore; 2018-08-08; Sections III-IV, Table 1
Vehicle target detection in complex scenes based on YOLOv2; Li Yunpeng et al.; Video Engineering; 2018; Vol. 42, No. 5
An abandoned-object detection algorithm based on an improved YOLOv2 network; Zhang Ruilin; Journal of Zhejiang Sci-Tech University (Natural Sciences); May 2018; Vol. 39, No. 3
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant