
CN110232445B - Cultural relic authenticity identification method based on knowledge distillation - Google Patents


Info

Publication number
CN110232445B
CN110232445B
Authority
CN
China
Prior art keywords
yolov3
network
tiny
knowledge distillation
cultural relic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910526264.3A
Other languages
Chinese (zh)
Other versions
CN110232445A (en)
Inventor
刘学平
李玙乾
张晶阳
王哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University
Priority to CN201910526264.3A
Publication of CN110232445A
Application granted
Publication of CN110232445B
Expired - Fee Related
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Molecular Biology (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of cultural relic authenticity identification, and in particular to a cultural relic authenticity identification method based on knowledge distillation, which mainly comprises: step 1: before the cultural relics are exhibited, collecting fingerprint-region images and making a data set; step 2: configuring a YOLOV3 network as the teacher network and a YOLOV3-Tiny network as the student network; step 3: training YOLOV3; step 4: training YOLOV3-Tiny based on knowledge distillation; step 5: after the cultural relics are recovered, collecting the fingerprint-region images again to make a test set; step 6: identifying the authenticity of the cultural relics with the trained YOLOV3-Tiny. The invention uses YOLOV3, which is accurate but slow, as the teacher network and YOLOV3-Tiny, which is less accurate but fast, as the student network, performs knowledge distillation, and supervises the learning of the student network with softened targets. While keeping the originally fast detection speed of YOLOV3-Tiny, its accuracy is greatly improved, the hardware resources occupied during cultural relic identification are reduced, the detection efficiency is improved, and the identification cost is saved.

Description

Cultural relic authenticity identification method based on knowledge distillation
Technical Field
The invention relates to the field of cultural relic authenticity identification, in particular to a cultural relic authenticity identification method based on knowledge distillation.
Background
China has a long history and culture, and its many historical relics are treasures. To promote this heritage, cultural relic collection institutions across the country regularly hold exhibition activities. After an exhibition ends, the relics need to be authenticated to ensure that they have not been damaged or replaced. At present, authentication is mainly performed manually; it depends on the personal experience and knowledge of experts and sometimes requires assistance from high-tech detection equipment. The manual identification process is time-consuming, requires considerable manpower and material resources, and the results are highly subjective.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a cultural relic authenticity identification method based on knowledge distillation, which has the advantages of high detection speed and high accuracy.
The technical solution for solving these problems is as follows: a cultural relic authenticity identification method based on knowledge distillation, characterized by comprising the following steps:
step 1: before the cultural relics are displayed, fingerprint area images are collected, and a data set is made;
step 2: configuring a YOLOV3 network as a teacher network and configuring a YOLOV3-Tiny network as a student network;
step 3: training YOLOV3;
step 4: training YOLOV3-Tiny based on knowledge distillation;
step 5: after the cultural relics are recovered, collecting the fingerprint-region images again to make a test set;
step 6: identifying the authenticity of the cultural relic with the trained YOLOV3-Tiny.
Further, step 1 specifically comprises: before the cultural relics are exhibited, selecting an area on the cultural relic as the fingerprint area; under different illumination conditions, acquiring RGB images of the fingerprint area from multiple angles with a high-precision camera; annotating the fingerprint area in the images with an annotation tool to make a data set; and randomly selecting a portion of the images and their annotation files as the training set, with the remainder as the verification set.
Further, in step 2, the yolo layer of YOLOV3 with prediction scale 52×52 is deleted and only the 13×13 and 26×26 predictions are retained; the number of classes of the modified YOLOV3 network is m, and the number of convolution-kernel channels of the yolo layer is (m+1+4)×3, denoted c; for the training set obtained in step 1, 6 anchor boxes are computed with the K-Means algorithm and replace the original anchors of YOLOV3 and YOLOV3-Tiny.
Further, step 3 specifically comprises: training the YOLOV3 network model on the data set obtained in step 1 and saving the weight file.
Further, in step 4, the trained YOLOV3 network is used as the teacher network and the YOLOV3-Tiny network as the student network; the two networks in turn perform forward propagation on the input image, producing outputs of scale 13×13×c and 26×26×c, denoted out_t (teacher) and out_s (student); the error of YOLOV3-Tiny is calculated according to equations (1)–(3):

LOSS = α·T²·loss_soft + (1-α)·loss_hard (1)

loss_soft = β1·KL(softmax(out_t/T) ‖ softmax(out_s/T)) (2)

loss_hard = crossentropy(out_s, Target) (3)

where loss_soft is the soft-target error, loss_hard is the hard-target error (i.e., the original error of the YOLOV3 network), α is the weight coefficient balancing loss_soft and loss_hard, and T is the distillation temperature; Target denotes the original annotation of the data set, i.e., the hard target; softmax() and crossentropy() denote the softmax function value and the cross-entropy value, respectively; the coefficient β1 is introduced to balance the orders of magnitude of loss_soft and loss_hard. In equation (2), the position, confidence and class predictions of the student and teacher networks are softened with temperature T and their relative entropy KL(·‖·) is taken as the soft-target error.
Further, step 5 specifically comprises: after the cultural relics are recovered, collecting a plurality of images of the fingerprint area again as a test set.
Further, step 6 specifically comprises: performing inference with the trained YOLOV3-Tiny on the test set obtained in step 5 to obtain confidence values for the fingerprint regions; computing the average confidence of each fingerprint region, and considering a region unchanged if its average is greater than the set threshold; if none of the fingerprint regions has changed, the cultural relic is judged to be genuine.
The invention has the advantages that:
according to the cultural relic authenticity identification method based on knowledge distillation, the YOLOV3 which is good in accuracy and low in speed is used as a teacher network, the YOLOV3-Tiny which is poor in accuracy and high in speed is used as a student network, knowledge distillation is carried out, the softened target is used for supervising student network learning, under the condition that the original high detection speed of YOLOV3-Tiny is kept, the accuracy is greatly improved, the hardware resource occupation in the cultural relic identification process is reduced, the detection efficiency is improved, and the identification cost is saved.
Drawings
FIG. 1 is a schematic flow chart of the cultural relic authenticity identification method based on knowledge distillation of the invention;
FIG. 2 is a modified YOLOV3 network structure;
FIG. 3 is a YOLOV3-Tiny network structure;
FIG. 4 is a schematic diagram of the knowledge distillation.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the invention. The following detailed description is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of these embodiments fall within the scope of the present invention.
Referring to fig. 1, a cultural relic authenticity identification method based on knowledge distillation comprises the following steps:
step 1: before the cultural relics are displayed, fingerprint area images are collected, and a data set is made;
step 2: configuring a YOLOV3 network as a teacher network and configuring a YOLOV3-Tiny network as a student network;
step 3: training YOLOV3 on the training set;
step 4: training YOLOV3-Tiny based on knowledge distillation;
step 5: after the cultural relics are recovered, collecting the fingerprint-region images again to make a test set;
step 6: identifying the authenticity of the cultural relic with the trained YOLOV3-Tiny.
Further, step 1 specifically comprises: before the cultural relics are exhibited, selecting an area on the cultural relic as the fingerprint area; under different illumination conditions, acquiring RGB images of the fingerprint area from multiple angles with a high-precision camera; annotating the fingerprint area in the images with an annotation tool to make a data set; and randomly selecting a portion of the images and their annotation files as the training set, with the remainder as the verification set.
Further, in step 2, the yolo layer of YOLOV3 with prediction scale 52×52 is deleted and only the 13×13 and 26×26 predictions are retained; the pruned network model is shown in FIG. 2. The number of classes of the modified YOLOV3 network is m, the number of convolution-kernel channels of the yolo layer is (m+1+4)×3, denoted c, and the outputs of the modified yolo layers are 13×13×c and 26×26×c. For the training set obtained in step 1, 6 anchor boxes are computed with the K-Means algorithm and replace the original anchors of YOLOV3 and YOLOV3-Tiny. The YOLOV3-Tiny network model is shown in FIG. 3.
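For illustration, the following minimal Python sketch (an assumption for this description, not part of the disclosure; the file layout, input size and helper name are hypothetical) shows how the 6 anchor boxes can be computed with K-Means over the labelled fingerprint-box dimensions:

```python
# Minimal sketch: computing 6 anchor boxes with K-Means over the labelled box
# widths/heights from the labelimg XML annotations. Paths are assumptions.
import glob
import xml.etree.ElementTree as ET

import numpy as np
from sklearn.cluster import KMeans


def load_box_sizes(xml_dir, input_size=416):
    """Read (width, height) of every labelled fingerprint box, scaled to the network input."""
    sizes = []
    for xml_path in glob.glob(f"{xml_dir}/*.xml"):
        root = ET.parse(xml_path).getroot()
        img_w = float(root.find("size/width").text)
        img_h = float(root.find("size/height").text)
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            w = float(box.find("xmax").text) - float(box.find("xmin").text)
            h = float(box.find("ymax").text) - float(box.find("ymin").text)
            sizes.append([w / img_w * input_size, h / img_h * input_size])
    return np.array(sizes)


box_sizes = load_box_sizes("dataset/train/annotations")
kmeans = KMeans(n_clusters=6, random_state=0, n_init=10).fit(box_sizes)
anchors = sorted(kmeans.cluster_centers_.round().astype(int).tolist())
print("anchors =", anchors)  # paste these into the YOLOV3 / YOLOV3-Tiny cfg files
```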
Further, in step 3, the initial hyper-parameters of the YOLOV3 network are set, including the maximum number of iterations epochmax and the batch size batch; training is performed on the training set, and after each epoch the precision, recall and mAP of YOLOV3 on the verification set are calculated and stored in a training log. After each training run (i.e., after reaching epochmax), the coefficients of the position, confidence and classification error functions are adjusted according to the training log; a training result with high precision, recall and mAP is finally obtained, and the weight file is saved.
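A minimal PyTorch sketch of this outer training loop follows; build_yolov3, yolo_loss, evaluate, train_loader and val_loader are hypothetical helpers standing in for the actual model, loss and data pipeline, so this is an illustration of the logging scheme rather than the patent's code:

```python
# Sketch only: step-3 teacher training loop with per-epoch validation logging.
import torch

epoch_max, batch_size = 250, 8            # e.g. the values used in the embodiment;
                                          # batch_size is applied when building train_loader
model = build_yolov3(num_classes=5)       # modified YOLOV3 with two yolo layers (assumed helper)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
training_log, best_map = [], 0.0

for epoch in range(epoch_max):
    model.train()
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = yolo_loss(model(images), targets)  # position + confidence + class error terms
        loss.backward()
        optimizer.step()

    precision, recall, mAP = evaluate(model, val_loader)  # verification-set metrics
    training_log.append((epoch, precision, recall, mAP))  # the "training log" of the patent
    if mAP > best_map:                    # keep the best-performing weights
        best_map = mAP
        torch.save(model.state_dict(), "yolov3_teacher.pt")
```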
Further, in step 4, the trained YOLOV3 network is used as the teacher network and the YOLOV3-Tiny network as the student network. The two networks in turn perform forward propagation on the input image, producing outputs of scale 13×13×c and 26×26×c, denoted out_t (teacher) and out_s (student), as shown in FIG. 4. Back propagation and weight updates are performed only for the YOLOV3-Tiny network; the YOLOV3 network does not update its weights and only performs forward inference. The back-propagated error comprises two components, as shown in equations (1)–(3):

LOSS = α·T²·loss_soft + (1-α)·loss_hard (1)

loss_soft = β1·KL(softmax(out_t/T) ‖ softmax(out_s/T)) (2)

loss_hard = crossentropy(out_s, Target) (3)

where loss_soft is the soft-target error, loss_hard is the hard-target error (i.e., the original error of the YOLOV3 network), α is the weight coefficient balancing loss_soft and loss_hard, and T is the distillation temperature. Target denotes the original annotation of the data set, i.e., the hard target. softmax() and crossentropy() denote the softmax function value and the cross-entropy value, respectively. The coefficient β1 is introduced to balance the orders of magnitude of loss_soft and loss_hard. In equation (2), the position, confidence and class predictions of the student and teacher networks are softened with temperature T and their relative entropy KL(·‖·) is taken as the soft-target error.
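As a concrete illustration (a sketch under the assumption that the two prediction scales are handled as flattened tensors; the function name distillation_loss and the hard_loss argument, which stands for the student's original YOLO detection loss, are not from the disclosure), equations (1)–(3) can be written in PyTorch as follows:

```python
# Sketch only: distillation loss of equations (1)-(3).
import torch
import torch.nn.functional as F


def distillation_loss(out_s, out_t, hard_loss, T=4.0, alpha=0.6, beta1=3e-4):
    # Soften both networks' predictions with temperature T, then take the
    # relative entropy (KL divergence) as the soft-target error, scaled by beta1 (eq. 2).
    p_t = F.softmax(out_t.detach() / T, dim=-1)   # teacher predictions are never updated
    log_p_s = F.log_softmax(out_s / T, dim=-1)
    loss_soft = beta1 * F.kl_div(log_p_s, p_t, reduction="batchmean")

    # Eq. (1): weighted sum of soft and hard targets; the T**2 factor keeps
    # the gradient magnitude of the softened term comparable to the hard term.
    return alpha * T**2 * loss_soft + (1.0 - alpha) * hard_loss
```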
The maximum number of iterations and the batch size are set and training is performed on the training set; after each epoch, the precision, recall and mAP of YOLOV3-Tiny on the verification set are calculated and stored in the training log. After each training run, the hyper-parameters of YOLOV3-Tiny are adjusted according to the training log; the training result with the highest precision, recall and mAP is finally obtained, and its weight file is saved.
Further, in step 5, after the cultural relics are recovered, a plurality of images are collected again for the fingerprint areas at the m positions, forming a test set for identifying the authenticity of the cultural relics.
Further, in step 6, the YOLOV3-Tiny network trained in step 4 performs inference on the test set and outputs a confidence value for the fingerprint region of each image. The confidence values of the fingerprint regions at the same position are collected and averaged; if all m average values are greater than the set threshold, the cultural relic is judged to be genuine.
Example:
In this embodiment, 5 fingerprint regions are selected before the cultural relics are exhibited, each of size 5 mm × 5 mm. Under different illumination conditions, 500 RGB images are collected from each of the 5 fingerprint regions at multiple angles with an EOS 7D Mark II camera fitted with an MP-E 65mm f/2.8 1-5x lens, at a resolution of 5472 × 3648 pixels, for a total of 2500 images. The fingerprint region in each image is annotated with the labelimg tool, producing an XML file for each image. 2300 images and their annotation files are randomly selected as the training set, and the rest are used as the verification set.
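A short sketch of this random split (the directory layout is an assumption for illustration only) could look like:

```python
# Sketch only: randomly split 2500 labelled images into 2300 training / 200 verification.
import glob
import os
import random
import shutil

random.seed(0)
images = sorted(glob.glob("dataset/images/*.jpg"))
random.shuffle(images)
train, val = images[:2300], images[2300:]

for subset, files in (("train", train), ("val", val)):
    os.makedirs(f"dataset/{subset}/images", exist_ok=True)
    os.makedirs(f"dataset/{subset}/annotations", exist_ok=True)
    for img in files:
        xml = img.replace("images", "annotations").replace(".jpg", ".xml")  # matching labelimg XML
        shutil.copy(img, f"dataset/{subset}/images/")
        shutil.copy(xml, f"dataset/{subset}/annotations/")
```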
Step 2: configuring a YOLOV3 network as a teacher network and configuring a YOLOV3-Tiny network as a student network; modifying the network model and parameters;
In this embodiment, a deep learning development environment is configured: the CPU is an Intel i7-9700K, the GPU is an NVIDIA GeForce RTX 2080, the operating system is Ubuntu 16.04 LTS with CUDA 10.0, and the deep learning framework is PyTorch.
The YOLOV3 network structure is modified as follows: 1) the yolo layer with prediction scale 52×52 is deleted, together with the corresponding convolutional, upsampling and route layers in the cfg configuration file; 2) the number of anchors is changed to 6 (num=6 in the cfg configuration file), and the 6 anchor boxes are recalculated on the training set with the K-Means algorithm and replace the original values; 3) the number of classes is changed to 5 (classes=5 in the cfg configuration file), so the number of yolo-layer channels is (5+1+4)×3 = 30 and the modified yolo layer outputs are 13×13×30 and 26×26×30.
The YOLOV3-Tiny network parameters are modified as follows: 1) the original anchor values are replaced with the 6 anchor boxes obtained above; 2) the number of classes is changed to 5 (classes=5 in the cfg configuration file), so the number of yolo-layer channels is (5+1+4)×3 = 30 and the modified yolo layer outputs are 13×13×30 and 26×26×30. The YOLOV3 and YOLOV3-Tiny network models are then built.
Step 3: training the YOLOV3 network and saving the weight file.
In this embodiment the YOLOV3 network model is trained with epochmax set to 250 and batch set to 8; after each epoch, the precision, recall and mAP on the verification set are calculated and stored in the training log. After each training run, the coefficients of the localization, confidence and classification error functions are adjusted according to the training log; the network is trained multiple times to obtain a model with better performance, and the weight file is saved.
Step 4: training YOLOV3-Tiny based on knowledge distillation.
In this embodiment, the YOLOV3 network loads its weight file, and the YOLOV3-Tiny network is trained from scratch. The two networks in turn perform forward propagation on the input image, producing outputs of scale 13×13×30 and 26×26×30, denoted out_t (teacher) and out_s (student). The YOLOV3-Tiny network error is then calculated according to equations (1)–(3), with the relevant parameters set to T=4, α=0.6 and β1=0.0003.
The maximum number of iterations and the batch size are set; during training, only the error and gradients of the YOLOV3-Tiny network are computed and only its weights are updated, while the YOLOV3 network performs forward inference only and computes neither error nor gradients. After each epoch, the precision, recall and mAP of YOLOV3-Tiny on the verification set are calculated and stored in the training log. After each training run, the hyper-parameters of YOLOV3-Tiny are adjusted according to the training log; a network model with better performance is obtained through multiple training runs, and the weight file is saved. A sketch of one such distillation step is given below.
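The following sketch illustrates the distillation loop under the same assumptions as the earlier sketches (build_yolov3, build_yolov3_tiny, yolo_loss, train_loader and the distillation_loss function defined above are hypothetical helpers; the two prediction scales are treated as a single tensor for brevity): the teacher only runs forward inference, and only the student is updated.

```python
# Sketch only: knowledge-distillation training step of the embodiment.
import torch

teacher = build_yolov3(num_classes=5)
teacher.load_state_dict(torch.load("yolov3_teacher.pt"))  # step-3 weights
teacher.eval()                                  # forward inference only, never updated

student = build_yolov3_tiny(num_classes=5)      # trained from scratch
optimizer = torch.optim.SGD(student.parameters(), lr=1e-3, momentum=0.9)

for images, targets in train_loader:
    with torch.no_grad():                       # no error or gradient for YOLOV3
        out_t = teacher(images)
    out_s = student(images)
    hard = yolo_loss(out_s, targets)            # original YOLO detection loss (hard target)
    loss = distillation_loss(out_s, out_t, hard, T=4.0, alpha=0.6, beta1=3e-4)
    optimizer.zero_grad()
    loss.backward()                             # only YOLOV3-Tiny weights are updated
    optimizer.step()
```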
Step 5: after the cultural relics are recovered, the fingerprint-region images are collected again.
In this embodiment, after the cultural relic is withdrawn from exhibition, 100 images are collected again with the camera for each of the 5 fingerprint regions, at a resolution of 5472 × 3648 pixels, giving a test set of 500 images in total.
Step 6: identifying the authenticity of the cultural relic with the trained YOLOV3-Tiny.
the trained YOLOV3-Tiny network was used to perform an inference process on the test set, and each image outputted a confidence value for the fingerprint region. Averaging 100 confidence values of each fingerprint area, if the average value is larger than a set threshold value of 0.95, determining that the fingerprint area is not changed, and if no fingerprint area is changed at 5 positions, determining that the cultural relic is a genuine relic.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent structures or equivalent flow transformations made by using the contents of the specification and the drawings, or applied directly or indirectly to other related systems, are included in the scope of the present invention.

Claims (7)

1. A cultural relic authenticity identification method based on knowledge distillation, characterized by comprising the following steps:
step 1: before the cultural relics are exhibited, collecting fingerprint-region images and making a data set;
step 2: configuring a YOLOV3 network as the teacher network and a YOLOV3-Tiny network as the student network;
step 3: training YOLOV3;
step 4: training YOLOV3-Tiny based on knowledge distillation;
step 5: after the cultural relics are recovered, collecting the fingerprint-region images again to make a test set;
step 6: identifying the authenticity of the cultural relic with the trained YOLOV3-Tiny.

2. The cultural relic authenticity identification method based on knowledge distillation according to claim 1, characterized in that step 1 specifically comprises: before the cultural relic is exhibited, selecting an area on the cultural relic as the fingerprint area; under different illumination conditions, collecting RGB images of the fingerprint area from multiple angles with a camera; annotating the fingerprint area in the images with an annotation tool to make a data set; and randomly selecting a portion of the images and their annotation files as the training set, with the remainder as the verification set.

3. The cultural relic authenticity identification method based on knowledge distillation according to claim 2, characterized in that in step 2, the yolo layer of YOLOV3 with prediction scale 52×52 is deleted and only the 13×13 and 26×26 predictions are retained; the number of classes of the YOLOV3 network is modified to m, and the number of convolution-kernel channels of the yolo layer is (m+1+4)×3, denoted c; for the training set obtained in step 1, 6 anchor boxes are computed with the K-Means algorithm and replace the original anchor boxes of YOLOV3 and YOLOV3-Tiny.

4. The cultural relic authenticity identification method based on knowledge distillation according to claim 3, characterized in that step 3 specifically comprises: training the YOLOV3 network model on the data set obtained in step 1 and saving the weight file.

5. The cultural relic authenticity identification method based on knowledge distillation according to claim 4, characterized in that in step 4, the trained YOLOV3 network is used as the teacher network and the YOLOV3-Tiny network as the student network; the two networks in turn perform forward propagation on the input image, producing outputs of scale 13×13×c and 26×26×c, denoted out_t and out_s respectively; the YOLOV3-Tiny error is calculated according to equations (1)–(3):

LOSS = α·T²·loss_soft + (1-α)·loss_hard (1)

loss_soft = β1·KL(softmax(out_t/T) ‖ softmax(out_s/T)) (2)

loss_hard = crossentropy(out_s, Target) (3)

where loss_soft is the soft-target error, loss_hard is the hard-target error, α is the weight coefficient balancing loss_soft and loss_hard, and T is the distillation temperature; Target denotes the original annotation of the data set, i.e., the hard target; softmax() and crossentropy() denote the softmax function value and the cross-entropy value, respectively; the coefficient β1 is introduced to balance the orders of magnitude of loss_soft and loss_hard; in equation (2), the position, confidence and class predictions of the student and teacher networks are softened and their relative entropy is taken as the soft-target error.

6. The cultural relic authenticity identification method based on knowledge distillation according to claim 5, characterized in that step 5 specifically comprises: after the cultural relic is recovered, collecting a plurality of images of the fingerprint area again as a test set.

7. The cultural relic authenticity identification method based on knowledge distillation according to claim 6, characterized in that step 6 specifically comprises: performing inference with the trained YOLOV3-Tiny on the test set obtained in step 5 to obtain confidence values of the fingerprint regions; computing the average confidence of each fingerprint region, and considering a region unchanged if its average is greater than the set threshold; if none of the fingerprint regions has changed, the cultural relic is judged to be genuine.
CN201910526264.3A 2019-06-18 2019-06-18 Cultural relic authenticity identification method based on knowledge distillation Expired - Fee Related CN110232445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910526264.3A CN110232445B (en) 2019-06-18 2019-06-18 Cultural relic authenticity identification method based on knowledge distillation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910526264.3A CN110232445B (en) 2019-06-18 2019-06-18 Cultural relic authenticity identification method based on knowledge distillation

Publications (2)

Publication Number Publication Date
CN110232445A (en) 2019-09-13
CN110232445B (en) 2021-03-26

Family

ID=67859621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910526264.3A Expired - Fee Related CN110232445B (en) 2019-06-18 2019-06-18 Cultural relic authenticity identification method based on knowledge distillation

Country Status (1)

Country Link
CN (1) CN110232445B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200764B (en) * 2020-09-02 2022-05-03 重庆邮电大学 A method for detecting and locating hot spots in photovoltaic power plants based on thermal infrared images
CN112001364A (en) * 2020-09-22 2020-11-27 上海商汤临港智能科技有限公司 Image recognition method and device, electronic device and storage medium
CN112348167B (en) * 2020-10-20 2022-10-11 华东交通大学 A kind of ore sorting method and computer readable storage medium based on knowledge distillation
CN112308130B (en) * 2020-10-29 2021-10-15 成都千嘉科技有限公司 Deployment method of deep learning network of Internet of things
CN113158969A (en) * 2021-05-10 2021-07-23 上海畅选科技合伙企业(有限合伙) Apple appearance defect identification system and method
CN115705688A (en) * 2021-08-10 2023-02-17 万维数码智能有限公司 Ancient and modern artwork identification method and system based on artificial intelligence
CN117114053B (en) * 2023-08-24 2024-06-21 之江实验室 Convolutional neural network model compression method and device based on structure search and knowledge distillation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108287833A (en) * 2017-01-09 2018-07-17 北京艺鉴通科技有限公司 An image-based image retrieval method for artwork identification
CN109003098A (en) * 2018-05-24 2018-12-14 孝昌天空电子商务有限公司 Agricultural-product supply-chain traceability system based on Internet of Things and block chain
CN109523282A (en) * 2018-12-02 2019-03-26 程昔恩 A method of constructing believable article Internet of Things

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930251B (en) * 2012-10-26 2016-09-21 北京炎黄拍卖有限公司 Bidimensional collectibles data acquisition and the apparatus and method of examination
KR20170035362A (en) * 2015-08-31 2017-03-31 (주)늘푸른광고산업 Cultural properties guide system using wireless terminal and guide plate for cultural properties guide system using wireless terminal
US10402701B2 (en) * 2017-03-17 2019-09-03 Nec Corporation Face recognition system for face recognition in unlabeled videos with domain adversarial learning and knowledge distillation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108287833A (en) * 2017-01-09 2018-07-17 北京艺鉴通科技有限公司 An image-based image retrieval method for artwork identification
CN109003098A (en) * 2018-05-24 2018-12-14 孝昌天空电子商务有限公司 Agricultural-product supply-chain traceability system based on Internet of Things and block chain
CN109523282A (en) * 2018-12-02 2019-03-26 程昔恩 A method of constructing believable article Internet of Things

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Maria De-Arteaga et al., "Machine Learning for the Developing World", ACM Transactions on Management Information Systems, 2018-08-31, pp. 1-14 *
Wu Xudong et al., "Application of intelligent algorithms in the authentication of ancient ceramic cultural relics" (智能算法在古陶瓷文物鉴定中的应用), 《技术创新》, 2017-12-25, pp. 49-50 *

Also Published As

Publication number Publication date
CN110232445A (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN110232445B (en) Cultural relic authenticity identification method based on knowledge distillation
CN108960135B (en) Dense ship target accurate detection method based on high-resolution remote sensing image
CN113792667A (en) Method and device for automatic classification of building properties in villages and towns based on 3D remote sensing images
CN109493346A (en) It is a kind of based on the gastric cancer pathology sectioning image dividing method more lost and device
CN107392252A (en) Computer deep learning characteristics of image and the method for quantifying perceptibility
CN106096654A (en) A kind of cell atypia automatic grading method tactful based on degree of depth study and combination
CN110020650B (en) A method and device for recognizing tilted license plates based on a deep learning recognition model
CN113222071A (en) Rock classification method based on rock slice microscopic image deep learning
CN104951803A (en) Soft sensor method for aviation fuel dry point in atmospheric distillation column based on dynamic moving window least squares support vector machine
CN114187530B (en) Remote sensing image change detection method based on neural network structure search
CN113627522B (en) Image classification method, device, equipment and storage medium based on relational network
CN112418632A (en) A method and system for identifying key areas of ecological restoration
CN119027777A (en) Target detection model, method and system for embedded part detection
CN109815959A (en) A kind of winter wheat yield prediction method and device
CN110211109B (en) Image change detection method based on deep neural network structure optimization
CN116310783A (en) Non-cultivated habitat vegetation feature extraction method and system
CN117874346B (en) Recommendation model construction method and system for Non-IID image data
CN118898598A (en) Road damage detection method based on neural network for unbalanced samples
CN119048420A (en) Metal surface defect detection network model light-weight method
CN110852475B (en) Extreme gradient lifting algorithm-based vegetation index prediction method, system and equipment
CN116152503B (en) Street view-oriented online extraction method and system of urban sky visual domain
CN111325384A (en) NDVI prediction method combining statistical characteristics and convolutional neural network model
CN112464705A (en) Method and system for detecting pine wood nematode disease tree based on YOLOv3-CIoU
CN111860178A (en) A small sample remote sensing target detection method and system based on weight dictionary learning
CN117292756A (en) Virus property prediction model training method and virus property prediction method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20210326)