CN116306886A - Channel Attention-Based Convolutional Neural Network Pruning Approach for Text Detection in Natural Scenes - Google Patents
- Publication number: CN116306886A
- Application number: CN202310249533.2A
- Authority: CN (China)
- Prior art keywords: channel, importance, attention, text detection, score
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

- G06N3/082: Neural network learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06V10/82: Image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V20/63: Scene text, e.g. street names
- Y02T10/40: Engine management systems (climate change mitigation technologies related to transportation)
Abstract
Description
Technical Field
The present invention relates to deep learning technology for computer vision, and in particular to a channel-attention-based convolutional neural network (CNN) pruning technique for text detection in natural scenes.
Background
Convolutional neural networks are algorithm models widely used in computer vision, and their emergence has markedly improved the accuracy of text detection in natural scenes. Large network architectures usually deliver significant accuracy gains, but they also bring higher memory usage and compute consumption. This makes CNNs difficult to deploy in resource-constrained environments such as mobile phones and smart glasses. Applying model pruning to reduce a CNN's parameter count and computational load is therefore important for deploying CNN algorithms on embedded devices.
Model pruning algorithms typically pre-train a parameter-dense model to achieve high accuracy, prune unimportant weights to eliminate redundancy, and finally fine-tune the pruned model to recover performance. The core of a pruning algorithm is designing an effective evaluation scheme and metrics to rate the importance of network parameters and thereby guide the removal of redundant ones. Common direct evaluation methods include weight-magnitude criteria such as the L1 norm, KL-divergence-based evaluation, simulated annealing, and importance sampling. A common indirect method approximates the loss function with a Taylor series in the weights: if setting a weight to zero changes the loss value only slightly, that weight is deemed unimportant. Another effective family of methods evaluates the output feature maps corresponding to each channel instead of analyzing the convolution channel itself, for example using the average matrix rank of the output feature maps as the pruning criterion.
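As a brief illustration of the Taylor-expansion criterion mentioned above (a sketch of the general idea, not the method of this patent), a first-order approximation scores each weight by |w · dL/dw|; the weight and gradient values below are invented for demonstration:

```python
import numpy as np

# First-order Taylor importance: setting weight w to zero changes the loss
# by roughly |w * dL/dw|, so a small score marks a prunable weight.
def taylor_importance(weights: np.ndarray, grads: np.ndarray) -> np.ndarray:
    return np.abs(weights * grads)

weights = np.array([0.8, -0.01, 0.5, 0.002])
grads = np.array([0.1, 2.0, 0.001, 0.05])   # hypothetical dL/dw values
scores = taylor_importance(weights, grads)
prunable = scores < 1e-3                    # tiny score -> removal candidate
```

Note that a large weight with a near-zero gradient (the third entry) can still score as prunable, which is exactly how this criterion differs from pure magnitude-based ones.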
Early pruning methods remove individual parameters from the model to obtain a sparse structure and thus compress the parameters. Such unstructured pruning methods usually just set unimportant parameters to zero, so the parameter count does not actually decrease; obtaining real inference speedups requires special hardware support, and acceleration is hard to achieve on existing devices. Some structured pruning methods hand-design the importance criterion; the criterion is usually a single metric with limited expressive power, and an effective criterion may need to be redesigned for each network and task. In addition, the three-step pre-train, prune, and fine-tune pipeline generally suffers from long training time, and detection accuracy drops sharply after the model is pruned, so further time must be spent fine-tuning the pruned model to restore precision.
Summary of the Invention
The problem to be solved by the present invention is to provide a method suited to natural scene text detection tasks that greatly reduces the time required for pruning and fine-tuning and accelerates the construction of a convolutional neural network for natural scene text detection.
The technical solution adopted by the present invention to solve the above problem is a channel-attention-based convolutional neural network pruning method for natural scene text detection, comprising the following steps:
Step 1: pre-train a parameter-dense convolutional neural network for natural scene text detection;
Step 2: add a channel attention scoring module to the convolutional neural network;
Step 3: optimize the parameters of the channel attention scoring module on the natural scene text detection training set until a predetermined number of iterations is reached;
Step 4: obtain the importance score of each channel in the channel attention scoring module on the natural scene text detection test set;
Step 5: collect the channel importance scores over all samples in the test set and compute the mean importance score per channel;
Step 6: determine a threshold;
Step 7: measure the importance of each convolution kernel in the network by its mean importance score, and prune the convolution kernels corresponding to channels whose mean importance score is below the threshold;
Step 8: use the pruned convolutional neural network as the natural scene text detection model;
Step 9: fine-tune the natural scene text detection model on the natural scene text detection training set.
Specifically, the channel attention scoring module comprises a global average pooling layer, a global max pooling layer, convolution layers, an activation function module, and a scaling module. Its working steps are:
1) the global average pooling layer receives the input image feature map F and outputs the per-channel feature mean Avg to the convolution layers; the global max pooling layer receives the input image feature map F and outputs the per-channel feature maximum Max to the convolution layers;
2) the convolution layers sum Avg and Max for each channel and re-encode the result into an attention score S, which is passed per channel to the activation function;
3) the activation function remaps each channel's attention score S to obtain the per-channel importance score S';
4) the scaling module weights the image feature map F by the per-channel importance scores S', and the weighted result F' = F + F × S' is the output of the channel attention scoring module.
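The four steps above can be sketched as follows. This is a minimal NumPy illustration rather than the patented implementation: the two 1×1 convolutions acting on a 1×1×C vector reduce to per-channel linear maps, and the weight shapes, hidden width, and 0.1 scaling are assumptions chosen for the demo:

```python
import numpy as np

def attention_scores(F, W1, W2, alpha=0.5, beta=-0.5):
    """Channel attention scoring for a feature map F of shape (H, W, C)."""
    avg = F.mean(axis=(0, 1))                 # global average pooling -> (C,)
    mx = F.max(axis=(0, 1))                   # global max pooling     -> (C,)
    hidden = np.maximum(0, W1 @ (avg + mx))   # first 1x1 conv + ReLU
    s = W2 @ hidden                           # second 1x1 conv -> raw score S
    return alpha * np.tanh(s) + beta          # remap into (-alpha+beta, alpha+beta)

rng = np.random.default_rng(0)
H, W, C, hidden_dim = 4, 4, 8, 4
F = rng.standard_normal((H, W, C))
W1 = 0.1 * rng.standard_normal((hidden_dim, C))  # small init keeps scores in range
W2 = 0.1 * rng.standard_normal((C, hidden_dim))

s_prime = attention_scores(F, W1, W2)   # per-channel importance scores in (-1, 0)
F_out = F + F * s_prime                 # scaling module: F' = F + F x S'
```

A score near -1 drives a channel's output toward zero, which is how the module expresses that the channel can be pruned.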
The present invention uses structured pruning, removing unimportant convolution kernels in their entirety, so parameter compression and computational acceleration can be realized on any device. To preserve the accuracy of text detection, a channel attention scoring module is designed to evaluate the importance of each convolution channel in the network, and during training only channels whose importance score exceeds a given threshold participate in the computation. Because the scoring module suppresses the magnitude of the output features of channels that are about to be pruned, the redundant parameters are effectively removed already during training, and only the retained parameters participate in the computation of the next layer, reducing the computational load.
The beneficial effects of the present invention are: structured pruning is performed with the convolution kernel as the smallest unit, so the pruned model achieves memory compression and inference acceleration on existing embedded devices; the channel attention mechanism lets the network learn which output features are unimportant, avoiding hand-designed evaluation criteria; the channel attention scoring module is plug-and-play and applicable to pruning any convolutional neural network; and unimportant parameters are already suppressed during training, greatly reducing the time needed to fine-tune the pruned network.
Brief Description of the Drawings
Figure 1: structure of the channel attention scoring module.
Figure 2: network structure of the DBNet pruning experiment.
Detailed Description
First, the channel attention scoring module used in the embodiment is described. Its structure, shown in Figure 1, comprises a global average pooling layer (Avg Pooling), a global max pooling layer (Max Pooling), convolution layers, a tanh activation function module, and a scaling module (Scale). The convolution block consists of two 1×1 convolution layers (Conv) connected by a ReLU activation.
The workflow of the channel attention scoring module is as follows:
Input the image feature map F of size H×W×C to the global average pooling layer (Avg Pooling) to obtain the per-channel feature mean vector Avg of size 1×1×C;
Input the image feature map F of size H×W×C to the global max pooling layer (Max Pooling) to obtain the per-channel feature maximum vector Max of size 1×1×C;
Re-encode (Avg + Max) with the two 1×1 convolution layers to obtain the attention score S;
The tanh activation module remaps the attention score S using the tanh function and two hyperparameters α and β, adjusting the scoring range to (-α+β, α+β) and yielding the score S' = α×tanh(S) + β;
The scaling module Scale scales each channel of the input feature map by S', giving the module's output feature map F' = F + F×S'.
The hyperparameters α and β above are usually set to 0.5 and -0.5, which restricts the scoring interval to (-1, 0). If a convolution kernel's score is -1, the channel's output feature is F' = F - F = 0, so the importance score directly expresses the pruning intent of the channel attention scoring module.
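A quick numeric check of this remapping, using the stated α = 0.5 and β = -0.5 (the raw scores and feature value below are made up):

```python
import numpy as np

alpha, beta = 0.5, -0.5
remap = lambda s: alpha * np.tanh(s) + beta

# The remapped score always lies strictly inside (-1, 0) ...
raw = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
mapped = remap(raw)

# ... and as the raw score S goes to -infinity, S' approaches -1,
# which zeroes the channel's output via F' = F + F * S':
F_channel = 3.7                 # arbitrary feature value
s_prime = remap(-1e6)           # saturates to -1
zeroed = F_channel + F_channel * s_prime
```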
The embodiment prunes the DBNet algorithm with the PyTorch framework on a TITAN X server. DBNet is a text box detection algorithm whose structure is shown in the dashed box in Figure 2. The network extracts feature maps at four different scales from the backbone and fuses them into a shared feature map, feature; the prediction head uses the parameters in feature to predict the positions of text boxes in the input image. In the embodiment, a channel attention scoring module (CA-module) is added after feature. The module outputs a pruned feature map, feature', and during training the prediction head uses feature' to localize text boxes. The specific steps of the DBNet pruning experiment are as follows:
Step 1: pre-train a high-accuracy DBNet network model;
Step 2: add the channel attention scoring module after feature in the model;
Step 3: freeze the original network parameters and optimize the parameters of the channel attention scoring module until the predetermined number of iterations is reached;
Step 4: obtain each channel's importance score over all test samples using the test set, and compute the mean importance score;
Step 5: sort the mean importance scores in ascending order, multiply the pruning rate by the number of channels, round down, and denote the result N; the N-th ranked importance score is the pruning threshold;
Step 6: taking the convolution kernel as the unit, prune all convolution kernels of channels whose mean score is below the threshold;
Step 7: copy the retained parameters to form the pruned DBNet model;
Step 8: fine-tune the pruned DBNet model to restore accuracy, yielding the final model used by the prediction head for text box detection.
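Steps 4 through 6 above can be sketched as follows; this is an illustrative NumPy reading of the threshold rule, and the per-channel score values are invented:

```python
import numpy as np

def pruning_threshold(mean_scores: np.ndarray, prune_rate: float) -> float:
    """Return the N-th smallest mean score, where N = floor(rate * channels)."""
    n = int(np.floor(prune_rate * mean_scores.size))
    return float(np.sort(mean_scores)[n - 1])   # N-th ranked score (1-based)

# Hypothetical per-channel mean importance scores in (-1, 0):
mean_scores = np.array([-0.9, -0.2, -0.7, -0.1, -0.8, -0.3])
thr = pruning_threshold(mean_scores, prune_rate=0.5)  # N = 3, threshold = -0.7
keep = mean_scores >= thr                             # channels below thr are pruned
```

With this rule, channels scoring strictly below the threshold lose all their convolution kernels, and only the kept channels are copied into the pruned model.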
Regarding network configuration and training details: the hyperparameter α of the channel attention scoring module is set to 0.5 and β to -0.5; the network is trained with stochastic gradient descent with a batch size of 16 and an initial learning rate of 0.001; an exponential decay schedule is used with a decay rate of 0.998 and a decay step of 500 iterations, for a total of 20K iterations; the pruning rate is 0.5.
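The stated learning-rate schedule can be sketched as below, assuming a staircase decay every 500 iterations, which is one common reading of "decay step":

```python
def learning_rate(iteration: int,
                  initial_lr: float = 0.001,
                  decay_rate: float = 0.998,
                  decay_steps: int = 500) -> float:
    """Staircase exponential decay: lr = lr0 * rate ** (iteration // steps)."""
    return initial_lr * decay_rate ** (iteration // decay_steps)

lr_start = learning_rate(0)       # 0.001
lr_end = learning_rate(20_000)    # after 40 decay steps, roughly 0.000923
```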
Claims (4)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310249533.2A CN116306886B (en) | 2023-03-15 | 2023-03-15 | Convolutional neural network pruning method based on channel attention for natural scene text detection |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116306886A true CN116306886A (en) | 2023-06-23 |
| CN116306886B (en) | 2025-12-23 |
Family
ID=86784676
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310249533.2A Active CN116306886B (en) | 2023-03-15 | 2023-03-15 | Convolutional neural network pruning method based on channel attention for natural scene text detection |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116306886B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112949840A (en) * | 2021-04-20 | 2021-06-11 | 中国人民解放军国防科技大学 | Channel attention guided convolutional neural network dynamic channel pruning method and device |
| CN113052211A (en) * | 2021-03-11 | 2021-06-29 | 天津大学 | Pruning method based on characteristic rank and channel importance |
| CN113065558A (en) * | 2021-04-21 | 2021-07-02 | 浙江工业大学 | Lightweight small target detection method combined with attention mechanism |
Non-Patent Citations (2)
| Title |
|---|
| JIANQIANG HU: "Neural network pruning based on channel attention mechanism", CONNECTION SCIENCE, vol. 34, no. 1, 24 August 2022 (2022-08-24), pages 2201 - 2218 * |
| 何乃宇: "自然场景下密集文本实时检测与识别算法研究", 中国优秀硕士学位论文全文数据库 信息科技辑, no. 4, 15 April 2024 (2024-04-15), pages 138 - 1546 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |