CN105894012A - Object identification method based on cascade micro neural network - Google Patents
- Publication number: CN105894012A (application CN201610185644.1A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention relates to an object recognition method based on a cascaded micro neural network. The steps are as follows: compute the number of cascaded convolutional layers in a deep neural network from the size of the input sample images; construct a deep cascaded micro neural network; set the network's training parameters and train it with stochastic gradient descent; classify with a softmax classifier and compute the classification error with the forward propagation algorithm; update the weights of the trainable parameters with the backpropagation operation; the result is a deep cascaded micro neural network that can efficiently recognize objects of the corresponding categories. The invention improves the recognition performance of an object recognition system without increasing its parameters (computational complexity). The method is also simple, converges easily, and does not increase training time.
Description
Technical field

The invention relates to fast and efficient object recognition methods in fields such as human-computer interaction and computer vision, and in particular to methods that perform object recognition with deep neural networks.
Background art
Object recognition is a crucial research area in computer vision, covering license plate recognition, road sign recognition, biological species recognition, and multi-class object recognition in natural scenes. It is widely applicable to security surveillance, intelligent transportation, human-computer interaction, and other fields. For example, the rapidly developing field of driverless vehicles needs object recognition technology to identify objects on the road, ensuring that the vehicles drive smoothly and safely.
Deep neural network technology has made breakthrough progress in object recognition. In 2012, Alex Krizhevsky et al. [1] designed a 7-layer neural network with 60 million parameters that recognizes 1000 different object classes with good performance. In 2013, Network in Network (NIN) [2] used multi-layer perceptrons (MLPs) to strengthen the coupling between a network's feature channels, enabling it to learn more robust features and improving recognition performance. In 2014, the VGG networks [3] extended network depth to 19 layers, greatly improving the performance of object recognition systems. Also in 2014, GoogLeNet [4] used multi-scale convolution kernels to build a 22-layer deep network and won the object recognition task of the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC2014). In 2015, ResNet [5] built networks up to 152 layers deep and won the object recognition task of ILSVRC2015.
These studies show that increasing the depth of a neural network improves the performance of an object recognition system. However, greater depth brings two problems. First, it greatly increases the number of parameters (the amount of computation), which degrades the network's real-time performance and prevents effective use in real-world environments. Second, more parameters make the network harder to converge and harder to train, consuming considerable effort during training.
Existing neural networks improve recognition performance by adding depth; the root cause is the limited expressive power of their basic operation unit, the filter. Given an input image (or a video frame), a neural network must extract features, and feature extraction treats the local image block as the basic processing unit. In existing networks the filter has the same size as the local image block, and a simple dot product yields a single feature value. This feature transformation is very simple and has limited expressive power.
References:
[1] A. Krizhevsky, I. Sutskever, and G. Hinton, ImageNet classification with deep convolutional neural networks [C]. In Proceedings of Advances in Neural Information Processing Systems, 2012.

[2] M. Lin, Q. Chen, and S. Yan, Network in network [C]. In Proceedings of the International Conference on Learning Representations, 2014.

[3] K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition [C]. In Proceedings of the International Conference on Learning Representations, 2015.

[4] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, Going deeper with convolutions [C]. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.

[5] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, arXiv:1512.03385, 2015.
Summary of the invention
The object of the present invention is to overcome the above shortcomings of the prior art and to provide a method that improves the recognition performance of an object recognition system without increasing its computational complexity. The technical scheme of the present invention is as follows:
An object recognition method based on a cascaded micro neural network, the method performing object recognition with a deep neural network whose basic operation unit is composed of cascaded sub-block filters f_l. The i-th sub-block filter w_{i,k} in the cascaded sub-block filter f_l is composed of two basic filters, a spatial filter s_i and a channel filter c_{i,k}, written w_{i,k} = (s_i, c_{i,k}), where k indicates the index of the output feature channel. The spatial filter s_i has size h_i × w_i and satisfies h_i < H and w_i < W, where h_i and w_i are the height and width of the spatial filter and H and W are the height and width of the input local image block; it extracts the spatial features of the sub-blocks of the input local image block. The channel filter c_{i,k} has size 1 × 1 and strengthens the coupling between the output channel features. The size of the i-th sub-block filter w_{i,k} is therefore written (h_i × w_i, 1 × 1). The cascaded sub-block filter f_l is a cascade of n sub-block filters, written f_l = [(h_1 × w_1, 1 × 1), (h_2 × w_2, 1 × 1), ..., (h_n × w_n, 1 × 1)], where l denotes the layer of the deep neural network at which the cascaded sub-block filter sits. The cascaded sub-block filter f_l and the local image block of size H × W satisfy the constraints h_1 + h_2 + ... + h_n − (n − 1) = H and w_1 + w_2 + ... + w_n − (n − 1) = W, so that the receptive field of the cascade covers the whole block. Convolving the cascaded sub-block filter f_l with the input channel features yields a cascaded convolutional layer, denoted C_l;
The steps of the object recognition method are as follows:
Step 1: From the size H_I and W_I of the input sample images, where H_I is the height and W_I the width, compute the number L of cascaded convolutional layers in the deep neural network, where L is the smallest integer value satisfying 2^L × 5 ≥ H_I and 2^L × 5 ≥ W_I;
Step 2: Construct the cascaded filter f_l according to the size H × W of the local image block processed by each cascaded convolutional layer C_l, apply a 0.5× downsampling operation to each resulting cascaded convolutional layer C_l, and connect all L cascaded convolutional layers C_l to build a deep cascaded micro neural network;
Step 3: Set the training parameters of the deep cascaded micro neural network and train it with stochastic gradient descent;
Step 4: Classify with a softmax classifier and compute the classification error with the forward propagation algorithm;
Step 5: Update the weights of the trainable parameters of the neural network with the backpropagation operation;
Step 6: Repeat steps 4 and 5 until the validation error no longer changes. Training then ends, yielding a deep cascaded micro neural network that can efficiently recognize objects of the corresponding categories.
With the method of the invention, cascaded micro neural network operation units replace the traditional convolution operation, improving the recognition performance of an object recognition system without increasing its parameters (computational complexity). The method is also simple, converges easily, and does not increase training time.
Brief description of the drawings
Fig. 1 is a block diagram of the proposed method.
Detailed description
The present invention is described below with reference to the accompanying drawing and an embodiment.
The cascaded micro neural network proposed by the invention differs from existing neural networks mainly in that it replaces traditional filters with cascaded sub-block filters; that is, the basic operation unit of the proposed network is a cascaded sub-block filter. For convenience of the following description, some concepts and variables are briefly explained first. The i-th sub-block filter w_{i,k} in a cascaded sub-block filter is composed of two basic filters, a spatial filter s_i and a channel filter c_{i,k}, and can be written w_{i,k} = (s_i, c_{i,k}), where k indicates the index of the output feature channel. The spatial filter s_i has size h_i × w_i and satisfies h_i < H and w_i < W; h_i and w_i are the height and width of the filter, and H and W are the height and width of the input local image block. Its role is to extract the spatial features of the sub-blocks of the input local image block. The channel filter c_{i,k} has size 1 × 1; its role is to strengthen the coupling between the output channel features. The size of the sub-block filter w_{i,k} can thus be written (h_i × w_i, 1 × 1). A cascaded sub-block filter is a cascade of n sub-block filters and can be written f_l = [(h_1 × w_1, 1 × 1), (h_2 × w_2, 1 × 1), ..., (h_n × w_n, 1 × 1)], where l denotes the layer of the deep neural network at which the cascaded sub-block filter sits. The cascaded sub-block filter f_l and the local image block of size H × W satisfy the constraints h_1 + h_2 + ... + h_n − (n − 1) = H and w_1 + w_2 + ... + w_n − (n − 1) = W, so that the receptive field of the cascade covers the whole block. Convolving f_l with the input channel features yields a cascaded convolutional layer, denoted C_l.
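The sub-block filter definition above can be sketched numerically. This is a minimal illustration, not the patent's implementation: the function names and the single-channel parameter accounting are assumptions. It shows why a cascade of small spatial filters, each paired with a 1 × 1 channel filter, covers an H × W block with fewer weights than one full H × W filter:

```python
def cascade_receptive_field(sizes):
    """Effective receptive field of stacked stride-1 spatial filters.

    Each additional filter of height h grows the field by h - 1; this is
    the standard stacked-convolution relation (e.g. two 3x3 -> 5x5).
    """
    field = 1
    for h in sizes:
        field += h - 1
    return field


def cascade_params(sizes, channels):
    """Weight count of the cascade: each stage is an h x h spatial filter
    plus a 1x1 channel filter per channel (bias terms omitted)."""
    return sum(h * h * channels + 1 * 1 * channels for h in sizes)
```

Two cascaded 3 × 3 stages reach a 5 × 5 receptive field with 20 weights per channel, versus 25 for a single 5 × 5 filter, consistent with the claim that expressive power grows without adding parameters.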
The invention replaces traditional filters with cascaded sub-block filters, which extract more robust local features without increasing the parameters of the system. The specific steps are as follows:
Step 1: From the size H_I and W_I of the input sample images (H_I is the height, W_I the width), compute the number L of cascaded convolutional layers in the deep neural network, where L is the smallest integer value satisfying 2^L × 5 ≥ H_I and 2^L × 5 ≥ W_I.
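Step 1 can be sketched directly; this is one minimal reading of the rule (the function name is an assumption):

```python
import math


def num_cascaded_layers(h_img, w_img):
    """Smallest L with 2**L * 5 >= H_I and 2**L * 5 >= W_I.

    Equivalently, after L halvings (one 0.5x downsampling per cascaded
    layer) the feature map is at most 5x5.
    """
    side = max(h_img, w_img)
    return max(0, math.ceil(math.log2(side / 5)))


# e.g. a 32x32 input gives L = 3, since 2**3 * 5 = 40 >= 32 but 2**2 * 5 = 20 < 32
```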
Step 2: Construct the cascaded filter f_l according to the size H × W of the local image block processed by each cascaded convolutional layer C_l, and apply a 0.5× downsampling operation to each resulting cascaded convolutional layer C_l. Connect all L cascaded convolutional layers C_l to build a deep cascaded micro neural network.
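A structural sketch of step 2 follows; the layer naming and the dictionary layout are assumptions. Each cascaded convolutional layer C_l processes the current feature map, which the 0.5× downsampling then halves before the next layer:

```python
def build_cascade_plan(h_img, w_img, num_layers):
    """List the input size seen by each cascaded convolutional layer C_l,
    assuming every layer is followed by 0.5x downsampling."""
    plan, h, w = [], h_img, w_img
    for l in range(1, num_layers + 1):
        plan.append({"layer": f"C{l}", "in_size": (h, w)})
        h = max(1, h // 2)  # 0.5x downsampling
        w = max(1, w // 2)
    return plan
```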
Step 3: Set the training parameters of the deep cascaded micro neural network. Training uses stochastic gradient descent with a batch size of m = 100, a momentum of 0.9, and a weight decay factor of 0.0005. The weights of all trainable parameters are initialized from a Gaussian distribution with mean 0 and standard deviation 0.01, and the initial learning rate is 0.01.
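The update of step 3 can be sketched in plain Python. This uses a common SGD-with-momentum formulation; the patent does not spell out the exact variant, so the formulation and function names are assumptions:

```python
import random


def init_weights(n, std=0.01, seed=0):
    """Gaussian initialization, mean 0 and standard deviation 0.01 (step 3)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, std) for _ in range(n)]


def sgd_step(w, grad, vel, lr=0.01, momentum=0.9, weight_decay=0.0005):
    """One SGD step with momentum 0.9 and weight decay 0.0005 (step 3)."""
    vel = [momentum * v - lr * (g + weight_decay * wi)
           for v, g, wi in zip(vel, grad, w)]
    w = [wi + v for wi, v in zip(w, vel)]
    return w, vel
```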
Step 4: Classify with a softmax classifier and compute the classification error with the forward propagation algorithm.
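The classifier and error of step 4 can be sketched as the standard softmax / negative-log-likelihood pair (the helper names are assumptions):

```python
import math


def softmax(z):
    """Softmax over a list of logits, shifted by the max for stability."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]


def cross_entropy(probs, label):
    """Classification error: negative log-likelihood of the true class."""
    return -math.log(probs[label])
```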
Step 5: Update the weights of the trainable parameters of the neural network with the backpropagation operation.
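At the softmax output, the error signal that backpropagation pushes through the network in step 5 has the well-known closed form p − y (predicted probabilities minus the one-hot target); a sketch, with the function name assumed:

```python
def softmax_xent_grad(probs, label):
    """Gradient of the cross-entropy loss w.r.t. the softmax logits:
    predicted probability minus one-hot target (p - y)."""
    return [p - (1.0 if i == label else 0.0) for i, p in enumerate(probs)]
```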
Step 6: Repeat steps 4 and 5 until the validation error no longer changes. Training then ends, yielding a deep cascaded micro neural network that can efficiently recognize objects of the corresponding categories.
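The stopping rule of step 6 can be sketched as an early-stopping loop. The patent only says training stops when the validation error stops changing, so the patience window below is an assumption:

```python
def train_until_converged(train_step, val_error, max_epochs=100, patience=3):
    """Repeat the train/evaluate cycle (steps 4-5) until the validation
    error is unchanged for `patience` consecutive epochs."""
    history = []
    for _ in range(max_epochs):
        train_step()
        history.append(val_error())
        if len(history) > patience and len(set(history[-patience - 1:])) == 1:
            break
    return history
```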
Claims (1)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610185644.1A CN105894012B (en) | 2016-03-29 | 2016-03-29 | Based on the object identification method for cascading micro- neural network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN105894012A true CN105894012A (en) | 2016-08-24 |
| CN105894012B CN105894012B (en) | 2019-05-14 |
Family
ID=57013871
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610185644.1A Expired - Fee Related CN105894012B (en) | 2016-03-29 | 2016-03-29 | Based on the object identification method for cascading micro- neural network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105894012B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5119438A (en) * | 1989-03-13 | 1992-06-02 | Sharp Kabushiki Kaisha | Recognizing apparatus |
| CN104781827A (en) * | 2012-12-18 | 2015-07-15 | 英特尔公司 | Hardware convolution pre-filter to accelerate object detection |
| CN105184303A (en) * | 2015-04-23 | 2015-12-23 | 南京邮电大学 | Image marking method based on multi-mode deep learning |
- 2016-03-29: application CN201610185644.1A filed; granted as CN105894012B; status: not active (expired due to non-payment of fees)
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5119438A (en) * | 1989-03-13 | 1992-06-02 | Sharp Kabushiki Kaisha | Recognizing apparatus |
| CN104781827A (en) * | 2012-12-18 | 2015-07-15 | 英特尔公司 | Hardware convolution pre-filter to accelerate object detection |
| CN105184303A (en) * | 2015-04-23 | 2015-12-23 | 南京邮电大学 | Image marking method based on multi-mode deep learning |
Non-Patent Citations (4)
| Title |
|---|
| DANIEL MATURANA,ET AL: "VoxNet: A 3D Convolutional Neural Network for Real-Time Object", 《2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS》 * |
| MING LIANG,ET AL: "Recurrent Convolutional Neural Network for Object Recognition", 《2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN》 * |
| LU Hongtao et al.: "A survey of the application of deep convolutional neural networks in computer vision", Journal of Data Acquisition and Processing * |
| CHEN Xianchang: "Research on deep learning algorithms and applications based on convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108805279A (en) * | 2017-04-26 | 2018-11-13 | 北京邮电大学 | A method of quickly generating video using photo |
| CN108960069A (en) * | 2018-06-05 | 2018-12-07 | 天津大学 | A method of the enhancing context for single phase object detector |
| CN109344958A (en) * | 2018-08-16 | 2019-02-15 | 北京师范大学 | Object recognition method and recognition system based on feedback regulation |
| CN109344958B (en) * | 2018-08-16 | 2022-04-29 | 北京师范大学 | Object recognition method and recognition system based on feedback regulation |
| CN110378191A (en) * | 2019-04-25 | 2019-10-25 | 东南大学 | Pedestrian and vehicle classification method based on millimeter wave sensor |
| CN110378191B (en) * | 2019-04-25 | 2023-09-22 | 东南大学 | Pedestrian and vehicle classification method based on millimeter wave sensors |
| CN114511845A (en) * | 2021-12-08 | 2022-05-17 | 成都臻识科技发展有限公司 | License plate character classification and recognition method and medium based on deep learning |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105894012B (en) | 2019-05-14 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 / PB01 | Publication | |
| | C10 / SE01 | Entry into substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190514 |