CN112308058B - Method for recognizing handwritten characters - Google Patents
- Publication number: CN112308058B (application CN202011151366.0A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
- G06F18/2148—Generating training patterns; bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
- G06F18/2411—Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06F18/2415—Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a method for recognizing handwritten characters, comprising: step 1) creating a standard image corresponding to each handwritten character; step 2) building a U-shaped fully convolutional neural network, initializing its parameters, and training it with handwritten character images as training data and the standard character images from step 1) as the desired output; step 3) building a classifier and training it on the standard character images from step 1) so that it can classify the standard images accurately; step 4) cascading the U-Net with the classifier to complete the recognition of handwritten characters. The handwritten character recognition method provided by the invention first converts a handwritten image into an easily recognizable standard printed image; the classifier then classifies the standard printed image, which effectively improves the accuracy of handwriting recognition.
Description
Technical field
The invention belongs to the field of computer vision and particularly relates to a method for recognizing handwritten characters, mainly used in optical character recognition.
Background art
Handwritten character recognition is a fundamental research topic in computer vision. Its main task is to recognize text captured from media such as paper, photographs, or touch screens. With the development of machine learning, most optical character recognition systems use artificial neural networks and achieve good recognition results. However, compared with standard printed characters, handwritten characters vary from person to person and exhibit many variations, so they are difficult for machines to recognize correctly; some are difficult even for humans.
To improve the recognition accuracy for handwritten characters, the present invention proposes a recognition method that first converts complex and variable handwritten images into easily recognizable standard printed images and then classifies the standard printed images with a simple classifier. The invention uses the U-Net architecture to convert handwritten images into standard printed images. U-Net, proposed by Ronneberger in 2015, is a U-shaped fully convolutional neural network containing multiple down-sampling and up-sampling operations and is commonly used for semantic segmentation of images. The classifier does not need to learn from a large number of handwritten images; it only needs to learn to classify the standard printed characters correctly. A simple classifier can therefore be chosen, which greatly reduces the classifier's model complexity and increases its recognition speed. Finally, by cascading U-Net with the classifier, efficient recognition of handwritten images is achieved.
Summary of the invention
The purpose of the present invention is to provide a method for recognizing handwritten characters, thereby solving the problem that current handwritten character recognition is insufficiently accurate.
To achieve the above goal, the following technical method is adopted:
Step 1): Create a standard printed image corresponding to each handwritten character. For the different handwritten images of the same character, create a standard printed version of that character as its standard image, then grayscale and binarize the standard image so that U-Net can learn from it;
Step 2): Build a U-shaped fully convolutional neural network, initialize the network parameters, and train U-Net using the standard character images produced in step 1) as the desired output, so that when different handwritten images of the same character are fed into U-Net, it outputs the standard image of that character;
Step 3): Build a simple classifier and train it using the single standard image of each character produced in step 1) as training data, so that it can classify the standard images accurately;
Step 4): Cascade U-Net with the classifier. The original image is fed into U-Net to obtain the processed standard image, which is then passed to the classifier to obtain the classification result.
The handwritten character recognition method provided by the present invention first converts the handwritten image into an easily recognizable standard printed image; the classifier then classifies the standard printed image, which effectively improves the accuracy of handwriting recognition.
Description of the drawings
Figure 1 is a schematic diagram of standard-image creation: for the different handwritten versions of the same character, a standard printed version is created as the standard image of that character;
Figure 2 is a schematic diagram of the structure of the U-shaped fully convolutional neural network;
Figure 3 is a schematic flow chart of handwritten character recognition according to the present invention.
Detailed description of embodiments
To enable those skilled in the art to better understand the technical solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are intended only to explain the present invention and do not limit it.
According to an embodiment of the present invention, the handwriting samples come from the MNIST data set.
Step 1: Referring to the schematic diagram in Figure 1, for each character class in the data set, create a standard printed image of that character, convert it to a single-channel grayscale image, binarize it, and finally resize it to match the image size used in the data set. These operations generate the standard image for each character class.
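The Step 1 preprocessing can be sketched in plain Python. The luma weights, threshold, and toy image below are illustrative assumptions; the patent does not specify exact values:

```python
# Illustrative sketch of Step 1 preprocessing: grayscale conversion
# followed by threshold binarization. Pixel values are 0-255.

def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to a
    single-channel image using the common ITU-R BT.601 luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Threshold a grayscale image to 0/1 values."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray_image]

# A 2x2 toy "image": white, black, mid-gray, light-gray pixels.
img = [[(255, 255, 255), (0, 0, 0)],
       [(100, 100, 100), (200, 200, 200)]]
binary = binarize(to_grayscale(img))
print(binary)  # [[1, 0], [0, 1]]
```

In practice the resizing step would use an image library; it is omitted here to keep the sketch dependency-free.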
Step 2.1: Build the U-shaped fully convolutional neural network. Referring to the schematic diagram in Figure 2, the network contains 13 convolutional layers, 2 down-sampling layers, 2 up-sampling layers, and two skip connections in total.
Except for the last convolutional layer, all convolutions use a kernel size of 3×3, a stride of 1, and a padding of 1, with the ReLU activation function, whose formula is as follows:
f(x) = max(0, x)
where x is the input value of a node and f(x) is its output after the activation function.
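A convolution with these hyperparameters (3×3 kernel, stride 1, zero padding 1) preserves the spatial size of its input. A minimal single-channel sketch with ReLU follows; the identity kernel is a hypothetical choice for illustration:

```python
# One U-Net convolution block in miniature: 3x3 kernel, stride 1,
# padding 1 (output size equals input size), followed by ReLU.

def relu(x):
    return max(0.0, x)

def conv3x3_relu(image, kernel):
    """2-D convolution (cross-correlation form) with stride 1, zero padding 1."""
    h, w = len(image), len(image[0])
    padded = [[0.0] * (w + 2)] + \
             [[0.0] + row + [0.0] for row in image] + \
             [[0.0] * (w + 2)]
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = sum(padded[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            out[i][j] = relu(s)
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # passes the input through
img = [[1.0, -2.0], [3.0, 4.0]]
print(conv3x3_relu(img, identity))  # [[1.0, 0.0], [3.0, 4.0]]
```

Note the negative input is clipped to 0.0 by ReLU while the spatial dimensions stay 2×2.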
Batch Normalization is applied between each convolutional layer and its activation function, computed as

y = γ · (x − μ) / √(σ² + ε) + β

where x is the convolution output, μ and σ² are the mean and variance of the batch, ε is a small constant added for numerical stability, and γ and β are two learnable parameters that control the scaling and shifting of the data, respectively.
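The normalization step can be sketched for a batch of scalar activations. The ε, γ, and β values are illustrative defaults; in the network, γ and β are learned per channel:

```python
# Batch normalization over a 1-D batch of activations:
# y = gamma * (x - mu) / sqrt(var + eps) + beta

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return [gamma * (x - mu) / (var + eps) ** 0.5 + beta for x in xs]

normed = batch_norm([1.0, 3.0])
print([round(v, 3) for v in normed])  # [-1.0, 1.0]
```

With γ = 1 and β = 0 the output has zero mean and (approximately) unit variance, regardless of the input scale.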
The down-sampling layers use max pooling with a 2×2 window and a stride of 2. The up-sampling layers enlarge the feature maps by bilinear interpolation with a scale factor of 2. The skip connections concatenate the feature maps of the earlier layers with those of the later layers.
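The 2×2, stride-2 max pooling halves each spatial dimension; a minimal sketch (the input values are arbitrary):

```python
# Max pooling with a 2x2 window and stride 2: each output cell is the
# maximum of a non-overlapping 2x2 block of the input.

def max_pool_2x2(image):
    h, w = len(image), len(image[0])
    return [[max(image[i][j], image[i][j + 1],
                 image[i + 1][j], image[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

img = [[1, 2, 5, 6],
       [3, 4, 7, 8],
       [0, 1, 2, 3],
       [1, 0, 3, 9]]
print(max_pool_2x2(img))  # [[4, 8], [1, 9]]
```

The 4×4 input becomes a 2×2 output; the bilinear up-sampling layers later reverse this size reduction.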
Step 2.2: Train U-Net. Initialize the network parameters, feed the handwritten image data into U-Net as input, and use the standard images produced in Step 1 as the desired output. The loss function is the binary cross-entropy:

L = −[y · log(ŷ) + (1 − y) · log(1 − ŷ)]

where ŷ is the probability that the model predicts the sample to be a positive example and y is the sample label, which is 1 if the sample is a positive example and 0 otherwise.
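The loss can be sketched as follows, averaged over pixels; the clamp on the predicted probability is an implementation detail added for numerical safety, not something stated in the patent:

```python
# Mean binary cross-entropy over a list of (label, prediction) pairs.
import math

def bce(y_true, y_pred, eps=1e-12):
    """y_true entries are 0 or 1; y_pred entries are predicted probabilities."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

print(round(bce([1, 0], [0.9, 0.1]), 4))  # 0.1054
```

Confident correct predictions drive the loss toward 0, while confident wrong ones are penalized heavily.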
Step 3.1: Build a simple classifier. This embodiment uses a support vector machine (SVM) as the classifier, with a radial basis function (RBF) kernel, a regularization coefficient of 1, and a kernel hyperparameter of 0.5.
Step 3.2: Train the SVM classifier. Only the single standard image of each character is used as the SVM's training input, so that the SVM can accurately classify the standard image of each character.
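The RBF kernel used by the SVM computes K(x, x′) = exp(−γ‖x − x′‖²). Whether the "hyperparameter of 0.5" in the text refers to this γ is an assumption; the sketch below uses it that way:

```python
# The RBF (Gaussian) kernel that the SVM uses to compare two
# flattened images x1 and x2.
import math

def rbf_kernel(x1, x2, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))            # 1.0 (identical inputs)
print(round(rbf_kernel([0.0, 0.0], [1.0, 1.0]), 4))  # exp(-1.0) = 0.3679
```

The kernel value is 1 for identical inputs and decays toward 0 as the inputs move apart, which is what lets the SVM separate the small, clean set of standard images.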
Step 4: Cascade U-Net with the classifier; see the schematic diagram in Figure 3. A handwritten image is fed into U-Net, which processes it into a standard image; the standard image is then fed into the classifier to obtain the classification result.
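The Step 4 cascade can be sketched end to end with stub stages standing in for the trained models. The standard images, the labels, and both stub functions below are hypothetical; in the real system the first stage is the trained U-Net and the second the trained SVM:

```python
# End-to-end cascade sketch: handwritten image -> "U-Net" -> classifier.

# Hypothetical flattened standard images for two character classes.
STANDARD = {"0": [1, 1, 1, 1], "1": [0, 1, 0, 1]}

def unet_normalize(handwritten):
    """Stub for the trained U-Net: snap the handwritten image to the
    nearest standard image (by squared distance)."""
    return min(STANDARD.values(),
               key=lambda s: sum((a - b) ** 2 for a, b in zip(handwritten, s)))

def classify(standard_image):
    """Stub classifier: exact lookup, since the U-Net stage already
    outputs a standard image."""
    for label, img in STANDARD.items():
        if img == standard_image:
            return label
    return None

def recognize(handwritten):
    return classify(unet_normalize(handwritten))

print(recognize([0.9, 1.0, 0.8, 1.0]))  # "0"
```

The point of the cascade is visible even in the stub: the classifier only ever sees standard images, so it can stay simple.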
Claims (1)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011151366.0A CN112308058B (en) | 2020-10-25 | 2020-10-25 | Method for recognizing handwritten characters |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112308058A CN112308058A (en) | 2021-02-02 |
| CN112308058B true CN112308058B (en) | 2023-10-24 |
Family
ID=74330425
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011151366.0A Active CN112308058B (en) | 2020-10-25 | 2020-10-25 | Method for recognizing handwritten characters |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112308058B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116386062A (en) * | 2023-04-06 | 2023-07-04 | 北京百度网讯科技有限公司 | Formula recognition method, device, electronic device, and storage medium |
| CN116758563B (en) * | 2023-06-06 | 2025-11-25 | 河海大学 | A Handwritten Character Recognition Method and System Based on Template-Centered Deep Discriminant Analysis |
Citations (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20050035610A (en) * | 2003-10-13 | 2005-04-19 | 한국전자통신연구원 | Apparatus and method for identification of address type and address identification system using it |
| JP2014049118A (en) * | 2012-08-31 | 2014-03-17 | Fujitsu Ltd | Convolution neural network classifier system, training method for the same, classifying method, and usage |
| CN106446954A (en) * | 2016-09-29 | 2017-02-22 | 南京维睛视空信息科技有限公司 | Character recognition method based on depth learning |
| CN106650721A (en) * | 2016-12-28 | 2017-05-10 | 吴晓军 | Industrial character identification method based on convolution neural network |
| CN107704859A (en) * | 2017-11-01 | 2018-02-16 | 哈尔滨工业大学深圳研究生院 | A kind of character recognition method based on deep learning training framework |
| CN107844740A (en) * | 2017-09-05 | 2018-03-27 | 中国地质调查局西安地质调查中心 | A kind of offline handwriting, printing Chinese character recognition methods and system |
| CN107886065A (en) * | 2017-11-06 | 2018-04-06 | 哈尔滨工程大学 | A kind of Serial No. recognition methods of mixing script |
| CN108805222A (en) * | 2018-05-08 | 2018-11-13 | 南京邮电大学 | A kind of deep learning digital handwriting body recognition methods based on ARM platforms |
| CN109034281A (en) * | 2018-07-18 | 2018-12-18 | 中国科学院半导体研究所 | Accelerated recognition method for handwritten Chinese characters based on convolutional neural networks |
| CN109460769A (en) * | 2018-11-16 | 2019-03-12 | 湖南大学 | A kind of mobile end system and method based on table character machining and identification |
| CN109508742A (en) * | 2018-11-12 | 2019-03-22 | 南京邮电大学 | Handwritten Digit Recognition method based on ARM platform and independent loops neural network |
| CN109583423A (en) * | 2018-12-18 | 2019-04-05 | 苏州大学 | A kind of method, apparatus and associated component of Handwritten Digit Recognition |
| CN109800746A (en) * | 2018-12-05 | 2019-05-24 | 天津大学 | A kind of hand-written English document recognition methods based on CNN |
| WO2019232873A1 (en) * | 2018-06-04 | 2019-12-12 | 平安科技(深圳)有限公司 | Character model training method, character recognition method, apparatuses, device and medium |
| WO2019232855A1 (en) * | 2018-06-04 | 2019-12-12 | 平安科技(深圳)有限公司 | Handwriting model training method, handwritten character recognition method and device, apparatus, and medium |
| WO2019232854A1 (en) * | 2018-06-04 | 2019-12-12 | 平安科技(深圳)有限公司 | Handwritten model training method and apparatus, handwritten character recognition method and apparatus, and device and medium |
| WO2019232847A1 (en) * | 2018-06-04 | 2019-12-12 | 平安科技(深圳)有限公司 | Handwriting model training method, handwritten character recognition method and apparatus, and device and medium |
| WO2020023748A1 (en) * | 2018-07-26 | 2020-01-30 | Raytheon Company | Class level artificial neural network |
| CN110956167A (en) * | 2019-12-09 | 2020-04-03 | 南京红松信息技术有限公司 | Classification discrimination and strengthened separation method based on positioning characters |
| CN111553423A (en) * | 2020-04-29 | 2020-08-18 | 河北地质大学 | Handwriting recognition method based on deep convolutional neural network image processing technology |
| CN111652332A (en) * | 2020-06-09 | 2020-09-11 | 山东大学 | Deep learning handwritten Chinese character recognition method and system based on binary classification |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| RU2709661C1 (en) * | 2018-09-19 | 2019-12-19 | Общество с ограниченной ответственностью "Аби Продакшн" | Training neural networks for image processing using synthetic photorealistic containing image signs |
Non-Patent Citations (2)
| Title |
|---|
| A new design based-SVM of the CNN classifier architecture with dropout for offline Arabic handwritten recognition; Elleuch et al.; Procedia Computer Science, vol. 80, pp. 1712-1723 * |
| Research and implementation of neural-network-based driver's license recognition technology under complex backgrounds; 时金钰; China Master's Theses Full-text Database, Information Science and Technology, no. 7; I138-566 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||