JPH01321591A - Character recognizing device - Google Patents

Character recognizing device

Info

Publication number
JPH01321591A
JPH01321591A (publication) / JP63155609A (application) / JP15560988A
Authority
JP
Japan
Prior art keywords
category
matrix
vector
feature
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP63155609A
Other languages
Japanese (ja)
Inventor
Hiroyuki Kami
上 博行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP63155609A priority Critical patent/JPH01321591A/en
Publication of JPH01321591A publication Critical patent/JPH01321591A/en
Pending legal-status Critical Current

Links

Landscapes

  • Character Discrimination (AREA)

Abstract

PURPOSE: To allow a new category to be learned simply, by producing the mapping matrix used for feature extraction in the discriminant analysis method and the recognition-dictionary feature vector of each category with a single K-L expansion. CONSTITUTION: An average initial vector of a learning character category is produced from the initial vectors in which the feature values detected from the images of the learning category are arranged. An inter-category covariance matrix is obtained from this average initial vector and the average initial vectors of the categories registered before learning. A normalized inter-category covariance matrix is then obtained as the product of a normalizing mapping matrix, produced in advance from the intra-category covariance matrix, and the inter-category covariance matrix. The eigenvectors obtained by applying the K-L expansion to the normalized inter-category covariance matrix are arranged to produce an inter-category emphasis matrix that emphasizes the differences between categories. The matrix obtained as the product of the normalizing mapping matrix and the inter-category emphasis matrix is used as the mapping matrix for discriminant analysis, while the vector obtained as the product of the average initial vector of each category and the mapping matrix is used as that category's dictionary feature vector. Learning is thus possible with a simple procedure.

Description

DETAILED DESCRIPTION OF THE INVENTION (Field of Industrial Application) The present invention relates to a character recognition device equipped with a mechanism for generating the mapping matrix used for feature extraction in discriminant analysis and the feature vectors, one per character category, that serve as a recognition dictionary.

(Prior Art) A method of performing character recognition using discriminant analysis (referred to below as the discriminant analysis method) is described in Nobuyuki Otsu, "Mathematical Studies on Feature Extraction in Pattern Recognition", Electrotechnical Laboratory Research Report No. 818, July 1981, pp. 188-191 (Reference 1), and in Kamiike, "A Hierarchical Discriminant Method and its Character Recognition System PC-OCR", IEICE Pattern Recognition and Understanding Study Group, PRU86-76 (Reference 2). In these documents, the mapping matrix used for feature extraction and the feature vectors serving as the recognition dictionary are computed in advance and then used for character recognition.

According to Reference 1, the mapping matrix and feature vectors mentioned above can be obtained by the flow illustrated in Fig. 2. First, an inter-category covariance matrix and an intra-category covariance matrix are computed from the vectors (initial vectors) in which the feature values detected from the input images are arranged (201). Next, a normalizing mapping matrix that normalizes the intra-category covariance matrix is derived from the eigenvalues and eigenvectors obtained by applying the K-L expansion to the intra-category covariance matrix (202). A normalized inter-category covariance matrix is then obtained as the product of the normalizing mapping matrix and the inter-category covariance matrix (203), and the eigenvectors obtained by applying the K-L expansion to the normalized inter-category covariance matrix are arranged to form an inter-category emphasis matrix that emphasizes the differences between categories (204). The matrix obtained as the product of the normalizing mapping matrix and the inter-category emphasis matrix is the mapping matrix of discriminant analysis (205). The feature vector that serves as the recognition dictionary in the discriminant analysis method is created, for each category, by averaging element by element the vectors obtained as the product of the initial vectors and the mapping matrix (206, 207). A statistically based method similar to the discriminant analysis method is the principal component analysis method, described for example in "Pattern Recognition Learning Device", Japanese Patent Application Laid-Open No. 62-73391 (Reference 3). In Reference 3, standard patterns can be learned with only a K-L expansion of the covariance matrix of each category.
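
The Fig. 2 procedure can be summarized as a short numerical sketch. The following Python/NumPy code is an illustrative reconstruction under stated assumptions, not the patent's own implementation: the function and variable names, the symmetric whitened form w @ sb @ w.T of the normalized inter-category covariance, and the use of numpy.linalg.eigh for the K-L expansion are all assumptions made for the example.

    import numpy as np

    def conventional_dictionary(samples):
        # samples: dict mapping each category name to an (n_samples, n_dims)
        # array of initial vectors detected from its training images.
        labels = sorted(samples)
        means = {c: samples[c].mean(axis=0) for c in labels}
        grand = np.mean([means[c] for c in labels], axis=0)
        dim = grand.size

        sw = np.zeros((dim, dim))          # intra-category covariance (step 201)
        sb = np.zeros((dim, dim))          # inter-category covariance (step 201)
        for c in labels:
            centred = samples[c] - means[c]
            sw += centred.T @ centred / len(samples[c])
            d = (means[c] - grand)[:, None]
            sb += d @ d.T
        sw /= len(labels)
        sb /= len(labels)

        # Step 202: K-L expansion (eigendecomposition) of the intra-category
        # covariance gives a normalizing (whitening) matrix w with w @ sw @ w.T ~ I.
        vals, vecs = np.linalg.eigh(sw)
        w = np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T

        # Steps 203-204: normalized inter-category covariance and its K-L expansion.
        sb_norm = w @ sb @ w.T
        _, psi = np.linalg.eigh(sb_norm)
        emphasis = psi[:, ::-1].T          # eigenvectors as rows, largest eigenvalue first

        mapping = emphasis @ w             # step 205: mapping matrix of discriminant analysis
        # Steps 206-207: dictionary = per-category mean of the mapped initial vectors.
        dictionary = {c: (samples[c] @ mapping.T).mean(axis=0) for c in labels}
        return mapping, dictionary, w, means

Note that learning a new category with this flow requires recomputing both covariance matrices and repeating both eigendecompositions, which is exactly the cost the invention aims to remove.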

(Problems to be Solved by the Invention) To add, that is, to learn, the standard pattern of a new character category with the discriminant analysis method, as is readily seen from the explanation of Fig. 2, the inter-category covariance matrix and the intra-category covariance matrix must be recomputed, and two K-L expansions and five matrix products must be carried out. Compared with the principal component analysis method described above, the discriminant analysis method therefore has the problem that learning a new category is difficult.

(Means for Solving the Problems) To solve the above problems, the present invention provides a character recognition device which performs recognition by matching a feature vector, obtained by transforming an initial vector (a vector in which the feature values detected from an input image are arranged) with the mapping matrix of discriminant analysis, against the feature vectors of the registered categories, and which, when images of a specific category are input, creates and registers the feature vectors of all categories including that specific category, the device comprising: averaging means for averaging the initial vectors of each character category; average initial vector storage means for storing the average initial vectors output by the averaging means; normalizing mapping storage means for storing a mapping matrix that normalizes the intra-category covariance matrix of the initial vectors; and learning processing means which obtains an inter-category covariance matrix from the average initial vectors in the average initial vector storage means, outputs as a feature transformation mapping the product of the matrix whose elements are the eigenvectors obtained by applying the K-L expansion to that inter-category covariance matrix and the mapping matrix in the normalizing mapping storage means, and at the same time outputs, as the feature vector of each character category, the product of the average initial vector of that character category from the average initial vector storage means and the feature transformation mapping.

(Operation) Fig. 3 is a flow chart of the process by which the present invention obtains the mapping matrix used for feature extraction and the feature vectors that serve as the recognition dictionary.

From the vectors (initial vectors) in which the feature values detected from the images of a learning category i are arranged, the average initial vector of the learning character category i is created (301).

An inter-category covariance matrix is computed from the average initial vectors of the categories registered before learning and the average initial vector of the learning category (302). Next, a normalized inter-category covariance matrix is obtained as the product of the inter-category covariance matrix and the normalizing mapping matrix prepared in advance from the intra-category covariance matrix of the registered categories (303), and the eigenvectors obtained by applying the K-L expansion to the normalized inter-category covariance matrix are arranged to form an inter-category emphasis matrix that emphasizes the differences between categories (304).

The matrix obtained as the product of the normalizing mapping matrix and the inter-category emphasis matrix is used as the mapping matrix of discriminant analysis (305).

The feature vector that serves as the recognition dictionary in the discriminant analysis method is the vector obtained as the product of the average initial vector of each category and the mapping matrix (306). Compared with Fig. 2 described above, only one K-L expansion is required, and since the dictionary feature vectors are derived from the average initial vectors, the processing is simpler. Learning therefore becomes possible with roughly the same amount of processing as the principal component analysis method.

(Embodiment) An embodiment of the present invention is described below with reference to the drawings.

Fig. 1 is a block diagram of a character recognition device with a learning function according to one embodiment of the present invention. In the figure, 1 is an image input means, 2 is a character segmentation means, 3 is a feature detection process storage means, 4 is a feature detection means, 5 is an averaging means, 6 is an average initial vector storage means, 7 is a normalizing mapping storage means, 8 is a learning processing means, 9 is a feature transformation mapping storage means, 10 is a dictionary storage means, 11 is a feature extraction means, 12 is a recognition processing means, and 13 is a recognition result display means.

A new category is learned with the character recognition device of Fig. 1 as follows.

First, an image containing characters of the category to be learned is acquired by the image input means 1, and the character segmentation means 2 determines, one by one, the positions of the character images of that category in the image output by the image input means 1 and outputs the images at those positions in sequence.

Since the feature detection process storage means 3 holds the processing used to obtain character features, the feature detection means 4 then computes feature values from the character images supplied by the character segmentation means 2 in accordance with the processing output by the feature detection process storage means 3.

Here, the arrangement of the feature values computed for each of a plurality of features is called an initial vector.

The averaging means 5 computes, for each feature, the mean of the feature values over the initial vectors output for the individual character images, and outputs the set of means corresponding to the set of features as an average initial vector. The average initial vector of the learning category output by the averaging means 5 is stored in the average initial vector storage means 6. The normalizing mapping storage means 7 holds the mapping matrix that normalizes the intra-category covariance matrix created from the initial vectors of the existing categories, so the learning processing means 8 outputs, following the procedure of Fig. 3 described above, the mapping matrix and the feature vectors that serve as the dictionary of each category. The mapping matrix output by the learning processing means 8 is stored in the feature transformation mapping storage means 9, and the feature vectors are stored in the dictionary storage means 10. Learning is completed by updating the mapping matrix and the feature vectors. The above explanation concerned the learning of a single category; a plurality of categories can be learned by operating the learning processing means 8 after the average initial vectors of those categories have been obtained.
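
This learning flow can be sketched as a short driver around the learn_categories function from the previous sketch. Everything here (the function names, the dictionaries standing in for the storage means) is a hypothetical illustration of the data flow between means 5, 6, 7, 8, 9 and 10, not the device's actual interface.

    import numpy as np

    def average_initial_vector(initial_vectors):
        # Averaging means (5): per-feature mean over the initial vectors
        # detected from the character images of one learning category.
        return np.stack(initial_vectors).mean(axis=0)

    def learn_new_categories(new_samples, stored_averages, w):
        # new_samples: {category: list of initial vectors from its character images}
        # stored_averages: average initial vectors of the already registered
        #                  categories (contents of storage means 6).
        for category, vectors in new_samples.items():
            stored_averages[category] = average_initial_vector(vectors)
        # Learning processing means (8): a single K-L expansion over all averages,
        # reusing learn_categories from the previous sketch.
        mapping, dictionary = learn_categories(stored_averages, w)
        return mapping, dictionary   # to be stored in means 9 and 10 respectively

As the text notes, several categories can be learned in one pass simply by adding all of their average initial vectors before the learning step is run.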

Character recognition with the character recognition device of Fig. 1 is performed in the following order.

An image containing the characters to be recognized is acquired by the image input means 1, and the character segmentation means 2 determines, one by one, the positions of the character images to be recognized in the image output by the image input means 1 and outputs the character images at those positions in sequence. The following processing is performed for each character image. As before, the feature detection means 4 computes feature values from the character image supplied by the character segmentation means 2 in accordance with the processing output by the feature detection process storage means 3. The feature extraction means 11 computes the product of the vector whose elements are the obtained feature values and the mapping matrix output by the feature transformation mapping storage means 9, and outputs the resulting feature vector. The recognition processing means 12 matches the feature vector from the feature extraction means 11 against the per-category feature vectors in the dictionary storage means 10, and outputs the codes and values of the category names in ascending order of the distance values (or, equivalently, descending order of the similarity values) obtained as the matching result. The recognition result display means 13 stores the codes and values of the category names output by the recognition processing means 12 and displays the image corresponding to each category-name code.
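
As an illustration, the feature extraction and matching steps might look like the sketch below. The function name recognize and the choice of Euclidean distance are assumptions; the patent only specifies that candidates are ordered by distance (or similarity).

    import numpy as np

    def recognize(initial_vector, mapping, dictionary):
        # Feature extraction means (11): map the initial vector into feature space.
        feature = mapping @ initial_vector
        # Recognition processing means (12): match against the per-category
        # dictionary vectors and sort by ascending distance.
        scored = sorted((float(np.linalg.norm(feature - ref)), name)
                        for name, ref in dictionary.items())
        return [(name, dist) for dist, name in scored]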

(Effects of the Invention) As explained above, according to the present invention the mapping matrix for feature extraction in the discriminant analysis method and the per-category feature vectors that serve as the recognition dictionary can be created with only a single K-L expansion, so a new category can be learned easily.

[Brief Description of the Drawings]

Fig. 1 is a block diagram showing an embodiment of the character recognition device with a learning function of the present invention, Fig. 2 is a diagram showing an example of the flow of processing for creating a recognition dictionary in the discriminant analysis method, and Fig. 3 is a diagram showing an example of the flow of processing for creating a recognition dictionary in the character recognition device of the present invention. In the figures, 1 is an image input means, 2 is a character segmentation means, 3 is a feature detection process storage means, 4 is a feature detection means, 5 is an averaging means, 6 is an average initial vector storage means, 7 is a normalizing mapping storage means, 8 is a learning processing means, 9 is a feature transformation mapping storage means, 10 is a dictionary storage means, 11 is a feature extraction means, 12 is a recognition processing means, and 13 is a recognition result display means. Agent: Shinsuke Honjo, Patent Attorney. Fig. 1, Fig. 2, Fig. 3

Claims (1)

[Claims] A character recognition device which performs recognition by matching a feature vector, obtained by transforming an initial vector (a vector in which the feature values detected from an input image are arranged) with the mapping matrix of discriminant analysis, against the feature vectors of the registered categories, and which, when images of a specific category are input, creates and registers the feature vectors of all categories including that specific category, the device comprising: averaging means for averaging the initial vectors of each character category; average initial vector storage means for storing the average initial vectors output by the averaging means; normalizing mapping storage means for storing a mapping matrix that normalizes the intra-category covariance matrix of the initial vectors; and learning processing means which obtains an inter-category covariance matrix from the average initial vectors in the average initial vector storage means, outputs as a feature transformation mapping the product of the matrix whose elements are the eigenvectors obtained by applying the K-L expansion to that inter-category covariance matrix and the mapping matrix in the normalizing mapping storage means, and at the same time outputs, as the feature vector of each character category, the product of the average initial vector of that character category from the average initial vector storage means and the feature transformation mapping.
JP63155609A 1988-06-23 1988-06-23 Character recognizing device Pending JPH01321591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP63155609A JPH01321591A (en) 1988-06-23 1988-06-23 Character recognizing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP63155609A JPH01321591A (en) 1988-06-23 1988-06-23 Character recognizing device

Publications (1)

Publication Number Publication Date
JPH01321591A true JPH01321591A (en) 1989-12-27

Family

ID=15609764

Family Applications (1)

Application Number Title Priority Date Filing Date
JP63155609A Pending JPH01321591A (en) 1988-06-23 1988-06-23 Character recognizing device

Country Status (1)

Country Link
JP (1) JPH01321591A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05114051A (en) * 1991-03-12 1993-05-07 Science & Tech Agency Method for recognizing fuzzy pattern
JPH0540852A (en) * 1991-08-05 1993-02-19 Science & Tech Agency Pattern recognizing device
EP0539749A2 (en) * 1991-10-31 1993-05-05 International Business Machines Corporation A statistical mixture approach to automatic handwriting recognition
EP0539749A3 (en) * 1991-10-31 1994-05-11 Ibm A statistical mixture approach to automatic handwriting recognition
US5343537A (en) * 1991-10-31 1994-08-30 International Business Machines Corporation Statistical mixture approach to automatic handwriting recognition
US6778701B1 (en) 1999-10-04 2004-08-17 Nec Corporation Feature extracting device for pattern recognition
US7634140B2 (en) 2002-02-27 2009-12-15 Nec Corporation Pattern feature selection method, classification method, judgment method, program, and device
CN110297241A (en) * 2019-07-09 2019-10-01 中国人民解放军国防科技大学 A Construction Method of Contextual Covariance Matrix for Image Processing

Similar Documents

Publication Publication Date Title
Paclík et al. Building road-sign classifiers using a trainable similarity measure
EP0539749B1 (en) Handwriting recognition system and method
EP0085545B1 (en) Pattern recognition apparatus and method for making same
US5588073A (en) Online handwritten character recognizing system and method thereof
US6912527B2 (en) Data classifying apparatus and material recognizing apparatus
JP3761937B2 (en) Pattern recognition method and apparatus, and computer control apparatus
JPH02238588A (en) Recognizing device
JP3155616B2 (en) Character recognition method and device
CN110322398B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JPH01321591A (en) Character recognizing device
Salau et al. Image-based number sign recognition for ethiopian sign language using support vector machine
CN110533636B (en) Image analysis device
JPS6151799B2 (en)
JP2701311B2 (en) Character recognition device with recognition dictionary creation function
JPH06251156A (en) Pattern recognizing device
Travieso et al. Handwritten digits parameterisation for HMM based recognition
CN115471847A (en) Character recognition and extraction method, system, device and storage medium
JPH01180083A (en) Device for recognizing character of plural font
Babu et al. Transformation of Sign Language to Text in Digital Era Using Deep Neural Network
JP2001118073A (en) Device and method for recognizing pattern
JP2765617B2 (en) Character recognition device
Karimi et al. Fuzzy-Based Algorithm for Efficient Vehicle License Plate Recognition in Iran's Transportation System
Temel et al. Turkish Sign Language Recognition Using CNN with New Alphabet Dataset.
JP2737733B2 (en) Pattern classification device
JPH04260987A (en) Character recognizing device