JPH04184668A - Correcting method for tutor data of neural network - Google Patents
Correcting method for tutor data of neural network
- Publication number
- JPH04184668A
- Authority
- JP
- Japan
- Prior art keywords
- data
- tutor
- tutor data
- neural network
- new
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 13
- 238000000034 method Methods 0.000 title claims description 16
- 238000012549 training Methods 0.000 claims description 13
- 230000006870 function Effects 0.000 abstract description 3
- 230000008859 change Effects 0.000 description 4
- 238000007664 blowing Methods 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
Landscapes
- Air Conditioning Control Device (AREA)
Abstract
Description
DETAILED DESCRIPTION OF THE INVENTION
(A) Field of Industrial Application
The present invention relates to a method of correcting the training (teacher) data of a neural network system that has a hierarchical structure and changes its coupling coefficients through learning.
(B) Prior Art
Among neural network systems that perform an input-output mapping learned from training data, one of the most widely used at present is the back-propagation method (hereinafter abbreviated as the BP method).
The BP method is intended for multilayer networks without feedback, and it changes the connection strengths so as to minimize the squared error between the network output and the training data.
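As a concrete illustration of this squared-error minimization (not part of the patent itself), the following is a minimal sketch of one BP-style gradient-descent step for a tiny feed-forward network; the layer sizes, learning rate, and variable names are assumptions chosen only for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_step(x, t, W1, W2, lr=0.1):
    """One back-propagation step that reduces the squared error
    between the network output and the training (teacher) value t."""
    # forward pass: input -> hidden -> output
    h = sigmoid(W1 @ x)          # hidden activations
    y = sigmoid(W2 @ h)          # network output
    err = y - t                  # output error

    # backward pass: gradients of 0.5 * ||y - t||^2
    delta_out = err * y * (1.0 - y)                  # output-layer delta
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)   # hidden-layer delta

    # update the coupling coefficients (connection strengths)
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return 0.5 * float(err @ err)                    # squared error for monitoring

# toy usage: 2 inputs (e.g. room temperature, temperature change) -> 1 output
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # hidden layer: 3 units
W2 = rng.normal(scale=0.5, size=(1, 3))   # output layer: 1 unit
x, t = np.array([0.7, 0.2]), np.array([0.4])
for _ in range(1000):
    loss = bp_step(x, t, W1, W2)
```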
Thus, if the BP method is given training data and trained, it has the characteristic of being able to learn any input-output relationship.
Conversely, however, its behavior in regions for which no training data have been given is not guaranteed at all. For this reason, replacing part of the training data with different training data may create regions for which no training data exist, and the conventional approach has therefore been to add the new data to the existing training data.
(C) Problems to be Solved by the Invention
With the conventional method, however, the number of training data grows steadily as new training data are added, and the learning time grows with it. Moreover, the growing number of training signals requires more memory to be reserved for the training data, so that in practice restrictions had to be placed on how much training data could be added.
(D) Means for Solving the Problems
The present invention has been made in view of these points. When training data are added to the preset training data, data whose mutual distance under a predetermined distance function is small are combined, and a new set of training data is reconstructed from them, thereby reducing the number of training data.
(E) Operation
The training data are normally chosen so as to represent the whole input-output space. Suppose m training data were given initially and n new training signals are then added, for a total of m + n training signals. If the closest of these are combined with one another to obtain m new training data, these m training data represent the old and new m + n training data together.
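For example, if m = 20 standard training pairs were given initially and n = 5 new pairs are later collected, the 25 combined pairs are merged back down to 20 representatives, so memory use and learning time stay near their original level while the newly observed conditions are still reflected.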
(F) Embodiment
First, consider installing a neural network system in a device such as an air conditioner so that the device is controlled in accordance with the environment in which it is used. A standard environment is learned in advance from standard training data, and if the user is not satisfied with this standard behavior, he or she operates a switch manually. The manual operation then takes priority, and the input-output relationship at that time is sampled appropriately and stored.
After the power is turned off, the input-output relationships recorded during manual operation are treated as new training data, clustering is performed on them together with the old training data so as to bring the number of training signals back to the original count, and the network is then trained with these training signals.
Fig. 1 shows a training-data correction apparatus for carrying out the present invention; it is normally realized on an information processing device such as a computer. In an air conditioner, for example, n items of information needed to operate the air conditioner, such as the room temperature, the change in room temperature, and the corresponding blown-air temperature, are stored in RAM 1 as the initial training data, and the neural network 2 has been trained in the standard way on the basis of these training data.
During operation of the air conditioner, when the input values (room temperature and change in room temperature) are given, the air-conditioner output value (blown-air temperature) that the neural network 2 judges to be optimal is produced. If the user is dissatisfied with this output, he or she performs a manual operation using the operation keys or the like. The output value set by the manual operation is then preferentially selected by the selection means 3 and the air conditioner is operated with it. This manually set value, taken as the output value together with input values such as the room temperature and its change, becomes a new training datum and is written sequentially into the register 4.
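As an illustration only, here is a minimal sketch of how the selection means 3 and the register 4 could behave; the function and variable names are hypothetical and not taken from the patent.

```python
def select_output(nn_output, manual_output, register, inputs):
    """Selection means: a manual setting takes priority over the network
    output, and every manual override is logged as a new training pair."""
    if manual_output is not None:
        # store (inputs, manual output) as a new teacher datum in the register
        register.append((inputs, manual_output))
        return manual_output
    return nn_output

# usage: inputs = (room temperature, change in room temperature)
register = []
blow_temp = select_output(nn_output=26.0, manual_output=24.0,
                          register=register, inputs=(21.5, -0.3))
```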
When this air-conditioner operation ends, the n training data in RAM 1 and the m training data in register 4 are divided into n groups by the clustering means 5, and a representative point (representative training datum) is determined for each cluster.
When, as in an air conditioner, the room temperature, the change in room temperature, and the blown-air temperature are the elements of a training datum, the quantities are first made dimensionless for clustering. Since the numerical values of the room temperature and its change also depend on the units chosen, each is multiplied by an appropriate coefficient. Thus K1·T1 and K2·T2 are derived from the room temperature T1 (°C) and the room-temperature change T2 (°C), and K3·T3 is likewise derived from the blown-air temperature T3. For simplicity, writing X = K1·T1, Y = K2·T2, and Z = K3·T3, the Euclidean distance between the i-th datum (Xi, Yi, Zi) and the j-th datum (Xj, Yj, Zj) is

d_ij = [(Xi − Xj)² + (Yi − Yj)² + (Zi − Zj)²]^(1/2).

The training data are then merged in order of increasing distance, and the average of the merged data is taken as the new datum. This is repeated until the m + n data are reduced to n data.
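The merging step just described can be sketched as follows. This is an illustrative implementation, not the patent's own code: the function name, the scaling coefficients, and the sample values are assumptions, but the logic follows the text, namely scale each element by its coefficient, repeatedly merge the pair with the smallest Euclidean distance into its mean, and stop when only n data remain.

```python
import numpy as np

def reduce_teacher_data(data, coeffs, target_n):
    """Merge the closest training data (Euclidean distance in the
    dimensionless, coefficient-scaled space) until only target_n remain.
    Each merged point stays equal to the mean of the originals it absorbed."""
    pts = np.asarray(data, dtype=float) * np.asarray(coeffs)  # X=K1*T1, Y=K2*T2, Z=K3*T3
    counts = np.ones(len(pts))                                # originals behind each point
    while len(pts) > target_n:
        # pairwise Euclidean distances, ignoring the diagonal
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        np.fill_diagonal(dist, np.inf)
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        # weighted mean keeps the merged point equal to the mean of all
        # original data it represents
        merged = (counts[i] * pts[i] + counts[j] * pts[j]) / (counts[i] + counts[j])
        pts[i], counts[i] = merged, counts[i] + counts[j]
        pts = np.delete(pts, j, axis=0)
        counts = np.delete(counts, j)
    return pts / np.asarray(coeffs)   # back to physical units

# usage: old (standard) data plus new (manual) data, reduced to the old count
old = [(21.0, 0.1, 26.0), (24.0, -0.2, 23.0), (27.0, 0.3, 20.0)]
new = [(21.5, 0.1, 25.0), (26.5, 0.2, 20.5)]
reduced = reduce_teacher_data(old + new, coeffs=(1.0, 10.0, 1.0), target_n=len(old))
```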
The neural network 2 is then retrained with the training data reconstructed in this way.
In this embodiment a hierarchical method using the Euclidean distance has been described, with the group average taken as the representative point (representative training datum); however, the clustering method is not restricted in any way. The standardized Euclidean distance, the Mahalanobis distance, the Minkowski distance, and so on may be used instead, and the representative point may likewise be determined by the nearest-neighbor (single-linkage) method, the furthest-neighbor (complete-linkage) method, the centroid method, Ward's method, and so on.
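As one possible concrete form of these alternatives (an assumption, since the patent names no library), the same reduction can be sketched with SciPy's hierarchical clustering, where the linkage method ('single', 'complete', 'centroid', 'ward', ...) plays the role of the merging rule and the cluster means serve as the representative training data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_teacher_data(points, target_n, method="ward"):
    """Group the combined teacher data into target_n clusters with the chosen
    linkage and return the cluster means as representative teacher data."""
    points = np.asarray(points, dtype=float)
    Z = linkage(points, method=method)                      # hierarchical clustering
    labels = fcluster(Z, t=target_n, criterion="maxclust")  # cut into target_n groups
    return np.array([points[labels == k].mean(axis=0)
                     for k in np.unique(labels)])

# usage with the same scaled (X, Y, Z) points as above
pts = np.array([[21.0, 1.0, 26.0], [24.0, -2.0, 23.0],
                [27.0, 3.0, 20.0], [21.5, 1.0, 25.0], [26.5, 2.0, 20.5]])
reps = cluster_teacher_data(pts, target_n=3, method="ward")
```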
(G) Effects of the Invention
As described above, with the present invention, even when training data suited to the usage environment are added to the standard training data, the total number of training data can be reduced while the characteristics of all the training data are preserved, and the retraining time of the neural network is therefore shortened.
Fig. 1 is a block diagram of an apparatus for carrying out the method of correcting the training data of a neural network according to the present invention.
1: RAM; 2: neural network; 3: selection means; 4: register; 5: clustering means.
Claims (2)
(1) A method of correcting training data in a neural network system, characterized in that, when training data are added to preset training data, data whose mutual distance based on a predetermined distance function is small are combined and a new set of training data is reconstructed, thereby reducing the number of training data.
(2) The method of correcting training data in a neural network system according to claim 1, characterized in that a Euclidean distance function is used as the distance function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2317116A JPH04184668A (en) | 1990-11-20 | 1990-11-20 | Correcting method for tutor data of neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2317116A JPH04184668A (en) | 1990-11-20 | 1990-11-20 | Correcting method for tutor data of neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
JPH04184668A true JPH04184668A (en) | 1992-07-01 |
Family
ID=18084619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2317116A Pending JPH04184668A (en) | 1990-11-20 | 1990-11-20 | Correcting method for tutor data of neural network |
Country Status (1)
Country | Link |
---|---|
JP (1) | JPH04184668A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10170053A (en) * | 1996-12-04 | 1998-06-26 | Matsushita Electric Ind Co Ltd | Load estimating device for air conditioner |
JP2018032394A (en) * | 2016-08-25 | 2018-03-01 | 株式会社日立製作所 | Device control based on hierarchical data |
JP2022043923A (en) * | 2020-09-04 | 2022-03-16 | ダイキン工業株式会社 | Generation method, program, information processing apparatus, information processing method, and learned model |
US11965667B2 (en) | 2020-09-04 | 2024-04-23 | Daikin Industries, Ltd. | Generation method, program, information processing apparatus, information processing method, and trained model |
US12130037B2 (en) | 2020-09-04 | 2024-10-29 | Daikin Industries, Ltd. | Generation method, program, information processing apparatus, information processing method, and trained model |
WO2023223594A1 (en) * | 2022-05-20 | 2023-11-23 | ダイキン工業株式会社 | Prediction device, refrigeration system, prediction method, prediction program |
JP2023170830A (en) * | 2022-05-20 | 2023-12-01 | ダイキン工業株式会社 | Prediction device, refrigeration system, prediction method and prediction program |
Similar Documents
Publication | Title |
---|---|
US5926803A | Circuit designing method and circuit designing device |
AU2001260916A1 | Control system for actuators in an aircraft |
JPH05127706A | Neural network type simulator |
JPH04184668A | Correcting method for tutor data of neural network |
JP2862337B2 | How to build a neural network |
WO1991016674A1 | Discrete type repetition control method and apparatus therefor |
JPH04195667A | Electronic machinery and apparatus |
JPH05290013A | Neural network arithmetic unit |
JPH04237388A | Neuro processor |
JP3518813B2 | Structured neural network construction device |
Polycarpou et al. | Stable adaptive neural control of nonlinear systems |
US4949237A | Digital integrating module for sampling control devices |
JPH03235101A | Model norm type adaptive control method |
JPH08110896A | Feedforward type neural network |
JPH0728768A | A method for obtaining an inverse solution by learning a neural network |
CN115271027A | Novel multidimensional neural network topological structure construction system |
JPH05135129A | Simulation device |
JP2767625B2 | Fuzzy inference apparatus and operation method thereof |
CN119398122A | Fine tuning method based on general LoRA and domain-specific LoRA multi-task mixed expert model |
JPS63177685A | Picture processing device |
JPS62241039A | Logic simulator |
JPS61161555A | Hardware simulator |
JPH0414192A | Group unit learning adjustment system for connection in neural network |
JPH0540720A | Network constituting information generating device |
Sangiorgi et al. | An Improved Systolic Array for String Correction |