JPH02201427A - Optical neurocomputer - Google Patents
Optical neurocomputer
- Publication number
- JPH02201427A (JP H02201427 A) / JP2112789A (JP 2112789 A)
- Authority
- JP
- Japan
- Prior art keywords
- incidence matrices
- levels
- values
- incidence
- state vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000003287 optical effect Effects 0.000 title claims abstract description 16
- 239000011159 matrix material Substances 0.000 claims abstract description 39
- 239000013598 vector Substances 0.000 claims abstract description 27
- 210000002569 neuron Anatomy 0.000 claims abstract description 17
- 230000008878 coupling Effects 0.000 claims description 27
- 238000010168 coupling process Methods 0.000 claims description 27
- 238000005859 coupling reaction Methods 0.000 claims description 27
- 238000000926 separation method Methods 0.000 claims description 3
- 239000003990 capacitor Substances 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 6
- 230000000694 effects Effects 0.000 description 4
- 230000006870 function Effects 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 239000012528 membrane Substances 0.000 description 2
- 230000009022 nonlinear effect Effects 0.000 description 2
- 238000013529 biological neural network Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000037303 wrinkles Effects 0.000 description 1
Abstract
Description
[Detailed Description of the Invention]
(Industrial Field of Application)
This invention relates to an optical neurocomputer that uses optical technology to realize a computer which imitates biological neural networks and has functions such as association and pattern recognition.
Fig. 4 is a conceptual block diagram showing a conventional optical neurocomputer, disclosed for example in Applied Optics, Vol. 24, No. 10 (1985), pp. 1469-1475. In the figure, (1) is a light-emitting element array, (2) is a spatial light modulator (SLM) that represents the coupling matrix, (3) is a light-receiving element array, and (4) is a threshold element composed of a comparator and the like.
Next, the operation is described. This device is based on the neural network model proposed by Hopfield (see, for example, "Associative Optical Neurocomputer," IEICE Technical Report OQE87-174 (1988)). According to this model, an associative memory can be constructed that selects, from M previously stored N-dimensional vectors Si(m) (i = 1, ..., N; m = 1, ..., M; S(m) ∈ {1, -1}^N), the one most similar to an input vector Si(m0).
The stored information is provided by the spatial light modulator (2). Denoting the coupling matrix by Tij, it is given by the Hopfield outer-product rule

Tij = Σm Si(m) Sj(m)   (1)

The stored vectors are assumed to have small mutual correlation, i.e., to be mutually orthogonal:

Σi Si(m) Si(m') ≈ 0 for m ≠ m'   (2)

Under this assumption, the product of the coupling matrix and S(m0) can be shown to satisfy

Σj Tij Sj(m0) ≈ N Si(m0) + O(√N)   (3)

where O(x) denotes a small quantity of order x.
Accordingly, by applying a strong nonlinear operation to this quantity and feeding the result back as the next input, the stored vector closest to the input vector is output. Here, the threshold element (4) is used as the nonlinear operation, and the neuron states are represented by the ON/OFF states of the light-emitting element array (1) using the unipolar vector

Vi = (Si + 1)/2   (4)

The product of this neuron state vector and the coupling matrix,

Ui = Σj Tij Vj   (5)

is obtained as the output of the light-receiving element array (3). This Ui is thresholded and fed back as the next input, that is,

Vi = θ(Ui - Uth)   (6)

The number of vectors that can be stored in this way is given by

M ~ N/(4 log N)   (7)
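As a point of reference, the recall loop of equations (1) to (6) can be simulated numerically. The Python/NumPy sketch below assumes the standard Hopfield outer-product rule and an illustrative threshold choice; the sizes N and M, the helper name `recall`, and the noise level are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 3                                  # neurons and stored patterns (illustrative sizes)

S = rng.choice([-1, 1], size=(M, N))           # stored bipolar vectors Si(m)
T = S.T @ S                                    # coupling matrix Tij = sum_m Si(m) Sj(m), eq. (1)

def recall(s_in, steps=10):
    """Iterate Vi = theta(Ui - Uth) with Ui = sum_j Tij Vj, eqs. (4)-(6)."""
    v = (s_in + 1) // 2                        # unipolar neuron states Vi = (Si + 1)/2, eq. (4)
    for _ in range(steps):
        u = T @ v                              # membrane potential Ui, eq. (5); optically the detector output
        v = (u > u.mean()).astype(int)         # threshold element; Uth = mean(Ui) is an illustrative choice
    return 2 * v - 1                           # back to bipolar form

noisy = S[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])   # pattern 0 with roughly 10% of its bits flipped
print(np.array_equal(recall(noisy), S[0]))     # expected True while M stays well below N/(4 log N), eq. (7)
```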
Since the conventional optical neurocomputer is configured as described above, the range that the coupling-matrix values can take (the range of Tij) is limited by the contrast ratio (ON/OFF ratio) of the SLM, so that the association accuracy deteriorates as the amount of stored information is increased. A further problem is that, for accurate association, the contrast ratio must be made uniform and stable over the SLM plane.
This invention was made to solve the problems described above, and its object is to obtain an optical neurocomputer that can expand the range over which coupling-matrix values are expressed and can express the coupling-matrix values uniformly and stably over the entire SLM.
The optical neurocomputer according to the first invention is characterized by a control section that divides the values of the coupling matrix into a plurality of levels, generates the coupling matrix of each level in time series, performs the product operation of each level with the neuron state vector, and thereby obtains the product of the coupling matrix and the neuron state vector as a whole.
The optical neurocomputer according to the second invention is characterized by a control section that divides the values of the coupling matrix into a plurality of levels, frequency-modulates and frequency-multiplexes the coupling matrix of each level, performs the product operation of the state vector and the coupling matrix in the multiplexed state, and then separates the frequencies after the product operation to obtain the product of the overall state vector and the coupling matrix.
The control section of the first invention divides the values of the coupling matrix into several levels and generates the coupling matrix of each level in time series. It then performs the product operation between the coupling matrix of each level and the neuron state vector, so that the product of the coupling matrix and the state vector is obtained as a whole.
The control section of the second invention divides the values of the coupling matrix into several levels, modulates the coupling matrix of each level with a different frequency to multiplex them, and performs the product operation of the state vector and the coupling matrix in the multiplexed state. After the product operation, frequency separation is carried out, yielding the product of the state vector and the coupling matrix as a whole.
(Embodiment)
An embodiment of this invention is described below with reference to the drawings. Fig. 1 is a block diagram showing an embodiment of the first invention. In the figure, (5) is an accumulator composed of an integrating circuit using a capacitor, and (6) is a control section that controls the light-emitting element array (1), the SLM (2) that expresses the coupling matrix, and the light-receiving element array (3). Fig. 2 is a block diagram showing the second invention. In the figure, (7) is a band-pass filter, (8) is a peak-hold circuit, and (9) is a control circuit for the SLM (2) that expresses the coupling matrix.
Next, the operation is explained based on the above configuration.
Assume now that the coupling matrix Tij (i = 1, 2, ..., N; j = 1, 2, ..., N) consists of integers satisfying |Tij| ≤ S for all i, j. In the following, positive and negative values are assigned to separate channels, so that only non-negative values need be considered. When Tij = P (< S), where P is a non-negative integer, Tij is divided as

Tij = Σk bij(k)   (k = 1, ..., S)

The coupling matrices bij(k) obtained by dividing Tij with respect to k in this way are then displayed on the SLM and multiplexed.
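The patent does not spell out how each Tij is split beyond requiring that the levels sum back to Tij. One natural reading, assumed in the sketch below, is a unary split into S binary planes bij(k), each of which can be displayed as a simple ON/OFF pattern on the SLM; the function name and matrix sizes are illustrative.

```python
import numpy as np

def split_levels(T_pos, S):
    """Split a non-negative integer matrix (values <= S) into S binary planes
    b(k) with T = sum_k b(k).  A unary ("thermometer") split is assumed here:
    bij(k) = 1 if Tij >= k, else 0."""
    k = np.arange(1, S + 1)[:, None, None]          # level index k = 1 .. S
    return (T_pos[None, :, :] >= k).astype(int)     # shape (S, N, N), each plane binary

rng = np.random.default_rng(1)
S_max = 7
T_pos = rng.integers(0, S_max + 1, size=(5, 5))     # one non-negative channel of Tij
b = split_levels(T_pos, S_max)
assert np.array_equal(b.sum(axis=0), T_pos)         # the levels reconstruct Tij exactly
```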
The time-series case and the frequency-multiplexed case are described in turn below.
In the time-series case, bij(1) is displayed on the SLM at time t = 1, and the product with the neuron state vector Vj,

Ui(1) = Σj bij(1) Vj

is obtained on the light-receiving element array. This is repeated in time series for t = 1 through S, and the accumulator yields the membrane potential

Ui = Σk Ui(k)

Thresholding is then applied, and the neuron state vector is updated.
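A minimal sketch of this time-series accumulation, reusing the unary level split assumed above (the function name, sizes, and random data are illustrative, not from the patent):

```python
import numpy as np

def timeseries_potential(b, v):
    """First invention: present one level b(k) per SLM frame and let the
    accumulator ((5) in Fig. 1) integrate the partial products into the
    membrane potential Ui = sum_k sum_j bij(k) Vj."""
    u = np.zeros(b.shape[1])
    for k in range(b.shape[0]):                 # one frame per level, t = 1 .. S
        u += b[k] @ v                           # Ui(k) read from the light-receiving array (3)
    return u

rng = np.random.default_rng(2)
N, S_max = 5, 7
T_pos = rng.integers(0, S_max + 1, size=(N, N))                            # non-negative integer couplings
b = (np.arange(1, S_max + 1)[:, None, None] <= T_pos[None]).astype(int)    # unary levels, as above
v = rng.integers(0, 2, size=N)                                             # unipolar neuron states Vi
assert np.allclose(timeseries_potential(b, v), T_pos @ v)                  # equals the one-shot product Tij Vj
```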
When the operation is carried out in the frequency domain instead, a frequency ωk is assigned to each k, and bij(k) is modulated with ωk and multiplexed on the SLM, that is,

bij = Σk bij(k) cos(ωk t)

On the light-receiving side, the output

Ui = Σj Σk bij(k) Vj cos(ωk t)

is obtained. This output is separated into its frequency components by filters, and the components are then summed, that is,

Ui = Σk Σj bij(k) Vj

giving the membrane potential Ui. This is thresholded, and the neuron state vector is updated.
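The frequency-multiplexed variant can be sketched in the same way. In the code below, explicit cosine carriers and lock-in style demodulation stand in for the band-pass filter (7) and peak-hold circuit (8) of Fig. 2; the sampling rate, carrier frequencies, and integration time are arbitrary choices, not values taken from the patent.

```python
import numpy as np

def frequency_potential(b, v, omegas, fs=10_000.0, duration=1.0):
    """Second invention: modulate each level b(k) with cos(wk t), drive the SLM
    with the multiplexed sum, and recover Ui(k) from the detector signal by
    demodulating with the matching carrier, then sum over k."""
    t = np.arange(0.0, duration, 1.0 / fs)
    carriers = np.cos(np.outer(omegas, t))                # cos(wk t), one row per level k
    partial = np.einsum("kij,j->ki", b, v)                # Ui(k) = sum_j bij(k) Vj for every level
    signal = partial.T @ carriers                         # detector trace: sum_k Ui(k) cos(wk t), per neuron
    recovered = 2.0 * (signal @ carriers.T) / len(t)      # lock-in demodulation recovers each Ui(k)
    return recovered.sum(axis=1)                          # Ui = sum_k Ui(k)

rng = np.random.default_rng(3)
N, S_max = 5, 7
T_pos = rng.integers(0, S_max + 1, size=(N, N))
b = (np.arange(1, S_max + 1)[:, None, None] <= T_pos[None]).astype(int)    # unary levels, as above
v = rng.integers(0, 2, size=N)
omegas = 2 * np.pi * np.arange(100.0, 100.0 + 10.0 * S_max, 10.0)          # well-separated carriers (100-160 Hz)
assert np.allclose(frequency_potential(b, v, omegas), T_pos @ v)           # matches the direct product Tij Vj
```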
Although the above embodiment has been described for the case of associative operation, the invention may equally be applied to optimization problems, or to configurations in which a learning function is introduced, with the same effects as in the above embodiment.
Furthermore, although the above embodiment has been described for the Hopfield neural network model, other multilayer neural network models, such as the back-propagation model, may also be used, with the same effects as in the above embodiment.
Fig. 3 is a block diagram of such a multilayer neural network. In the figure, (10) is the input layer, (11) is the intermediate layer, and (12) is the output layer; (13) is an input unit, (14) is an intermediate-layer unit, and (15) is an output unit. (16) is the coupling (SLM) between the input layer and the intermediate layer, and (17) is the coupling (SLM) between the intermediate layer and the output layer. As in the embodiment above, each coupling is divided into bij(k), modulated with ωk, multiplexed, and expressed on the SLM. This makes it possible to express the couplings between the layers over a wide range.
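For the multilayer network of Fig. 3, the same decomposition would simply be applied to each coupling SLM in turn, here (16) and (17). The sketch below runs one forward pass through two such couplings; the layer sizes, the threshold choices, and the absence of any learning rule are simplifications for illustration, not features of the patent.

```python
import numpy as np

def split_levels(W, S):
    """Unary split of a non-negative integer weight matrix into S binary SLM planes."""
    return (np.arange(1, S + 1)[:, None, None] <= W[None]).astype(int)

def layer_forward(W, v, S, theta):
    """One layer of Fig. 3: express the coupling as summed levels b(k),
    form U = sum_k b(k) v (optically, the multiplexed SLM output), then threshold."""
    b = split_levels(W, S)
    u = sum(b[k] @ v for k in range(S))
    return (u > theta).astype(int)

rng = np.random.default_rng(4)
S_max, n_in, n_hid, n_out = 7, 8, 6, 3
W_ih = rng.integers(0, S_max + 1, size=(n_hid, n_in))     # coupling (16): input layer -> intermediate layer
W_ho = rng.integers(0, S_max + 1, size=(n_out, n_hid))    # coupling (17): intermediate layer -> output layer
x = rng.integers(0, 2, size=n_in)                         # unipolar input units (13)
hidden = layer_forward(W_ih, x, S_max, theta=W_ih.mean() * x.sum())        # illustrative thresholds
output = layer_forward(W_ho, hidden, S_max, theta=W_ho.mean() * hidden.sum())
print(hidden, output)
```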
(Effects of the Invention)
As described above, according to this invention, the values of the coupling matrix are divided into several levels and the coupling matrix of each level is multiplexed, so that the coupling-matrix values can be expressed without depending on the contrast ratio of the SLM, and the association accuracy can be improved.
Fig. 1 is a conceptual diagram showing an optical neurocomputer according to an embodiment of the first invention, Fig. 2 is a conceptual diagram showing an optical neurocomputer according to an embodiment of the second invention, Fig. 3 is a conceptual diagram showing another embodiment of this invention, and Fig. 4 is a conceptual diagram showing a conventional optical neurocomputer.
In the figures, (1) is a light-emitting element array, (2) is a spatial light modulator (SLM), (3) is a light-receiving element array, (6) and (9) are control circuits, and (7) is a band-pass filter. The same reference numerals in the figures denote the same or corresponding parts.
Claims (2)
(1) In an optical neurocomputer that performs a product-sum operation between a neuron state vector and a coupling matrix, an optical neurocomputer characterized by comprising a control section that divides the values of the coupling matrix into a plurality of levels, generates the coupling matrix of each level in time series, performs the product operation with the neuron state vector, and then performs the product operation of the coupling matrix and the neuron state vector as a whole.
(2) In an optical neurocomputer that performs a product-sum operation between a neuron state vector and a coupling matrix, an optical neurocomputer characterized by comprising a control section that divides the values of the coupling matrix into a plurality of levels, frequency-modulates and frequency-multiplexes the coupling matrix of each level, performs the product operation of the state vector and the coupling matrix in the multiplexed state, and then performs frequency separation after the product operation to obtain the product of the overall state vector and the coupling matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP1021127A JP2540200B2 (en) | 1989-01-31 | 1989-01-31 | Optical Neuro Computer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP1021127A JP2540200B2 (en) | 1989-01-31 | 1989-01-31 | Optical Neuro Computer |
Publications (2)
Publication Number | Publication Date |
---|---|
JPH02201427A true JPH02201427A (en) | 1990-08-09 |
JP2540200B2 JP2540200B2 (en) | 1996-10-02 |
Family
ID=12046227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP1021127A Expired - Lifetime JP2540200B2 (en) | 1989-01-31 | 1989-01-31 | Optical Neuro Computer |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP2540200B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AT402350B (en) * | 1992-10-15 | 1997-04-25 | Grabherr Manfred | A device which operates on the neural network principle |
CN112099565A (en) * | 2020-09-16 | 2020-12-18 | 清华大学 | Universal linear light calculation module and its control method |
Also Published As
Publication number | Publication date |
---|---|
JP2540200B2 (en) | 1996-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Ma et al. | Facial expression recognition using constructive feedforward neural networks | |
CA2642041C (en) | Spatio-temporal pattern recognition using a spiking neural network and processing thereof on a portable and/or distributed computer | |
Bengio et al. | Recurrent neural networks for missing or asynchronous data | |
US7028271B2 (en) | Hierarchical processing apparatus | |
Tiezzi et al. | A lagrangian approach to information propagation in graph neural networks | |
Elthakeb et al. | Divide and conquer: Leveraging intermediate feature representations for quantized training of neural networks | |
Boubez et al. | Wavelet neural networks and receptive field partitioning | |
Cowen et al. | Lsalsa: accelerated source separation via learned sparse coding | |
Bengio et al. | Learning the dynamic nature of speech with back-propagation for sequences | |
Yao et al. | EPNet for chaotic time-series prediction | |
Landstad | Quantizations arising from abelian subgroups | |
Littwin et al. | On random kernels of residual architectures | |
JPH02201427A (en) | Optical neurocomputer | |
Bungert et al. | Neural architecture search via Bregman iterations | |
Kozachkov et al. | Recursive construction of stable assemblies of recurrent neural networks | |
Duane | “FORCE” learning in recurrent neural networks as data assimilation | |
Belbahri et al. | Foothill: A quasiconvex regularization for edge computing of deep neural networks | |
WO1999028848A1 (en) | E-cell and basic circuit modules of e-circuits: e-cell pair totem, basic memory circuit and association extension | |
de Menezes et al. | Classification of paintings authorship using convolutional neural network | |
Gürel et al. | Functional identification of biological neural networks using reservoir adaptation for point processes | |
Cancelliere et al. | An analysis of numerical issues in neural training by pseudoinversion | |
Schultz et al. | Analyzing emergence in biological neural networks using graph signal processing | |
Cho et al. | A neural network model based on the cortical modularity | |
Hwang et al. | Mixture of discriminative learning experts of constant sensitivity for automated cytology screening | |
Jutten et al. | Simulation machine and integrated implementation of neural networks: A review of methods, problems and realizations |