CN109165730B - A State Quantization Network Implementation Method in Cross-Array Neuromorphic Hardware - Google Patents
- Publication number
- CN109165730B (application CN201811029532.2A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/061—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Abstract
The invention belongs to the technical field of neural networks and relates to a method for implementing a state-quantized network in cross-array neuromorphic hardware. In the method, the parameters of an artificial neural network (weights, thresholds, leakage constants, set voltage values, refractory period durations, synaptic delay durations, and so on) are first quantized; the quantized parameters are then mapped onto the cross-array neuromorphic hardware, and pre-processed input data are fed into the hardware, thereby realizing the state-quantized network. State quantization effectively relaxes the hardware's requirements on storage-cell array size, number of storage levels, and reliability.
Description
Technical Field
The invention belongs to the technical field of neural networks and relates to a method for implementing a state-quantized network in cross-array neuromorphic hardware.
Background Art
Neuromorphic hardware (neuromorphic computing) refers to brain-inspired computers, devices, and models that stand in sharp contrast to the prevailing von Neumann computer architecture. This biomimetic approach creates highly connected synthetic neurons and synapses that can be used to model neuroscience theories and to solve machine-learning problems.
A neuromorphic circuit is one physical realization of a neural network model. By hardware means, it abstracts and simulates the biological nervous system at a high level and efficiently, aiming to achieve low power consumption, high adaptability, and other desirable characteristics on top of the information-processing capability of the nervous system.
The crossbar array, which uses memristors for data storage and parallel computing and serves as a key building block of neural-network nodes, is an important architecture for constructing large-scale integrated computing circuits. In a vertical crossbar array, a large number of memristors are placed together in parallel to form a memristor matrix. Under different control voltages, reading and changing the memristor values obtains and sets a weight matrix. Crossbar arrays are widely used in data storage and neural-network learning.
Besides memristors, the cell at each crosspoint of the array can also be built from other devices, such as capacitors, transistors, or variable resistors; these can likewise form an array, like a memristor array, for data storage or for use in cross-array neuromorphic hardware.
The prior art has at least the following problem:
In the cross-array neuromorphic hardware implemented so far, the synaptic weights and the various neuron parameters, such as thresholds, leakage constants, set voltages, refractory period durations, and synaptic delay durations, occupy a large amount of system storage. As circuit scale expands rapidly while storage resources remain relatively scarce, this inevitably becomes a major bottleneck for neuromorphic hardware.
Summary of the Invention
In view of the above problem, the present invention proposes a method for implementing a state-quantized network in cross-array neuromorphic hardware. By quantizing the states of the various parameters in the hardware, the method effectively relaxes the hardware's requirements on storage-cell array size, number of storage levels, and reliability, and can strongly promote the application of cross-array neuromorphic hardware.
The technical solution of the present invention is as follows:
S1: Select parameters and quantize them. Parameter quantization can be performed either after the neural network has been trained or during training.
A: Quantization after training is complete
Train an artificial neural network (e.g., MLP, CNN, RNN, or LSTM) on a specific task under specific conditions to obtain its parameters (weights, thresholds, leakage constants, set voltage values, refractory period durations, synaptic delay durations, etc.);
In a spiking neural network, quantize at least one of the parameters obtained from the artificial-neural-network training; that is, replace all states of a trained parameter with a small number of quantized states. Repeatedly adjust the quantized parameters in the spiking neural network until the quantized network achieves the intended function and performance; parameter quantization is then complete.
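The replacement of all trained parameter states by a few quantized states, as described above, can be sketched as nearest-level quantization. The level set and the weight values below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def quantize_to_levels(params, levels):
    """Map each trained parameter to the nearest of a few quantized states."""
    levels = np.asarray(levels, dtype=float)
    params = np.asarray(params, dtype=float)
    # For every parameter, pick the index of the closest quantization level.
    idx = np.abs(params[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Hypothetical trained weights reduced to five states.
w = np.array([0.83, -0.07, 0.41, -0.96, 0.12])
print(quantize_to_levels(w, [-1.0, -0.4, 0.0, 0.4, 1.0]))
```

The same helper applies unchanged to thresholds, leakage constants, or any other scalar parameter the method quantizes.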
B: Quantization during training
When training the artificial neural network, quantize the values of the parameters to be quantized (such as the weights), e.g., fix the weight quantization values to -1, -0.4, 0, 0.4, and 1, and then train the network. After training, map the parameters onto the corresponding spiking neural network, and adjust the trained quantization parameters, or re-select the quantization values and retrain, until the spiking neural network achieves the intended function and performance; parameter quantization is then complete.
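Training with weights fixed to the levels -1, -0.4, 0, 0.4, 1 can be sketched with a straight-through-style update: the forward pass uses the quantized weights while the gradient updates full-precision shadow weights. The toy regression task, learning rate, and step count below are assumptions for illustration, not part of the patent:

```python
import numpy as np

LEVELS = np.array([-1.0, -0.4, 0.0, 0.4, 1.0])

def quantize(w):
    """Snap each weight to the nearest fixed quantization level."""
    return LEVELS[np.abs(w[:, None] - LEVELS).argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -0.4, 0.4])   # target expressible in the chosen levels
y = X @ w_true

w = rng.normal(scale=0.1, size=3)     # full-precision "shadow" weights
for _ in range(300):
    wq = quantize(w)                  # forward pass uses the quantized weights
    grad = X.T @ (X @ wq - y) / len(X)
    w -= 0.1 * grad                   # straight-through: update the shadow weights

print(quantize(w))
```

After training, only the quantized weights would need to be mapped onto the hardware; the shadow weights are a training-time artifact.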
S2: Map the spiking-neural-network parameters quantized in S1 onto the cross-array neuromorphic hardware
The quantized parameters of the trained spiking network are mapped onto the cross-array neuromorphic hardware, each kind of parameter going to its corresponding control block: the quantized weights map to the crossbar arrays; the quantized thresholds to the neuron threshold control block; the quantized leakage constants to the neuron leakage-constant control block; the quantized set voltage values to the neuron set-voltage control block; the quantized refractory period durations to the neuron refractory-period control block; and the quantized synaptic delay durations to the synaptic-delay control block.
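The routing of each quantized parameter kind to its control block might be sketched as a simple dispatch table. All block names below are illustrative placeholders, not identifiers from any actual hardware:

```python
# Hypothetical register map: each quantized parameter kind is routed to the
# kind of control block named in the text (names here are invented).
CONTROL_BLOCKS = {
    "weight":          "crossbar_array",
    "threshold":       "neuron_threshold_ctrl",
    "leak_constant":   "neuron_leak_ctrl",
    "set_voltage":     "neuron_set_voltage_ctrl",
    "refractory_time": "neuron_refractory_ctrl",
    "synaptic_delay":  "synaptic_delay_ctrl",
}

def map_parameters(quantized_params):
    """Return a per-block configuration from a dict of quantized SNN parameters."""
    return {CONTROL_BLOCKS[kind]: value for kind, value in quantized_params.items()}

config = map_parameters({"weight": [[0.4, -1.0], [0.0, 1.0]], "threshold": 0.8})
print(config["crossbar_array"])
```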
S3: Pre-process the input data, e.g., convert it into spike inputs and encode it, and feed the pre-processed data into the cross-array neuromorphic hardware; the state-quantized network is thereby realized.
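The spike-conversion step in S3 could, for example, use rate coding, where an input intensity becomes the firing probability per time step. Rate coding is one common encoding assumed here for illustration; the patent does not fix a particular scheme:

```python
import numpy as np

def rate_encode(x, n_steps=20, seed=0):
    """Poisson-style rate coding: intensity in [0, 1] -> spike probability per step."""
    rng = np.random.default_rng(seed)
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    # One Bernoulli draw per time step and per input channel.
    return (rng.random((n_steps,) + x.shape) < x).astype(np.int8)

spikes = rate_encode([0.0, 0.5, 1.0], n_steps=100)
print(spikes.mean(axis=0))   # per-channel firing rate approximates the intensity
```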
Further, the crossbar array in the above neuromorphic hardware can be implemented in several ways:
Specifically, the cell at each crosspoint can be a single transistor (N-type transistor, P-type transistor, floating-gate transistor, synaptic transistor, etc.);
Specifically, the cell can be a semiconductor memory cell (such as a six-transistor SRAM cell);
Specifically, the cell can be a single capacitor;
Specifically, the cell can be a select transistor plus a capacitor;
Specifically, the cell can be a single memristor;
Specifically, the cell can be a select transistor plus a variable resistor;
Specifically, the cell can be a rectifier diode plus a variable resistor.
The beneficial effect of the present invention is that, on the basis of converting an artificial neural network into a spiking neural network, one or more parameters can be quantized, and the quantized parameters can then be mapped onto the corresponding control blocks of the cross-array neuromorphic hardware. A state-quantized network is thereby realized in the hardware itself, implementing state quantization at the hardware level and effectively relaxing the hardware's requirements on storage-cell array size, number of storage levels, and reliability.
Brief Description of the Drawings
Fig. 1 is a flowchart of the method for implementing a state-quantized network in cross-array neuromorphic hardware provided by an embodiment of the present invention;
Fig. 2 is a system structure diagram for realizing the state-quantized network in cross-array neuromorphic hardware provided by an embodiment of the present invention;
Fig. 3 shows a crossbar array implemented with capacitors;
Fig. 4 shows a crossbar array implemented with memristors;
Fig. 5 shows a crossbar array implemented with transistors plus variable resistors;
Fig. 6 shows a neuron model adopted in an embodiment of the present invention.
Detailed Description of Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, an embodiment of the present invention provides a method for implementing a state-quantized network in cross-array neuromorphic hardware, comprising the following steps:
S1: Select parameters and quantize them. Parameter quantization can be performed either after the neural network has been trained or during training.
A: Quantization after training is complete
Train an artificial neural network (e.g., MLP, CNN, RNN, or LSTM) on a specific task under specific conditions to obtain its parameters (weights, thresholds, leakage constants, set voltage values, refractory period durations, synaptic delay durations, etc.);
In a spiking neural network, quantize at least one of the parameters obtained from the artificial-neural-network training; that is, replace all states of a trained parameter with a small number of quantized states. Repeatedly adjust the quantized parameters in the spiking neural network until the quantized network attains the same function and nearly the same performance as the original artificial neural network; parameter quantization is then complete.
B: Quantization during training
When training the artificial neural network, quantize the values of the parameters to be quantized (such as the weights), e.g., fix the weight quantization values to -1, -0.4, 0, 0.4, and 1, and then train the network. After training, map the parameters onto the corresponding spiking neural network and adjust the trained quantization parameters until the spiking neural network attains the same function and nearly the same performance as the original artificial neural network; parameter quantization is then complete.
S2: Map the spiking-neural-network parameters quantized in S1 onto the cross-array neuromorphic hardware
The quantized parameters of the trained spiking network are mapped onto the cross-array neuromorphic hardware, different parameters going to different parts of the hardware: the quantized weights map to the crossbar arrays; the quantized thresholds to the neuron threshold control block; the quantized leakage constants to the neuron leakage-constant control block; the quantized set voltage values to the neuron set-voltage control block; the quantized refractory period durations to the neuron refractory-period control block; and the quantized synaptic delay durations to the synaptic-delay control block.
S3: Pre-process the input data by converting the original input into spike inputs and encoding them, then feed the pre-processed data into the cross-array neuromorphic hardware; the state-quantized network is thereby realized.
Weight quantization is taken below as a detailed example.
A corresponding artificial neural network is built for a specific task; it can be of any model, such as MLP, CNN, RNN, or LSTM.
Quantization after training: the network is trained under certain conditions with a conventional artificial-neural-network training method to obtain the trained parameters. The trained parameters are then mapped onto a spiking neural network with the same topology. The weight parameters to be quantized are selected, and in the spiking neural network a finite number of weight states replaces the full set of previous weight values, realizing weight quantization. The spiking neural network is tested with the quantized weights, and its performance is compared with that of the corresponding artificial neural network. If the spiking network meets the performance target, weight quantization is finished; otherwise new quantization states are chosen for the weights and the spiking network is tested again, repeating until the target is met.
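The quantize-test-retry loop described above can be sketched as follows. The scoring function here stands in for the task-specific spiking-network test and, like the weight values and level sets, is purely illustrative:

```python
def quantize_until_ok(weights, candidate_level_sets, evaluate, target):
    """Try successive sets of quantization levels until the quantized
    network reaches the performance target (evaluate() is task-specific)."""
    for levels in candidate_level_sets:
        # Snap every weight to its nearest level in this candidate set.
        qw = [min(levels, key=lambda s: abs(s - w)) for w in weights]
        if evaluate(qw) >= target:
            return levels, qw
    raise RuntimeError("no candidate level set met the performance target")

w = [0.83, -0.42, 0.11]
# Toy stand-in for "test the SNN": negative total quantization error as a score.
score = lambda qw_: -sum((a - b) ** 2 for a, b in zip(qw_, w))
levels, qw = quantize_until_ok(w, [[-1, 0, 1], [-1, -0.4, 0, 0.4, 1]], score, -0.05)
print(levels)   # the finer level set is needed to meet the target
```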
Quantization during training: several weight value states are chosen before the artificial neural network is trained, and the network is then trained. If the requirements are met, training ends; otherwise the weight value states are changed and the network is retrained, repeating until the artificial neural network meets the requirements. The trained quantized weights and the other parameters are mapped onto a spiking neural network with the same topology as the artificial network, the spiking network is tested, and its performance is compared with that of the corresponding artificial network. If the spiking network meets the performance target, weight quantization is finished; otherwise the quantized weight values are adjusted and the artificial-network training and spiking-network mapping are repeated until the target is met.
The trained spiking neural network is mapped onto the corresponding weight control part of the cross-array neuromorphic hardware, i.e., the crossbar cells. The relevant parameters of the crossbar cells are adjusted so that the weight values in the hardware take the finite set of weight states obtained from the spiking-network training.
The input data is pre-processed, converted into spike inputs, encoded, and fed into the cross-array neuromorphic hardware; the hardware then realizes the weight-quantized neural network function.
Fig. 2 shows the system structure of the state-quantized network realized by the cross-array neuromorphic hardware in an embodiment of the present invention. As shown in the figure, the positive and negative weights are implemented by separate crossbar arrays, and the inputs are divided into positive inputs +Vin,1, +Vin,2, +Vin,3, ..., +Vin,n and negative inputs -Vin,1, -Vin,2, -Vin,3, ..., -Vin,n (equal in magnitude to the positive inputs). In the positive-weight crossbar the original positive weights are retained and the negative weights are set to zero; in the negative-weight crossbar the original negative weights are retained as absolute values and the positive weights are set to zero. Each input produces a weighted contribution through the state (i.e., the weight value) of its crossbar cell; each column of positive inputs and the corresponding column of negative inputs connect respectively to the positive and negative inputs of the corresponding neuron, and the subtraction of the negative input from the positive input is performed by the sum-difference circuitry inside the neuron. Specifically, taking positive inputs 1 and 2 and negative inputs 9 and 10 as an example, the input-output behavior of neuron 19 is analyzed; that of the other neurons follows in the same way. Positive input 1 produces a weighted current input through cell 5 of the crossbar; positive input 2 through cell 6; negative input 9 through cell 13; and negative input 10 through cell 14. The column of the positive-weight crossbar corresponding to neuron 19 produces the total positive input 17, and similarly the column of the negative-weight crossbar produces the total negative input 18. Neuron 19 processes the total positive input 17 and the total negative input 18, subtracting the negative input from the positive input, performs the subsequent processing, and finally produces the corresponding output.
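The positive/negative crossbar decomposition described above can be sketched in a few lines; the weight matrix and input voltages below are illustrative values:

```python
import numpy as np

def split_weights(W):
    """Split a signed weight matrix into the two non-negative crossbars:
    positives kept in one array, absolute negatives in the other."""
    return np.maximum(W, 0.0), np.maximum(-W, 0.0)

W = np.array([[0.4, -1.0], [-0.4, 1.0]])
Wp, Wn = split_weights(W)
v = np.array([0.5, 0.25])          # input voltages
out = Wp @ v - Wn @ v              # neuron subtracts the negative-column current
assert np.allclose(out, W @ v)     # equivalent to the original signed weights
print(out)
```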
The quantized weights of the spiking neural network are mapped to the weight control part of the cross-array neuromorphic hardware; specifically, they are mapped onto the crossbar arrays in the system structure of Fig. 2. By adjusting the control part of each crossbar cell and thereby changing the cell's state, weight quantization is realized.
How different kinds of crossbar cells implement weight quantization is described below.
Fig. 3 shows a crossbar array built from capacitors. Each cell in the array controls the connection strength, i.e., the weight, between its input and the neuron. A capacitor cell represents the weight value by the amount of charge stored on it; the more charge, the larger the current produced per unit input. When the quantized weights of the spiking neural network are mapped onto the hardware, the control unit of each capacitor is adjusted according to the quantized weight states so that the charge states on the capacitors correspond one-to-one with the quantized weight states; a capacitor crossbar can thus realize weight quantization in the cross-array neuromorphic hardware.
Fig. 4 shows a crossbar array built from memristors. The resistance state of each memristor represents the weight value and thereby controls the current produced per unit input. When the quantized weights of the spiking neural network are mapped onto the hardware, the control unit of each memristor is adjusted according to the quantized weight states so that the memristor resistance states correspond one-to-one with the quantized weight states; a memristor crossbar can thus realize weight quantization in the cross-array neuromorphic hardware.
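The one-to-one correspondence between quantized weight states and memristor states might look like the following sketch; the conductance values are invented placeholders, not device data:

```python
import numpy as np

# Hypothetical device states: one conductance value (in siemens) per weight level.
WEIGHT_TO_CONDUCTANCE = {0.0: 1e-6, 0.4: 4e-5, 1.0: 1e-4}

def program_crossbar(quantized_weights):
    """Replace each quantized (non-negative) weight with its one-to-one
    memristor conductance state."""
    return np.vectorize(WEIGHT_TO_CONDUCTANCE.get)(quantized_weights)

G = program_crossbar(np.array([[0.4, 0.0], [1.0, 0.4]]))
print(G)
```

Because only a few conductance states are ever programmed, the device does not need fine-grained analog precision, which is the storage-level relaxation the patent claims.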
Fig. 5 shows a crossbar array built from transistors plus variable resistors. Changing the resistance of the variable resistor controls the current produced per unit input. When the quantized weights of the spiking neural network are mapped onto the hardware, the control unit of each variable resistor is adjusted according to the quantized weight states so that the resistance states correspond one-to-one with the quantized weight states; a transistor-plus-variable-resistor crossbar can thus realize weight quantization in the cross-array neuromorphic hardware.
Fig. 6 shows a neuron structure model adopted in an embodiment of the present invention. The neuron handles the positive and negative inputs and also performs subsequent processing on the resulting net input. As shown in the figure, the neuron has a positive input and a negative input; they pass through a weighting section built from resistors R1, R2, R3, R4, R5 and op-amps 1 and 2, which subtracts the negative input from the positive input to produce the final input. The final input is then processed by the subsequent stages, which produce the corresponding output. When the quantized spiking-neural-network parameters are mapped onto the neuromorphic hardware, each parameter maps to its corresponding control part of the neuron. The quantized threshold maps to the neuron's threshold control part, i.e., the adjustable voltage source in the figure; this source can produce multiple voltage values corresponding to multiple neuron thresholds, and adjusting its output according to the quantized thresholds changes the neuron's threshold, realizing the threshold mapping. The quantized leakage constant maps to resistor R8 in the neuron: adjusting the resistance of R8 changes the leakage rate of the charge on capacitor C2 and hence the neuron's leakage constant, so setting R8 according to the quantized leakage constant realizes the leakage-constant mapping. The quantized refractory period duration maps to the neuron's refractory-period control unit, i.e., the select switch S in the figure. Switch S has an adjustable conduction state and hold time; while S is held in the discharging state, the neuron cannot respond to external input, i.e., it is in its refractory period. Adjusting the hold time of switch S after each neuron firing according to the quantized refractory period duration sets the neuron's refractory period, realizing the refractory-period mapping.
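A behavioral sketch of such a neuron, using the quantized parameters named above (leak constant, threshold, set/reset voltage, refractory duration), might look like this discrete-time leaky integrate-and-fire model; the parameter values are illustrative, not circuit values from the figure:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9, v_set=0.0, refractory=2):
    """Discrete-time leaky integrate-and-fire: integrate with leak, fire at
    threshold, reset to the set voltage, then ignore input for the
    refractory duration."""
    v, wait, spikes = v_set, 0, []
    for x in inputs:
        if wait > 0:                  # in refractory period: ignore input
            wait -= 1
            spikes.append(0)
            continue
        v = leak * v + x              # leaky integration (R8/C2 analogue)
        if v >= threshold:            # fire and reset to the set voltage
            spikes.append(1)
            v = v_set
            wait = refractory         # switch S hold-time analogue
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.6, 0.6, 0.6, 0.6, 0.6, 0.6]))  # -> [0, 1, 0, 0, 0, 1]
```

Each keyword argument corresponds to one quantized state the hardware control blocks would hold, so restricting these arguments to a few values mirrors the state quantization done on chip.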
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811029532.2A CN109165730B (en) | 2018-09-05 | 2018-09-05 | A State Quantization Network Implementation Method in Cross-Array Neuromorphic Hardware |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811029532.2A CN109165730B (en) | 2018-09-05 | 2018-09-05 | A State Quantization Network Implementation Method in Cross-Array Neuromorphic Hardware |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109165730A CN109165730A (en) | 2019-01-08 |
CN109165730B true CN109165730B (en) | 2022-04-26 |
Family
ID=64893970
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811029532.2A Active CN109165730B (en) | 2018-09-05 | 2018-09-05 | A State Quantization Network Implementation Method in Cross-Array Neuromorphic Hardware |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109165730B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109800872B (en) * | 2019-01-28 | 2022-12-16 | 电子科技大学 | A Neuromorphic Processor Based on Segment Multiplexing and Parameter Quantization Sharing |
CN112183734B (en) * | 2019-07-03 | 2025-02-14 | 财团法人工业技术研究院 | Neuronal circuits |
CN111490162B (en) * | 2020-04-14 | 2023-05-05 | 中国科学院重庆绿色智能技术研究院 | A flexible artificial afferent nervous system based on micro-nano structure force sensitive film and its preparation method |
CN111598237B (en) * | 2020-05-21 | 2024-06-11 | 上海商汤智能科技有限公司 | Quantization training, image processing method and device, and storage medium |
CN114186676A (en) | 2020-09-15 | 2022-03-15 | 深圳市九天睿芯科技有限公司 | An in-memory spiking neural network based on current integration |
CN112163673B (en) * | 2020-09-28 | 2023-04-07 | 复旦大学 | Population routing method for large-scale brain-like computing network |
CN112199234A (en) * | 2020-09-29 | 2021-01-08 | 中国科学院上海微系统与信息技术研究所 | Neural network fault tolerance method based on memristor |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009026181A (en) * | 2007-07-23 | 2009-02-05 | Ryukoku Univ | Neural network |
CN108009640A (en) * | 2017-12-25 | 2018-05-08 | Tsinghua University | Memristor-based neural network training device and training method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8515885B2 (en) * | 2010-10-29 | 2013-08-20 | International Business Machines Corporation | Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation |
CN105390520B (en) * | 2015-10-21 | 2018-06-22 | Tsinghua University | Parameter configuration method for a memristor crossbar array |
EP3414702A1 (en) * | 2016-02-08 | 2018-12-19 | Spero Devices, Inc. | Analog co-processor |
CN108304922B (en) * | 2017-01-13 | 2020-12-15 | Huawei Technologies Co., Ltd. | Computing device and computing method for neural network computing |
CN106971372B (en) * | 2017-02-24 | 2020-01-03 | Peking University | Encoded flash memory system and method for implementing image convolution |
- 2018-09-05: Application CN201811029532.2A filed in China; granted as patent CN109165730B; legal status: Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009026181A (en) * | 2007-07-23 | 2009-02-05 | Ryukoku Univ | Neural network |
CN108009640A (en) * | 2017-12-25 | 2018-05-08 | Tsinghua University | Memristor-based neural network training device and training method |
Non-Patent Citations (3)
Title |
---|
Memristor Crossbar-Based Neuromorphic Computing System: A Case Study; Miao Hu et al.; IEEE Transactions on Neural Networks and Learning Systems; 2014-01-10; Vol. 25, No. 10; pp. 1864-1878 *
Neuromorphic Computing with Memristor Crossbar; Xinjiang Zhang et al.; Physica Status Solidi (a); 2018-07-11; Vol. 215, No. 13; pp. 1-16 *
Circuit Design of Convolutional Neural Network Based on Memristor Crossbar Array; Hu Fei et al.; Journal of Computer Research and Development; 2018-05-15; pp. 1097-1107 *
Also Published As
Publication number | Publication date |
---|---|
CN109165730A (en) | 2019-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109165730B (en) | A State Quantization Network Implementation Method in Cross-Array Neuromorphic Hardware | |
CN112183739B (en) | Hardware architecture of a memristor-based low-power spiking convolutional neural network | |
US11861489B2 (en) | Convolutional neural network on-chip learning system based on non-volatile memory | |
CN109816026B (en) | Fusion device and method of convolutional neural network and impulse neural network | |
Milo et al. | Demonstration of hybrid CMOS/RRAM neural networks with spike time/rate-dependent plasticity | |
US9330355B2 (en) | Computed synapses for neuromorphic systems | |
CN108804786B (en) | A memristor model circuit design method with plastic synaptic weights in associative neural networks | |
CN108717570A (en) | A spiking neural network parameter quantization method | |
CN110852429B (en) | 1T 1R-based convolutional neural network circuit and operation method thereof | |
CN110998611A (en) | Neuromorphic processing device | |
CN108268938B (en) | Neural network, information processing method and information processing system thereof | |
EP3055812A2 (en) | Shared memory architecture for a neural simulator | |
Zhang et al. | The framework and memristive circuit design for multisensory mutual associative memory networks | |
US9959499B2 (en) | Methods and apparatus for implementation of group tags for neural models | |
CN111242063A (en) | Construction method of small sample classification model based on transfer learning and application of iris classification | |
JPWO2020092691A5 (en) | ||
WO2015167765A2 (en) | Temporal spike encoding for temporal learning | |
KR20210152244A (en) | Apparatus for implementing neural network and operation method thereof | |
Sun et al. | Low-consumption neuromorphic memristor architecture based on convolutional neural networks | |
WO2015148210A2 (en) | Plastic synapse management | |
CN109102072B (en) | Memristor synaptic pulse neural network circuit design method based on single-electron transistor | |
CN114169511B (en) | An associative memory circuit and method based on physical memristor | |
US20150213356A1 (en) | Method for converting values into spikes | |
Sun et al. | Quaternary synapses network for memristor-based spiking convolutional neural networks | |
Daddinounou et al. | Synaptic control for hardware implementation of spike timing dependent plasticity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||