Roohi et al., 2019 - Google Patents
Processing-in-memory acceleration of convolutional neural networks for energy-efficiency, and power-intermittency resilience (Roohi et al., 2019)
- Document ID
- 811471095583419745
- Author
- Roohi A
- Angizi S
- Fan D
- DeMara R
- Publication year
- 2019
- Publication venue
- 20th International Symposium on Quality Electronic Design (ISQED)
Snippet
Herein, a bit-wise Convolutional Neural Network (CNN) in-memory accelerator is implemented using Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) computational sub-arrays. It utilizes a novel AND-Accumulation method capable of …
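The snippet describes a bit-wise AND-Accumulation scheme for in-memory convolution. As a rough illustration of that principle, the sketch below computes a dot product bit-serially in software: each partial product is a bitwise AND of one input bit-slice with one weight bit-slice, and the partial sums are accumulated with power-of-two weights. This is a hypothetical software analogue for exposition only, not the paper's SOT-MRAM circuit; the function name and the bit-width parameter are assumptions.

```python
# Minimal software sketch of bit-wise AND-Accumulation (hypothetical analogue of
# the in-memory scheme described in the snippet; not the SOT-MRAM design itself).

def and_accumulate_dot(inputs, weights, n_bits=4):
    """Dot product of two unsigned-integer vectors, computed bit-serially:
    every partial product is a bitwise AND of one input bit-slice with one
    weight bit-slice, and partial sums are accumulated with 2^(i+j) scaling."""
    assert len(inputs) == len(weights)
    acc = 0
    for i in range(n_bits):            # bit position within each input value
        for j in range(n_bits):        # bit position within each weight value
            # AND the i-th input bit with the j-th weight bit for every element;
            # the count of these 1-bit products is the partial sum for this pair.
            partial = sum(((a >> i) & 1) & ((w >> j) & 1)
                          for a, w in zip(inputs, weights))
            acc += partial << (i + j)  # scale the partial sum by 2^(i+j)
    return acc


if __name__ == "__main__":
    activations = [3, 5, 7, 2]
    kernel = [1, 4, 2, 6]
    # The bit-serial result matches an ordinary integer dot product (49 here).
    assert and_accumulate_dot(activations, kernel) == sum(
        a * w for a, w in zip(activations, kernel))
    print(and_accumulate_dot(activations, kernel))
```

In a hardware realization, the per-bit ANDs would be performed in place on memory sub-array rows and the counts accumulated by peripheral circuitry; the sketch only mirrors the arithmetic decomposition.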
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/52—Multiplying; Dividing
- G06F7/523—Multiplying only
- G06F7/53—Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/30—Arrangements for executing machine-instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30007—Arrangements for executing specific machine instructions to perform operations on data operands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/544—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
- G06F17/5009—Computer-aided design using simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Error detection; Error correction; Monitoring responding to the occurrence of a fault, e.g. fault tolerance
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored programme computers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F1/00—Details of data-processing equipment not covered by groups G06F3/00 - G06F13/00, e.g. cooling, packaging or power supply specially adapted for computer application
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2217/00—Indexing scheme relating to computer aided design [CAD]
- G06F2217/78—Power analysis and optimization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2207/00—Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F2207/38—Indexing scheme relating to groups G06F7/38 - G06F7/575
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
Similar Documents
Publication | Publication Date | Title
---|---|---
Yang et al. | | Sparse ReRAM engine: Joint exploration of activation and weight sparsity in compressed neural networks
Bavikadi et al. | | A review of in-memory computing architectures for machine learning applications
Angizi et al. | | Cmp-pim: an energy-efficient comparator-based processing-in-memory neural network accelerator
Wang et al. | | 14.2 A compute SRAM with bit-serial integer/floating-point operations for programmable in-memory vector acceleration
KR102780371B1 (en) | | Method for performing PIM (PROCESSING-IN-MEMORY) operations on serially allocated data, and related memory devices and systems
Eckert et al. | | Neural cache: Bit-serial in-cache acceleration of deep neural networks
Roohi et al. | | Processing-in-memory acceleration of convolutional neural networks for energy-efficiency, and power-intermittency resilience
Mittal et al. | | A survey of SRAM-based in-memory computing techniques and applications
Angizi et al. | | IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network
Umesh et al. | | A survey of spintronic architectures for processing-in-memory and neural networks
Lenjani et al. | | Fulcrum: A simplified control and access mechanism toward flexible and practical in-situ accelerators
Zheng et al. | | Mobilatice: a depth-wise dcnn accelerator with hybrid digital/analog nonvolatile processing-in-memory block
Angizi et al. | | Parapim: a parallel processing-in-memory accelerator for binary-weight deep neural networks
Angizi et al. | | Dima: a depthwise cnn in-memory accelerator
Luo et al. | | AILC: Accelerate on-chip incremental learning with compute-in-memory technology
US20250094126A1 (en) | | In-memory computation circuit and method
Lu et al. | | RIME: A scalable and energy-efficient processing-in-memory architecture for floating-point operations
Liu et al. | | Sme: Reram-based sparse-multiplication-engine to squeeze-out bit sparsity of neural network
Tsai et al. | | RePIM: Joint exploitation of activation and weight repetitions for in-ReRAM DNN acceleration
Zhao et al. | | NAND-SPIN-based processing-in-MRAM architecture for convolutional neural network acceleration
Jasemi et al. | | Reliable and energy efficient MLC STT-RAM buffer for CNN accelerators
Ollivier et al. | | PIRM: Processing in racetrack memories
Liu et al. | | Era-bs: Boosting the efficiency of reram-based pim accelerator with fine-grained bit-level sparsity
Srinivasa et al. | | Trends and opportunities for SRAM based in-memory and near-memory computation
Xia et al. | | Reconfigurable spatial-parallel stochastic computing for accelerating sparse convolutional neural networks