WO2025174353A1 - Executing fourier transform operations with deep neural network accelerator - Google Patents
- Publication number
- WO2025174353A1 (PCT/US2024/015312)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tensor
- mac
- weight
- input
- operations
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F7/544—Methods or arrangements for performing computations using exclusively denominational number representation, using non-contact-making devices, for evaluating functions by calculation
- G06F7/5443—Sum of products
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/048—Activation functions
- G06N3/0495—Quantised networks; Sparse networks; Compressed networks
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/08—Learning methods
Definitions
- This disclosure relates generally to neural networks (also referred to as “deep neural networks” or “DNN”), and more specifically, to executing Fourier transform operations with DNN accelerators.
- FIG. 2 illustrates an example convolution, in accordance with various embodiments.
- FIG. 3 is a block diagram of a DNN system, in accordance with various embodiments.
- FIG. 4 is a block diagram of a DNN module, in accordance with various embodiments.
- FIG. 5 illustrates an example transformation matrix of a Fourier transform operation, in accordance with various embodiments.
- FIG. 6 illustrates a sparse convolution cell, in accordance with various embodiments.
- FIG. 8 illustrates mapping a discrete Fourier transform (DFT) operation to a sparse cell array, in accordance with various embodiments.
- FIG. 9 illustrates mapping a real DFT (RDFT) operation to a sparse cell array, in accordance with various embodiments.
- FIG. 12 illustrates another example sliding window pattern of an STFT operation, in accordance with various embodiments.
- the DNN 100 includes a sequence of layers comprising a plurality of convolutional layers 110 (individually referred to as “convolutional layer 110"), a plurality of pooling layers 120 (individually referred to as “pooling layer 120”), and a plurality of fully-connected layers 130 (individually referred to as “fully-connected layer 130").
- the DNN 100 may include fewer, more, or different layers.
- the DNN 100 may include one or more DFT layers or one or more inverse DFT (IDFT) layers.
- the DNN 100 may be trained to perform tasks other than image classification.
- the input tensor 210 has a spatial size H_in × W_in × C_in, where H_in is the height of the 3D matrix (i.e., the length along the Y axis, which indicates the number of activations in a column in the 2D matrix of each input channel), W_in is the width of the 3D matrix (i.e., the length along the X axis, which indicates the number of activations in a row in the 2D matrix of each input channel), and C_in is the depth of the 3D matrix (i.e., the length along the Z axis, which indicates the number of input channels).
- MAC operations can be performed on a 2x3x3 subtensor 215 (which is highlighted with a dotted pattern in FIG. 2) in the input tensor 210 and each filter 220.
- the result of the MAC operations on the subtensor 215 and one filter 220 is an output activation.
- an output activation may include 8 bits, e.g., one byte.
- an output activation may include more than one byte. For instance, an output element may include two bytes.
- the output activations in the output tensor 230 may be further processed based on one or more activation functions before they are stored or inputted into the next layer of the DNN.
- the processing based on the one or more activation functions may be at least part of the post processing of the convolution.
- the post processing may include one or more other computations, such as offset computation, bias computation, and so on.
- the results of the post processing may be stored in a local memory of the compute block and be used as input to the next DNN layer.
- the input activations in the input tensor 210 may be results of post processing of the previous DNN layer.
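- As a minimal illustration of the MAC arithmetic described above (shapes follow the 2x3x3 subtensor and filter of the FIG. 2 discussion; all variable names are hypothetical), one output activation is the accumulated sum of elementwise activation-weight products:

```python
import numpy as np

# Toy shapes loosely following the FIG. 2 discussion: a subtensor and a
# filter of matching size. Names and sizes here are illustrative only.
subtensor = np.random.rand(2, 3, 3)   # slice of the input tensor
filt = np.random.rand(2, 3, 3)        # one filter of the convolutional layer

# One output activation is the multiply-accumulate (MAC) of the two:
# every activation is multiplied with the corresponding weight, and the
# products are accumulated into a single output point.
output_activation = np.sum(subtensor * filt)

# Sliding the subtensor window across the input tensor and repeating per
# filter yields the full output tensor of the convolution.
```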
- the DNN module 301 and DNN accelerator 302 may include different types of processing units.
- the DNN module 301 may be implemented by a CPU.
- the DNN accelerator 302 may also be referred to as an AI accelerator or an AI processor.
- the DNN module 301 and DNN accelerator 302 may be implemented in the same chip or separate chips.
- the DNN module 301 facilitates generation and deployment of DNNs.
- the DNN module 301 may generate and train DNNs.
- the DNN module 301 can define the layered architecture of a DNN.
- the DNN module 301 can also determine the internal parameters of the DNN through a DNN training process.
- the DNN module 301 may also determine one or more hyperparameters that define how the DNN is trained.
- An example hyperparameter is a sparsity ratio that defines the sparsity level of one or more deep learning tensors for the DNN.
- the DNN module 301 may allow the pruned weights to change values so that a pruned, zero-valued weight may have a nonzero value after further training.
- the DNN module 301 may prune weights of the layer again after one or more additional epochs.
- the DNN module 301 may deploy trained, compressed, or validated DNNs for use in deep learning applications.
- the DNN module 301 may distribute trained, compressed, or validated DNNs to devices or systems which may use the DNNs to perform tasks (e.g., image classification, motion planning, etc.) for which the DNNs were trained.
- the DNN module 301 may facilitate deployment of the DNNs using the DNN accelerator 302.
- the DNN module 301 may receive data from a device or system coupled with the DNN system 300 and input the received data (or data generated by the DNN module 301, e.g., based on the received data) into a DNN.
- the DNN module 301 may generate instructions (e.g., configuration files) that control the operation of the DNN accelerator 302 during the DNN execution.
- the DNN module 301 may receive an output of the DNN from the DNN accelerator 302.
- the DNN module 301 may transmit the output of the DNN (or a result of processing the output of the DNN by the DNN module 301) to the device or system.
- the DNN module 301 may control execution processes of trained, compressed, or validated DNNs.
- the DNN module 301 facilitates execution of Fourier transform operations by the DNN accelerator 302.
- the DNN module 301 may convert Fourier transform operations to matrix multiplications that can be performed by the DNN accelerator 302.
- the matrix multiplications may include MAC operations that are similar to MAC operations in convolutions.
- the DNN module 301 may store the input signal of a Fourier transform operation as activation vectors.
- the DNN module 301 may also generate a transformation matrix with twiddle factors of the Fourier transform operation and store the transformation matrix as weight vectors.
- the activation vectors and weight vectors may be processed by the DNN accelerator 302 in the same or similar way that the DNN accelerator 302 processes activation operands and weight operands in convolutions. Certain aspects of the DNN module 301 are provided below in conjunction with FIG. 4.
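- A minimal sketch of this conversion, assuming only the description above (the matrix name W and the signal length N are illustrative): the twiddle-factor transformation matrix is treated as weights, and a matrix multiplication reproduces the DFT of the input signal.

```python
import numpy as np

# The N-point DFT as a matrix multiplication between the input signal
# (treated as activations) and a transformation matrix of twiddle factors
# (treated as weights). Variable names are illustrative, not from the disclosure.
N = 16
n = np.arange(N)
k = n.reshape(-1, 1)
W = np.exp(-2j * np.pi * k * n / N)    # transformation matrix of twiddle factors

x = np.random.rand(N)                  # input signal stored as an activation vector
X = W @ x                              # MAC operations, as in a convolution

assert np.allclose(X, np.fft.fft(x))   # matches the DFT of the input signal
```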
- the DNN accelerator 302 executes DNNs provided by the DNN module 301. For instance, the DNN accelerator 302 can perform DNN execution, e.g., by running deep learning operations in the DNNs, for training DNNs or for using the trained/compressed/validated DNNs to perform tasks.
- the DMA engine 320 facilitates data transfer between the memory 310 and local memories of the compute blocks 330.
- the DMA engine 320 can read data from the memory 310 and write data into a local memory of a compute block 330.
- the DMA engine 320 can read data from a local memory of a compute block 330 and write data into the memory 310.
- the DMA engine 320 provides a DMA feature that allows the compute block 330 to initiate data transfer between the memory 310 and the local memories of the compute blocks 330 and to perform other operations while the data transfer is being conducted.
- each compute block 330 includes a local memory 340, a sparsity mode module 350, a load module 360, a sparse cell array 370 (also referred to as a data processing unit), and a drain module 380.
- Some or all the components of the compute block 330 can be implemented on the same chip. In other embodiments, alternative configurations with different or additional components may be included in the compute block 330. Further, functionality attributed to a component of the compute block 330 may be accomplished by a different component included in the compute block 330, a different compute block 330, another component of the DNN accelerator 302, or a different system.
- a component of the compute block 330 may be implemented in hardware, software, firmware, or some combination thereof.
- the local memory 340 may store dense tensors (e.g., dense activation tensors, dense weight tensors, etc.), sparse tensors (e.g., sparse activation tensors, sparse weight tensors, etc.), and so on.
- a dense tensor may be a tensor from which zero-valued elements (if any) are not removed.
- a dense tensor may be converted to a sparse tensor by removing one or more zero-valued elements in the dense tensor.
- a sparse tensor may also be referred to as a compressed tensor or packed tensor. The process of converting a dense tensor to a sparse tensor may be referred to as sparsity encoding.
- Sparsity encoding may also generate a sparsity tensor.
- Each element in the sparsity tensor may correspond to a different element in the dense tensor and indicate whether the element in the dense tensor is zero or not.
- the sparsity tensor may indicate positions of elements of the sparse tensor in the dense tensor.
- the sparsity tensor may be a sparsity bitmap, each element of which is a bit.
- a sparse tensor may be converted to a dense tensor through a densifying process, in which one or more zeros may be added to the sparse tensor based on the sparsity tensor.
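- A small sketch of sparsity encoding and densifying as described above (names are illustrative; a real encoder would operate on hardware-sized chunks):

```python
import numpy as np

# Sparsity encoding: zeros are removed from a dense tensor, and a sparsity
# bitmap records which elements of the dense tensor were nonzero.
dense = np.array([0.0, 1.5, 0.0, 0.0, 2.5, 3.0])

bitmap = (dense != 0).astype(np.uint8)   # sparsity tensor: one bit per element
sparse = dense[dense != 0]               # compressed (packed) tensor: [1.5, 2.5, 3.0]

# Densifying: scatter the packed values back to the positions marked in the bitmap.
restored = np.zeros_like(dense)
restored[bitmap.astype(bool)] = sparse
assert np.array_equal(restored, dense)
```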
- the load module 360 loads data from the local memory 340 to the sparse cell array 370.
- the load module 360 may read tensors from the local memory 340.
- the tensors may include sparse activation tensors, sparse weight tensors, activation sparsity tensors, weight sparsity tensors, and so on.
- the load module 360 may load data based on the sparsity mode determined by the sparsity mode module 350.
- the load module 360 may select different data to transmit to the sparse cell array 370 in different sparsity modes.
- an MAC unit includes one or more multipliers for performing multiplications.
- An MAC unit may also include one or more accumulators ("adders") for performing accumulations.
- a column of MAC units is referred to as an MAC column.
- An MAC column may be associated with one or more MAC lanes.
- An MAC lane is a path for loading data, e.g., by the load module 360, into an MAC column.
- An MAC lane may be also referred to as a data transmission lane or data loading lane.
- An MAC column may have multiple MAC lanes.
- the loading bandwidth of the MAC column is an aggregation of the loading bandwidths of all the MAC lanes associated with the MAC column.
- Each multiplication in the sequence (also referred to as a cycle) is a multiplication of a different activation in the input operand with a different weight in the weight operand.
- the activation and weight in the same cycle may correspond to the same channel.
- the sequence of multiplications produces a product operand that includes a sequence of products.
- the MAC operation may also include accumulations in which multiple product operands are accumulated to produce an output operand of the MAC unit.
- the sparse cell array 370 may output multiple output operands at a time, each of which is generated by a different MAC unit.
- MAC operations may include accumulations across the channels. For instance, as opposed to generating an output operand, a MAC unit may accumulate products across different channels to generate a single output point.
- a weight sparsity tensor may be the sparsity tensor of a weight tensor and has the same number of elements as the weight tensor.
- An element in the weight sparsity tensor may indicate whether the corresponding element in the weight tensor is zero or not. For instance, a zero-valued element in the weight sparsity tensor may indicate that the corresponding element in the weight tensor is zero.
- a one-valued element in the weight sparsity tensor may indicate that the corresponding element in the weight tensor is nonzero.
- the sparsity module may generate a combined sparsity tensor using an activation sparsity tensor and a weight sparsity tensor.
- the sparsity module may identify activations and weights that correspond to nonzero valued elements of a combined sparsity tensor. In an embodiment where the sparse cell array 370 operates in the activation sparsity mode, the sparsity module may identify activations and weights that correspond to nonzero valued elements of an activation sparsity tensor. In an embodiment where the sparse cell array 370 operates in the weight sparsity mode, the sparsity module may identify activations and weights that correspond to nonzero valued elements of a weight sparsity tensor. The sparsity module may be bypassed in the dense mode as no sparsity acceleration would be conducted.
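- A schematic sketch of the combined sparsity mode described above, assuming the combined sparsity tensor is the elementwise AND of the activation and weight bitmaps (consistent with skipping any activation-weight pair that contains a zero):

```python
import numpy as np

# A multiplication contributes to the output only where both the activation
# and the weight are nonzero, so the combined sparsity tensor may be the
# elementwise AND of the two sparsity bitmaps.
act = np.array([0.0, 2.0, 3.0, 0.0])
wgt = np.array([1.0, 4.0, 0.0, 0.0])

act_bitmap = act != 0
wgt_bitmap = wgt != 0
combined = act_bitmap & wgt_bitmap       # nonzero activation-weight pairs only

# Only the surviving pairs are fed to the multipliers; the rest are skipped.
partial_sum = np.sum(act[combined] * wgt[combined])
assert partial_sum == np.dot(act, wgt)   # same result as the dense MAC
```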
- the drain module 380 drains data from the sparse cell array 370 and writes the data to the local memory 340.
- the data may be outputs of MAC operations performed by MAC units in the sparse cell array 370.
- the drain module 380 may drain data on a sparse-convolution-cell level.
- the drain module 380 may drain outputs of MAC units in the sparse convolution cell based on a row index or column index of each MAC unit.
- the drain module 380 may use a sequence of cycles to drain data from a sparse convolution cell.
- the drain module 380 may drain the output of some of the MAC units in each cycle.
- the sequence of the cycles may be configured based on a configuration parameter indicating the operation mode of the load module 360.
- the drain module 380 may drain the output of a different MAC row in each cycle.
- the sequence of cycles may start with the first MAC row (e.g., the MAC row at the top of the sparse convolution cell) and end with the last MAC row (e.g., the MAC row at the bottom of the sparse convolution cell).
- the drain module 380 may determine whether to drain the output of an MAC unit based on the row index of the MAC unit when the load module operates in the activation sparsity mode versus based on the column index of the MAC unit when the load module operates in the weight sparsity mode.
- the drain module 380 may also include sparsity encoding logic that can convert outputs of the sparse cell array 370 from a dense format to a sparse format.
- the drain module 380 may be implemented with one or more sparsity encoders.
- a sparsity encoder converts dense data to compressed data based on sparsity in the dense data.
- the sparsity encoder may remove zeros in an activation tensor computed by the sparse cell array 370 to convert the activation tensor to a compressed activation tensor.
- the sparsity encoder may also generate sparsity tensors, including activation sparsity tensors.
- the data drained from the sparse cell array 370 may be at least part of an output tensor (e.g., the output tensor 230 in FIG. 2) of a deep learning operation.
- the sparsity encoder may generate a compressed version of the output tensor.
- the sparsity encoder may identify every zero-valued activation in the output tensor and remove these activations from the output tensor to generate a compressed activation tensor (aka "sparse activation tensor").
- the sparsity encoder may also generate one or more sparsity tensors for the output tensor.
- a sparsity tensor may correspond to a portion of the output tensor (e.g., the vector 235 in FIG. 2).
- the sparsity tensor may include sparsity elements (e.g., bits), each of which corresponds to a different activation in the vector and indicates whether the corresponding activation is zeroed or not.
- the drain module 380 may write the compressed activation tensor and the one or more sparsity tensors into the local memory 340.
- the sparse activation tensor and the one or more sparsity tensors may be further loaded to the memory 310, e.g., through the DMA engine 320. Additionally or alternatively, the sparse activation tensor and the one or more sparsity tensors may be loaded by the load module 360 to the sparse cell array for further computation, e.g., for performing a deep learning operation in the next layer.
- the DNN accelerator 302 may be used for executing Fourier transform operations, such as Fourier transform operations in DNNs. Fourier transform operations may be converted to matrix operations that are similar to convolutions. For instance, the input signal of a Fourier transform operation may be encoded by and processed as an input tensor. The transformation matrix of the Fourier transform operation may be processed by the DNN accelerator 302 as if the transformation matrix is a weight tensor of a convolution. The DNN accelerator 302 (e.g., the sparse cell array 370) may perform MAC operations on the input tensor and the transformation matrix to compute the Fourier transform of the input signal, i.e., the output signal of the Fourier transform operation. The transformation matrix may be determined offline, e.g., before the execution of the Fourier transform operation or even before the execution of the entire DNN. In some embodiments, the transformation matrix may be determined by the DNN module 301.
- FIG. 4 is a block diagram of a DNN module 400, in accordance with various embodiments.
- the DNN module 400 may be an embodiment of the DNN module 301 in FIG. 3.
- the DNN module 400 includes an interface module 410, a training module 420, a compressing module 430, a validating module 440, a Fourier transform module 450, and a datastore 460.
- the interface module 410 facilitates communications of the DNN module 400 with other modules or systems.
- the hidden layers include one or more convolutional layers and one or more other types of layers, such as pooling layers, fully-connected layers, normalization layers, SoftMax or logistic layers, and so on.
- the convolutional layers of the DNN abstract the input image to a feature map that is represented by a tensor specifying the feature map height, the feature map width, and the feature map channels (e.g., red, green, blue images include 3 channels).
- a pooling layer is used to reduce the spatial volume of the input image after convolution. It is used between two convolution layers.
- a fully-connected layer involves weights, biases, and neurons. It connects neurons in one layer to neurons in another layer. It is used to classify images between different categories by training.
- the training module 420 inputs a training dataset into the DNN.
- the training dataset includes a plurality of training samples.
- An example of a training sample includes an object in an image and a ground-truth label of the object.
- the training module 420 modifies the parameters inside the DNN ("internal parameters of the DNN") to minimize the error between labels of the training objects that are generated by the DNN and the ground-truth labels of the objects.
- the internal parameters include weights of filters in the convolutional layers of the DNN.
- the training module 420 uses a cost function to minimize the error.
- the training module 420 may train the DNN for a predetermined number of epochs.
- the number of epochs is a hyperparameter that defines the number of times that the deep learning algorithm will work through the entire training dataset.
- One epoch means that each sample in the training dataset has had an opportunity to update internal parameters of the DNN.
- the training module 420 may stop updating the parameters in the DNN.
- the DNN having the updated parameters is referred to as a trained DNN.
- the compressing module 430 may select one or more layers in a DNN and modify each selected layer with a pruning operation. For instance, the compressing module 430 may select computationally complex layers, such as layers with large filters. For a pruning operation of a layer or of a type of layer, the compressing module 430 may determine a weight threshold that would not cause a loss of the accuracy of the DNN to exceed an accuracy loss constraint. A pruning operation may modify weights having absolute values below the weight threshold to zeros and leave the other weights unchanged. The weight pruning can reduce memory storage as zero-valued weights may not be stored. Also, the number of operations in the layer can be reduced as computations on zero-valued weights can be skipped without impacting the output of the layer.
- the compressing module 430 may also measure energy saving, final DNN accuracy, or layer-wise sparsity caused by pruning operations. After compressing a DNN, the compressing module 430 may fine-tune the DNN, e.g., through a retraining process. The compressing module 430 may fine-tune DNNs after weights are pruned. In some embodiments, the fine-tuning process is a retraining or further training process. For instance, after weights in a DNN are pruned, the compressing module 430 may further train the DNN by inputting a training dataset into the DNN.
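- A hedged sketch of the pruning operation described above (threshold selection and the accuracy-loss check are assumed to happen elsewhere; function and variable names are illustrative):

```python
import numpy as np

# Magnitude pruning: weights whose absolute values fall below a threshold are
# set to zero, subject to an accuracy-loss constraint checked separately.
def prune_layer(weights: np.ndarray, threshold: float) -> np.ndarray:
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0   # small weights become zeros
    return pruned

weights = np.random.randn(64, 64)
pruned = prune_layer(weights, threshold=0.5)
sparsity = np.mean(pruned == 0)   # layer-wise sparsity introduced by pruning
```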
- the validating module 440 may compare the accuracy score with a threshold score. In an example where the validating module 440 determines that the accuracy score of the DNN is less than the threshold score, the validating module 440 instructs the training module 420 to re-train the DNN. In one embodiment, the training module 420 may iteratively re-train the DNN until the occurrence of a stopping condition, such as the accuracy measurement indicating that the DNN may be sufficiently accurate, or a number of training rounds having taken place.
- the input sequence {x_n} may be a signal in the time domain
- the output sequence {X_k} may be a signal in the frequency domain.
- the output sequence may be a frequency domain representation of the input sequence.
- the DFT operation has a corresponding IDFT operation that converts a signal in the frequency domain to a signal in the time domain.
- the IDFT operation may be denoted as: x_n = (1/N) Σ_{k=0}^{N-1} X_k e^(i 2π k n / N), for n = 0, ..., N - 1, where N is the length of the sequence.
- the Fourier transform module 450 may divide the transformation matrix into weight vectors that may be stored and processed by the DNN accelerator 302 in the same or similar way as weight vectors in convolutions.
- a weight vector may be a column or row of the transformation matrix.
- the Fourier transform module 450 may divide a single column or row of the transformation matrix into two weight vectors: one weight vector including the real components of the complex elements in the row or column, and the other weight vector including the imaginary components of the complex elements in the row or column.
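- A minimal sketch of this real/imaginary split (matrix and vector names are illustrative): two real-valued MAC operations per column reproduce one complex output element.

```python
import numpy as np

# One column of the complex transformation matrix is split into two
# real-valued weight vectors so that the MAC units operate on real numbers.
N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * n.reshape(-1, 1) * n / N)

col = W[:, 1]                     # one column of the transformation matrix
w_real = col.real                 # first weight vector: real components
w_imag = col.imag                 # second weight vector: imaginary components

x = np.random.rand(N)             # real-valued activation vector
X1 = np.dot(x, w_real) + 1j * np.dot(x, w_imag)   # two real MAC operations
assert np.isclose(X1, np.dot(x, col))              # equals the complex product
```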
- the input matrix may be transposed after the first sequence is done so that each row in the input matrix becomes a column in the transposed input matrix.
- the second sequence may be performed on the transposed input matrix in the same way that the first sequence was performed. In an embodiment, the second sequence may be performed after the first sequence. In another embodiment, the second sequence may be performed before the first sequence. In yet another embodiment, the two sequences may be performed simultaneously, e.g., by different sparsity cells in the DNN accelerator.
- the two sequences of matrix multiplication operations may be mapped to MAC units in the DNN accelerator.
- the outputs of the MAC units may constitute the output signal of the Fourier transform operation. More details regarding mapping Fourier transform operations to MAC units are described below in conjunction with FIGS. 8 and 9.
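- A compact sketch of the two-sequence decomposition with the intermediate transpose described above (an 8-point example; all names are illustrative):

```python
import numpy as np

# A 2D DFT as two sequences of 1D-DFT matrix multiplications: the first
# sequence transforms every row, the input is transposed, and the second
# (identical) sequence transforms the rows of the transposed matrix.
N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * n.reshape(-1, 1) * n / N)   # 1D-DFT transformation matrix

x = np.random.rand(N, N)      # input matrix (activation vectors are its rows)
first = x @ W                 # first sequence: 1D-DFT of every row
second = first.T @ W          # transpose, then the same sequence again

assert np.allclose(second.T, np.fft.fft2(x))   # matches the 2D DFT
```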
- the datastore 460 stores data received, generated, used, or otherwise associated with the DNN module 400.
- the datastore 460 stores the datasets used by the training module 420 and validating module 440.
- the MAC units 610 are configured to perform MAC operations.
- Each MAC unit 610 may include one or more multipliers and one or more adders.
- a multiplier may multiply an activation with a weight at a time to compute a product.
- the multipliers may operate simultaneously to process multiple activation-weight pairs and compute multiple products in one cycle.
- An adder may accumulate products computed by the multipliers.
- the sparse convolution cell may include an adder tree including a plurality of adder tiers. The first tier may receive outputs of a plurality of MAC units 610.
- the sparse convolution cell 600 is associated with multiplexers (MUXs) 603, 604, 605, and 606. In other embodiments, the sparse convolution cell 600 may be associated with a different number of MUXs or other devices.
- the MUX 603 facilitates loading weights, e.g., from the local memory 340, into the weight register files 620.
- An example of the MUX 603 may be the MUX 530 in FIG. 5.
- the MUX 604 facilitates loading activations, e.g., from the local memory 340, into the activation register files 630.
- An example of the MUX 604 may be the MUX 540 in FIG. 5.
- the sparse convolution cell 600 may also execute matrix multiplications converted from Fourier transform operations.
- the MAC units 610 may perform MAC operations in the two sequences of matrix multiplications converted from the Fourier transform operation.
- the weight register files 620 may be used to store data points in the transformation tensor of the Fourier transform operation.
- the activation register file 630 may be used to store data points in the input tensor of the Fourier transform operation.
- the row buffers 640 may store data points in the output tensor of the Fourier transform operation.
- Each sparse convolution cell 710 may perform sparsity accelerated MAC operations.
- the sparse convolution cells 710 may facilitate dynamic sparsity mode. For instance, the sparsity modes of the sparse convolution cells 710 may be dynamically changed between a combined sparsity mode, an activation sparsity mode, a weight sparsity mode, and a dense mode.
- An embodiment of a sparse convolution cell 710 may be the sparse convolution cell 600 in FIG. 6.
- the activation memory 720 stores activations, such as activations in input tensors of deep learning operations. Activations may be loaded from the activation memory 720 to sparse convolution cells 710.
- the weight memory 730 stores weights, such as weights in filters of deep learning operations.
- Weights may be loaded from the weight memory 730 to sparse convolution cells 710.
- the activation memory 720 or weight memory 730 may be a buffer.
- the sparse cell array 700 may include a dense data memory and a sparse data memory in lieu of the activation memory 720 and weight memory 730.
- the dense data memory may store dense tensors, e.g., dense tensors generated by the load module 360.
- the sparse data memory may store sparse tensors.
- the sparse cell array 700 may also execute matrix multiplications in Fourier transform operations.
- the activation memory 720 may be used to store input tensors of the Fourier transform operations.
- the weight memory 730 may be used to store transformation matrices of the Fourier transform operations.
- the input tensor is divided into activation vectors 810A-810P (collectively referred to as “activation vectors 810" or “activation vector 810”), and each activation vector 810 includes 16 activations.
- An activation vector 810 may be a row in the input tensor and may be processed as an activation operand.
- the transformation tensor is divided into weight vectors 820A-820P (collectively referred to as "weight vectors 820" or “weight vector 820"), and each weight vector 820 includes 16 weights.
- a weight vector 820 may be a column in the transformation matrix and may be processed as a weight operand.
- the 16 MAC units in the same row in the sparse cell array 800 may execute a single vector-matrix multiplication, i.e., a multiplication of an activation vector 810 with the entire transformation matrix. All the 256 MAC units may execute all the vector-matrix multiplications in the first 1D-DFT operation. After the first 1D-DFT operation is finished, the sparse cell array 800 may perform the second 1D-DFT operation.
- the input tensor may be transposed so that each activation vector 810 may become a column in the input tensor.
- the sparse cell array 800 may execute the second 1D-DFT operation in the same way that it executed the first 1D-DFT operation.
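- A schematic simulation of this mapping (a pure NumPy stand-in for the 16x16 array, not the hardware implementation): MAC unit (p, q) receives activation vector p and weight vector q and produces one dot product.

```python
import numpy as np

# FIG. 8 style mapping on an assumed 16x16 array: activation vector p is
# provided to row p of MAC units, weight vector q to column q, so MAC unit
# (p, q) computes one dot product of the vector-matrix multiplication.
N = 16
n = np.arange(N)
W = np.exp(-2j * np.pi * n.reshape(-1, 1) * n / N)

activations = np.random.rand(N, N)        # 16 activation vectors (rows of input)
outputs = np.empty((N, N), dtype=complex)
for p in range(N):                        # row of MAC units
    for q in range(N):                    # column of MAC units
        outputs[p, q] = np.dot(activations[p], W[:, q])   # one MAC unit's output

assert np.allclose(outputs, activations @ W)   # the first 1D-DFT operation
```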
- the input tensor has real numbers and is divided into four activation vectors 910A-910D (collectively referred to as "activation vectors 910" or “activation vector 910”), and each activation vector 910 includes 4 activations.
- An activation vector 910 may be a row in the input tensor and may be processed as an activation operand.
- the transformation tensor has complex numbers and is divided into eight weight vectors 920A-920H (collectively referred to as "weight vectors 920" or “weight vector 920”), and each weight vector 920 includes 4 weights.
- a weight vector 920 may be processed as a weight operand.
- weight vectors 920A-920D have real elements, and the other four weight vectors 920E-920H have imaginary elements.
- the weight vectors 920A and 920E may constitute the first column of the transformation matrix.
- the weight vector 920A may include the real components of the complex numbers in the first column of the transformation matrix
- the weight vector 920E may include the imaginary components of the complex numbers in the first column of the transformation matrix.
- the weight vectors 920B and 920F may constitute the second column of the transformation matrix, with the weight vector 920B including the real components of the complex numbers in the second column and the weight vector 920F including the imaginary components of the complex numbers in the second column.
- the weight vectors 920C and 920G may constitute the third column of the transformation matrix.
- the weight vectors 920D and 920H may constitute the fourth column of the transformation matrix.
- the four activation vectors 910 are loaded into four rows of MAC units, respectively.
- the eight weight vectors 920 are loaded into eight columns of MAC units, respectively.
- the 32 MAC units in the four rows and eight columns may execute the RDFT operation.
- the other MAC units in the sparse cell array 900 may be idle. Even though FIG. 9 shows the mapping of an RDFT operation to the sparse cell array 900, IRDFT operations may be mapped to the sparse cell array 900 in the same or similar way.
- FIG. 10 illustrates mapping a DFT of complex numbers to a sparse cell array 1000, in accordance with various embodiments.
- the sparse cell array 1000 may be an example of the sparse cell array 370 in FIG. 3.
- the sparse cell array 1000 may also be referred to as a data processing unit.
- the sparse cell array 1000 in FIG. 10 includes 256 MAC units that are arranged in 16 rows and 16 columns. In other embodiments, the sparse cell array 1000 may include a different number of MAC units or have a different shape.
- both the input tensor and transformation tensor include complex numbers.
- An activation in the input tensor may be denoted as a + ib, where a represents the real component and b represents the imaginary component.
- a weight in the transformation tensor may be denoted as c + id, where c represents the real component and d represents the imaginary component. Multiplying the activation and the weight results in an output element denoted as ac - bd + i(bc + ad), where ac - bd is the real component and bc + ad is the imaginary component.
- the execution of the DFT operation may be divided into 4 separate workloads of the sparse cell array 1000.
- the activation and weight vectors may be loaded into the sparse cell array as 4 pairs: i) a, c; ii) b, d; iii) b, c; and iv) a, d.
- Each workload may include matrix multiplications on a different pair.
- a negative scale may be applied to a post processing unit array associated with the sparse cell array to take care of the negative sign.
- the partial sums may be added up separately.
- the (a, c) multiplication may be performed before the (b, d) multiplication.
- (a, c) are respectively loaded into a row of MAC units and a column of MAC units in the same loading cycle
- (b, d) are respectively loaded into the row of MAC units and the column of MAC units in a subsequent loading cycle.
- the (b, d) multiplication may be performed with negative weights. These weights can be set up by the DNN module 301 with minimal or even no hardware overhead.
- the drain module 380 may output the real and imaginary results as different outputs so that it can be properly comprehended by the load module 360 during the execution of the subsequent layers.
- the (a, c) multiplication may be fused with the (a, d) multiplication: a may be loaded to the sparse cell array 1000 once as activations (e.g., as an activation vector), and c and d may be loaded sequentially as two separate sets of weights (e.g., as two separate weight vectors) to be multiplied sequentially with the activations.
- the (b, c) multiplication may be fused with the (b, d) multiplication
- b may be loaded to the sparse cell array 1000 once as activations (e.g., as an activation vector)
- c and d may be loaded sequentially as two separate sets of weights (e.g., as two separate weight vectors) to be multiplied sequentially with the activations. That way, the activations may be reused in two sets of multiplications to save power and memory bandwidth.
- the outputs corresponding to the c weight set may be later consumed as real components in one or more subsequent layers.
- the outputs corresponding to the d weight set may be later consumed as imaginary components in one or more subsequent layers.
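- A sketch of the four-workload scheme above for complex inputs (names are illustrative; the negative scale applied to the (b, d) workload appears as the subtraction):

```python
import numpy as np

# With activations a + ib and weights c + id, the real output is a*c - b*d
# and the imaginary output is b*c + a*d, so the complex DFT runs as four
# real matrix multiplications whose partial sums are combined afterwards.
N = 16
n = np.arange(N)
W = np.exp(-2j * np.pi * n.reshape(-1, 1) * n / N)
c, d = W.real, W.imag                    # real and imaginary weight sets

x = np.random.rand(N) + 1j * np.random.rand(N)
a, b = x.real, x.imag                    # real and imaginary activations

real_part = a @ c - b @ d                # workloads (a, c) and (b, d); the
                                         # (b, d) partial sum enters negatively
imag_part = b @ c + a @ d                # workloads (b, c) and (a, d)

assert np.allclose(real_part + 1j * imag_part, np.fft.fft(x))
```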
- FIG. 11 illustrates an example sliding window pattern of an STFT operation, in accordance with various embodiments.
- STFT is a type of Fourier transform that has real input signals.
- the input signal to be transformed may be broken into frames (aka chunks) based on a window.
- the frames may all have the size of the window.
- Each frame may be Fourier transformed, and the complex result may be added to a matrix, which may encode the magnitude and phase for each point in time and frequency.
- an STFT operation may be denoted as: X(m, ω) = Σ_n x[n] w[n - m] e^(-iωn), where x[n] represents the input signal and w[n] represents the window.
- m is discrete
- ω is continuous.
- in the discrete case, both m and ω are discrete and quantized.
- Frames may be extracted from the input sequence by sliding a window.
- an STFT operation in a DNN may have a window length (also referred to as "frame length") and a frame step (also referred to as "stride").
- the window length may indicate the number of data elements in the window, i.e., the number of data elements in each frame.
- the frame step may indicate the number of data elements traversed per slide.
- the STFT operation may be converted to a sequence of matrix multiplication operations. Each matrix multiplication operation may be performed on a corresponding frame.
- STFT operations may be represented as 1D convolutions with frame step as stride and window length as the number of input channels.
- FIG. 11 shows an input sequence 1110 that includes 14 data elements and a window 1120 that includes eight data elements. Each data element is represented by a box in FIG. 11.
- Frames 1130A-1130G are extracted from the input sequence 1110 using the window 1120.
- the frames 1130 are represented by boxes filled with a dotted pattern in FIG. 11.
- the frame 1130A is generated from the first slide of the window 1120.
- the frame 1130B is generated from the second slide of the window 1120. This continues till the frame 1130G is generated.
- One data element is traversed per slide.
- the input length (i.e., the length of the input sequence), window length, or frame step may have different values.
- the frames 1130 may be generated by the DNN module 301.
- the DNN module 301 stores the frames 1130 as separate activation vectors, e.g., in the memory 310 or local memory 340.
- the activation vectors may be used as contexts or operands.
- the load module 360 may load the frames 1130 into the sparse cell array 370 for the sparse cell array 370 to perform the matrix multiplications.
- a storage element may be used to store an activation vector having a spatial size of 1 x 1 x N, where N is an integer. N may equal the window length.
- the storage element may have a storage element pointer that stores the location of the storage element in the memory, such as the memory 310 or the local memory 340.
- 1D input sequences can be stored as 2D matrices without any data movement operations. For instance, an input sequence having 16000 elements may be stored as a 512x1247 2D matrix. When a storage element stores 128 elements, the input sequence can be represented with 1250 storage element pointers. More details regarding mapping frames to the sparse cell array 370 are described below in conjunction with FIG. 13.
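- An illustrative sketch of frame extraction and per-frame matrix multiplication, assuming a rectangular window and the FIG. 11 sizes (input length 14, window length 8, frame step 1); the overlapping frames are taken as views into the 1D sequence, analogous to addressing storage elements through pointers:

```python
import numpy as np

# Frames are extracted by sliding a window over the input sequence; the STFT
# then runs as one matrix multiplication per frame. A rectangular window is
# assumed here for simplicity.
signal = np.random.rand(14)
window_length, frame_step = 8, 1

num_frames = (len(signal) - window_length) // frame_step + 1   # 7 frames
frames = np.lib.stride_tricks.sliding_window_view(signal, window_length)[::frame_step]

n = np.arange(window_length)
W = np.exp(-2j * np.pi * n.reshape(-1, 1) * n / window_length)

stft = frames @ W               # one matrix multiplication per frame
for m in range(num_frames):     # each row equals the DFT of the m-th frame
    assert np.allclose(stft[m], np.fft.fft(frames[m]))
```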
- FIG. 12 illustrates another example sliding window pattern of an STFT operation, in accordance with various embodiments.
- the sliding window pattern in FIG. 12 requires padding, i.e., adding new data elements into the input signal.
- the STFT operation in the embodiment of FIG. 12 has an input length of 10, a window length of 8, and a frame step of 1.
- an input sequence 1210 includes 10 data elements and a window 1220 includes eight data elements. Each data element is represented by a box in FIG. 12.
- Frames 1230A-1230E are extracted from the input sequence 1210 using the window 1220.
- the frames 1230 are represented by boxes filled with a dotted pattern in FIG. 12.
- even though FIG. 12 shows five frames 1230, a different number of frames may be extracted from the input sequence 1210.
- the total number of frames extracted from the input sequence may equal the input length divided by the frame step. For the input length of 10 and frame step of 1, the total number of frames may be 10.
- the frame 1230A is generated from the first slide of the window 1220.
- the frame 1230B is generated from the second slide of the window 1220.
- the frame 1230C is generated from the third slide of the window 1220.
- One data element is traversed per slide.
- a new data element is added to the end of the input sequence 1210 so that the frame 1230D can meet the window length.
- another new data element is further added so that the frame 1230E can meet the window length.
- each new element is a zero.
- the new elements may have other values.
- further new elements may be added to generate more frames.
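- A sketch of the padded pattern above, assuming zero-valued new elements and the FIG. 12 sizes (input length 10, window length 8, frame step 1), so that the total number of frames equals the input length divided by the frame step:

```python
import numpy as np

# The input is extended with zeros (or other values) so that every frame
# meets the window length.
signal = np.random.rand(10)
window_length, frame_step = 8, 1

num_frames = -(-len(signal) // frame_step)           # ceil(input_length / step)
pad = (num_frames - 1) * frame_step + window_length - len(signal)
padded = np.concatenate([signal, np.zeros(pad)])     # new elements are zeros

frames = np.lib.stride_tricks.sliding_window_view(padded, window_length)[::frame_step]
assert frames.shape == (num_frames, window_length)   # 10 frames of length 8
```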
- the STFT operation in the embodiment of FIG. 13 has an input length of 7, a window length of 4, and a frame step of 2. STFT operations with different input lengths, window lengths, or frame steps may be mapped to the sparse cell array 1300 as well.
- FIG. 13 shows four frames 1310, individually referred to as "frame 1310.” Each frame 1310 includes 4 data elements. The four frames 1310 are loaded to four rows of MAC units, respectively. In some embodiments, the four frames 1310 are stored separately in the local memory 340 and loaded to activation register files in the sparse cell array 1300 as separate contexts or separate operands.
- FIG. 14 is a flowchart showing a method 1400 of executing a Fourier transform operation, in accordance with various embodiments.
- the method 1400 may be performed by the DNN accelerator 302 in FIG. 3.
- the method 1400 is described with reference to the flowchart illustrated in FIG. 14, many other methods for executing Fourier transform operations may alternatively be used.
- the order of execution of the steps in FIG. 14 may be changed.
- some of the steps may be changed, eliminated, or combined.
- the DNN accelerator 302 receives 1410 an input tensor that represents an input signal of a DFT operation.
- the input tensor is mapped onto a data processing unit as an activation tensor that comprises activations arranged in one or more rows and one or more columns.
- the DNN accelerator 302 receives the input tensor from a plurality of storage elements. Each of the plurality of storage elements corresponds to a different row in the input tensor and stores activations in the different row.
- the input tensor is generated from the input signal. A total number of activations in the input tensor is greater than a total number of data elements in the input signal.
- the DNN accelerator 302 converts 1420 the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation.
- the transformation matrix is mapped onto the data processing unit as a weight tensor comprising weights.
- the weight tensor is determined based on one or more twiddle factors of the DFT operation. In some embodiments, some or all of the elements in the weight tensor are twiddle factors of the DFT operation. In some embodiments, the weight tensor is determined by the DNN module 301 offline.
- An MAC operation in the second sequence is performed on the weight tensor and a column in the input tensor.
- the second sequence of MAC operations is performed by the MAC units.
- the DNN accelerator 302 transposes the input tensor to generate a transposed tensor. After transposing the input tensor, the DNN accelerator 302 performs the second sequence of MAC operations on the transposed tensor and the weight tensor. Activations in the column in the input tensor are arranged as a row in the transposed tensor. The MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
- the DNN accelerator 302 divides the weight tensor into weight vectors by dividing a column in the weight tensor into a first weight vector and a second weight vector.
- a data element in the first weight vector represents a real component of a data element in the column in the weight tensor.
- a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
- FIG. 15 is a block diagram of an example computing device 1500, in accordance with various embodiments.
- the computing device 1500 can be used as at least part of the DNN system 300.
- a number of components are illustrated in FIG. 15 as included in the computing device 1500, but any one or more of these components may be omitted or duplicated, as suitable for the application.
- some or all of the components included in the computing device 1500 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system on a chip (SoC) die.
- the computing device 1500 may not include one or more of the components illustrated in FIG. 15, but the computing device 1500 may include interface circuitry for coupling to the one or more components.
- the computing device 1500 may not include a display device 1506, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 1506 may be coupled.
- the computing device 1500 may not include an audio input device 1518 or an audio output device 1508, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 1518 or audio output device 1508 may be coupled.
- the term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium.
- the term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
- the communication chip 1512 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., advanced LTE project, ultramobile broadband (UMB) project (also referred to as "3GPP2"), etc.).
- IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for worldwide interoperability for microwave access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards.
- the communication chip 1512 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E- HSPA), or LTE network.
- the communication chip 1512 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN).
- the communication chip 1512 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., the Ethernet).
- the communication chip 1512 may include multiple communication chips. For instance, a first communication chip 1512 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 1512 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others.
- the computing device 1500 may include battery/power circuitry 1514.
- the battery/power circuitry 1514 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the computing device 1500 to an energy source separate from the computing device 1500 (e.g., AC line power).
- the computing device 1500 may include a display device 1506 (or corresponding interface circuitry, as discussed above).
- the display device 1506 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display, for example.
- the computing device 1500 may include a GPS device 1516 (or corresponding interface circuitry, as discussed above).
- the GPS device 1516 may be in communication with a satellite-based system and may receive a location of the computing device 1500, as known in the art.
- the computing device 1500 may include another output device 1510 (or corresponding interface circuitry, as discussed above).
- Examples of the other output device 1510 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.
- the computing device 1500 may include another input device 1520 (or corresponding interface circuitry, as discussed above).
- Examples of the other input device 1520 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.
- the computing device 1500 may have any desired form factor, such as a handheld or mobile computer system (e.g., a cell phone, a smart phone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultramobile personal computer, etc.), a desktop computer system, a server or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable computer system.
- the computing device 1500 may be any other electronic device that processes data.
- Example 1 provides a method, including receiving an input tensor that represents an input signal of a discrete Fourier transform operation; converting the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation; and performing MAC operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
- Example 2 provides the method of example 1, in which the input tensor is mapped onto a data processing unit as an activation tensor including activations arranged in rows and columns, the transformation matrix is mapped onto the data processing unit as a weight tensor including weights, and the data processing unit performs the MAC operations.
- Example 3 provides the method of example 2, in which the data processing unit performs the MAC operations by: performing a first sequence of MAC operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; and performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor.
- Example 4 provides the method of example 3, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
- Example 5 provides the method of example 3 or 4, in which the first sequence of MAC operations is performed by MAC units in the data processing unit, the MAC units are arranged in rows and columns, and performing the first sequence of MAC operations includes providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
- Example 6 provides the method of example 5, in which dividing the weight tensor into the weight vectors includes dividing a column in the weight tensor into a first weight vector and a second weight vector, in which a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
- Example 7 provides the method of any one of examples 3-6, further including performing a third sequence of MAC operations to compute an output of an inverse discrete Fourier transform operation.
- Example 10 provides the method of any one of examples 1-9, in which the input tensor is generated from the input signal, and a total number of elements in the input tensor is greater than a total number of data elements in the input signal.
- Example 11 provides one or more non-transitory computer-readable media storing instructions executable to perform operations, the operations including receiving an input tensor that represents an input signal of a discrete Fourier transform operation; converting the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation; and performing MAC operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
- Example 16 provides the one or more non-transitory computer-readable media of example 15, in which dividing the weight tensor into the weight vectors includes dividing a column in the weight tensor into a first weight vector and a second weight vector, in which a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
- Example 17 provides the one or more non-transitory computer-readable media of any one of examples 11-16, in which a total number of data elements in the output tensor is smaller than a total number of data elements in the discrete Fourier transform of the input tensor.
- Example 1 provides a method, including receiving an input tensor that represents an input signal of a DFT operation, the input tensor including activations arranged in one or more rows and one or more columns; receiving a weight tensor that is determined based on one or more twiddle factors of the DFT operation; performing a first sequence of multiply-accumulate (MAC) operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor; and generating an output tensor that represents at least part of the DFT of the input tensor.
- Example 2 provides the method of example 1, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
- Example 3 provides the method of example 1 or 2, in which the first sequence of MAC operations is performed by MAC units arranged in rows and columns, and performing the first sequence of MAC operations includes providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
- Example 4 provides the method of example 3, in which the second sequence of MAC operations is performed by the MAC units.
- Example 5 provides the method of example 3 or 4, in which dividing the weight tensor into the weight vectors includes dividing a column in the weight tensor into a first weight vector and a second weight vector, in which a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
- Example 6 provides the method of any one of examples 1-5, further including performing a third sequence of MAC operations to compute an output of an inverse DFT operation.
- Example 7 provides the method of any one of examples 1-6, in which a total number of data elements in the output tensor is smaller than a total number of data elements in the DFT of the input tensor.
- Example 8 provides the method of example 7, in which the total number of data elements in the output tensor is equal to one plus half of the total number of data elements in the DFT of the input tensor.
- Example 9 provides the method of any one of examples 1-8, in which receiving the input tensor includes receiving the input tensor from a plurality of storage elements, each of the plurality of storage elements corresponding to a different row in the input tensor and storing activations in the different row.
- Example 10 provides the method of any one of examples 1-9, in which the input tensor is generated from the input signal, and a total number of activations in the input tensor is greater than a total number of data elements in the input signal.
- Example 11 provides one or more non-transitory computer-readable media storing instructions executable to perform operations, the operations including receiving an input tensor that represents an input signal of a DFT operation, the input tensor including activations arranged in one or more rows and one or more columns; receiving a weight tensor that is determined based on one or more twiddle factors of the DFT operation; performing a first sequence of multiply-accumulate (MAC) operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor; and generating an output tensor that represents at least part of the DFT of the input tensor.
- Example 12 provides the one or more non-transitory computer-readable media of example 11, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
- Example 15 provides the one or more non-transitory computer-readable media of any one of examples 11-14, in which a total number of data elements in the output tensor is smaller than a total number of data elements in the DFT of the input tensor.
- Example 16 provides the one or more non-transitory computer-readable media of any one of examples 11-15, in which receiving the input tensor includes receiving the input tensor from a plurality of storage elements, each of the plurality of storage elements corresponding to a different row in the input tensor and storing activations in the different row.
- Example 17 provides the one or more non-transitory computer-readable media of any one of examples 11-16, in which the input tensor is generated from the input signal, and a total number of activations in the input tensor is greater than a total number of data elements in the input signal.
Abstract
Fourier transform operations may be converted to matrix multiplications that are similar to the matrix multiplications in convolutions and can be executed by deep neural network (DNN) accelerators. A DNN accelerator may receive an input tensor representing an input signal of a Fourier transform operation. The input tensor may include activations arranged in one or more rows and one or more columns. The DNN accelerator may receive a weight tensor determined based on one or more twiddle factors of the Fourier transform operation. The DNN accelerator may perform two sequences of multiply-accumulate (MAC) operations. An MAC operation in the first sequence may be performed on the weight tensor and a row in the input tensor. An MAC operation in the second sequence may be performed on the weight tensor and a column in the input tensor. The outputs of the MAC operations may represent the Fourier transform of the input signal.
Description
EXECUTING FOURIER TRANSFORM OPERATIONS WITH DEEP NEURAL NETWORK ACCELERATOR
Technical Field
[0001] This disclosure relates generally to neural networks (also referred to as "deep neural networks" or "DNNs"), and more specifically, to executing Fourier transform operations with DNN accelerators.
Background
[0002] DNNs are used extensively for a variety of artificial intelligence (AI) applications ranging from computer vision to speech recognition and natural language processing due to their ability to achieve high accuracy. Many AI applications require high quality data, which can be obtained through processing noisy input data. A widely used method is transforming complex and noisy raw data into a more suitable format using Fourier transforms.
Brief Description of the Drawings
[0003] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
[0004] FIG. 1 illustrates an example DNN, in accordance with various embodiments.
[0005] FIG. 2 illustrates an example convolution, in accordance with various embodiments.
[0006] FIG. 3 is a block diagram of a DNN system, in accordance with various embodiments.
[0007] FIG. 4 is a block diagram of a DNN module, in accordance with various embodiments.
[0008] FIG. 5 illustrates an example transformation matrix of a Fourier transform operation, in accordance with various embodiments.
[0009] FIG. 6 illustrates a sparse convolution cell, in accordance with various embodiments.
[0010] FIG. 7 illustrates an example sparse cell array, in accordance with various embodiments.
[0011] FIG. 8 illustrates mapping a discrete Fourier transform (DFT) operation to a sparse cell array, in accordance with various embodiments.
[0012] FIG. 9 illustrates mapping a real DFT (RDFT) operation to a sparse cell array, in accordance with various embodiments.
[0013] FIG. 10 illustrates mapping a DFT of complex numbers to a sparse cell array, in accordance with various embodiments.
[0014] FIG. 11 illustrates an example sliding window pattern of a short time Fourier transform (STFT) operation, in accordance with various embodiments.
[0015] FIG. 12 illustrates another example sliding window pattern of an STFT operation, in accordance with various embodiments.
[0016] FIG. 13 illustrates mapping frames of an STFT operation to a sparse cell array, in accordance with various embodiments.
[0017] FIG. 14 is a flowchart showing a method of executing a Fourier transform operation, in accordance with various embodiments.
[0018] FIG. 15 is a block diagram of an example computing device, in accordance with various embodiments.
Detailed Description
Overview
[0019] The last decade has witnessed a rapid rise in AI-based data processing, particularly based on DNNs. DNNs are widely used in the domains of computer vision, speech recognition, and image and video processing, mainly due to their ability to achieve beyond human-level accuracy. The significant improvements in DNN model size and accuracy, coupled with the rapid increase in computing power of execution platforms, have led to the adoption of DNN applications even within resource-constrained mobile and edge devices that have limited energy availability.
[0020] A DNN layer may include one or more deep learning operations (also referred to as "neural network operations"), such as convolution, pooling, elementwise operation, linear operation, nonlinear operation, and so on. A deep learning operation in a DNN may be performed on one or more internal parameters of the DNN (e.g., weights), which are determined during the training phase, and one or more activations. An activation may be a data point (also referred to as a "data element" or "element"). Activations or weights of a DNN layer may be elements of a tensor of the DNN layer. A tensor is a data structure having multiple elements across one or more dimensions. Example tensors include a vector, which
is a one-dimensional tensor, and a matrix, which is a two-dimensional tensor. There can also be three-dimensional tensors and even higher dimensional tensors. A DNN layer may have an input tensor (also referred to as "input feature map (IFM)") including one or more input activations (also referred to as "input elements") and a weight tensor including one or more weights. A weight is an element in the weight tensor. A weight tensor of a convolution may be a kernel, a filter, or a group of filters. The output data of the DNN layer may be an output tensor (also referred to as "output feature map (OFM)") that includes one or more output activations (also referred to as "output elements").
[0021] Many AI applications often require Fourier transforms of raw data for image and audio modeling and analysis. For instance, noisy and complex raw data is often transformed into the frequency domain, which represents the magnitude and phase of different frequency components, for further analysis. Such transformations help with processing techniques that include filtering, normalization, segmentation, feature extraction, encoding, and so on. These techniques help to remove noise, extract relevant features, and improve analysis and modeling accuracy. Careful image and audio processing can be crucial for ensuring reliable results and enhancing overall quality and modeling accuracy.
[0022] Many audio- and image-based DNN models require the application of DFTs. However, these Fourier transform operations can become bottlenecks in end-to-end use cases, primarily because these operations usually do not fully utilize the available compute resources. Many solutions implement the kernel in high-level programming languages and run the kernel on programmable engines, which can be more generic but inefficient. For instance, some solutions rely on general compute systems to implement these steps. Examples of such general compute systems include Central Processing Units (CPUs), Digital Signal Processors (DSPs), Streaming Hybrid Architecture Vector Engines, other Very Long Instruction Word (VLIW) processors, and so on.
[0023] Many currently available solutions for mapping Fourier transforms for DNNs (which could be part of the network or pre/post processing operations) involve mapping these layers to a limited amount of programmable compute (such as DSP or vector processors) that is also part of the overall DNN accelerator subsystem. However, these programmable compute elements offer limited compute, especially when normalized to power and area, which results in performance bottlenecks. They can deliver worse performance because they operate at a lower frequency and with lower bandwidth, and they can consume considerable power. In addition, general purpose processors do not intrinsically support the trigonometric sine and cosine functions required for Fourier transform computation. They usually need to resort to Taylor series decomposition to implement these functions, which can result in power or performance degradation. Another disadvantage of these solutions is that the efficiency depends on the kernel implementation, which varies considerably between engineers. A poor implementation may lead to deep code loops and result in further inefficient inference. A different approach is the use of dedicated hardware for mapping these layers. However, dedicated hardware adds considerable silicon area, which is not suitable for edge or client devices that usually require a smaller footprint.
[0024] Embodiments of the present disclosure may improve on at least some of the challenges and issues described above by converting Fourier transform operations to matrix multiplications that can be executed by DNN accelerators. Fourier transform operations that can be executed by DNN accelerators include DFT, inverse DFT (IDFT), RDFT, inverse RDFT (IRDFT), STFT, and so on.
[0025] In various embodiments, a Fourier transform operation may be converted to a two-dimensional (2D) matrix multiplication between the input signal and a transformation matrix. The 2D matrix multiplication may be executed by a DNN accelerator that can perform matrix multiplications in convolution. The DNN accelerator may include components that can optimize compute efficiency in execution of convolution. The input signal of the Fourier transform operation may be represented by a 2D input matrix with data elements arranged in rows and columns. The data elements in the input matrix may be processed in the same or similar way that activations of input tensors of convolution are processed. The transformation matrix may be generated from twiddle factors of the Fourier transform operation. The data elements in the transformation matrix may be processed in the same or similar way that weights of convolution are processed. A twiddle factor of an FFT algorithm may be any of the trigonometric constant coefficients that are multiplied by the data in the course of the algorithm. More generally, a twiddle factor may be any data-independent multiplicative constant used over the course of an FFT.
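For illustration only (this sketch is not part of the disclosed embodiments, and the function name dft_matrix is our own), the conversion can be sanity-checked numerically: the transformation matrix is built from the twiddle factors, and a single matrix multiplication reproduces the DFT.

```python
import numpy as np

def dft_matrix(n: int) -> np.ndarray:
    """Transformation matrix whose entries are the twiddle factors exp(-2*pi*1j*j*k/n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

x = np.random.randn(8)                     # example input signal
W = dft_matrix(8)                          # plays the role of the weight matrix
assert np.allclose(W @ x, np.fft.fft(x))   # one matmul reproduces the DFT
```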
[0026] Activation vectors may be generated from the input matrix, e.g., by a DNN module associated with the DNN accelerator. In some embodiments, an activation vector may be a row or column in the input matrix. In other embodiments (e.g., embodiments where the Fourier transform is an STFT), the activation vectors may be frames extracted from the input sequence by sliding a window over the input sequence, as illustrated in the sketch below. The DNN module may also generate the transformation matrix and divide the transformation matrix into weight vectors. The activation vectors may be loaded to register files that may be designated for storing activations of convolutions in the DNN accelerator and further loaded to multiply-accumulate (MAC) units associated with the register files. The weight vectors may be loaded to register files that may be designated for storing weights of convolution and further loaded to MAC units associated with the register files. An example of the DNN accelerator may include MAC units arranged in rows and columns. An activation vector may be loaded to and processed by a row of MAC units. A weight vector may be loaded to and processed by a column of MAC units. The DNN module and DNN accelerator can also facilitate Fourier transform operations that have complex input signals or complex twiddle factors.
[0027] A Fourier transform operation may be converted to two sequences of multiply-accumulate (MAC) operations. Each MAC operation in the first sequence may be performed on the weight matrix and a respective row in the input matrix. Each MAC operation in the second sequence may be performed on the weight matrix and a respective column in the input matrix. The input matrix may be transposed after the first sequence so that each column in the input matrix can become a row in the transposed input matrix. The second sequence can then be conducted by repeating the first sequence on the transposed input matrix. The output of the two sequences of MAC operations may represent the Fourier transform of the input signal. In some embodiments, the total number of output elements may be less than the total number of elements in the Fourier transform of the input signal. For instance, the input signal may have N elements, and the Fourier transform of the input signal may also have N elements. The DNN accelerator may compute and store N/2 + 1 elements to represent the Fourier transform of the input signal, as the Fourier transform of the input signal is symmetric. This can further reduce the power, time, and memory bandwidth needed to execute the Fourier transform operation and improve the efficiency of the DNN accelerator.
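The two-sequence scheme can be mimicked in a few lines of numpy. In this sketch (our own illustration, assuming a square input matrix and the symmetric DFT matrix defined earlier), the first pass multiplies every row by the weight matrix, the result is transposed, and the same row-wise pass is repeated; the output matches a reference 2D FFT, and the final assertion shows the N/2 + 1 output count for a real input signal.

```python
import numpy as np

def dft_matrix(n: int) -> np.ndarray:
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)  # twiddle factors

X = np.random.randn(4, 4)   # 2D input matrix of activations
W = dft_matrix(4)           # weight matrix (symmetric: W == W.T)

rows = X @ W                # first sequence: one MAC pass per input row
out = rows.T @ W            # transpose, then repeat the row-wise pass
assert np.allclose(out.T, np.fft.fft2(X))  # matches a reference 2D DFT

x = np.random.randn(8)      # real signal: only N/2 + 1 outputs need be stored
assert len(np.fft.rfft(x)) == 8 // 2 + 1
```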
[0028] By converting Fourier transform operations to matrix multiplications and mapping the input signal and twiddle factor matrix as activations and weights onto the MAC units, the present disclosure provides an approach that allows DNN accelerators to execute Fourier transform operations, including Fourier transforms of complex numbers. The approach can provide performance and efficiency benefits. Compared with currently available approaches, the approach in the present disclosure requires much less time, power, and memory bandwidth to execute Fourier transforms in AI applications.
[0029] For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the present disclosure may be practiced without the specific details and/or that the present disclosure may be practiced with only some of the described aspects. In other instances, well known features are omitted or simplified in order not to obscure the illustrative implementations.
[0030] Further, references are made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.
[0031] Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter.
However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order from the described embodiment. Various additional operations may be performed or described operations may be omitted in additional embodiments.
[0032] For the purposes of the present disclosure, the phrase "A or B" or the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, or C" or the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The term "between," when used with reference to measurement ranges, is inclusive of the ends of the measurement ranges.
[0033] The description uses the phrases "in an embodiment" or "in embodiments," which may each refer to one or more of the same or different embodiments. The terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. The disclosure may use perspective-based descriptions such as "above," "below," "top," "bottom," and "side" to explain various features of the drawings, but these terms are simply for ease of discussion, and do not imply
a desired or required orientation. The accompanying drawings are not necessarily drawn to scale. Unless otherwise specified, the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object, merely indicates that different instances of like objects are being referred to and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner.
[0034] In the following detailed description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art.
[0035] The terms "substantially," "close," "approximately," "near," and "about," generally refer to being within +/- 20% of a target value as described herein or as known in the art. Similarly, terms indicating orientation of various elements, e.g., "coplanar," "perpendicular," "orthogonal," "parallel," or any other angle between the elements, generally refer to being within +/- 5-20% of a target value as described herein or as known in the art.
[0036] In addition, the terms "comprise," "comprising," "include," "including," "have," "having" or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a method, process, device, or DNN accelerator that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such method, process, device, or DNN accelerators. Also, the term "or" refers to an inclusive "or" and not to an exclusive "or."
[0037] The systems, methods and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for all desirable attributes disclosed herein. Details of one or more implementations of the subject matter described in this specification are set forth in the description below and the accompanying drawings.
Example DNN
[0038] FIG. 1 illustrates an example DNN 100, in accordance with various embodiments. The DNN 100 may be an example of a teacher network or an example of a student network. For the purpose of illustration, the DNN 100 in FIG. 1 is a CNN. In other embodiments, the DNN 100 may be other types of DNNs. As an example, the DNN 100 is trained to receive images and output classifications of objects in the images. In the embodiments of FIG. 1, the DNN 100 receives an input image 105 that includes objects 115, 125, and 135. The DNN 100 includes a sequence of layers comprising a plurality of convolutional layers 110 (individually referred to as "convolutional layer 110"), a plurality of pooling layers 120 (individually
referred to as "pooling layer 120"), and a plurality of fully-connected layers 130 (individually referred to as "fully-connected layer 130"). In other embodiments, the DNN 100 may include fewer, more, or different layers. For instance, the DNN 100 may include one or more DFT layers or one or more inverse DFT (IDFT) layers. Also, the DNN 100 may be trained to perform tasks other than image classification. In an inference of the DNN 100, the layers of the DNN 100 execute tensor computation that includes many tensor operations, such as convolution (e.g., multiply-accumulate (MAC) operations, etc.), pooling operations, elementwise operations (e.g., elementwise addition, elementwise multiplication, etc.), other types of tensor operations, or some combination thereof.
[0039] The convolutional layers 110 summarize the presence of features in the input image 105. The convolutional layers 110 function as feature extractors. The first layer of the DNN 100 is a convolutional layer 110. In an example, a convolutional layer 110 performs a convolution on an input tensor 140 (also referred to as IFM 140) and a filter 150. As shown in FIG. 1, the IFM 140 is represented by a 7x7x3 three-dimensional (3D) matrix. The IFM 140 includes 3 input channels, each of which is represented by a 7x7 two-dimensional (2D) matrix. The 7x7 2D matrix includes 7 input elements (also referred to as input points) in each row and 7 input elements in each column. The filter 150 is represented by a 3x3x3 3D matrix. The filter 150 includes 3 kernels, each of which may correspond to a different input channel of the IFM 140. A kernel is a 2D matrix of weights, where the weights are arranged in columns and rows. A kernel can be smaller than the IFM. In the embodiments of FIG. 1, each kernel is represented by a 3x3 2D matrix. The 3x3 kernel includes 3 weights in each row and 3 weights in each column. Weights can be initialized and updated by backpropagation using gradient descent. The magnitudes of the weights can indicate importance of the filter 150 in extracting features from the IFM 140.
[0040] The convolution includes MAC operations with the input elements in the IFM 140 and the weights in the filter 150. The convolution may be a standard convolution 163 or a depthwise convolution 183. In the standard convolution 163, the whole filter 150 slides across the IFM 140. All the input channels are combined to produce an output tensor 160 (also referred to as OFM 160). The OFM 160 is represented by a 5x5 2D matrix. The 5x5 2D matrix includes 5 output elements (also referred to as output points) in each row and 5 output elements in each column. For the purpose of illustration, the standard convolution
includes one filter in the embodiments of FIG. 1. In embodiments where there are multiple filters, the standard convolution may produce multiple output channels in the OFM 160.
[0041] The multiplication applied between a kernel-sized patch of the IFM 140 and a kernel may be a dot product. A dot product is the elementwise multiplication between the kernel-sized patch of the IFM 140 and the corresponding kernel, which is then summed, resulting in a single value. Because it results in a single value, the operation is often referred to as the "scalar product." Using a kernel smaller than the IFM 140 is intentional, as it allows the same kernel (set of weights) to be multiplied by the IFM 140 multiple times at different points on the IFM 140. Specifically, the kernel is applied systematically to each overlapping part or kernel-sized patch of the IFM 140, left to right, top to bottom. The result from multiplying the kernel with the IFM 140 one time is a single value. As the kernel is applied multiple times to the IFM 140, the multiplication result is a 2D matrix of output elements. As such, the 2D output matrix (i.e., the OFM 160) from the standard convolution 163 is referred to as an OFM.
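As a concrete illustration of the sliding dot products described above, the following single-channel numpy sketch (our own simplification; a real convolutional layer would also sum over all input channels) produces a 5x5 OFM from a 7x7 IFM and a 3x3 kernel.

```python
import numpy as np

def conv2d_single_channel(ifm: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the IFM; each position yields one dot product."""
    kh, kw = kernel.shape
    oh, ow = ifm.shape[0] - kh + 1, ifm.shape[1] - kw + 1
    ofm = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # elementwise multiply of the kernel-sized patch, then sum: a scalar product
            ofm[y, x] = np.sum(ifm[y:y + kh, x:x + kw] * kernel)
    return ofm

ofm = conv2d_single_channel(np.random.randn(7, 7), np.random.randn(3, 3))
print(ofm.shape)  # (5, 5)
```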
[0042] In the depthwise convolution 183, the input channels are not combined. Rather, MAC operations are performed on an individual input channel and an individual kernel and produce an output channel. As shown in FIG. 1, the depthwise convolution 183 produces a depthwise output tensor 180. The depthwise output tensor 180 is represented by a 5x5x3 3D matrix. The depthwise output tensor 180 includes 3 output channels, each of which is represented by a 5x5 2D matrix. The 5x5 2D matrix includes 5 output elements in each row and 5 output elements in each column. Each output channel is a result of MAC operations of an input channel of the IFM 140 and a kernel of the filter 150. For instance, the first output channel (patterned with dots) is a result of MAC operations of the first input channel (patterned with dots) and the first kernel (patterned with dots), the second output channel (patterned with horizontal strips) is a result of MAC operations of the second input channel (patterned with horizontal strips) and the second kernel (patterned with horizontal strips), and the third output channel (patterned with diagonal stripes) is a result of MAC operations of the third input channel (patterned with diagonal stripes) and the third kernel (patterned with diagonal stripes). In such a depthwise convolution, the number of input channels equals the number of output channels, and each output channel corresponds to a different input channel. The input channels and output channels are referred to collectively as depthwise channels. After the depthwise convolution, a pointwise convolution 193 is then
performed on the depthwise output tensor 180 and a 1x1x3 tensor 190 to produce the OFM 160.
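The following sketch (illustrative only, using a channel-first layout of our own choosing) contrasts the two stages: a depthwise pass that keeps channels separate, followed by a 1x1 pointwise pass that combines them.

```python
import numpy as np

def depthwise_conv(ifm: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """Convolve each input channel with its own kernel; channels are not combined."""
    c, kh, kw = kernels.shape
    oh, ow = ifm.shape[1] - kh + 1, ifm.shape[2] - kw + 1
    out = np.empty((c, oh, ow))
    for ch in range(c):
        for y in range(oh):
            for x in range(ow):
                out[ch, y, x] = np.sum(ifm[ch, y:y + kh, x:x + kw] * kernels[ch])
    return out

ifm = np.random.randn(3, 7, 7)                       # 3 input channels (channel-first)
dw = depthwise_conv(ifm, np.random.randn(3, 3, 3))   # one 3x3 kernel per channel
pw = np.tensordot(np.random.randn(3), dw, axes=([0], [0]))  # 1x1 pointwise combine
print(dw.shape, pw.shape)  # (3, 5, 5) (5, 5)
```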
[0043] The OFM 160 is then passed to the next layer in the sequence. In some embodiments, the OFM 160 is passed through an activation function. An example activation function is rectified linear unit (ReLU). ReLU is a calculation that returns the value provided as input directly, or the value zero if the input is zero or less. The convolutional layer 110 may receive several images as input and calculate the convolution of each of them with each of the kernels. This process can be repeated several times. For instance, the OFM 160 is passed to the subsequent convolutional layer 110 (i.e., the convolutional layer 110 following the convolutional layer 110 generating the OFM 160 in the sequence). The subsequent convolutional layers 110 perform a convolution on the OFM 160 with new kernels and generate a new feature map. The new feature map may also be normalized and resized. The new feature map can be convolved again by a further subsequent convolutional layer 110, and so on.
[0044] In some embodiments, a convolutional layer 110 has four hyperparameters: the number of kernels, the kernel size F (e.g., a kernel is of dimensions FxFxD pixels), the stride S with which the window corresponding to the kernel is dragged on the image (e.g., a stride of one means moving the window one pixel at a time), and the zero-padding P (e.g., adding a black contour of P pixels thickness to the input image of the convolutional layer 110). The convolutional layers 110 may perform various types of convolutions, such as 2-dimensional convolution, dilated or atrous convolution, spatial separable convolution, depthwise separable convolution, transposed convolution, and so on. The DNN 100 includes 16 convolutional layers 110. In other embodiments, the DNN 100 may include a different number of convolutional layers.
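These hyperparameters determine the layer's output size. A minimal sketch of the standard output-size formula (our own illustration, not stated in this disclosure) reproduces the 5x5 OFM of FIG. 1 for a 7x7 input, a 3x3 kernel, stride 1, and no padding.

```python
def conv_output_size(n: int, f: int, s: int, p: int) -> int:
    """Output width/height for input size n, kernel size f, stride s, zero-padding p."""
    return (n - f + 2 * p) // s + 1

print(conv_output_size(7, 3, 1, 0))  # 5
```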
[0045] The pooling layers 120 down-sample feature maps generated by the convolutional layers, e.g., by summarizing the presence of features in the patches of the feature maps. A pooling layer 120 is placed between two convolution layers 110: a preceding convolutional layer 110 (the convolution layer 110 preceding the pooling layer 120 in the sequence of layers) and a subsequent convolutional layer 110 (the convolution layer 110 subsequent to the pooling layer 120 in the sequence of layers). In some embodiments, a pooling layer 120 is added after a convolutional layer 110, e.g., after an activation function (e.g., ReLU, etc.) has been applied to the OFM 160.
[0046] A pooling layer 120 receives feature maps generated by the preceding convolution layer 110 and applies a pooling operation to the feature maps. The pooling operation reduces the size of the feature maps while preserving their important characteristics. Accordingly, the pooling operation improves the efficiency of the DNN and avoids over-learning. The pooling layers 120 may perform the pooling operation through average pooling (calculating the average value for each patch on the feature map), max pooling (calculating the maximum value for each patch of the feature map), or a combination of both. The size of the pooling operation is smaller than the size of the feature maps. In various embodiments, the pooling operation is 2x2 pixels applied with a stride of two pixels, so that the pooling operation reduces the size of a feature map by a factor of 2, e.g., the number of pixels or values in the feature map is reduced to one quarter the size. In an example, a pooling layer 120 applied to a feature map of 6x6 results in an output pooled feature map of 3x3. The output of the pooling layer 120 is inputted into the subsequent convolution layer 110 for further feature extraction. In some embodiments, the pooling layer 120 operates upon each feature map separately to create a new set of the same number of pooled feature maps.
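A minimal sketch of the 2x2, stride-2 max pooling described above (the reshape-based trick is one implementation choice among many, not the disclosed hardware behavior):

```python
import numpy as np

def max_pool_2x2(fm: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2: each output is the max of one 2x2 patch."""
    h, w = fm.shape
    return fm[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

print(max_pool_2x2(np.arange(36.0).reshape(6, 6)).shape)  # a 6x6 map pools to (3, 3)
```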
[0047] The fully-connected layers 130 are the last layers of the DNN. The fully-connected layers 130 may be convolutional or not. The fully-connected layers 130 receive an input operand. The input operand defines the output of the convolutional layers 110 and pooling layers 120 and includes the values of the last feature map generated by the last pooling layer 120 in the sequence. The fully-connected layers 130 apply a linear combination and an activation function to the input operand and generate a vector. The vector may contain as many elements as there are classes: element i represents the probability that the image belongs to class i. Each element is therefore between 0 and 1, and all the elements sum to one. These probabilities are calculated by the last fully-connected layer 130 by using a logistic function (binary classification) or a SoftMax function (multi-class classification) as an activation function.
[0048] In some embodiments, the fully-connected layers 130 classify the input image 105 and return an operand of size N, where N is the number of classes in the image classification problem. In the embodiments of FIG. 1, N equals 3, as there are 3 objects 115, 125, and 135 in the input image. Each element of the operand indicates the probability for the input image 105 to belong to a class. To calculate the probabilities, the fully-connected layers 130
multiply each input element by a weight, sum the products, and then apply an activation function (e.g., logistic if N=2, SoftMax if N>2). This is equivalent to multiplying the input operand by the matrix containing the weights. In an example, the vector includes 3 probabilities: a first probability indicating the object 115 being a tree, a second probability indicating the object 125 being a car, and a third probability indicating the object 135 being a person. In other embodiments where the input image 105 includes different objects or a different number of objects, the individual values can be different.
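A minimal sketch of this final stage (our own illustration with arbitrary sizes): a linear combination followed by SoftMax, producing class probabilities that sum to one.

```python
import numpy as np

def fully_connected(x: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Linear combination of the input operand followed by SoftMax."""
    logits = weights @ x + bias
    e = np.exp(logits - logits.max())  # subtract the max for numerical stability
    return e / e.sum()

probs = fully_connected(np.random.randn(10), np.random.randn(3, 10), np.zeros(3))
print(probs, probs.sum())  # three class probabilities that sum to 1
```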
Example Convolution
[0049] FIG. 2 illustrates an example convolution, in accordance with various embodiments. The convolution may be a deep learning operation in a convolutional layer of a DNN, e.g., a convolutional layer 110 in FIG. 1. The convolution can be executed on an input tensor 210 and filters 220 (individually referred to as "filter 220"). A filter, a portion of a filter, or a combination of multiple filters may be referred to as a weight tensor of the convolution. The result of the convolution is an output tensor 230. In some embodiments, the convolution is performed by a DNN accelerator. An example of the DNN accelerator may be the DNN accelerator 302 in FIG. 3. For instance, the convolution may be performed by the sparse cell array 370 in the DNN accelerator 302.
[0050] In the embodiments of FIG. 2, the input tensor 210 includes activations (also referred to as "input activations," "elements," or "input elements") arranged in a 3D matrix. An input element is a data point in the input tensor 210. The input tensor 210 has a spatial size Hin x Win x Cin, where Hin is the height of the 3D matrix (i.e., the length along the Y axis, which indicates the number of activations in a column in the 2D matrix of each input channel), Win is the width of the 3D matrix (i.e., the length along the X axis, which indicates the number of activations in a row in the 2D matrix of each input channel), and Cin is the depth of the 3D matrix (i.e., the length along the Z axis, which indicates the number of input channels). For the purpose of simplicity and illustration, the input tensor 210 has a spatial size of 7x7x3, i.e., the input tensor 210 includes three input channels and each input channel has a 7x7 2D matrix. Each input element in the input tensor 210 may be represented by an (X, Y, Z) coordinate. In other embodiments, the height, width, or depth of the input tensor 210 may be different.
[0051] Each filter 220 includes weights arranged in a 3D matrix. The values of the weights may be determined through training the DNN. A filter 220 has a spatial size Hf x Wf x Cf, where Hf is the height of the filter (i.e., the length along the Y axis, which indicates the number of weights in a column in each kernel), Wf is the width of the filter (i.e., the length along the X axis, which indicates the number of weights in a row in each kernel), and Cf is the depth of the filter (i.e., the length along the Z axis, which indicates the number of channels). In some embodiments, Cf equals Cin. For purposes of simplicity and illustration, each filter 220 in FIG. 2 has a spatial size of 2x3x3, i.e., the filter 220 includes 3 convolutional kernels with a spatial size of 2x3. In other embodiments, the height, width, or depth of the filter 220 may be different. The spatial size of the convolutional kernels is smaller than the spatial size of the 2D matrix of each input channel in the input tensor 210.
[0052] An activation or weight may take one or more bytes in a memory. The number of bytes for an activation or weight may depend on the data format. For example, when the activation or weight has an INT8 format, the activation takes one byte. When the activation or weight has an FP16 format, the activation or weight takes two bytes. Other data formats may be used for activations or weights.
[0053] In the convolution, each filter 220 slides across the input tensor 210 and generates a 2D matrix for an output channel in the output tensor 230. In the embodiments of FIG. 2, the 2D matrix has a spatial size of 5x5. The output tensor 230 includes activations (also referred to as "output activations," "elements," or "output elements") arranged in a 3D matrix. An output activation is a data point in the output tensor 230. The output tensor 230 has a spatial size Hout x Wout x Cout, where Hout is the height of the 3D matrix (i.e., the length along the Y axis, which indicates the number of output activations in a column in the 2D matrix of each output channel), Wout is the width of the 3D matrix (i.e., the length along the X axis, which indicates the number of output activations in a row in the 2D matrix of each output channel), and Cout is the depth of the 3D matrix (i.e., the length along the Z axis, which indicates the number of output channels). Cout may equal the number of filters 220 in the convolution. Hout and Wout may depend on the heights and widths of the input tensor 210 and each filter 220.
[0054] As a part of the convolution, MAC operations can be performed on a 2x3x3 subtensor 215 (which is highlighted with a dotted pattern in FIG. 2) in the input tensor 210
and each filter 220. The result of the MAC operations on the subtensor 215 and one filter 220 is an output activation. In some embodiments (e.g., embodiments where the convolution is an integral convolution), an output activation may include 8 bits, e.g., one byte. In other embodiments (e.g., embodiments where the convolution is a floating-point convolution), an output activation may include more than one byte. For instance, an output element may include two bytes.
[0055] After the MAC operations on the subtensor 215 and all the filters 220 are finished, a vector 235 is produced. The vector 235 is highlighted with slashes in FIG. 2. The vector 235 includes a sequence of output activations, which are arranged along the Z axis. The output activations in the vector 235 have the same (X, Y) coordinate, but the output activations correspond to different output channels and have different Z coordinates. The dimension of the vector 235 along the Z axis may equal the total number of output channels in the output tensor 230. After the vector 235 is produced, further MAC operations are performed to produce additional vectors until the output tensor 230 is produced.
[0056] In some embodiments, the MAC operations on a 2x3x3 subtensor (e.g., the subtensor 215) and a filter 220 may be performed by a plurality of MAC units. One or more MAC units may receive an input operand (e.g., an input operand 217 shown in FIG. 2) and a weight operand (e.g., the weight operand 227 shown in FIG. 2). The input operand 217 includes a sequence of activations having the same (x, y) coordinate but different z coordinates. The input operand 217 includes an activation from each of the input channels in the input tensor 210. The weight operand 227 includes a sequence of weights having the same (x, y) coordinate but different z coordinates. The weight operand 227 includes a weight from each of the channels in the filter 220. Activations in the input operand 217 and weights in the weight operand 227 may be sequentially fed into a MAC unit. The MAC unit may receive an activation and a weight ("an activation-weight pair") at a time and multiply the activation and the weight. The position of the activation in the input operand 217 may match the position of the weight in the weight operand 227. The activation and weight may correspond to the same channel.
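Conceptually, a single MAC unit consumes one activation-weight pair per step, as in this sketch (our own simplification of the hardware behavior, not a description of the disclosed circuits):

```python
def mac_unit(activations, weights):
    """Consume one activation-weight pair per step; multiply, then accumulate."""
    acc = 0.0
    for a, w in zip(activations, weights):  # position i in both operands: same channel
        acc += a * w
    return acc

print(mac_unit([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))  # 3.0
```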
[0057] Activations or weights may be floating-point numbers. Floating-point numbers may have various data formats, such as FP32, FP16, BF16, and so on. A floating-point number may be a positive or negative number with a decimal point. A floating-point number may be represented by a sequence of bits that includes one or more bits representing the sign of
the floating-point number (e.g., positive or negative), bits representing an exponent of the floating-point number, and bits representing a mantissa of the floating-point number. The mantissa is the part of a floating-point number that represents the significant digits of that number. The mantissa is multiplied by the base raised to the exponent to give the actual value of the floating-point number.
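The bit-field layout can be inspected directly. This sketch (illustrative only; fp32_fields is our own name) splits an FP32 value into the sign, exponent, and mantissa fields described above.

```python
import struct

def fp32_fields(x: float):
    """Split an FP32 value into its sign, exponent, and mantissa bit fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                 # 1 sign bit
    exponent = (bits >> 23) & 0xFF    # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF        # 23 mantissa (fraction) bits
    return sign, exponent, mantissa

print(fp32_fields(-6.5))  # (1, 129, 5242880): -1.625 * 2**(129 - 127) == -6.5
```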
[0058] In some embodiments, the output activations in the output tensor 230 may be further processed based on one or more activation functions before they are stored or inputted into the next layer of the DNN. The processing based on the one or more activation functions may be at least part of the post processing of the convolution. In some embodiments, the post processing may include one or more other computations, such as offset computation, bias computation, and so on. The results of the post processing may be stored in a local memory of the compute block and be used as input to the next DNN layer. In some embodiments, the input activations in the input tensor 210 may be results of post processing of the previous DNN layer.
Example DNN System
[0059] FIG. 3 is a block diagram of a DNN system 300, in accordance with various embodiments. The whole DNN system 300 or a part of the DNN system 300 may be implemented in one or more computing devices, such as the computing device 1500 in FIG. 15. The DNN system 300 can generate and execute DNNs, such as the DNN 100 in FIG. 1. As shown in FIG. 3, the DNN system 300 includes a DNN module 301 and a DNN accelerator 302. In other embodiments, alternative configurations, different or additional components may be included in the DNN system 300. For instance, the DNN system 300 may include multiple DNN modules or multiple DNN accelerators. Further, functionality attributed to a component of the DNN system 300 may be accomplished by a different component included in the DNN system 300 or a different system. In some embodiments, the DNN module 301 and DNN accelerator 302 may include different types of processing units. In an example, the DNN module 301 may be implemented by a CPU. The DNN accelerator 302 may also be referred to as an AI accelerator or an AI processor. The DNN module 301 and DNN accelerator 302 may be implemented in the same chip or separate chips.
[0060] The DNN module 301 facilitates generation and deployment of DNNs. In some embodiments, the DNN module 301 may generate and train DNNs. For instance, the DNN module 301 can define the layered architecture of a DNN. The DNN module 301 can also
determine the internal parameters of the DNN through a DNN training process. The DNN module 301 may also determine one or more hyperparameters that define how the DNN is trained. An example hyperparameter is a sparsity ratio that defines the sparsity level of one or more deep learning tensors for the DNN.
[0061] The DNN module 301 may also compress DNNs, e.g., during or after training. In some embodiments, the DNN module 301 may prune weights in one or more layers of a DNN by changing nonzero valued weights to zeros. The DNN module 301 may prune weights based on a target weight sparsity ratio. A weight sparsity ratio may be the ratio of the number of zero-valued weights to the total number of weights. In an example where the DNN module 301 prunes weights during DNN training, the DNN module 301 may prune weights of a layer to achieve a target sparsity ratio after one or more epochs. The DNN module 301 may prevent the pruned weights from changing values during the rest of the training process. Alternatively, the DNN module 301 may allow the pruned weights to change values so that a pruned, zero-valued weight may have a nonzero value after further training. The DNN module 301 may prune weights of the layer again after one or more additional epochs.
[0062] The DNN module 301 may deploy trained, compressed, or validated DNNs for use in deep learning applications. In some embodiments, the DNN module 301 may distribute trained, compressed, or validated DNNs to devices or systems which may use the DNNs to perform tasks (e.g., image classification, motion planning, etc.) for which the DNNs were trained. In other embodiments, the DNN module 301 may facilitate deployment of the DNNs using the DNN accelerator 302. For instance, the DNN module 301 may receive data from a device or system coupled with the DNN system 300 and input the received data (or data generated by the DNN module 301, e.g., based on the received data) into a DNN. The DNN module 301 may generate instructions (e.g., configuration files) that control the operation of the DNN accelerator 302 during the DNN execution. The DNN module 301 may receive an output of the DNN from the DNN accelerator 302. The DNN module 301 may transmit the output of the DNN (or a result of processing the output of the DNN by the DNN module 301) to the device or system.
[0063] The DNN module 301 may control execution processes of trained, compressed, or validated DNNs. In some embodiments, the DNN module 301 facilitates execution of Fourier transform operations by the DNN accelerator 302. For instance, the DNN module 301 may convert Fourier transform operations to matrix multiplications that can be performed by the
DNN accelerator 302. The matrix multiplications may include MAC operations that are similar to MAC operations in convolutions. The DNN module 301 may store the input signal of a Fourier transform operation as activation vectors. The DNN module 301 may also generate a transformation matrix with twiddle factors of the Fourier transform operation and store the transformation matrix as weight vectors. The activation vectors and weight vectors may be processed by the DNN accelerator 302 in the same or similar way that the DNN accelerator 302 processes activation operands and weight operands in convolutions. Certain aspects of the DNN module 301 are provided below in conjunction with FIG. 4.
[0064] The DNN accelerator 302 executes DNNs provided by the DNN module 301. For instance, the DNN accelerator 302 can perform DNN execution, e.g., by running deep learning operations in the DNNs, for training DNNs or for using the trained/compressed/validated DNNs to perform tasks. As shown in FIG. 3, the DNN accelerator 302 includes a memory 310, a DMA (direct memory access) engine 320, and compute blocks 330 (individually referred to as "compute block 330"). In other embodiments, alternative configurations, different or additional components may be included in the DNN accelerator 302. For example, the DNN accelerator 302 may include more than one memory 310 or DMA engine 320. As another example, the DNN accelerator 302 may include a single compute block 330. Further, functionality attributed to a component of the DNN accelerator 302 may be accomplished by a different component included in the DNN accelerator 302 or by a different system. A component of the DNN accelerator 302 may be implemented in hardware, software, firmware, or some combination thereof.
[0065] The memory 310 stores data associated with deep learning operations performed by the DNN accelerator. In some embodiments, the memory 310 may store data to be used by the compute blocks 330 for DNN execution. The memory 310 may store weights, such as weights of convolutional layers, which are determined by training DNNs. The memory 310 may also store transformation matrices of DFT and IDFT operations. The memory 310 may further store inputs to DNN layers or outputs of DNN layers, such as data generated by the compute blocks 330 from performing deep learning operations in DNNs. Example deep learning operations include convolutions (also referred to as "convolutional operations"), DFT operations, IDFT operations, pooling operations, elementwise operations, activation functions, other types of deep learning operations, or some combination thereof. The
memory 310 may be a main memory of the DNN accelerator 302. In some embodiments, the memory 310 includes one or more dynamic random-access memories (DRAMs).
[0066] The DMA engine 320 facilitates data transfer between the memory 310 and local memories of the compute blocks 330. For example, the DMA engine 320 can read data from the memory 310 and write data into a local memory of a compute block 330. As another example, the DMA engine 320 can read data from a local memory of a compute block 330 and write data into the memory 310. The DMA engine 320 provides a DMA feature that allows the compute block 330 to initiate data transfer between the memory 310 and the local memories of the compute blocks 330 and to perform other operations while the data transfer is being conducted. In some embodiments, the DMA engine 320 may read tensors from the memory 310 and modify the tensors in a way that is optimized for the compute block 330 before it writes the tensors into the local memories of the compute blocks 330.
[0067] The compute blocks 330 can perform deep learning operations in DNNs. For instance, a compute block 330 may execute a DNN layer by running one or more deep learning operations in the DNN layer. A compute block 330 may execute a layer, or a portion of a layer, at a time. The compute blocks 330 may be capable of running various types of deep learning operations, such as convolution, pooling, elementwise operation, linear operation, nonlinear operation, and so on. In an example, a compute block 330 may perform convolutions, e.g., standard convolution or depthwise convolution. In some embodiments, the compute block 330 receives an input tensor and one or more convolutional kernels and performs a convolution with the input tensor and convolutional kernels. The result of the convolution may be an output tensor, which can be further computed, e.g., by the compute block 330 or another compute block 330. In some embodiments, the operations of the DNN layers may be run by multiple compute blocks 330 in parallel. For instance, multiple compute blocks 330 may each perform a portion of a workload for a convolution. Data may be shared between the compute blocks 330. A compute block 330 may also be referred to as a compute tile. In some embodiments, each compute block 330 may be a processing unit.
[0068] In the embodiments of FIG. 3, each compute block 330 includes a local memory 340, a sparsity mode module 350, a load module 360, a sparse cell array 370 (also referred to as a data processing unit), and a drain module 380. Some or all of the components of the
compute block 330 can be implemented on the same chip. In other embodiments, alternative configurations, different or additional components may be included in the compute block 330. Further, functionality attributed to a component of the compute block 330 may be accomplished by a different component included in the compute block 330, a different compute block 330, another component of the DNN accelerator 302, or a different system. A component of the compute block 330 may be implemented in hardware, software, firmware, or some combination thereof.
[0069] The local memory 340 is local to the corresponding compute block 330. In the embodiments of FIG. 3, the local memory 340 is inside the compute block 330. In other embodiments, the local memory 340 may be outside the compute block 330. Data in the local memory 340 may be transferred to or from the memory 310, e.g., through the DMA engine 320. In some embodiments, data in the local memory 340 may be transferred to or from the local memory of another compute block 330. The local memory 340 may store data received, used, or generated by the sparsity mode module 350, the load module 360, the sparse cell array 370, or the drain module 380. Examples of the data may include input activations, weights, output activations, sparsity bitmaps, and so on.
[0070] In some embodiments, the local memory 340 may store dense tensors (e.g., dense activation tensors, dense weight tensors, etc.), sparse tensors (e.g., sparse activation tensors, sparse weight tensors, etc.), and so on. A dense tensor may be a tensor from which zero-valued elements (if any) are not removed. A dense tensor may be converted to a sparse tensor by removing one or more zero-valued elements in the dense tensor. A sparse tensor may also be referred to as a compressed tensor or packed tensor. The process of converting a dense tensor to a sparse tensor may be referred to as sparsity encoding.
Sparsity encoding may also generate a sparsity tensor. Each element in the sparsity tensor may correspond to a different element in the dense tensor and indicate whether the element in the dense tensor is zero or not. The sparsity tensor may indicate positions of elements of the sparse tensor in the dense tensor. The sparsity tensor may be a sparsity bitmap, each element of which is a bit. A sparse tensor may be converted to a dense tensor through a densifying process, in which one or more zeros may be added to the sparse tensor based on the sparsity tensor.
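A minimal sketch of sparsity encoding and the reverse densifying process, using a one-bit-per-element bitmap as described above (the function names are our own, not terms from this disclosure):

```python
import numpy as np

def sparsity_encode(dense: np.ndarray):
    """Drop zeros to form a sparse (packed) tensor plus a one-bit-per-element bitmap."""
    bitmap = (dense != 0).astype(np.uint8)
    return dense[dense != 0], bitmap

def densify(sparse: np.ndarray, bitmap: np.ndarray) -> np.ndarray:
    """Re-insert zeros at the positions the bitmap marks as zero."""
    dense = np.zeros(bitmap.shape, dtype=sparse.dtype)
    dense[bitmap == 1] = sparse
    return dense

dense = np.array([0, 3, 0, 0, 7, 1], dtype=np.int8)
sparse, bitmap = sparsity_encode(dense)
assert np.array_equal(densify(sparse, bitmap), dense)
```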
[0071] In some embodiments, the local memory 340 includes one or more static random-access memories (SRAMs). The local memory 340 may be byte-addressable, and each memory address identifies a single byte (eight bits) of storage. In some embodiments, the local memory 340 may include memory banks. The number of data banks in the local memory 340 may be 16, 64, 128, 256, 512, 1024, 2048, or other numbers. A memory bank may include a plurality of storage units. In an example, a data bank may include 8, 16, 64, or a different number of storage units. A memory bank or a storage unit in a memory bank may have a memory address. In an example, a storage unit may store a single byte, and data larger than a single byte may be stored in storage units with consecutive memory addresses, i.e., adjacent storage units. For instance, a storage unit can store an integer number in the INT8 format, whereas two storage units may be needed to store a number in the FP16 or BF16 format, which has 16 bits. In some embodiments, 16 bits can be transferred from the local memory 340 in a single read cycle. In other embodiments, 16 bits can be transferred from the local memory 340 in multiple read cycles, such as two cycles.
[0072] The sparsity mode module 350 determines sparsity modes in which the compute block 330 operates to execute DNN layers. For instance, the sparsity mode module 350 may determine whether to accelerate a layer based on weight sparsity, activation sparsity, or both. The sparsity mode module 350 selects the sparsity mode for a layer from a group of sparsity modes that includes, for example, a combined sparsity mode in which the layer is accelerated based on both weight sparsity and activation sparsity, an activation sparsity mode in which the layer is accelerated based on activation sparsity but not based on weight sparsity, a weight sparsity mode in which the layer is accelerated based on weight sparsity but not based on activation sparsity, and a dense mode in which the layer is not accelerated based on sparsity. In some embodiments (e.g., embodiments where a layer is executed by multiple compute blocks 330), the sparsity mode module 350 may determine the sparsity mode for all the compute blocks 330 that execute the layer. In some embodiments, the sparsity mode module 350 may receive configuration parameters from the DNN module 301. A configuration parameter may correspond to a layer and indicate whether to accelerate the layer based on weight sparsity. The sparsity mode module 350 may determine the sparsity mode of the layer based on the configuration parameter.
[0073] The load module 360 loads data from the local memory 340 to the sparse cell array 370. The load module 360 may read tensors from the local memory 340. The tensors may include sparse activation tensors, sparse weight tensors, activation sparsity tensors, weight sparsity tensors, and so on. In some embodiments, the load module 360 may load data
based on the sparsity mode determined by the sparsity mode module 350. The load module 360 may select different data to transmit to the sparse cell array 370 in different sparsity modes. For instance, the load module 360 may transmit an activation sparsity tensor and a weight sparsity tensor of a layer to the sparse cell array 370 in the combined sparsity mode, transmit the activation sparsity tensor but not the weight sparsity tensor in the activation sparsity mode, and transmit the weight sparsity tensor but not the activation sparsity tensor in the weight sparsity mode. In the dense mode, the load module 360 transmits neither the activation sparsity tensor nor the weight sparsity tensor to the sparse cell array 370.
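A minimal sketch of this mode-dependent selection, assuming Python with hypothetical names for the modes and the helper function:

```python
from enum import Enum

class SparsityMode(Enum):
    COMBINED = 0
    ACTIVATION = 1
    WEIGHT = 2
    DENSE = 3

def sparsity_tensors_to_load(mode, act_sparsity, wgt_sparsity):
    # Returns the (activation, weight) sparsity tensors the load module
    # would transmit to the sparse cell array; None means "not transmitted".
    if mode is SparsityMode.COMBINED:
        return act_sparsity, wgt_sparsity
    if mode is SparsityMode.ACTIVATION:
        return act_sparsity, None
    if mode is SparsityMode.WEIGHT:
        return None, wgt_sparsity
    return None, None  # dense mode: no sparsity acceleration
```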
[0074] In some embodiments, the load module 360 may process (e.g., densify) data stored in the local memory 340 before providing the data to the sparse cell array 370. In an example, the load module 360, while operating in the weight sparsity mode, may densify sparse activation tensors to generate dense activation tensors based on corresponding activation sparsity tensors. For instance, the load module 360 may add one or more zeros into a sparse activation tensor based on an activation sparsity tensor associated with the sparse activation tensor to generate the dense activation tensor. The dense activation tensor includes one or more elements more than the sparse activation tensor, and the additional element(s) are zero valued. The load module 360 may identify one or more elements in the activation sparsity tensor that correspond to the zero-valued element(s), determine the position of each of the zero-valued element(s) in the dense activation tensor, and insert the zero-valued element(s) into the sparse activation tensor based on the determined positions. After the densification, the load module 360 may transmit the dense activation tensors to the sparse cell array 370. The load module 360 may also transmit corresponding sparse weight tensors and weight sparsity tensors to the sparse cell array 370. The activation sparsity tensors of the dense activation tensors may not be loaded to the sparse cell array 370.
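The densification step can be modeled with the following sketch (NumPy assumed; the function name is illustrative and a 1D tensor is used for simplicity):

```python
import numpy as np

def densify(sparse, bitmap):
    # Re-insert zeros: each 1 in the bitmap consumes the next packed element,
    # each 0 produces a zero-valued element at that position.
    dense = np.zeros(bitmap.shape, dtype=sparse.dtype)
    dense[bitmap.astype(bool)] = sparse
    return dense

# densify(np.array([3, 5, 7]), np.array([0, 1, 0, 1, 1, 0]))
#   -> [0, 3, 0, 5, 7, 0]
```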
[0075] In another example, the load module 360, while operating in the activation sparsity mode, may densify sparse weight tensors to generate dense weight tensors based on corresponding weight sparsity tensors by inserting zeros into sparse weight tensors. The densification of sparse weight tensors may be similar to the densification of sparse activation tensors described above. After the densification, the load module 360 may transmit the dense weight tensors to the sparse cell array 370. The load module 360 may also transmit corresponding sparse activation tensors and activation sparsity tensors to the
sparse cell array 370. The weight sparsity tensors of the dense weight tensors may not be loaded to the sparse cell array 370.
[0076] In yet another example, the load module 360, while operating in the dense mode, may densify both sparse weight tensors and sparse activation tensors. The load module 360 may generate the input tensor and weight tensor of the layer and transmit the tensors to the sparse cell array 370 for executing the layer without sparsity acceleration.
[0077] The sparse cell array 370 may include one or more sparse convolution cells. Each sparse convolution cell may include one or more MAC units that can perform MAC operations. The MAC units in a sparse convolution cell may be arranged in an array that includes rows and columns. The sparse convolution cells may be arranged in one or more rows and one or more columns in the sparse cell array 370. All the MAC units in the sparse cell array 370 may constitute a bigger array that includes more rows and columns. In some embodiments (e.g., embodiments where the compute block 330 executes a convolutional layer), a computation in an MAC unit may be an MAC operation on an activation operand and a weight operand. The activation operand may be an activation tensor that may include one or more activations in the input tensor of the convolution. Different activations may be in different input channels. The weight operand may be a weight tensor that may include one or more weights in the filter of the convolution. The values of the weights are determined through training the DNN. The weights in the weight operand may be in different input channels.
[0078] In some embodiments, an MAC unit includes one or more multipliers for performing multiplications. An MAC unit may also include one or more accumulators ("adders") for performing accumulations. A column of MAC units is referred to as an MAC column. An MAC column may be associated with one or more MAC lanes. An MAC lane is a path for loading data, e.g., by the load module 360, into an MAC column. An MAC lane may also be referred to as a data transmission lane or data loading lane. An MAC column may have multiple MAC lanes. The loading bandwidth of the MAC column is an aggregation of the loading bandwidths of all the MAC lanes associated with the MAC column. With a certain number of MAC lanes, data can be fed into the same number of independent PEs simultaneously. In some embodiments where an MAC column has four MAC lanes for feeding activations or weights into the MAC column and each MAC lane may have a bandwidth of 16 bytes, the four MAC lanes can have a total loading bandwidth of 64 bytes.
[0079] In some embodiments, the sparse cell array 370 may be capable of depthwise convolution, standard convolution, or both. In a depthwise convolution, an MAC unit may perform an MAC operation that includes a sequence of multiplications for an input operand and a weight operand. Each multiplication in the sequence (also referred to as a cycle) is a multiplication of a different activation in the input operand with a different weight in the weight operand. The activation and weight in the same cycle may correspond to the same channel. The sequence of multiplications produces a product operand that includes a sequence of products. The MAC operation may also include accumulations in which multiple product operands are accumulated to produce an output operand of the MAC unit. The sparse cell array 370 may output multiple output operands at a time, each of which is generated by a different MAC unit. In a standard convolution, MAC operations may include accumulations across the channels. For instance, as opposed to generating an output operand, a MAC unit may accumulate products across different channels to generate a single output point.
[0080] In some embodiments, the sparse cell array 370 may perform MAC operations in quantized deep learning operations, such as MAC operations in a quantized convolution. In some embodiments, an MAC unit in the sparse cell array 370 may receive quantized activations and quantized weights and compute a quantized MAC result. The quantized MAC result may be a quantized value in an integer format and may be the output of the PE. In some embodiments, the MAC unit may also include a quantization multiplier that can multiply a quantization scale with the quantized MAC result, and the output of the MAC unit may be a real value in a floating-point format. The MAC unit may include no quantization subtractors as zero-point offsetting is not needed for the MAC operations in quantized deep learning operations.
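In rough Python terms, the quantized MAC path described above (integer accumulation followed by a single scale multiply, with no zero-point subtraction) might look like this sketch; the function name is hypothetical:

```python
import numpy as np

def quantized_mac(q_acts, q_wgts, scale):
    # Integer multiply-accumulate over an activation operand and a weight
    # operand, both quantized (e.g., INT8), accumulated at higher precision.
    acc = np.sum(q_acts.astype(np.int32) * q_wgts.astype(np.int32))
    # Quantization multiplier: one scale multiply turns the integer MAC
    # result into a real value in floating-point format.
    return np.float32(scale) * acc
```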
[0081] In some embodiments, the sparse cell array 370 may include sparsity acceleration logic for facilitating sparsity acceleration. For instance, each sparse convolution cell in the sparse cell array 370 may include one or more sparsity modules. In an example, each MAC column or each MAC row may have a corresponding sparsity module that accelerates MAC operations in the MAC column or MAC row. In some embodiments, a sparsity module accelerates computations in the sparse cell array 370 based on sparsity in activations, sparsity in weights, or both. The sparsity module may include a storage unit that stores a sparsity tensor, which may be loaded to the storage unit by the load module 360. The
sparsity tensor may be an activation sparsity tensor, a weight sparsity tensor, or a combined sparsity tensor.
[0082] An activation sparsity tensor may be the sparsity tensor of an activation tensor and has the same number of elements as the activation tensor. An element in the activation sparsity tensor may indicate whether the corresponding element in the activation tensor is zero or not. For instance, a zero-valued element in the activation sparsity tensor may indicate that the corresponding element in the activation tensor is zero. A one-valued element in the activation sparsity tensor may indicate that the corresponding element in the activation tensor is nonzero. A weight sparsity tensor may be the sparsity tensor of a weight tensor and has the same number of elements as the weight tensor. An element in the weight sparsity tensor may indicate whether the corresponding element in the weight tensor is zero or not. For instance, a zero-valued element in the weight sparsity tensor may indicate that the corresponding element in the weight tensor is zero. A one-valued element in the weight sparsity tensor may indicate that the corresponding element in the weight tensor is nonzero. The sparsity module may generate a combined sparsity tensor using an activation sparsity tensor and a weight sparsity tensor. For instance, the sparsity module may multiply an element of the activation sparsity tensor with a corresponding element of the weight sparsity tensor to compute an element of the combined sparsity tensor. The positions of the three elements in their corresponding sparsity tensors may match. In some embodiments, each element in a sparsity tensor may be a bit, and the sparsity tensor may be referred to as a sparsity bitmap.
[0083] The sparsity module may use the sparsity tensor to identify activations and weights to be used in MAC operations by the MAC units. In an embodiment where the sparse cell array 370 operates in the combined sparsity mode, the sparsity module may identify activations and weights that correspond to nonzero valued elements of a combined sparsity tensor. In an embodiment where the sparse cell array 370 operates in the activation sparsity mode, the sparsity module may identify activations and weights that correspond to nonzero valued elements of an activation sparsity tensor. In an embodiment where the sparse cell array 370 operates in the weight sparsity mode, the sparsity module may identify activations and weights that correspond to nonzero valued elements of a weight sparsity tensor. The sparsity module may be bypassed in the dense mode as no sparsity acceleration would be conducted.
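The combined sparsity tensor and the resulting pair selection can be illustrated with the following sketch (NumPy assumed; dense operands are used so the selection is easy to see, whereas the accelerator would work on packed tensors):

```python
import numpy as np

acts = np.array([2, 0, 4, 1])
wgts = np.array([3, 5, 0, 7])
act_bitmap = (acts != 0).astype(np.uint8)   # [1, 0, 1, 1]
wgt_bitmap = (wgts != 0).astype(np.uint8)   # [1, 1, 0, 1]

# Elementwise product (equivalently AND) of the two bitmaps: a position is 1
# only where both the activation and the weight are nonzero.
combined = act_bitmap * wgt_bitmap          # [1, 0, 0, 1]

keep = combined.astype(bool)
pairs = list(zip(acts[keep], wgts[keep]))   # [(2, 3), (1, 7)]: the only
                                            # products that can be nonzero
```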
[0084] The drain module 380 drains data from the sparse cell array 370 and writes the data to the local memory 340. The data may be outputs of MAC operations performed by MAC units in the sparse cell array 370. In some embodiments, the drain module 380 may drain data on a sparse-convolution-cell level. For each sparse convolution cell, the drain module 380 may drain outputs of MAC units in the sparse convolution cell based on a row index or column index of each MAC unit. For instance, the drain module 380 may use a sequence of cycles to drain data from a sparse convolution cell. The drain module 380 may drain the output of some of the MAC units in each cycle. The sequence of the cycles may be configured based on a configuration parameter indicating the operation mode of the load module 360.
[0085] In some embodiments, the drain module 380 may determine whether to drain the output of an MAC unit based on the column index of the MAC unit when the load module operates in the activation sparsity mode versus based on the row index of the MAC unit when the load module operates in the weight sparsity mode. For instance, for MAC operations where the load module 360 operates in the activation sparsity mode, the drain module 380 may drain the output of a different MAC column in each cycle. The sequence of cycles may start with the first MAC column (e.g., the MAC column on the left side of the sparse convolution cell) and end with the last MAC column (e.g., the MAC column on the right side of the sparse convolution cell). For MAC operations where the load module 360 operates in the weight sparsity mode, the drain module 380 may drain the output of a different MAC row in each cycle. The sequence of cycles may start with the first MAC row (e.g., the MAC row at the top of the sparse convolution cell) and end with the last MAC row (e.g., the MAC row at the bottom of the sparse convolution cell). In other embodiments, the drain module 380 may determine whether to drain the output of an MAC unit based on the row index of the MAC unit when the load module operates in the activation sparsity mode versus based on the column index of the MAC unit when the load module operates in the weight sparsity mode.
[0086] The drain module 380 may also include sparsity encoding logic that can convert outputs of the sparse cell array 370 from a dense format to a sparse format. For instance, the drain module 380 may be implemented with one or more sparsity encoders. A sparsity encoder converts dense data to compressed data based on sparsity in the dense data. For instance, the sparsity encoder may remove zeros in an activation tensor computed by the
sparse cell array 370 to convert the activation tensor to a compressed activation tensor. The sparsity encoder may also generate sparsity tensors, including activation sparsity tensors.
[0087] In some embodiments, the data drained from the sparse cell array 370 may be at least part of an output tensor (e.g., the output tensor 230 in FIG. 2) of a deep learning operation. The sparsity encoder may generate a compressed version of the output tensor. The sparsity encoder may identify every zero-valued activation in the output tensor and remove these activations from the output tensor to generate a compressed activation tensor (aka "sparse activation tensor"). The sparsity encoder may also generate one or more sparsity tensors for the output tensor. A sparsity tensor may correspond to a portion of the output tensor (e.g., the vector 235 in FIG. 2). The sparsity tensor may include sparsity elements (e.g., bits), each of which corresponds to a different activation in the vector and indicates whether the corresponding activation is zero or not.
[0088] The drain module 380 may write the compressed activation tensor and the one or more sparsity tensors into the local memory 340. The sparse activation tensor and the one or more sparsity tensors may be further loaded to the memory 310, e.g., through the DMA engine 320. Additionally or alternatively, the sparse activation tensor and the one or more sparsity tensors may be loaded by the load module 360 to the sparse cell array 370 for further computation, e.g., for performing a deep learning operation in the next layer.
[0089] The DNN accelerator 302 may be used for executing Fourier transform operations, such as Fourier transform operations in DNNs. Fourier transform operations may be converted to matrix operations that are similar to convolutions. For instance, the input signal of a Fourier transform operation may be encoded by and processed as an input tensor. The transformation matrix of the Fourier transform operation may be processed by the DNN accelerator 302 as if the transformation matrix is a weight tensor of a convolution. The DNN accelerator 302 (e.g., the sparse cell array 370) may perform MAC operations on the input tensor and the transformation matrix to compute the Fourier transform of the input signal, i.e., the output signal of the Fourier transform operation. The transformation matrix may be determined offline, e.g., before the execution of the Fourier transform operation or even before the execution of the entire DNN. In some embodiments, the transformation matrix may be determined by the DNN module 301.
[0090] FIG. 4 is a block diagram of a DNN module 400, in accordance with various embodiments. The DNN module 400 may be an embodiment of the DNN module 301 in FIG.
3. As shown in FIG. 4, the DNN module 400 includes an interface module 410, a training module 420, a compressing module 430, a validating module 440, a Fourier transform module 450, and a datastore 460. In other embodiments, alternative configurations with different or additional components may be included in the DNN module 400. Further, functionality attributed to a component of the DNN module 400 may be accomplished by a different component included in the DNN module 400 or a different module or system.
[0091] The interface module 410 facilitates communications of the DNN module 400 with other modules or systems. For example, the interface module 410 establishes communications between the DNN module 400 and an external database to receive data that can be used to train DNNs or input into DNNs to perform tasks. As another example, the interface module 410 enables the DNN module 400 to distribute DNNs to other systems, e.g., computing devices configured to apply DNNs to perform tasks.
[0092] The training module 420 trains DNNs by using a training dataset. The training module 420 forms the training dataset. In an embodiment where the training module 420 trains a DNN to recognize objects in images, the training dataset includes training images and training labels. The training labels describe ground-truth classifications of objects in the training images. In some embodiments, each label in the training dataset corresponds to an object in a training image. In some embodiments, a part of the training dataset may be used to initially train the DNN, and the rest of the training dataset may be held back as a validation subset used by the validating module 440 to validate performance of a trained DNN. The portion of the training dataset not held back as the validation subset may be used to train the DNN.
[0093] The training module 420 also determines hyperparameters for training the DNN. Hyperparameters are variables specifying the DNN training process. Hyperparameters are different from parameters inside the DNN (e.g., weights of filters). In some embodiments, hyperparameters include variables determining the architecture of the DNN, such as the number of hidden layers. Hyperparameters also include variables which determine how the DNN is trained, such as batch size, number of epochs, etc. A batch size defines the number of training samples to work through before updating the parameters of the DNN. The batch size is the same as or smaller than the number of samples in the training dataset, and the training dataset can be divided into one or more batches. The number of epochs defines how many times the deep learning algorithm works through the entire training dataset, forward and backward. One epoch means that each training sample in the training dataset has had an opportunity to update the parameters inside the DNN. An epoch may include one or more batches. The number of epochs may be 1, 5, 10, 50, 100, 500, 1000, or even larger.
[0094] The training module 420 defines the architecture of the DNN, e.g., based on some of the hyperparameters. The architecture of the DNN includes an input layer, an output layer, and a plurality of hidden layers. The input layer of a DNN may include tensors (e.g., a multidimensional array) specifying attributes of the input image, such as the height of the input image, the width of the input image, and the depth of the input image (e.g., the number of bits specifying the color of a pixel in the input image). The output layer includes labels of objects in the input layer. The hidden layers are layers between the input layer and output layer. The hidden layers include one or more convolutional layers and one or more other types of layers, such as pooling layers, fully-connected layers, normalization layers, SoftMax or logistic layers, and so on. The convolutional layers of the DNN abstract the input image to a feature map that is represented by a tensor specifying the feature map height, the feature map width, and the feature map channels (e.g., red, green, blue images include 3 channels). A pooling layer is used to reduce the spatial volume of the input image after convolution and is typically placed between two convolutional layers. A fully-connected layer involves weights, biases, and neurons. It connects neurons in one layer to neurons in another layer and is used to classify images between different categories by training.
[0095] In the process of defining the architecture of the DNN, the training module 420 also adds an activation function to a hidden layer or the output layer. An activation function of a layer transforms the weighted sum of the input of the layer to an output of the layer. The activation function may be, for example, a ReLU activation function, a tangent activation function, or other types of activation functions.
[0096] After the training module 420 defines the architecture of the DNN, the training module 420 inputs a training dataset into the DNN. The training dataset includes a plurality of training samples. An example of a training sample includes an object in an image and a ground-truth label of the object. The training module 420 modifies the parameters inside the DNN ("internal parameters of the DNN") to minimize the error between labels of the training objects that are generated by the DNN and the ground-truth labels of the objects.
The internal parameters include weights of filters in the convolutional layers of the DNN. In some embodiments, the training module 420 uses a cost function to minimize the error.
[0097] The training module 420 may train the DNN for a predetermined number of epochs. The number of epochs is a hyperparameter that defines the number of times that the deep learning algorithm will work through the entire training dataset. One epoch means that each sample in the training dataset has had an opportunity to update internal parameters of the DNN. After the training module 420 finishes the predetermined number of epochs, the training module 420 may stop updating the parameters in the DNN. The DNN having the updated parameters is referred to as a trained DNN.
[0098] The compressing module 430 compresses DNNs. For instance, the compressing module 430 may add pruning operations to DNN layers to reduce computational complexity or memory usage. A pruning operation may prune weight tensors of a DNN layer by changing one or more nonzero valued weights of the layer to zeros. The modification may be done before, during, or after training. Weights may be pruned during training, during inference, or both. The compressing module 430 may determine a sparsity ratio for a DNN layer. The sparsity ratio may be the ratio of the number of zero-valued weights to the total number of weights in the layer. The compressing module 430 may perform the pruning operation until the sparsity ratio of the DNN layer meets a target sparsity ratio, such as 10%, 20%, 30%, 40%, 50%, and so on.
[0099] In some embodiments, the compressing module 430 may select one or more layers in a DNN and modify each selected layer with a pruning operation. For instance, the compressing module 430 may select computationally complex layers, such as layers with large filters. For a pruning operation of a layer or of a type of layer, the compressing module 430 may determine a weight threshold that would not cause a loss of the accuracy of the DNN to exceed an accuracy loss constraint. A pruning operation may modify weights having absolute values below the weight threshold to zeros and leave the other weights unchanged. The weight pruning can reduce memory storage as zero-valued weights may not be stored. Also, the number of operations in the layer can be reduced as computations on zero-valued weights can be skipped without impacting the output of the layer. In some embodiments, the compressing module 430 may also measure energy saving, final DNN accuracy, or layer-wise sparsity caused by pruning operations.
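A minimal sketch of pruning to a target sparsity ratio, under the assumption (consistent with the threshold-based pruning described above, though the exact procedure is not spelled out here) that the smallest-magnitude weights are the ones zeroed; NumPy and the function name are illustrative:

```python
import numpy as np

def prune_to_sparsity(weights, target_ratio):
    # Zero out the smallest-magnitude weights until the fraction of
    # zero-valued weights reaches the target sparsity ratio.
    pruned = weights.copy()
    k = int(np.ceil(target_ratio * pruned.size))
    if k > 0:
        threshold = np.sort(np.abs(pruned), axis=None)[k - 1]
        pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(8, 8).astype(np.float32)
w_pruned = prune_to_sparsity(w, target_ratio=0.5)
# Roughly half of the 64 weights are now zero (ties may push it slightly higher).
```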
[0100] After compressing a DNN, the compressing module 430 may fine-tune the DNN, e.g., through a retraining process. The compressing module 430 may fine-tune DNNs after weights are pruned. In some embodiments, the fine-tuning process is a retraining or further training process. For instance, after weights in a DNN are pruned, the compressing module 430 may further train the DNN by inputting a training dataset into the DNN. The values of the unpruned weights in the DNN may be modified based on outputs of the DNN and ground-truth labels of the training samples in the training dataset. In some embodiments, the values of the pruned weights (i.e., zero) are not changed during the fine-tuning process. For instance, the compressing module 430 may place a mask over a pruned weight block, and the mask can prevent values in the pruned weight blocks from being changed during the fine-tuning process. In other embodiments, the values of all weights, including the pruned weights, may be changed during the fine-tuning process. After one or more cycles of retraining and weight changing by the compressing module 430, the compressing module 430 may perform a new pruning process, e.g., by selecting weight blocks and pruning the selected weight blocks. In some embodiments, the weight pruning process may be repeated multiple times before the fine-tuning process is done.
[0101] In some embodiments, the number of epochs in the fine-tuning process may be different from the number of epochs in the training process in which the pre-pruning values of the weights are determined. For instance, the fine-tuning process may have fewer epochs than the training process. In an example, the number of epochs in the fine-tuning process may be relatively small, such as 2, 3, 4, 5, and so on.
[0102] The validating module 440 verifies accuracy of trained or compressed DNNs. In some embodiments, the validating module 440 inputs samples in a validation dataset into a trained DNN and uses the outputs of the DNN to determine the model accuracy. In some embodiments, a validation dataset may be formed of some or all the samples in the training dataset. Additionally or alternatively, the validation dataset includes additional samples, other than those in the training sets. In some embodiments, the validating module 440 may determine an accuracy score measuring the precision, recall, or a combination of precision and recall of the DNN. The validating module 440 may use the following metrics to determine the accuracy score: Precision = TP / (TP + FP) and Recall = TP / (TP + FN), where precision may be how many objects the DNN correctly predicted (TP, or true positives) out of the total it predicted (TP + FP, where FP denotes false positives), and recall may be how many objects the DNN correctly predicted (TP) out of the total number of objects that did have the property in question (TP + FN, where FN denotes false negatives). The F-score (F-score = 2 * P * R / (P + R)) unifies precision and recall into a single measure.
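These metrics translate directly into code; a short sketch (Python, with a hypothetical helper name):

```python
def accuracy_scores(tp, fp, fn):
    # Precision: correct positive predictions out of all positive predictions.
    precision = tp / (tp + fp)
    # Recall: correct positive predictions out of all actual positives.
    recall = tp / (tp + fn)
    # F-score: harmonic mean of precision and recall.
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# accuracy_scores(tp=80, fp=20, fn=10) -> (0.8, 0.888..., 0.842...)
```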
[0103] The validating module 440 may compare the accuracy score with a threshold score. In an example where the validating module 440 determines that the accuracy score of the DNN is less than the threshold score, the validating module 440 instructs the training module 420 to re-train the DNN. In one embodiment, the training module 420 may iteratively re-train the DNN until the occurrence of a stopping condition, such as the accuracy measurement indicating that the DNN is sufficiently accurate, or a number of training rounds having taken place.
[0104] The Fourier transform module 450 facilitates conversion of Fourier transform operations to matrix multiplications that can be executed by DNN accelerators, e.g., the DNN accelerator 302. A Fourier transform operation may be an operation to transform an input signal to an output signal. The input signal and output signal may be in different domains. For example, the input signal is in a time domain while the output signal is in a frequency domain. As another example, the input signal is in a frequency domain while the output signal is in a time domain. Fourier transform operations may include DFT operation, IDFT operations, real DFT (RDFT) operations, inverse RDFT (IRDFT) operations, STFT operations, other types of Fourier transform operations, or some combination thereof.
[0105] In some embodiments, DFT is a method that transforms a finite sequence of equally spaced samples of a function into a same-length sequence of equally spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An IDFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT output may be a frequency domain representation of the input. DFT may be an invertible, linear transformation with its inverse known as IDFT. DFT and IDFT can be used in many practical applications, such as digital signal processing, image processing, and so on.
[0106] In some embodiments, a DFT operation may be denoted as:

$$X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{i2\pi}{N}kn}, \quad k = 0, 1, \ldots, N-1$$

where $\{x_n\} = x_0, x_1, x_2, \ldots, x_{N-1}$ is the input of the DFT operation and is a sequence of N complex numbers; and $\{X_k\} = X_0, X_1, X_2, \ldots, X_{N-1}$ is the output of the DFT operation and is another sequence of N complex numbers. The input sequence $\{x_n\}$ may be a signal in a time domain, and the output sequence $\{X_k\}$ may be a signal in a frequency domain. The output sequence may be a frequency domain representation of the input sequence. The DFT operation has a corresponding IDFT operation that converts a signal in the frequency domain to a signal in the time domain. The IDFT operation may be denoted as:

$$x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{\frac{i2\pi}{N}kn}, \quad n = 0, 1, \ldots, N-1$$
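The DFT above is exactly a matrix-vector product with the N × N transformation (twiddle-factor) matrix, which is what allows a MAC array to execute it like a convolution. A NumPy sketch, with illustrative names, checked against the reference FFT:

```python
import numpy as np

def dft_matrix(N):
    # Twiddle-factor matrix: W[k, n] = exp(-i*2*pi*k*n / N), computed offline.
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    return np.exp(-2j * np.pi * k * n / N)

N = 16
x = np.random.randn(N) + 1j * np.random.randn(N)   # input sequence {x_n}
W = dft_matrix(N)

X = W @ x                                   # DFT as MAC operations
assert np.allclose(X, np.fft.fft(x))        # matches the reference DFT

x_back = (np.conj(W) @ X) / N               # IDFT: conjugate matrix, 1/N scale
assert np.allclose(x_back, x)
```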
[0107] RDFT is a variant of the DFT that is used when the input sequence is real, as opposed to complex. The input sequence of RDFT may be symmetric. The output sequence of a RDFT operation may be a sequence of complex numbers. In some embodiments, a RDFT operation produces two signals: a real output signal and an imaginary output signal. Each of the two output signals may include N/2 + 1 data points, which may run from 0 to N/2. The input signal may be in a time domain, and the two output signals may be in a frequency domain. The output signals may constitute a frequency domain representation of the input signal. A RDFT operation may have a corresponding IRDFT operation that converts a sequence of real numbers in the frequency domain back into a sequence of equally spaced samples of a function in the time domain.
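For a real input, only N/2 + 1 output points need to be produced, as the following NumPy sketch illustrates:

```python
import numpy as np

N = 16
x = np.random.randn(N)              # real input sequence

X = np.fft.rfft(x)                  # RDFT: N/2 + 1 complex outputs
assert X.shape[0] == N // 2 + 1     # 9 data points, running from 0 to N/2

real_signal, imag_signal = X.real, X.imag   # the two output signals

x_back = np.fft.irfft(X, N)         # IRDFT recovers the time-domain signal
assert np.allclose(x_back, x)
```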
[0108] The Fourier transform module 450 may generate activation vectors from input signals of Fourier transform operations. The input signal of a Fourier transform operation may be a sequence of discrete data elements. The Fourier transform module 450 may convert the input sequence into a 2D matrix. For instance, the Fourier transform module 450 may store segments of the input sequence in different storage elements. Each segment may be used and later processed by the DNN accelerator 302 as a context or operand. The segments are referred to as activation vectors and may be stored and processed by the DNN accelerator 302 in the same or similar way as activation vectors in convolutions. In some embodiments, an activation vector of the Fourier transform operation may be a row or column of the 2D matrix that represents the input signal.
[0109] In some embodiments (e.g., embodiments where the Fourier transform is STFT), the Fourier transform module 450 may extract frames from the input sequence by sliding a window over the input sequence. The frames may be used as activation vectors. In the process of extracting the frames, the Fourier transform module 450 may perform padding to add extra elements into the input sequence.
[0110] The Fourier transform module 450 may also generate transformation matrices of Fourier transform operations. The Fourier transform module 450 may generate a transformation matrix of a Fourier transform operation in advance, i.e., before the Fourier transform operation is executed or even before the execution of the DNN including the Fourier transform operation is started. The transformation matrix may also be referred to as a twiddle factor matrix. A twiddle factor of an FFT algorithm may be any of the trigonometric constant coefficients that are multiplied by the data in the course of the algorithm. For an FFT operation having an N × N input tensor, the Fourier transform module 450 may generate an N × N transformation matrix, such as the transformation tensor 500 in FIG. 5.
[0111] The Fourier transform module 450 may divide the transformation matrix into weight vectors that may be stored and processed by the DNN accelerator 302 in the same or similar way as weight vectors in convolutions. A weight vector may be a column or row of the transformation matrix. In some embodiments (e.g., embodiments where the transformation matrix has complex elements), the Fourier transform module 450 may divide a single column or row of the transformation matrix into two weight vectors: one weight vector including the real components of the complex elements in the row or column, and the other weight vector including the imaginary components of the complex elements in the row or column.
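Splitting a complex transformation matrix into real and imaginary weight vectors can be sketched as follows (NumPy assumed, names illustrative); for a real activation vector, multiplying by the real vectors yields the real components of the output, and multiplying by the imaginary vectors yields the imaginary components:

```python
import numpy as np

N = 4
W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

# Each column of the transformation matrix becomes two real weight vectors.
real_vectors = [W[:, j].real for j in range(N)]
imag_vectors = [W[:, j].imag for j in range(N)]

a = np.random.randn(N)                      # real activation vector
X = np.fft.fft(a)                           # reference output
assert np.allclose([a @ r for r in real_vectors], X.real)
assert np.allclose([a @ m for m in imag_vectors], X.imag)
```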
[0112] In some embodiments, the Fourier transform module 450 may map a two-dimensional DFT (2D-DFT) operation to a matrix multiplication operation. A 2D-DFT operation may have a 2D input tensor, which is also referred to as an input matrix, and a 2D transformation tensor, which is also referred to as a transformation matrix. The Fourier transform module 450 may convert the 2D-DFT operation into two sequences of one-dimensional DFT (1D-DFT) operations. The first sequence may include 1D-DFT operations over all the rows in the input matrix. Each 1D-DFT operation in the first sequence may be a matrix multiplication operation on the transformation matrix and a different row in the
input matrix. The second sequence may include 1D-DFT operations over all the columns in the input matrix. Each 1D-DFT operation in the second sequence may be a matrix multiplication operation on the transformation matrix and a different column in the input matrix.
[0113] In some embodiments, the input matrix may be transposed after the first sequence is done so that each row in the input matrix becomes a column in the transposed input matrix. The second sequence may be performed on the transposed input matrix in the same way that the first sequence was performed. In an embodiment, the second sequence may be performed after the first sequence. In another embodiment, the second sequence may be performed before the first sequence. In yet another embodiment, the two sequences may be performed simultaneously, e.g., by different sparsity cells in the DNN accelerator. The two sequences of matrix multiplication operations may be mapped to MAC units in the DNN accelerator. The outputs of the MAC units may constitute the output signal of the Fourier transform operation. More details regarding mapping Fourier transform operations to MAC units are described below in conjunction with FIGS. 8 and 9.
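A sketch of the row/transpose/row decomposition described above, assuming NumPy and a square input matrix, and checked against the reference 2D FFT:

```python
import numpy as np

def dft_2d(x):
    N = x.shape[0]
    W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
    rows = x @ W          # first sequence: 1D-DFT over every row
    cols = rows.T @ W     # transpose, then 1D-DFT over the former columns
    return cols.T

x = np.random.randn(8, 8)
assert np.allclose(dft_2d(x), np.fft.fft2(x))
```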
[0114] In some embodiments, a DFT operation is associated with an IDFT operation. For instance, the DNN that includes the DFT operation also includes the IDFT operation. The IDFT operation may be arranged after the DFT operation in the DNN. The Fourier transform module 450 may also generate a transformation matrix for the IDFT operation in the DNN. In embodiments where a DNN includes a RDFT operation, the DNN may also include an IRDFT operation. The transformation matrix of the IRDFT operation may be a complex conjugate of the transformation matrix of the RDFT operation. The transformation matrix of the IRDFT operation may be scaled with the Nth dimension value. As the input sequence of the RDFT operation may be symmetric, the Fourier transform module 450 may configure the DNN accelerator to compute and store a part of the output sequence as opposed to the entire output sequence. For instance, the Fourier transform module 450 may configure the DNN accelerator to compute and store N/2 + 1 data points out of all the N data points in the output sequence. This can enable the DNN accelerator to save a significant amount of power and memory. For the remaining data points, the loading of the corresponding weights may be avoided.
[0115] The datastore 460 stores data received, generated, used, or otherwise associated with the DNN module 400. For example, the datastore 460 stores the datasets used by the
training module 420 and validating module 440. The datastore 460 may also store data generated by the training module 420 and validating module 440, such as the hyperparameters for training DNNs, internal parameters of trained DNNs (e.g., weights, etc.), data for sparsity acceleration (e.g., sparsity bitmap, etc.), and so on. The datastore 460 may store transformation matrices generated by the Fourier transform module 450. In the embodiment of FIG. 4, the datastore 460 is a component of the DNN module 400. In other embodiments, the datastore 460 may be external to the DNN module 400 and communicate with the DNN module 400 through a network.
[0116] FIG. 5 illustrates an example transformation tensor 500 of a Fourier transform operation, in accordance with various embodiments. The Fourier transform operation may be a DFT operation, IDFT operation, RDFT operation, IRDFT operation, and so on. The transformation tensor 500 may be computed by the Fourier transform module 450 in FIG. 4. For the purpose of illustration, the transformation tensor 500 is a 2D tensor. In other embodiments, the transformation tensor 500 may have a different number of dimensions.
[0117] As shown in FIG. 5, the transformation tensor 500 has a spatial size of N × N, i.e., the transformation tensor 500 has N rows and N columns. In the embodiments of FIG. 5, N is an integer that is greater than 5. In other embodiments, N may have other values. A data point in the transformation tensor 500 may be a twiddle factor of the Fourier transform operation. The data points in the transformation tensor 500 may be referred to as weights. In the process of executing the Fourier transform operation, the transformation tensor 500 is to be processed by a DNN accelerator in the same or similar way that a weight tensor of a convolution is processed by the DNN accelerator.
[0118] FIG. 6 illustrates an example sparse convolution cell 600, in accordance with various embodiments. The sparse convolution cell 600 may be in a sparse cell array, e.g., the sparse cell array 370 in FIG. 3. The sparse convolution cell 600 includes 16 MAC units 610 (individually referred to as "MAC unit 610") arranged in four rows and four columns, 16 weight register files 620 (individually referred to as "weight register file 620"), 16 activation register files 630 (individually referred to as "activation register file 630"), four row buffers 640 (individually referred to as "row buffer 640"), and sparsity modules 660 (individually referred to as "sparsity module 660"). In other embodiments, the sparse convolution cell 600 may include fewer, more, or different components. For example, the sparse convolution cell 600 may include a different number of MAC units 610, weight register files 620,
activation register files 630, row buffers 640, or sparsity modules 660. As another example, the sparse convolution cell 600 may include column buffers in lieu of or in addition to the row buffers 640.
[0119] The MAC units 610 are configured to perform MAC operations. Each MAC unit 610 may include one or more multipliers and one or more adders. A multiplier may multiply an activation with a weight at a time to compute a product. In some embodiments (e.g., embodiments where the MAC unit 610 includes multiple multipliers), the multipliers may operate simultaneously to process multiple activation-weight pairs and compute multiple products in one cycle. An adder may accumulate products computed by the multipliers. Even though not shown in FIG. 6, the sparse convolution cell may include an adder tree including a plurality of adder tiers. The first tier may receive outputs of a plurality of MAC units 610. The number of adders in the first tier may be half of the number of the MAC units 610, and each adder may accumulate the outputs of two MAC units 610. The second tier may receive outputs of adders in the first tier. The number of adders in the second tier may be half of the number of adders in the first tier, and each adder in the second tier may accumulate the outputs of two adders in the first tier. The adder tree may include one or more other tiers. The last tier may include a single adder that accumulates outputs of adders in the second last tier to compute a partial sum of the sparse convolution cell 600.
[0120] The weight register files 620 store weights to be processed in MAC operations. In the embodiments of FIG. 6, four weight register files 620 are grouped into a storage set that stores data to be used by a column of MAC units 610. There are four storage sets corresponding to the four columns of MAC units 610. In some embodiments, a weight register file 620 may correspond to a MAC unit 610 and store data to be processed by the MAC unit. In some embodiments, all the 16 weight register files 620 constitute a weight storage unit, which may be an example of the weight storage unit 545 in FIG. 5.
[0121] The activation register files 630 store activations to be processed in MAC operations. In the embodiments of FIG. 6, four activation register files 630 are grouped into a storage set that stores data to be used by a row of MAC units 610. There are four storage sets corresponding to the four rows of MAC units 610. In some embodiments, an activation register file 630 may correspond to a MAC unit 610 and store data to be processed by the MAC unit. In some embodiments, all the 16 activation register files 630 constitute an
activation storage unit. The row buffers 640 store outputs of the MAC units 610. Each row buffer 640 may drain outputs of a single row of MAC units 610.
[0122] The sparsity module 660 facilitates dynamic sparsity-based acceleration in the sparse convolution cell 600. In the embodiments of FIG. 6, each sparsity module 660 includes a sparsity tensor storage unit 665 and a control logic 667. The sparsity tensor storage unit 665 stores combined sparsity tensors. A combined sparsity tensor stored in the sparsity tensor storage unit 665 may correspond to an activation tensor and a weight tensor. A nonzero element in the combined sparsity tensor may correspond to a nonzero activation-weight pair that includes a nonzero activation and a nonzero weight. The position of the nonzero activation in the activation tensor may match the position of the nonzero weight in the weight tensor. The product of the nonzero activation and nonzero weight would be nonzero.
[0123] The control logic 667 may control transmission of activations and weights stored in the weight register files 620 and the activation register files 630 to the MAC units 610 based on sparsity tensors. For instance, the control logic 667 may select a subset of the weights stored in the weight register files 620 and select a subset of activations stored in the activation register files 630 based on a combined sparsity tensor. The selected weights and activations constitute nonzero activation-weight pairs. The control logic 667 may transmit the selected weights and activations to the MAC units 610 for performing MAC operations. The other weights stored in the weight register files 620 and the other activations stored in the activation register files 630 are skipped from computation. In the embodiments of FIG. 6, each sparsity module 660 controls sparsity acceleration in a respective MAC unit 610. As the sparsity acceleration may be based on both weight sparsity and activation sparsity, 16 sparsity modules 660 are used to accelerate computations in the 16 MAC units 610.
[0124] As shown in FIG. 6, the sparse convolution cell 600 is associated with multiplexers (MUXs) 603, 604, 605, and 606. In other embodiments, the sparse convolution cell 600 may be associated with a different number of MUXs or other devices. The MUX 603 facilitates loading weights, e.g., from the local memory 340, into the weight register files 620. An example of the MUX 603 may be the MUX 530 in FIG. 5. The MUX 604 facilitates loading activations, e.g., from the local memory 340, into the activation register files 630. An example of the MUX 604 may be the MUX 540 in FIG. 5. The MUX 605 facilitates loading sparsity tensors into the sparsity tensor storage unit 665. An example of the MUX 605 may
be the MUX 550 in FIG. 5. The MUX 606 may be a drain MUX that can facilitate draining outputs of the MAC units 610, e.g., to the local memory 340.
[0125] In some embodiments, the sparse convolution cell 600 may also execute matrix multiplications converted from Fourier transform operations. For an example Fourier transform operation, the MAC units 610 may perform MAC operations in the two sequences of matrix multiplications converted from the Fourier transform operation. The weight register files 620 may be used to store data points in the transformation tensor of the Fourier transform operation. The activation register files 630 may be used to store data points in the input tensor of the Fourier transform operation. The row buffers 640 may store data points in the output tensor of the Fourier transform operation.
[0126] FIG. 7 illustrates a sparse cell array 700, in accordance with various embodiments. The sparse cell array 700 may be an example of the sparse cell array 370 in FIG. 3. In FIG. 7, the sparse cell array 700 includes sparse convolution cells 710 (individually referred to as "sparse convolution cell 710") arranged in four columns and four rows, an activation memory 720, and a weight memory 730. The sparse cell array 700 may also be referred to as a data processing unit. In other embodiments, the sparse cell array 700 may include fewer, more, or different components. For instance, the sparse cell array 700 may include a different number of columns, rows, or sparse convolution cells 710.
[0127] Each sparse convolution cell 710 may perform sparsity accelerated MAC operations. The sparse convolution cells 710 may facilitate dynamic sparsity mode. For instance, the sparsity modes of the sparse convolution cells 710 may be dynamically changed between a combined sparsity mode, an activation sparsity mode, a weight sparsity mode, and a dense mode. An embodiment of a sparse convolution cell 710 may be the sparse convolution cell 600 in FIG. 6. The activation memory 720 stores activations, such as activations in input tensors of deep learning operations. Activations may be loaded from the activation memory 720 to sparse convolution cells 710. The weight memory 730 stores weights, such as weights in filters of deep learning operations. Weights may be loaded from the weight memory 730 to sparse convolution cells 710. The activation memory 720 or weight memory 730 may be a buffer. In other embodiments, the sparse cell array 700 may include a dense data memory and a sparse data memory in lieu of the activation memory 720 and weight memory 730. The dense data memory may store dense tensors, e.g., dense tensors generated by the load module 360. The sparse data memory may store sparse tensors.
[0128] The sparse cell array 700 may also execute matrix multiplications in Fourier transform operations. The activation memory 720 may be used to store input tensors of the Fourier transform operations. The weight memory 730 may be used to store transformation matrices of the Fourier transform operations.
Mapping Fourier Transform Operations to Sparse Cell Array
[0129] FIG. 8 illustrates mapping a DFT operation to a sparse cell array 800, in accordance with various embodiments. The sparse cell array 800 may be an example of the sparse cell array 370 in FIG. 3. The sparse cell array 800 may also be referred to as a data processing unit. For the purpose of illustration, the sparse cell array 800 in FIG. 8 includes 256 MAC units that are arranged in 16 rows and 16 columns. The DFT operation has an input tensor having a spatial size of 16x16 and a transformation tensor having a spatial size of 16x16. In other embodiments, the sparse cell array 800 may include a different number of MAC units or have a different shape. Also, the input tensor or transformation tensor may have a different shape or size.
[0130] In the embodiments of FIG. 8, the input tensor is divided into activation vectors 810A-810P (collectively referred to as "activation vectors 810" or "activation vector 810"), and each activation vector 810 includes 16 activations. An activation vector 810 may be a row in the input tensor and may be processed as an activation operand. The transformation tensor is divided into weight vectors 820A-820P (collectively referred to as "weight vectors 820" or "weight vector 820"), and each weight vector 820 includes 16 weights. A weight vector 820 may be a column in the transformation matrix and may be processed as a weight operand.
[0131] In some embodiments (e.g., embodiments where the DFT operation is a 2D-DFT operation that can be converted to two sequential 1D-DFT operations, each of which includes a sequence of vector-matrix multiplications), the 16 MAC units in the same row in the sparse cell array 800 may execute a single vector-matrix multiplication, i.e., a multiplication of an activation vector 810 with the entire transformation matrix. All the 256 MAC units may execute all the vector-matrix multiplications in the first 1D-DFT operation. After the first 1D-DFT operation is finished, the sparse cell array 800 may perform the second 1D-DFT operation. For instance, the input tensor may be transposed so that each activation vector 810 may become a column in the input tensor. The sparse cell array 800
may execute the second 1D-DFT operation in the same way that it executed the first 1D-DFT operation.
[0132] FIG. 9 illustrates mapping a RDFT operation to a sparse cell array 900, in accordance with various embodiments. The sparse cell array 900 may be an example of the sparse cell array 370 in FIG. 3. The sparse cell array 900 may also be referred to as a data processing unit. For the purpose of illustration, the sparse cell array 900 in FIG. 9 includes 256 MAC units that are arranged in 16 rows and 16 columns. The RDFT operation has an input tensor having a spatial size of 4x4 and a transformation tensor having a spatial size of 4x4. In other embodiments, the sparse cell array 900 may include a different number of MAC units or have a different shape. Also, the input tensor or transformation tensor may have a different shape or size.
[0133] In the embodiments of FIG. 9, the input tensor has real numbers and is divided into four activation vectors 910A-910D (collectively referred to as "activation vectors 910" or "activation vector 910"), and each activation vector 910 includes 4 activations. An activation vector 910 may be a row in the input tensor and may be processed as an activation operand. The transformation tensor has complex numbers and is divided into eight weight vectors 920A-920H (collectively referred to as "weight vectors 920" or "weight vector 920"), and each weight vector 920 includes 4 weights. A weight vector 920 may be processed as a weight operand. Four weight vectors 920A-920D have real elements, and the other four weight vectors 920E-920H have imaginary elements. The weight vectors 920A and 920E may constitute the first column of the transformation matrix. For instance, the weight vector 920A may include the real components of the complex numbers in the first column of the transformation matrix, and the weight vector 920E may include the imaginary components of the complex numbers in that column. Similarly, the weight vectors 920B and 920F may constitute the second column of the transformation matrix, with the weight vector 920B including the real components of the complex numbers in the second column and the weight vector 920F including the imaginary components of the complex numbers in the second column. The weight vectors 920C and 920G may constitute the third column of the transformation matrix. The weight vectors 920D and 920H may constitute the fourth column of the transformation matrix.
[0134] The four activation vectors 910 are loaded into four rows of MAC units, respectively. The eight weight vectors 920 are loaded into eight columns of MAC units, respectively. The
32 MAC units in the four rows and eight columns may execute the RDFT operation. The other MAC units in the sparse cell array 900 may be idle. Even though FIG. 9 shows the mapping of a RDFT operation to the sparse cell array 900, IRDFT operations may be mapped to the sparse cell array 900 in the same or similar way.
[0135] FIG. 10 illustrates mapping a DFT of complex numbers to a sparse cell array 1000, in accordance with various embodiments. The sparse cell array 1000 may be an example of the sparse cell array 370 in FIG. 3. The sparse cell array 1000 may also be referred to as a data processing unit. For the purpose of illustration, the sparse cell array 1000 in FIG. 10 includes 256 MAC units that are arranged in 16 rows and 16 columns. In other embodiments, the sparse cell array 1000 may include a different number of MAC units or have a different shape.
[0136] In the embodiments of FIG. 10, both the input tensor and transformation tensor include complex numbers. An activation in the input tensor may be denoted as a + ib, where a represents the real component and b represents the imaginary component. A weight in the transformation tensor may be denoted as c + id, where c represents the real component and d represents the imaginary component. Multiplying the activation and the weight results in an output element denoted as ac − bd + i(bc + ad), where ac − bd is the real component and bc + ad is the imaginary component.
[0137] As the sparse cell array performs an elementwise multiplication of the different tensor elements in the process of executing the DFT operation, the execution of the DFT operation may be divided into 4 separate workloads of the sparse cell array 1000. For example, the activation and weight vectors may be loaded into the sparse cell array as 4 pairs: i) a, c; ii) b, d; iii) a, d; and iv) b, c. Each workload may include matrix multiplications on a different pair. For the pair (b, d), a negative scale may be applied by a post processing unit array associated with the sparse cell array to account for the negative sign. The partial sums may be added up separately.
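A minimal sketch of this four-workload decomposition, with NumPy matmuls standing in for the MAC-array workloads and the negative scale folded into the (b, d) partial sums (the 4x4 tensor shapes are assumed for illustration):

```python
import numpy as np

# Hypothetical 4x4 operands: a + ib are activations, c + id are weights.
rng = np.random.default_rng(0)
a, b = rng.random((4, 4)), rng.random((4, 4))
c, d = rng.random((4, 4)), rng.random((4, 4))

# Four separate real workloads; the minus sign on (b, d) models the negative
# scale applied in post-processing.
real_part = a @ c - b @ d     # workloads i) and ii)
imag_part = b @ c + a @ d     # workloads iv) and iii)

ref = (a + 1j * b) @ (c + 1j * d)
assert np.allclose(real_part + 1j * imag_part, ref)
```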
[0138] In some embodiments, the (a, c) multiplication may be performed before the (b, d) multiplication. As shown in FIG. 10, (a, c) are respectively loaded into a row of MAC units and a column of MAC units in the same loading cycle, and (b, d) are respectively loaded into the row of MAC units and the column of MAC units in a subsequent loading cycle. In some embodiments, the (b, d) multiplication may be performed with negative weights. These weights can be set up by the DNN module 301 with minimal or even no hardware overhead.
The drain module 380 may output the real and imaginary results as different outputs so that they can be properly interpreted by the load module 360 during the execution of the subsequent layers.
[0139] In some embodiments, the (a, c) multiplication may be fused with the (a, d) multiplication: a may be loaded to the sparse cell array 1000 once as activations (e.g., as an activation vector), and c and d may be loaded sequentially as two separate sets of weights (e.g., as two separate weight vectors) to be multiplied sequentially with the activations. Similarly, the (b, c) multiplication may be fused with the (b, d) multiplication: b may be loaded to the sparse cell array 1000 once as activations (e.g., as an activation vector), and c and d may be loaded sequentially as two separate sets of weights (e.g., as two separate weight vectors) to be multiplied sequentially with the activations. That way, the activations may be reused in two sets of multiplications to save power and memory bandwidth. The outputs corresponding to the c weight set may be later consumed as real components in one or more subsequent layers. The outputs corresponding to the d weight set may be later consumed as imaginary components in one or more subsequent layers.
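The fusion can be sketched as follows, again with assumed 4x4 shapes; the sequential loading of the c and d weight sets is modeled here by concatenating them, so each activation set is multiplied once against both.

```python
import numpy as np

# Sketch of activation reuse (assumed shapes): a and b are each loaded once;
# c and d act as two weight sets consumed sequentially.
rng = np.random.default_rng(0)
a, b = rng.random((4, 4)), rng.random((4, 4))
c, d = rng.random((4, 4)), rng.random((4, 4))

cd = np.concatenate([c, d], axis=1)   # c then d as sequential weight vectors
out_a = a @ cd                        # a reused for (a, c) and (a, d)
out_b = b @ cd                        # b reused for (b, c) and (b, d)
ac, ad = out_a[:, :4], out_a[:, 4:]
bc, bd = out_b[:, :4], out_b[:, 4:]

ref = (a + 1j * b) @ (c + 1j * d)
assert np.allclose((ac - bd) + 1j * (bc + ad), ref)
```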
[0140] FIG. 11 illustrates an example sliding window pattern of a STFT operation, in accordance with various embodiments. STFT is a type of Fourier transform that has real input signals. In a discrete case, the input signal to be transformed may be broken into frames (aka chunks) based on a window. The frames may all have the size of the window. Each frame may be Fourier transformed, and the complex result may be added to a matrix, which may encode the magnitude and phase for each point in time and frequency. A STFT operation may be denoted as:

$X(m, \omega) = \sum_{n=-\infty}^{\infty} x[n] \, w[n-m] \, e^{-i\omega n}$

where x[n] represents the input signal; and w[n] represents the window. In some embodiments, m is discrete, while ω is continuous. In other embodiments, both m and ω are discrete and quantized.
[0141] Frames may be extracted from the input sequence by sliding a window. A STFT operation in a DNN may have a window length (also referred to as "frame length") and a frame step (also referred to as "stride"). The window length may indicate the number of
data elements in the window, i.e., the number of data elements in each frame. The frame step may indicate the number of data elements traversed per slide. The STFT operation may be converted to a sequence of matrix multiplication operations. Each matrix multiplication operation may be performed on a corresponding frame. In some embodiments, STFT operations may be represented as 1D convolutions with the frame step as the stride and the window length as the number of input channels. In an example of a STFT operation with 16000 data elements in the input sequence, a window length of 512, and a frame step of 128, the STFT operation may be converted to a matrix multiplication operation on a 257x512 weight tensor and a 512x1247 input tensor.
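The conversion can be sketched with the 16000/512/128 example above. The window function and NumPy's sliding_window_view are assumptions standing in for the frame extraction, and no padding is applied here, so the resulting frame count differs from the padded 1247-column layout in the example.

```python
import numpy as np

# Sketch: frame extraction plus per-frame DFT as one matrix multiplication.
signal = np.random.rand(16000)
window_length, frame_step = 512, 128

frames = np.lib.stride_tricks.sliding_window_view(signal, window_length)[::frame_step]
win = np.hanning(window_length)               # w[n]; window choice is an assumption

k = np.arange(window_length // 2 + 1)         # 257 non-redundant bins for real input
n = np.arange(window_length)
W = np.exp(-2j * np.pi * np.outer(k, n) / window_length)  # 257x512 weight tensor

stft = (frames * win) @ W.T                   # one row of output per frame
print(stft.shape)                             # (num_frames, 257)
```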
[0142] For the purpose of illustration, the STFT operation in the embodiment of FIG. 11 has a window length of 8 and a frame step of 1. FIG. 11 shows an input sequence 1110 that includes 14 data elements and a window 1120 that includes eight data elements. Each data element is represented by a box in FIG. 11. Frames 1130A-1130G (collectively referred to as "frames 1130" or "frame 1130") are extracted from the input sequence 1110 using the window 1120. The frames 1130 are represented by boxes filled with a dotted pattern in FIG. 11. The frame 1130A is generated from the first slide of the window 1120. The frame 1130B is generated from the second slide of the window 1120. This continues until the frame 1130G is generated. One data element is traversed per slide. In other embodiments, the input length (i.e., the length of the input sequence), window length, or frame step may have different values.
[0143] The frames 1130 may be generated by the DNN module 301. In some embodiments, the DNN module 301 stores the frames 1130 as separate activation vectors, e.g., in the memory 310 or local memory 340. The activation vectors may be used as contexts or operands. The load module 360 may load the frames 1130 into the sparse cell array 370 for the sparse cell array 370 to perform the matrix multiplications. In an example, a storage element may be used to store an activation vector having a spatial size of 1 x 1 x N, where N is an integer. N may equal the window length. The storage element may have a storage element pointer that stores the location of the storage element in the memory, such as the memory 310 or the local memory 340. Using the storage element pointers, 1D input sequences can be stored as 2D matrices without any data movement operations. For instance, an input sequence having 16000 elements may be stored as a 512x1247 2D matrix. When a storage element is represented with 128 elements, the input sequence can be represented with 1250 storage element pointers. More details regarding mapping frames to the sparse cell array 370 are described below in conjunction with FIG. 13.
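The pointer-based layout can be sketched with NumPy strides as a software analogue; the 128-element storage size follows the example above, and a non-overlapping layout is assumed for simplicity.

```python
import numpy as np

# Sketch: expose a 1D sequence as a 2D matrix without moving any data,
# analogous to storage element pointers referencing locations in memory.
seq = np.arange(16000, dtype=np.float32)
elems_per_storage = 128

view = np.lib.stride_tricks.as_strided(
    seq,
    shape=(seq.size // elems_per_storage, elems_per_storage),  # 125 x 128 view
    strides=(elems_per_storage * seq.itemsize, seq.itemsize),
)
assert np.shares_memory(view, seq)   # zero-copy: the view aliases the 1D buffer
```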
[0144] FIG. 12 illustrates another example sliding window pattern of a STFT operation, in accordance with various embodiments. The sliding window pattern in FIG. 12 requires padding, i.e., adding new data elements into the input signal. For the purpose of illustration, the STFT operation in the embodiment of FIG. 12 has an input length of 10, a window length of 8, and a frame step of 1.
[0145] As shown in FIG. 12, an input sequence 1210 includes 10 data elements and a window 1220 includes eight data elements. Each data element is represented by a box in FIG. 12. Frames 1230A-1230E (collectively referred to as "frames 1230" or "frame 1230") are extracted from the input sequence 1210 using the window 1220. The frames 1230 are represented by boxes filled with a dotted pattern in FIG. 12. Even though FIG. 12 shows five frames 1230, a different number of frames may be extracted from the input sequence 1210. In some embodiments, the total number of frames extracted from the input sequence may equal the input length divided by the frame step. For the input length of 10 and frame step of 1, the total number of frames may be 10.
[0146] The frame 1230A is generated from the first slide of the window 1220. The frame 1230B is generated from the second slide of the window 1220. The frame 1230C is generated from the third slide of the window 1220. One data element is traversed per slide. To generate the frame 1230D, a new data element is added to the end of the input sequence 1210 so that the frame 1230D can meet the window length. Also, to generate the frame 1230E, another new data element is further added so that the frame 1230E can meet the window length. In some embodiments, each new element is a zero. In other embodiments, the new elements may have other values. Also, further new elements may be added to generate more frames.
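A short sketch of this padding rule, assuming zero padding and the stated rule that the frame count equals the input length divided by the frame step:

```python
import numpy as np

# Sketch: compute how many zero elements must be appended so that every
# frame start yields a full window (values follow the FIG. 12 example).
input_length, window_length, frame_step = 10, 8, 1

num_frames = -(-input_length // frame_step)           # ceiling division; 10 frames
padded_len = (num_frames - 1) * frame_step + window_length
padding = padded_len - input_length                   # new zero elements appended

seq = np.arange(input_length, dtype=np.float32)
padded = np.pad(seq, (0, padding))
frames = np.lib.stride_tricks.sliding_window_view(padded, window_length)[::frame_step]
print(frames.shape)                                   # (10, 8)
```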
[0147] The frames 1230 may be generated by the DNN module 301. In some embodiments, the DNN module 301 stores the frames 1230 as separate contexts or separate operands, e.g., in the memory 310 or local memory 340. The load module 360 may load the frames 1230 into the sparse cell array 370 for the sparse cell array 370 to perform the matrix multiplications. More details regarding mapping frames to the sparse cell array 370 are described below in conjunction with FIG. 13.
[0148] FIG. 13 illustrates mapping frames 1310 of a STFT operation to a sparse cell array 1300, in accordance with various embodiments. The sparse cell array 1300 may be an example of the sparse cell array 370 in FIG. 3. For the purpose of illustration, the STFT operation in the embodiment of FIG. 13 has an input length of 7, a window length of 4, and a frame step of 2. STFT operations with different input lengths, window lengths, or frame steps may be mapped to the sparse cell array 1300 as well.
[0149] FIG. 13 shows four frames 1310, individually referred to as "frame 1310." Each frame 1310 includes 4 data elements. The four frames 1310 are loaded to four rows of MAC units, respectively. In some embodiments, the four frames 1310 are stored separately in the local memory 340 and loaded to activation register files in the sparse cell array 1300 as separate contexts or separate operands.
[0150] Four weight vectors 1320 are loaded to four columns of MAC units, respectively, in the sparse cell array 1300. As shown in FIG. 13, the weights are shifted to map the sliding window pattern of the frames 1310. Sparsity tensors may be used to shift the weights. The sparsity tensors can be processed by the sparsity modules (e.g., the sparsity modules 660) in the sparse cell array 1300.
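The shifted-weight mapping can be sketched as a banded weight matrix, with the zero entries playing the role of the sparsity tensors; the weight values below are hypothetical, and the 7/4/2 parameters follow the FIG. 13 example.

```python
import numpy as np

# Sketch: each column holds the same 4 weights shifted down by the frame step
# of 2; zero entries model the sparsity masks that shift the weights.
input_length, window_length, frame_step, num_frames = 7, 4, 2, 4

x = np.arange(input_length, dtype=np.float32)
w = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)   # one weight vector 1320

pad = (num_frames - 1) * frame_step + window_length - input_length
padded = np.pad(x, (0, pad))
banded = np.zeros((padded.size, num_frames), dtype=np.float32)
for col in range(num_frames):
    start = col * frame_step
    banded[start:start + window_length, col] = w

frames = np.lib.stride_tricks.sliding_window_view(padded, window_length)[::frame_step]
assert np.allclose(padded @ banded, frames @ w)   # matches per-frame dot products
```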
Method of Executing Fourier Transform Operations
[0151] FIG. 14 is a flowchart showing a method 1400 of executing a Fourier transform operation, in accordance with various embodiments. The method 1400 may be performed by the DNN accelerator 302 in FIG. 3. Although the method 1400 is described with reference to the flowchart illustrated in FIG. 14, many other methods for executing Fourier transform operations may alternatively be used. For example, the order of execution of the steps in FIG. 14 may be changed. As another example, some of the steps may be changed, eliminated, or combined.
[0152] The DNN accelerator 302 receives 1410 an input tensor that represents an input signal of a DFT operation. In some embodiments, the input tensor is mapped onto a data processing unit as an activation tensor that comprises activations arranged in one or more rows and one or more columns. In some embodiments, the DNN accelerator 302 receives the input tensor from a plurality of storage elements. Each of the plurality of storage elements corresponds to a different row in the input tensor and stores the activations in that row. In some embodiments, the input tensor is generated from the input
signal. A total number of activations in the input tensor is greater than a total number of data elements in the input signal.
[0153] The DNN accelerator 302 converts 1420 the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation. In some embodiments, the transformation matrix is mapped onto the data processing unit as a weight tensor comprising weights. In some embodiments, the weight tensor is determined based on one or more twiddle factors of the DFT operation. In some embodiments, some or all of the elements in the weight tensor are twiddle factors of the DFT operation. In some embodiments, the weight tensor is determined by the DNN module 301 offline.
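As an illustration of the weight tensor's relationship to twiddle factors, here is a minimal sketch; the offline construction in the DNN module 301 is modeled by a plain Python function, and the size 8 is an assumption.

```python
import numpy as np

def dft_weight_tensor(N: int) -> np.ndarray:
    """Each entry is a twiddle factor W_N^{kn} = exp(-2*pi*i*k*n/N)."""
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    return np.exp(-2j * np.pi * k * n / N)

W = dft_weight_tensor(8)
x = np.random.rand(8)
assert np.allclose(W @ x, np.fft.fft(x))   # matmul with W computes the DFT
```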
[0154] The DNN accelerator 302 performs 1430 MAC operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor. In some embodiments, the data processing unit performs the MAC operations by performing a first sequence of MAC operations and a second sequence of MAC operations. An MAC operation in the first sequence is performed on the weight tensor and a row in the input tensor. In some embodiments, the first sequence of MAC operations is performed by MAC units arranged in rows and columns. The DNN accelerator 302 provides activations in the row in the input tensor to a row of MAC units. The DNN accelerator 302 divides the weight tensor into weight vectors. The DNN accelerator 302 provides the weight vectors to different columns of MAC units.
[0155] An MAC operation in the second sequence is performed on the weight tensor and a column in the input tensor. In some embodiments, the second sequence of MAC operations is performed by the MAC units. In some embodiments, the DNN accelerator 302 transposes the input tensor to generate a transposed tensor. After transposing the input tensor, the DNN accelerator 302 performs the second sequence of MAC operations on the transposed tensor and the weight tensor. Activations in the column in the input tensor are arranged as a row in the transposed tensor. The MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
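A minimal sketch of the two MAC sequences with the intervening transpose, with NumPy standing in for the data processing unit and an assumed 8x8 size:

```python
import numpy as np

N = 8
W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
x = np.random.rand(N, N)

rows_done = x @ W.T          # first sequence: transform every row
transposed = rows_done.T     # columns become rows
full = transposed @ W.T      # second sequence: transform the new rows

assert np.allclose(full.T, np.fft.fft2(x))   # matches a 2D DFT
```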
[0156] In some embodiments, the DNN accelerator 302 divides the weight tensor into weight vectors by dividing a column in the weight tensor into a first weight vector and a second weight vector. A data element in the first weight vector represents a real component of a data element in the column in the weight tensor. A data element in the second weight
vector represents an imaginary component of the data element in the column in the weight tensor.
[0157] In some embodiments, a total number of data elements in the output tensor is smaller than a total number of data elements in the Fourier transform of the input tensor. In some embodiments, the total number of data elements in the output tensor is equal to one plus half of the total number of data elements in the Fourier transform of the input tensor. In some embodiments, the DNN accelerator 302 performs a third sequence of MAC operations to compute an output of an inverse DFT operation.
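The output-size property above can be checked directly with a real-input FFT routine; NumPy's rfft is used here only as a reference, not as the disclosed mechanism.

```python
import numpy as np

# For a real input of length N, the non-redundant output has N/2 + 1 elements.
N = 8
x = np.random.rand(N)
assert np.fft.rfft(x).shape[0] == N // 2 + 1   # one plus half of N
```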
Example Computing Device
[0158] FIG. 15 is a block diagram of an example computing device 1500, in accordance with various embodiments. In some embodiments, the computing device 1500 can be used as at least part of the DNN system 300. A number of components are illustrated in FIG. 15 as included in the computing device 1500, but any one or more of these components may be omitted or duplicated, as suitable for the application. In some embodiments, some or all of the components included in the computing device 1500 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system on a chip (SoC) die. Additionally, in various embodiments, the computing device 1500 may not include one or more of the components illustrated in FIG. 15, but the computing device 1500 may include interface circuitry for coupling to the one or more components. For example, the computing device 1500 may not include a display device 1506, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 1506 may be coupled. In another set of examples, the computing device 1500 may not include an audio input device 1518 or an audio output device 1508, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 1518 or audio output device 1508 may be coupled.
[0159] The computing device 1500 may include a processing device 1502 (e.g., one or more processing devices). The processing device 1502 processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The computing device 1500 may include a memory 1504, which may itself include one or more memory devices such as volatile memory (e.g., DRAM), nonvolatile memory (e.g., read-only memory (ROM)), high bandwidth memory
(HBM), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 1504 may include memory that shares a die with the processing device 1502. In some embodiments, the memory 1504 includes one or more non-transitory computer- readable media storing instructions executable to perform operations for executing Fourier transform operations (e.g., the method 1400 described in conjunction with FIG. 14) or some operations performed by the DNN system 300. The instructions stored in the one or more non-transitory computer-readable media may be executed by the processing device 1502. [0160] In some embodiments, the computing device 1500 may include a communication chip 1512 (e.g., one or more communication chips). For example, the communication chip 1512 may be configured for managing wireless communications for the transfer of data to and from the computing device 1500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
[0161] The communication chip 1512 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005 Amendment), Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., advanced LTE project, ultramobile broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for worldwide interoperability for microwave access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 1512 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 1512 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 1512 may operate in accordance with Code-division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless
Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip 1512 may operate in accordance with other wireless protocols in other embodiments. The computing device 1500 may include an antenna 1522 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).
[0162] In some embodiments, the communication chip 1512 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., the Ethernet). As noted above, the communication chip 1512 may include multiple communication chips. For instance, a first communication chip 1512 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 1512 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 1512 may be dedicated to wireless communications, and a second communication chip 1512 may be dedicated to wired communications.
[0163] The computing device 1500 may include battery/power circuitry 1514. The battery/power circuitry 1514 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the computing device 1500 to an energy source separate from the computing device 1500 (e.g., AC line power). [0164] The computing device 1500 may include a display device 1506 (or corresponding interface circuitry, as discussed above). The display device 1506 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display, for example.
[0165] The computing device 1500 may include an audio output device 1508 (or corresponding interface circuitry, as discussed above). The audio output device 1508 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds, for example.
[0166] The computing device 1500 may include an audio input device 1518 (or corresponding interface circuitry, as discussed above). The audio input device 1518 may include any device that generates a signal representative of a sound, such as microphones,
microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).
[0167] The computing device 1500 may include a GPS device 1516 (or corresponding interface circuitry, as discussed above). The GPS device 1516 may be in communication with a satellite-based system and may receive a location of the computing device 1500, as known in the art.
[0168] The computing device 1500 may include another output device 1510 (or corresponding interface circuitry, as discussed above). Examples of the other output device 1510 may include an audio codec, a video codec, a printer, a wired or wireless transmitter for providing information to other devices, or an additional storage device.
[0169] The computing device 1500 may include another input device 1520 (or corresponding interface circuitry, as discussed above). Examples of the other input device 1520 may include an accelerometer, a gyroscope, a compass, an image capture device, a keyboard, a cursor control device such as a mouse, a stylus, a touchpad, a bar code reader, a Quick Response (QR) code reader, any sensor, or a radio frequency identification (RFID) reader.
[0170] The computing device 1500 may have any desired form factor, such as a handheld or mobile computer system (e.g., a cell phone, a smart phone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultramobile personal computer, etc.), a desktop computer system, a server or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable computer system. In some embodiments, the computing device 1500 may be any other electronic device that processes data.
Select Examples
[0171] The following paragraphs provide various examples of the embodiments disclosed herein.
[0172] Example 1 provides a method, including receiving an input tensor that represents an input signal of a discrete Fourier transform operation; converting the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation; and
performing MAC operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
[0173] Example 2 provides the method of example 1, in which the input tensor is mapped onto a data processing unit as an activation tensor including activations arranged in rows and columns, the transformation matrix is mapped onto the data processing unit as a weight tensor including weights, and the data processing unit performs the MAC operations.
[0174] Example 3 provides the method of example 2, in which the data processing unit performs the MAC operations by: performing a first sequence of MAC operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; and performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor.
[0175] Example 4 provides the method of example 3, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
[0176] Example 5 provides the method of example 3 or 4, in which the first sequence of MAC operations is performed by MAC units in the data processing unit, the MAC units are arranged in rows and columns, and performing the first sequence of MAC operations includes providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
[0177] Example 6 provides the method of example 5, in which dividing the weight tensor into the weight vectors includes dividing a column in the weight tensor into a first weight vector and a second weight vector, in which a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
[0178] Example 7 provides the method of any one of examples 3-6, further including performing a third sequence of MAC operations to compute an output of an inverse discrete Fourier transform operation.
[0179] Example 8 provides the method of any one of examples 1-7, in which a total number of data elements in the output tensor is smaller than a total number of data elements in the discrete Fourier transform of the input tensor.
[0180] Example 9 provides the method of example 8, in which the total number of data elements in the output tensor is equal to one plus half of the total number of data elements in the discrete Fourier transform of the input tensor.
[0181] Example 10 provides the method of any one of examples 1-9, in which the input tensor is generated from the input signal, and a total number of elements in the input tensor is greater than a total number of data elements in the input signal.
[0182] Example 11 provides one or more non-transitory computer-readable media storing instructions executable to perform operations, the operations including receiving an input tensor that represents an input signal of a discrete Fourier transform operation; converting the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation; and performing MAC operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
[0183] Example 12 provides the one or more non-transitory computer-readable media of example 11, in which the input tensor is mapped onto a data processing unit as an activation tensor including activations arranged in rows and columns, the transformation matrix is mapped onto the data processing unit as a weight tensor including weights, and the data processing unit performs the MAC operations.
[0184] Example 13 provides the one or more non-transitory computer-readable media of example 12, in which the data processing unit performs the MAC operations by: performing a first sequence of MAC operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; and performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor.
[0185] Example 14 provides the one or more non-transitory computer-readable media of example 13, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
[0186] Example 15 provides the one or more non-transitory computer-readable media of example 13 or 14, in which the first sequence of MAC operations is performed by MAC units in the data processing unit, the MAC units are arranged in rows and columns, and performing the first sequence of MAC operations includes providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
[0187] Example 16 provides the one or more non-transitory computer-readable media of example 15, in which dividing the weight tensor into the weight vectors includes dividing a column in the weight tensor into a first weight vector and a second weight vector, in which a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
[0188] Example 17 provides the one or more non-transitory computer-readable media of any one of examples 11-16, in which a total number of data elements in the output tensor is smaller than a total number of data elements in the discrete Fourier transform of the input tensor.
[0189] Example 18 provides an apparatus, including a computer processor for executing computer program instructions; and a non-transitory computer-readable memory storing computer program instructions executable by the computer processor to perform operations including receiving an input tensor that represents an input signal of a discrete Fourier transform operation, converting the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation, and performing MAC operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
[0190] Example 19 provides the apparatus of example 18, in which the input tensor is mapped onto a data processing unit as an activation tensor including activations arranged in rows and columns, the transformation matrix is mapped onto the data processing unit as a weight tensor including weights, and the data processing unit performs the MAC operations.
[0191] Example 20 provides the apparatus of example 19, in which the data processing unit performs the MAC operations by: performing a first sequence of MAC operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; and performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor.
Additional Select Examples
[0192] The following paragraphs provide various examples of the embodiments disclosed herein.
[0193] Example 1 provides a method, including receiving an input tensor that represents an input signal of a DFT operation, the input tensor including activations arranged in one or more rows and one or more columns; receiving a weight tensor that is determined based on one or more twiddle factors of the DFT operation; performing a first sequence of multiply- accumulate (MAC) operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor; and generating an output tensor that represents at least part of the DFT of the input tensor.
[0194] Example 2 provides the method of example 1, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
[0195] Example 3 provides the method of example 1 or 2, in which the first sequence of MAC operations is performed by MAC units arranged in rows and columns, and performing the first sequence of MAC operations includes providing activations in the row in the input
tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
[0196] Example 4 provides the method of example 3, in which the second sequence of MAC operations is performed by the MAC units.
[0197] Example 5 provides the method of example 3 or 4, in which dividing the weight tensor into the weight vectors includes dividing a column in the weight tensor into a first weight vector and a second weight vector, in which a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
[0198] Example 6 provides the method of any one of examples 1-5, further including performing a third sequence of MAC operations to compute an output of an inverse DFT operation.
[0199] Example 7 provides the method of any one of examples 1-6, in which a total number of data elements in the output tensor is smaller than a total number of data elements in the DFT of the input tensor.
[0200] Example 8 provides the method of example 7, in which the total number of data elements in the output tensor is equal to one plus half of the total number of data elements in the DFT of the input tensor.
[0201] Example 9 provides the method of any one of examples 1-8, in which receiving the input tensor includes receiving the input tensor from a plurality of storage elements, each of the plurality of storage elements corresponding to a different row in the input tensor and storing activations in the different row.
[0202] Example 10 provides the method of any one of examples 1-9, in which the input tensor is generated from the input signal, and a total number of activations in the input tensor is greater than a total number of data elements in the input signal.
[0203] Example 11 provides one or more non-transitory computer-readable media storing instructions executable to perform operations, the operations including receiving an input tensor that represents an input signal of a DFT operation, the input tensor including activations arranged in one or more rows and one or more columns; receiving a weight tensor that is determined based on one or more twiddle factors of the DFT operation; performing a first sequence of multiply-accumulate (MAC) operations, an MAC operation in
the first sequence performed on the weight tensor and a row in the input tensor; performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor; and generating an output tensor that represents at least part of the DFT of the input tensor.
[0204] Example 12 provides the one or more non-transitory computer-readable media of example 11, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
[0205] Example 13 provides the one or more non-transitory computer-readable media of example 11 or 12, in which the first sequence of MAC operations is performed by MAC units arranged in rows and columns, and performing the first sequence of MAC operations includes providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
[0206] Example 14 provides the one or more non-transitory computer-readable media of any one of examples 11-13, in which the operations further include performing a third sequence of MAC operations to compute an output of an inverse DFT operation.
[0207] Example 15 provides the one or more non-transitory computer-readable media of any one of examples 11-14, in which a total number of data elements in the output tensor is smaller than a total number of data elements in the DFT of the input tensor.
[0208] Example 16 provides the one or more non-transitory computer-readable media of any one of examples 11-15, in which receiving the input tensor includes receiving the input tensor from a plurality of storage elements, each of the plurality of storage elements corresponding to a different row in the input tensor and storing activations in the different row.
[0209] Example 17 provides the one or more non-transitory computer-readable media of any one of examples 11-16, in which the input tensor is generated from the input signal, and a total number of activations in the input tensor is greater than a total number of data elements in the input signal.
[0210] Example 18 provides an apparatus, including a computer processor for executing computer program instructions; and a non-transitory computer-readable memory storing computer program instructions executable by the computer processor to perform operations including receiving an input tensor that represents an input signal of a DFT operation, the input tensor including activations arranged in one or more rows and one or more columns; receiving a weight tensor that is determined based on one or more twiddle factors of the DFT operation; performing a first sequence of multiply-accumulate (MAC) operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor; and generating an output tensor that represents at least part of the DFT of the input tensor.
[0211] Example 19 provides the apparatus of example 18, in which performing the second sequence of MAC operations includes transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, in which activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
[0212] Example 20 provides the apparatus of example 18 or 19, in which the first sequence of MAC operations is performed by MAC units arranged in rows and columns, and performing the first sequence of MAC operations includes providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
[0213] The above description of illustrated implementations of the disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. While specific implementations of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize. These modifications may be made to the disclosure in light of the above detailed description.
Claims
1. A method, comprising: receiving an input tensor that represents an input signal of a discrete Fourier transform operation; converting the discrete Fourier transform operation into one or more two- dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation; and performing multiply-accumulate (MAC) operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
2. The method of claim 1, wherein the input tensor is mapped onto a data processing unit as an activation tensor comprising activations arranged in rows and columns, the transformation matrix is mapped onto the data processing unit as a weight tensor comprising weights, and the data processing unit performs the MAC operations.
3. The method of claim 2, wherein the data processing unit performs the MAC operations by: performing a first sequence of MAC operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; and performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor.
4. The method of claim 3, wherein performing the second sequence of MAC operations comprises: transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor, wherein activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
5. The method of claim 3, wherein the first sequence of MAC operations is performed by MAC units in the data processing unit, the MAC units are arranged in rows and columns, and performing the first sequence of MAC operations comprises: providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
6. The method of claim 5, wherein dividing the weight tensor into the weight vectors comprises: dividing a column in the weight tensor into a first weight vector and a second weight vector, wherein a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
7. The method of claim 3, further comprising: performing a third sequence of MAC operations to compute an output of an inverse discrete Fourier transform operation.
8. The method of claim 1, wherein a total number of data elements in the output tensor is smaller than a total number of data elements in the discrete Fourier transform of the input tensor.
9. The method of claim 8, wherein the total number of data elements in the output tensor is equal to one plus half of the total number of data elements in the discrete Fourier transform of the input tensor.
10. The method of claim 1, wherein the input tensor is generated from the input signal, and a total number of elements in the input tensor is greater than a total number of data elements in the input signal.
11. One or more non-transitory computer-readable media storing instructions executable to perform operations, the operations comprising: receiving an input tensor that represents an input signal of a discrete Fourier transform operation; converting the discrete Fourier transform operation into one or more two- dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation; and performing multiply-accumulate (MAC) operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
12. The one or more non-transitory computer-readable media of claim 11, wherein the input tensor is mapped onto a data processing unit as an activation tensor comprising activations arranged in rows and columns, the transformation matrix is mapped onto the data processing unit as a weight tensor comprising weights, and the data processing unit performs the MAC operations.
13. The one or more non-transitory computer-readable media of claim 12, wherein the data processing unit performs the MAC operations by: performing a first sequence of MAC operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; and performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor.
14. The one or more non-transitory computer-readable media of claim 13, wherein performing the second sequence of MAC operations comprises: transposing the input tensor to generate a transposed tensor; and after transposing the input tensor, performing the second sequence of MAC operations on the transposed tensor and the weight tensor,
wherein activations in the column in the input tensor are arranged as a row in the transposed tensor, and the MAC operation in the second sequence is performed on the weight tensor and the row in the transposed tensor.
15. The one or more non-transitory computer-readable media of claim 13, wherein the first sequence of MAC operations is performed by MAC units in the data processing unit, the MAC units are arranged in rows and columns, and performing the first sequence of MAC operations comprises: providing activations in the row in the input tensor to a row of MAC units; dividing the weight tensor into weight vectors; and providing the weight vectors to different columns of MAC units.
16. The one or more non-transitory computer-readable media of claim 15, wherein dividing the weight tensor into the weight vectors comprises: dividing a column in the weight tensor into a first weight vector and a second weight vector, wherein a data element in the first weight vector represents a real component of a data element in the column in the weight tensor, and a data element in the second weight vector represents an imaginary component of the data element in the column in the weight tensor.
17. The one or more non-transitory computer-readable media of claim 11, wherein a total number of data elements in the output tensor is smaller than a total number of data elements in the discrete Fourier transform of the input tensor.
18. An apparatus, comprising: a computer processor for executing computer program instructions; and a non-transitory computer-readable memory storing computer program instructions executable by the computer processor to perform operations comprising: receiving an input tensor that represents an input signal of a discrete
Fourier transform operation,
converting the discrete Fourier transform operation into one or more two-dimensional matrix multiplications between the input tensor and a transformation matrix of the discrete Fourier transform operation, and performing multiply-accumulate (MAC) operations on the input tensor and the transformation matrix to generate an output tensor that represents at least part of the discrete Fourier transform of the input tensor.
19. The apparatus of claim 18, wherein the input tensor is mapped onto a data processing unit as an activation tensor comprising activations arranged in rows and columns, the transformation matrix is mapped onto the data processing unit as a weight tensor comprising weights, and the data processing unit performs the MAC operations.
20. The apparatus of claim 19, wherein the data processing unit performs the MAC operations by: performing a first sequence of MAC operations, an MAC operation in the first sequence performed on the weight tensor and a row in the input tensor; and performing a second sequence of MAC operations, an MAC operation in the second sequence performed on the weight tensor and a column in the input tensor.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2024/015312 WO2025174353A1 (en) | 2024-02-12 | 2024-02-12 | Executing fourier transform operations with deep neural network accelerator |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025174353A1 true WO2025174353A1 (en) | 2025-08-21 |
Family
ID=96773396
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/015312 Pending WO2025174353A1 (en) | 2024-02-12 | 2024-02-12 | Executing fourier transform operations with deep neural network accelerator |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025174353A1 (en) |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10216703B2 (en) * | 2016-02-08 | 2019-02-26 | Spero Devices, Inc. | Analog co-processor |
| EP4261745A1 (en) * | 2022-04-14 | 2023-10-18 | Samsung Electronics Co., Ltd. | Apparatus for accelerating neural network computations |
| US11886974B1 (en) * | 2023-07-20 | 2024-01-30 | Chromatic Inc. | Neural network chip for ear-worn device |
Non-Patent Citations (2)
| Title |
|---|
| JO HYEONJIN; SIM CHAERIN; PARK JAEWOO; LEE JONGEUN: "Accelerating Transformers with Fourier-Based Attention for Efficient On-Device Inference", 2023 20TH INTERNATIONAL SOC DESIGN CONFERENCE (ISOCC), IEEE, 25 October 2023 (2023-10-25), pages 203 - 204, XP034525114, DOI: 10.1109/ISOCC59558.2023.10396620 * |
| RUIQI SUN; SIWEI YE; JIE ZHAO; XIN HE; JIANZHE LIN; YIRAN LI; AN ZOU: "NeuralMatrix: Compute the Entire Neural Networks with Linear Matrix Operations for Efficient Inference", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 20 August 2024 (2024-08-20), 201 Olin Library Cornell University Ithaca, NY 14853, XP091845846 * |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120950266A (en) * | 2025-10-17 | 2025-11-14 | 浙江大学 | A computational acceleration method that combines Fast Fourier Transform and neural network inference |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20240119269A1 (en) | Dynamic sparsity-based acceleration of neural networks | |
| US20240028895A1 (en) | Switchable one-sided sparsity acceleration | |
| US20230376765A1 (en) | Performing operation in neural network with storage pointer and sparsity map | |
| US20230325665A1 (en) | Sparsity-based reduction of gate switching in deep neural network accelerators | |
| US20230394312A1 (en) | Pruning activations and weights of neural networks with programmable thresholds | |
| US20230229917A1 (en) | Hybrid multipy-accumulation operation with compressed weights | |
| EP4530931A1 (en) | Real-time inference of temporal down-sampling convolutional networks | |
| WO2025096102A1 (en) | Approximating activation functions in neural networks with programmable look-up table | |
| WO2025136548A1 (en) | Approximating activation function in neural network with look-up table having hybrid architecture | |
| WO2025071788A1 (en) | Output drain path facilitating flexible schedule-based deep neural network accelerator | |
| WO2025091335A1 (en) | Multi-precision tensor multiplication in neural network | |
| US20230368030A1 (en) | Block-wise pruning of weights in deep neural network | |
| US20230229910A1 (en) | Transposing Memory Layout of Weights in Deep Neural Networks (DNNs) | |
| WO2025174353A1 (en) | Executing fourier transform operations with deep neural network accelerator | |
| US20230059976A1 (en) | Deep neural network (dnn) accelerator facilitating quantized inference | |
| WO2025189339A1 (en) | Reshaping convolution based on configuration of deep neural network accelerator | |
| WO2025184850A1 (en) | Executing matrix multiplication by performing convolution with deep neural network accelerator | |
| WO2025251247A1 (en) | Converting interpolation operation in neural network to depthwise convolution | |
| US20240265260A1 (en) | Compressing neural networks through unbiased minimum variance pruning | |
| WO2025025421A1 (en) | Tensor multiplication in neural network based on dequantization with shuffled data layout | |
| WO2025207084A1 (en) | Performing neural network operation based on spatial similarity in input data | |
| WO2025207091A1 (en) | Deep neural network accelerator with multifunctional data processing unit | |
| WO2025230560A1 (en) | Neural network accelerator with sparsity logic supporting various sparsity patterns and data precisions | |
| US20250307651A1 (en) | Training and fine-tuning neural network on neural processing unit | |
| WO2025230527A1 (en) | Deep neural network accelerator with intermediate storage facilitiating tensor permutation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24925083 Country of ref document: EP Kind code of ref document: A1 |