affNIST
Download: here
The affNIST dataset for machine learning is based on the well-known MNIST dataset. MNIST, however,
has become quite a small set, given the power of today's computers, with their multiple CPUs and sometimes GPUs. affNIST is made by taking images
from MNIST and applying various reasonable affine transformations to them. In the process, the images become 40x40 pixels large, with significant
translations involved, so much of the challenge for the models is to learn that a digit means the same thing in the upper right corner as it does in
the lower left corner.
Research into "capsules" has suggested that it is beneficial to directly model the position (or more general "pose") in which an object is
found. affNIST aims to facilitate that by providing the exact transformation that has been applied to make each data case, as well as the original
28x28 image. This allows one to train a model to normalize the input, or to at least recognize in which ways it has been deformed from a more normal
image.
Another effect of the transformations is that there is simply much more data: every original MNIST image has been transformed in many
different ways. In theory it's an infinite dataset; in practice it's based on 70,000 originals and I've made 32 randomly chosen transformed versions
of each original (a different 32 for each original), leading to a total of about two million training + validation cases.
Here are some examples. The left column shows the original MNIST digit (centered in a 40x40 image), and the other 16 columns show transformed versions.
Data representation
The dataset is split into training, validation, and test data. The test data was created by transforming the 10,000 test cases from the original MNIST
dataset, the training data came from 50,000 MNIST training cases, and the validation data came from the remaining 10,000 MNIST training cases.
The data is provided in the widely used Matlab format, which is also perfectly legible to Python programs through the scipy.io.matlab.loadmat
function.
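For example, here is a minimal Python sketch for loading one of the files; the filename below is just a placeholder, and the exact variable names depend on the file, so inspect the keys of the result:

    import scipy.io

    # Load one affNIST .mat file; the filename here is a placeholder.
    data = scipy.io.matlab.loadmat('test.mat')
    # Inspect the keys to see which variables the file actually contains.
    print(data.keys())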
For completeness, three versions of the dataset are provided:
- First, of course, the regular transformed version.
- Second, an untransformed version, where the digits have simply been centered in the 40x40 image by adding an edge of 6 background pixels around
the original 28x28 images.
- Third, the original 28x28 images are also provided.
The data contains eight components. All of them are stored in a matrix where each column describes one training case.
- First, of course, there's the image. It is stored in a matrix of uint8 values (much like the original MNIST dataset). Each column of that matrix
describes one training case, in 40*40 = 1600 values. Like in the original MNIST dataset, the storage is row-wise: the first 40 of those 1600 values
describe the top row of pixels (from left to right); the second 40 describe the second row; etcetera. If you want a matrix per data case (instead of a
vector), take such a vector of 1600 values, reshape it to 40x40, and transpose it (in Matlab, that is; in C or Python do not transpose it). A short Python sketch of this follows the list.
- Second, there's the label: an integer from 0 to 9.
- Third, there's the label, again, in the one-of-N format that's commonly used in neural networks.
- Fourth, there's the transformation that has been applied to make the data case, in the "nice" format. This format consists of the following six
numbers:
  - First, the amount of counter-clockwise rotation, in degrees. This is chosen uniformly between -20 and +20.
  - Second, the amount of shearing. Shearing is applied to coordinates by adding x*shearing to the y coordinate. Thus, a shearing factor of 1 means that a horizontal line turns into a line at 45 degrees. The shearing factor is chosen uniformly between -0.2 and +0.2.
  - Third and fourth, the vertical expansion and the horizontal expansion. These are chosen uniformly between 0.8 (i.e. shrinking the digit image by 20%) and 1.2 (i.e. making the image 20% larger).
  - Fifth and sixth, the vertical translation and the horizontal translation. These are only restricted by the requirement that no ink must fall off the 40x40 image, and can therefore be quite large.
- Fifth, there's the affine transformation matrix that takes homogeneous coordinates of pixels in the original image to homogeneous coordinates of
pixels in the transformed image. This can be computed from the "nice" format transformation information. Only the first two rows of the matrix are
included, because the third row is always [0, 0, 1]. Of these six numbers, the first three are the first row of the matrix, and the last three are the second row (a sketch of rebuilding the full matrix follows this list).
- Sixth, there's the affine transformation matrix that takes homogeneous coordinates of pixels in the transformed image to homogeneous coordinates
of pixels in the original image, which is simply the inverse matrix of the preceding. This is the transformation that was used to decide on the amount
of ink in each of the pixels of the transformed image. However, the original->transformed matrix is the more intuitively understandable of the two
transformation matrices, though the "nice" representation of the transformation is more "human readable" still.
- Seventh, the index (1-based) of the original image is provided. This is not the index in the MNIST data as
provided here, but rather in the "originals" version of the affNIST dataset, which is reshuffled in
such a way that all 10 digit classes nicely alternate, to make for the most "balanced" mini-batches possible. The training / validation data cases
have indices from 1 to 60,000, and the test data cases have indices from 1 to 10,000.
- Eighth, the index (1-based) of the data case in the affNIST dataset is provided. This might (or might not) be useful when one uses batches
instead of the whole dataset.
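As promised above, here is a minimal Python sketch of extracting one image and its label. The variable name affNISTdata and the field names image and label_int are assumptions based on the description above, so verify them against the loaded file:

    import numpy as np
    import scipy.io

    # struct_as_record=False and squeeze_me=True make Matlab structs easier
    # to use from Python. The names 'affNISTdata', 'image', and 'label_int'
    # are assumptions; check what your file actually contains.
    data = scipy.io.matlab.loadmat('test.mat',
                                   struct_as_record=False, squeeze_me=True)
    d = data['affNISTdata']

    col = np.asarray(d.image)[:, 0]  # one data case: 1600 uint8 values
    img = col.reshape(40, 40)        # row-wise storage: no transpose in Python
    print(img.shape, int(d.label_int[0]))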
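The two transformation matrices (components five and six) can likewise be rebuilt from their six stored numbers and checked against each other. The numbers below are placeholders standing in for the values read from one data case:

    import numpy as np

    def six_to_matrix(six):
        # The six stored numbers are the first row, then the second row;
        # the third row of the homogeneous matrix is always [0, 0, 1].
        return np.vstack([np.asarray(six, dtype=float).reshape(2, 3),
                          [0.0, 0.0, 1.0]])

    # Placeholder values; in practice, read components five and six of one
    # data case from the loaded file.
    fwd6 = [1.0, 0.0, 3.0, 0.0, 1.0, -2.0]  # original -> transformed
    inv6 = [1.0, 0.0, -3.0, 0.0, 1.0, 2.0]  # transformed -> original

    fwd, inv = six_to_matrix(fwd6), six_to_matrix(inv6)
    assert np.allclose(fwd @ inv, np.eye(3))  # each other's inverse

    # Map an original-image pixel to its location in the transformed image:
    x_new, y_new, _ = fwd @ np.array([10.0, 20.0, 1.0])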
The representation of affine transformations
If one applies rotation, shearing, scaling, and translation, then the order of those operations matters. My experiments (unpublished) with capsules
suggest that the easiest order for a neural network (and perhaps for a human, too) to understand is that first rotation is applied, then shearing,
then scaling, and finally translation. For example, first applying translation and then rotation would have the undesirable effect that a translation
to the right might end up moving the image downward instead, if the rotation is 90 degrees.
Another way to make the numbers in the "nice" representation easier for both humans and artificial neural networks to understand is to make
the origin not be in the upper left corner, but rather in the center of the image, i.e. right between the 4 most central pixels.
On the other hand, the matrices that describe the transformation simply tell you how to linearly go from homogeneous coordinates in one space
to homogeneous coordinates in another. In both of those spaces, the origin is at the upper left pixel.
The matrices have the advantage that they describe a linear transformation of coordinates. The "nice" representation has the advantage that it
describes the transformation in a more intuitive way.
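To make this concrete, here is a sketch of how the original->transformed matrix could be rebuilt from the six "nice" numbers, following the order described above. This is my own reconstruction, not code shipped with the dataset: the axis conventions (x horizontal, y vertical) and the centre offset of 19.5 pixels for a 40x40 image are assumptions that should be checked against the matrices stored in the data.

    import numpy as np

    def nice_to_matrix(rot_deg, shear, v_scale, h_scale, v_shift, h_shift):
        # Assumed conventions: x horizontal, y vertical, and the centre of a
        # 40x40 image at (19.5, 19.5); verify against the stored matrices.
        t = np.deg2rad(rot_deg)
        R = np.array([[np.cos(t), -np.sin(t), 0.0],  # counter-clockwise rotation
                      [np.sin(t),  np.cos(t), 0.0],
                      [0.0,        0.0,       1.0]])
        H = np.array([[1.0,   0.0, 0.0],             # shear: y += x * shear
                      [shear, 1.0, 0.0],
                      [0.0,   0.0, 1.0]])
        S = np.array([[h_scale, 0.0,     0.0],       # horizontal / vertical expansion
                      [0.0,     v_scale, 0.0],
                      [0.0,     0.0,     1.0]])
        T = np.array([[1.0, 0.0, h_shift],           # translation
                      [0.0, 1.0, v_shift],
                      [0.0, 0.0, 1.0]])
        C = np.array([[1.0, 0.0, 19.5],              # centre origin -> corner origin
                      [0.0, 1.0, 19.5],
                      [0.0, 0.0, 1.0]])
        # Rightmost factors apply first: shift the origin to the centre, then
        # rotate, shear, scale, translate, then shift the origin back.
        return C @ T @ S @ H @ R @ np.linalg.inv(C)

If this reconstruction matches the conventions used to generate the data, the first two rows of the result are component five, and its inverse is component six.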
Miscellaneous
If you have any published work that uses affNIST, please let me know and I'll place a link to it here.
Neural networks enjoy having lots of training data, but computers can sometimes find it a bit hefty. To make the download easier, I've provided zipped
versions of the files. However, even after you unzip them, the files are still big. In case your computer finds it easier to load just a little bit of training data
at a time (my computer certainly does), I've also made the data available split up in batches. Each batch contains one transformation of every MNIST
original.
I made 32 different transformations of each MNIST training case, meaning that there are about two million training / validation data cases. If you'd like to
use more, e.g. 64 different transformations, please let me know.
The affNIST dataset is made freely available, without restrictions, to whoever wishes to use it, in the hope that it may help advance machine learning
research, but without any warranty.
Most recent edit: August 5th, 2013.