neuroflow demos

MNIST Autoencoder

An autoencoder that compresses and reconstructs handwritten digits from the MNIST dataset. Note that the model starts with pre-trained weights for the purposes of this demo.

How It Works

This demo showcases a pre-trained autoencoder neural network that compresses MNIST digit images down to a compact 49-dimensional latent representation before reconstructing them. Here's the process:

  1. The original 14x14 pixel image (196 dimensions) is fed into the encoder.
  2. The encoder compresses the image to a 49-dimensional latent space representation.
  3. The decoder then attempts to reconstruct the original image from this compact representation.
  4. The result is displayed alongside the original for comparison.
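The steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the demo's actual model: the layer structure (a single fully-connected layer each for encoder and decoder) and the activation choices are assumptions, and random weights stand in for the demo's pre-trained ones. Only the 196-dimensional input and 49-dimensional latent space come from the description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the demo: flattened 14x14 input, 49-dim latent space.
INPUT_DIM, LATENT_DIM = 14 * 14, 49

# Randomly initialized weights stand in for the demo's pre-trained parameters.
W_enc = rng.normal(0, 0.1, (INPUT_DIM, LATENT_DIM))
b_enc = np.zeros(LATENT_DIM)
W_dec = rng.normal(0, 0.1, (LATENT_DIM, INPUT_DIM))
b_dec = np.zeros(INPUT_DIM)

def encode(x):
    # Step 2: compress the flattened image to the latent representation.
    return np.tanh(x @ W_enc + b_enc)

def decode(z):
    # Step 3: reconstruct the image; sigmoid keeps pixel values in [0, 1].
    return 1 / (1 + np.exp(-(z @ W_dec + b_dec)))

image = rng.random(INPUT_DIM)  # stand-in for a flattened 14x14 MNIST digit
latent = encode(image)
reconstruction = decode(latent)

print(latent.shape)          # (49,)
print(reconstruction.shape)  # (196,)
```

A trained model would minimize the reconstruction error (for example, mean squared error between `image` and `reconstruction`) by gradient descent on the four weight tensors.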

This technique is useful for dimensionality reduction, feature learning, and potentially generating new digit images.

Test Set Demonstration

The demo will cycle through different test images.

Select an activation function

Models trained with different activation functions are available to compare. Each model was trained for 20,000 steps with the same hyperparameters.
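The demo text does not list which activation functions are available, so the set below is an assumption; it shows, as a sketch, how a few common candidates behave differently on the same inputs, which is what the comparison lets you observe in the reconstructions.

```python
import numpy as np

# Hypothetical candidate activations; the demo's actual set is not specified.
activations = {
    "relu":    lambda x: np.maximum(0.0, x),
    "tanh":    np.tanh,
    "sigmoid": lambda x: 1 / (1 + np.exp(-x)),
}

x = np.linspace(-2.0, 2.0, 5)
for name, fn in activations.items():
    # Same inputs, different output ranges and saturation behavior.
    print(f"{name:8s}", np.round(fn(x), 3))
```

Because the ranges differ (ReLU is unbounded above, tanh spans (-1, 1), sigmoid spans (0, 1)), otherwise identical training runs can converge to visibly different reconstructions.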

Original Image | Reconstructed Image

Model Architecture

The autoencoder consists of an encoder network that compresses the input, and a decoder network that reconstructs it.
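One way to make the symmetry of that architecture concrete is to write out the layer widths and count the parameters. The 196-dimensional input and 49-dimensional latent size come from the demo; the intermediate width of 98 is an assumption for illustration.

```python
# Hypothetical layer widths for a symmetric fully-connected autoencoder.
encoder = [196, 98, 49]
decoder = encoder[::-1]  # mirror image: [49, 98, 196]

def param_count(layers):
    # Weight matrix plus bias vector for each fully-connected layer.
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

total = param_count(encoder) + param_count(decoder)
print(total)  # → 48461
```

Mirroring the encoder's widths in the decoder is a common convention rather than a requirement; the two halves can be shaped independently as long as they meet at the latent dimension.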