istresec/kth-aml-project

[RE] VAE with a VampPrior

Frano Rajič*, Ivan Stresec*, Matea Španić*, Adam Wiker*

Project report

We reimplement the methods and neural network architectures described in "VAE with a VampPrior" (Jakub M. Tomczak et al.). We briefly introduce and motivate variational autoencoders and the paper. To evaluate and compare the effects of different priors, we implement the standard single-layer variational autoencoder architecture with both a standard Gaussian prior and the VampPrior. We further extend the VampPrior to the hierarchical two-layer architecture described in the paper. We test both architectures with both priors on three datasets: MNIST, Omniglot, and Caltech 101 Silhouettes. Our results reproduce the empirical findings of the original paper, confirming the effectiveness of the VampPrior and of the proposed hierarchical architecture.
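
For readers unfamiliar with the VampPrior, the core idea is that the prior is a uniform mixture of the variational posterior evaluated at K learnable pseudo-inputs, p(z) = (1/K) Σ_k q(z | u_k). The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the code in this repository; the encoder interface, pseudo-input initialization, and tensor shapes are assumptions.

```python
import math

import torch
import torch.nn as nn


class VampPrior(nn.Module):
    """Uniform mixture of variational posteriors q(z | u_k) over K
    learnable pseudo-inputs u_k (illustrative sketch, not the repo code)."""

    def __init__(self, encoder, n_pseudo_inputs=500, input_dim=784):
        super().__init__()
        self.encoder = encoder  # assumed interface: batch of inputs -> (mu, logvar)
        # Pseudo-inputs are free parameters, trained jointly with the VAE
        self.pseudo_inputs = nn.Parameter(0.01 * torch.randn(n_pseudo_inputs, input_dim))

    def log_prob(self, z):
        # Encode every pseudo-input to obtain the mixture components' parameters;
        # the sigmoid (an assumption) keeps pseudo-inputs in the data range [0, 1]
        mu, logvar = self.encoder(torch.sigmoid(self.pseudo_inputs))  # each (K, D)
        z = z.unsqueeze(1)                                            # (B, 1, D)
        # Diagonal-Gaussian log-density of z under each component, summed over D
        log_comp = -0.5 * (math.log(2 * math.pi) + logvar + (z - mu) ** 2 / logvar.exp())
        log_comp = log_comp.sum(dim=-1)                                # (B, K)
        # log p(z) = logsumexp_k log q(z | u_k) - log K  (uniform mixture weights)
        return torch.logsumexp(log_comp, dim=1) - math.log(self.pseudo_inputs.shape[0])
```

In the VAE objective, this log_prob takes the place of the log-density of a standard Gaussian prior, so the KL term can be estimated by Monte Carlo as the mean of log q(z | x) - log p(z) over latent samples z drawn from the encoder.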

About

Reimplementation of "VAE with a VampPrior" by Jakub M. Tomczak et al., as part of the DD2434 Machine Learning, Advanced Course at KTH
