
I have read many papers recommending Variational Autoencoders over plain Autoencoders because they take a probabilistic approach and apply a KL-divergence penalty to the latent space. But when I test both networks, I find that the variability in the output of the Variational Autoencoder reduces its accuracy, and I get better results with a plain Autoencoder. I am still working with very simple data, training the network on normal images with no augmentation and no changing background.

  • Does the performance of Variational Autoencoders improve on harder data, or is there another reason to choose them over Autoencoders?
  • Or do Autoencoders simply perform better at anomaly detection?

1 Answer


Variational autoencoders encourage the model to learn generalized features and to reconstruct images as an aggregation of those features. That is what the latent space encodes: a compressed feature vector.
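To make that concrete, here is a minimal VAE sketch in PyTorch. The layer sizes, the 784-dimensional input, and the names `VAE` / `vae_loss` are illustrative assumptions, not details from your setup. The encoder outputs the mean and log-variance of the approximate posterior, the reparameterization trick makes sampling differentiable, and the KL term you mention is what regularizes the latent space:

```python
# Minimal VAE sketch. Layer sizes and the 784-dim input are assumptions
# for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients flow through the sampling step.
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term (inputs assumed scaled to [0, 1]) plus the KL
    # divergence between q(z|x) and N(0, I); the KL term is what shapes
    # the latent space into a smooth, compressed representation.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kl
```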

Vanilla autoencoders memorize the input and map it to the output without that generalization. If you want to extrapolate beyond your dataset, the variational approach is the way to go.
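For the anomaly-detection use case specifically, one common remedy for the output variability you describe (a hedged sketch, reusing the `VAE` class above) is to encode with the posterior mean at evaluation time instead of sampling, and then score each input by its reconstruction error:

```python
import torch
import torch.nn.functional as F

def anomaly_scores(model, x):
    # `model` is the VAE from the sketch above. At evaluation time we use
    # the posterior mean instead of sampling, which makes the
    # reconstruction deterministic and removes the output variability.
    model.eval()
    with torch.no_grad():
        h = F.relu(model.enc(x))
        mu = model.mu(h)  # skip sampling; use the mean of q(z|x)
        recon = torch.sigmoid(model.dec2(F.relu(model.dec1(mu))))
    # Per-sample mean squared reconstruction error as the anomaly score.
    return ((recon - x) ** 2).mean(dim=1)

# Usage: pick a threshold from scores on held-out normal data, then flag
# inputs whose score exceeds it.
# scores = anomaly_scores(vae, batch)
# is_anomaly = scores > threshold
```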