Variational Autoencoder

Traditional autoencoders are models (usually multilayer artificial neural networks) trained to output a reconstruction of their input. Specifically, an autoencoder compresses its input, layer by layer, into a hidden representation, then uses that representation to reconstruct, layer by layer, an output that resembles the original. Fittingly, this process of learning a mapping from input to hidden representation is called representation learning.

An end-to-end autoencoder (input to reconstructed input) can be split into two complementary networks: an encoder and a decoder. The encoder maps input $x$ to a latent representation, or so-called hidden code, $z$. The decoder maps the hidden code to a reconstructed input value $\tilde{x}$.
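
As a concrete illustration, here is a minimal sketch of such an encoder/decoder pair in PyTorch. The layer sizes (784-dimensional inputs, a 32-dimensional hidden code) and the mean-squared-error reconstruction loss are illustrative assumptions, not part of the definition.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=32):
        super().__init__()
        # Encoder: maps input x to the hidden code z.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, hidden_dim),
        )
        # Decoder: maps the hidden code z back to a reconstruction x_tilde.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # assumes inputs scaled to [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)        # x -> z
        x_tilde = self.decoder(z)  # z -> x_tilde
        return x_tilde

x = torch.rand(16, 784)                    # an illustrative batch of 16 inputs
x_tilde = Autoencoder()(x)
loss = nn.functional.mse_loss(x_tilde, x)  # reconstruction error

Training then amounts to minimizing the reconstruction error between $x$ and $\tilde{x}$, which forces the hidden code $z$ to retain the information needed to rebuild the input.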