For questions related to the reparameterization trick, which makes Variational Autoencoders (VAEs) amenable to training by backpropagation.
The reparameterization trick is a "trick" (really a technique) that became popular around 2014 with the introduction of the Variational Autoencoder. It extends backpropagation to layers that produce random noise, allowing a neural network to shape the output of, e.g., a Gaussian sampler by learning its parameters (the mean and standard deviation).
In short, the reparameterization trick is what makes the VAE trainable with backpropagation.
The idea of the reparameterization trick is to move the random sampling node out of the backpropagation path. It achieves this by drawing a sample $\varepsilon$ from a standard Gaussian, multiplying it elementwise by the standard deviation vector $\sigma$, and adding the mean $\mu$. The latent vector is then: $$z^{(i,l)}=\mu^{(i)}+\sigma^{(i)}\odot\varepsilon^{(l)}, \qquad \varepsilon^{(l)} \sim \mathcal{N}(0, I)$$ The latent vectors have the same distribution as before, but this change allows the gradients to flow back through $\mu$ and $\sigma$ to the encoder part of the VAE.
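As a minimal sketch of how this looks in practice (assuming PyTorch and an encoder that outputs a mean `mu` and a log-variance `logvar`, which is a common but not universal parameterization), the sampling step can be written so that the randomness enters only through `eps`:

```python
import torch

def reparameterize(mu, logvar):
    # Hypothetical helper: z = mu + sigma * eps with eps ~ N(0, I).
    # Because eps is sampled outside the learned parameters, the path
    # from z back to mu and logvar stays differentiable, so gradients
    # can reach the encoder.
    std = torch.exp(0.5 * logvar)   # sigma = exp(log(sigma^2) / 2)
    eps = torch.randn_like(std)     # eps ~ N(0, I); no gradient needed here
    return mu + std * eps           # elementwise, as in z = mu + sigma ⊙ eps

# Example: a batch of 4 latent vectors of dimension 2
mu = torch.zeros(4, 2, requires_grad=True)
logvar = torch.zeros(4, 2, requires_grad=True)
z = reparameterize(mu, logvar)
z.sum().backward()                  # gradients flow into mu and logvar
print(mu.grad, logvar.grad)
```

Had we instead sampled `z` directly from `N(mu, sigma^2)`, the sampling operation itself would block the gradient; rewriting it as a deterministic function of `mu`, `logvar`, and an external noise variable is exactly the reparameterization.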