Variational Autoencoder with Signed Distance Functions (SDFs) in JAX

Code

Part I

A VAE with a convolutional encoder and a vanilla decoder, trained with the ELBO loss, which consists of a reconstruction term and a KL divergence term. The model should also work with a DeepSDF decoder (included in the code), which takes an input coordinate in addition to the latent vector produced by the encoder. In practice, however, this gave poor reconstructions: the predicted values did not span the full -1 to 1 range. This is likely an issue with the KL divergence term, as the sampled distribution may be too narrow. Instead, here are some results on the test set with a vanilla decoder, where the SDFs are treated like image values. (More training iterations would give more accurate results.)
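The ELBO loss described above can be sketched as follows. This is a minimal illustration, not the repo's actual implementation: the function names, the MSE reconstruction term, and the `beta` weight are my assumptions, with the KL term being the standard closed form for a diagonal Gaussian against a unit-normal prior.

```python
import jax
import jax.numpy as jnp

def kl_divergence(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian,
    # summed over latent dimensions.
    return -0.5 * jnp.sum(1.0 + logvar - mu**2 - jnp.exp(logvar))

def reparameterize(key, mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I), so gradients flow through mu/logvar.
    eps = jax.random.normal(key, mu.shape)
    return mu + jnp.exp(0.5 * logvar) * eps

def elbo_loss(recon, target, mu, logvar, beta=1.0):
    # Reconstruction term (MSE over SDF values) plus weighted KL term.
    recon_loss = jnp.mean((recon - target) ** 2)
    return recon_loss + beta * kl_divergence(mu, logvar)
```

A KL term that is too strong pushes `logvar` toward 0 and `mu` toward the origin, which is one way the decoder can end up with outputs that fail to span the full -1 to 1 range; lowering `beta` is a common remedy.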

image

We can see that sampling the latent space gives 'imaginary' shapes that still resemble real ones. This is because the latent space is represented by a distribution rather than a single vector per input.

image
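Because the KL term regularises the latent distribution toward a standard normal, new samples can simply be drawn from that prior and passed through the decoder. A minimal sketch (the function name and shapes are assumptions):

```python
import jax
import jax.numpy as jnp

def sample_latents(key, num_samples, latent_dim):
    # Draw latent vectors from the standard normal prior that the
    # KL divergence term pulls the encoder's posterior toward.
    return jax.random.normal(key, (num_samples, latent_dim))
```

Each sampled vector would then be decoded into an SDF in the same way as an encoded one.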

Interpolation also works well.

image

Part II

A vanilla encoder transforms input SDFs into latent vectors. DeepSDF, acting as the decoder, takes a coordinate in 2D space together with the latent vector and produces a predicted SDF value. MSE loss is used to optimize the autoencoder.
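The coordinate-conditioned decoder can be sketched as a plain MLP in JAX. This is a simplified stand-in for the actual DeepSDF architecture: the layer sizes, `tanh` activations, and initialisation scale here are assumptions; the key idea shown is concatenating the 2D coordinate with the latent vector before the first layer.

```python
import jax
import jax.numpy as jnp

def init_decoder_params(key, latent_dim, hidden=64, depth=3):
    # Layer sizes: [x, y] + latent vector in, a single SDF value out.
    sizes = [2 + latent_dim] + [hidden] * depth + [1]
    params = []
    for i in range(len(sizes) - 1):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (sizes[i], sizes[i + 1])) * 0.1
        params.append((w, jnp.zeros(sizes[i + 1])))
    return params

def decode_sdf(params, coord, z):
    # Condition the MLP on both the query coordinate and the latent vector.
    h = jnp.concatenate([coord, z])
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]
```

Evaluating a full SDF grid is then a `jax.vmap` of `decode_sdf` over coordinates with the latent vector held fixed.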

Reconstruction:

image

Interpolation between two SDFs randomly selected from the test set:

image
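The interpolation shown above amounts to blending two latent vectors and decoding each intermediate point. A minimal linear-interpolation sketch (the function name and step count are assumptions):

```python
import jax.numpy as jnp

def interpolate_latents(z_a, z_b, num_steps=8):
    # Linearly blend between two latent vectors; returns (num_steps, latent_dim).
    alphas = jnp.linspace(0.0, 1.0, num_steps)[:, None]
    return (1.0 - alphas) * z_a + alphas * z_b
```

Decoding each row of the result produces the sequence of intermediate SDFs between the two test-set shapes.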

Sampling the latent space:

image
Testing out different latent sizes:

image

This code was created with the help of the following tutorials and repos: