We have released Neural Network Libraries v1.15.0! We have implemented VQ-VAE, along with a new BatchLogdet function!
Spotlight
VQ-VAE with PixelCNN Prior
We have implemented VQ-VAE (Vector Quantized Variational Autoencoder), proposed in this paper. VQ-VAE is a model that learns discrete latent representations, and has been a popular generative model alongside generative adversarial networks (GANs), generating high-quality images, videos, and speech. It improves on VAEs, in particular by circumventing posterior collapse, in which the learned latent representation is ignored.
The following are examples of images generated with VQ-VAE:
– MNIST:
– CIFAR-10:
– ImageNet:
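The core operation in VQ-VAE is the nearest-neighbour lookup that snaps each encoder output onto the closest vector in a learnable codebook, which is what makes the latent representation discrete. Below is a minimal NumPy sketch of that quantization step; the function name, shapes, and toy data are illustrative assumptions and not the nnabla implementation.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Nearest-neighbour codebook lookup (the discretization step in VQ-VAE).

    z_e      : (N, D) continuous encoder outputs
    codebook : (K, D) learnable embedding vectors
    Returns the quantized vectors z_q and the chosen codebook indices.
    """
    # Squared Euclidean distance between every encoder output and every code: (N, K)
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)      # index of the nearest code per input, (N,)
    z_q = codebook[indices]             # quantized latents, (N, D)
    return z_q, indices

# Toy usage: 4 encoder outputs, a codebook of 8 vectors of dimension 16
rng = np.random.default_rng(0)
z_e = rng.normal(size=(4, 16))
codebook = rng.normal(size=(8, 16))
z_q, idx = vector_quantize(z_e, codebook)
print(idx.shape, z_q.shape)  # (4,) (4, 16)
```

In training, the non-differentiable argmin is handled with a straight-through estimator, and the codebook is updated so its vectors track the encoder outputs assigned to them.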
BatchLogdet function (CPU / GPU)
BatchLogdet is a batch-wise log absolute determinant function, defined as
\(Y_b = \log(|\det(X_b)|)\),
where \(X_b\) and \(Y_b\) are the \(b\)-th input matrix and output value, respectively. This function can come in handy when dealing with large-scale models.
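As a quick sanity check of the definition, the batch-wise quantity \(Y_b = \log(|\det(X_b)|)\) can be reproduced with NumPy's batched slogdet. This is only an illustration of the math, not the nnabla API:

```python
import numpy as np

# A batch of 4 random 3x3 matrices: X has shape (B, n, n)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3, 3))

# Reference computation of Y_b = log(|det(X_b)|); slogdet broadcasts over the batch axis
sign, logabsdet = np.linalg.slogdet(X)
print(logabsdet)  # shape (4,), one log-absolute-determinant per matrix
```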
Format Conversion
- Improve optimization of pb by adding yacc rules.
- Fix GRU repeated parameters.
- Separate the ONNX converter from nnabla.
- Sync api_level version from nnabla.