We have released Neural Network Libraries v1.10.0!
New models for face reenactment and shape reconstruction have been added, as well as many interactive Colab demos so you can experience our models even without a background in programming or machine learning! Also, NNabla is now compatible with DLPack!
We have implemented ReenactGAN, a face reenactment model that lets you transfer one person's facial movements onto another! Consisting of an encoder, a transformer, and a decoder, ReenactGAN does not transfer images directly in pixel space, but goes through a boundary latent space for better translation. See the page for further details!
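The three-stage pipeline can be sketched as follows. This is a hypothetical illustration of the data flow only: `encode`, `transform`, and `decode` are stand-ins for the learned networks, not the library's actual API.

```python
import numpy as np

# Hypothetical sketch of the ReenactGAN pipeline. The three callables are
# placeholders for the trained encoder, transformer, and decoder networks.
def reenact(source_face, encode, transform, decode):
    # 1. Encode the source face into the boundary latent space
    #    (facial boundaries rather than raw pixels).
    boundary = encode(source_face)
    # 2. Adapt the boundary to the target person's face shape.
    target_boundary = transform(boundary)
    # 3. Decode the adapted boundary into an image of the target person.
    return decode(target_boundary)
```

The key design choice is that the transformer operates on boundaries, not pixels, which makes the translation between different faces much easier to learn.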
We have started adding interactive demos where you can play around without having to worry about the code or the internal mechanisms! Our current lineup is as follows, and you can start right away from the links below! We will continue to add new demos on a weekly basis!
| Model | Task |
| --- | --- |
| Self-Attention GAN | Image Generation |
| Face Alignment Network | Facial Keypoint Detection |
| YOLO v2 | Object Detection |
We have implemented the shape reconstruction model described in the paper “Implicit Geometric Regularization for Learning Shapes”. This work casts surface reconstruction as an optimization problem: minimize f(x) on the point cloud, subject to the eikonal equation constraint over 3D space, so that f converges to the signed distance function.
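Concretely, the objective combines a data term that drives f to zero on the observed points with an eikonal term that pushes the gradient norm of f toward 1. The following is a minimal NumPy sketch of that loss, using finite differences for the gradient; the names and the finite-difference approximation are our own illustration, not the paper's implementation.

```python
import numpy as np

# Sketch of the implicit geometric regularization objective:
#   L = mean(|f(x)|) on the point cloud  +  lam * E[(||grad f|| - 1)^2]
# f maps an (N, 3) array of 3D points to an (N,) array of scalar values.
def igr_loss(f, points, sample_points, lam=0.1, eps=1e-4):
    # Data term: f should vanish on the observed point cloud.
    data_term = np.mean(np.abs(f(points)))
    # Eikonal term: approximate grad f by central finite differences
    # at sample points drawn over 3D space.
    grads = np.stack([
        (f(sample_points + eps * e) - f(sample_points - eps * e)) / (2 * eps)
        for e in np.eye(3)
    ], axis=-1)
    eikonal_term = np.mean((np.linalg.norm(grads, axis=-1) - 1.0) ** 2)
    return data_term + lam * eikonal_term
```

For the exact signed distance function of a shape, both terms are (near) zero, which is why minimizing this loss steers a network toward the signed distance function.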
DLPack is an open in-memory tensor structure that enables you to share tensors among frameworks without copying.
You can decode a DLPack to an NNabla NdArray as follows:
```python
# Create a tensor in an external tool, and encode it as a DLPack.
import torch
from torch.utils.dlpack import to_dlpack
t = torch.ones((5, 5), dtype=torch.float32)
dlp = to_dlpack(t)

# Borrow the DLPack tensor as a nnabla.NdArray.
from nnabla.utils.dlpack import from_dlpack
arr = from_dlpack(dlp)
```
If you want to move the ownership of a DLPack to an existing NdArray:

```python
from nnabla import NdArray
from nnabla.utils.dlpack import from_dlpack
arr = NdArray()
from_dlpack(dlp, arr)
```
To obtain a DLPack that owns the internal array borrowed from a specified NdArray, use to_dlpack:
```python
# Create a nnabla.NdArray in CUDA.
import numpy as np
import nnabla as nn
from nnabla.ext_utils import get_extension_context
ctx = get_extension_context('cudnn')
nn.set_default_context(ctx)
a = nn.NdArray.from_numpy_array(np.ones((5, 5), dtype=np.float32))

# Expose it as a DLPack.
from nnabla.utils.dlpack import to_dlpack
dlp = to_dlpack(a)

# Use the DLPack in PyTorch.
from torch.utils.dlpack import from_dlpack
t = from_dlpack(dlp)

# Changing the values in PyTorch also affects the nnabla array
# because they share memory.
t.add_(1)
print(a.d)  # All values become 2.
```
We have also implemented Spatial Transformer Networks, which enable spatial manipulation of data within the network, resulting in invariance to translation, scale, and rotation. The spatial transformer consists of two functions:

- AffineGrid (2D/3D), which generates the normalized grid from an affine transformation, and
- WarpByGrid (2D/3D, linear/nearest, channel_first/last, zero/repeat/reflect padding), which warps the input image by the grid generated by AffineGrid.
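To make the two-step mechanism concrete, here is a minimal NumPy sketch of a 2D spatial transformer with nearest-neighbor sampling and zero padding. This is an illustration of the idea only, not the library's AffineGrid/WarpByGrid implementation.

```python
import numpy as np

# Step 1: generate a grid of normalized source coordinates in [-1, 1]
# by applying a 2x3 affine transform theta to the target grid.
def affine_grid(theta, H, W):
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing='ij')
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return coords @ theta.T                                  # (H, W, 2)

# Step 2: sample the input image at the grid locations
# (nearest neighbor, zero padding outside the image).
def warp_by_grid_nearest(img, grid):
    H, W = img.shape
    xs = np.round((grid[..., 0] + 1) * (W - 1) / 2).astype(int)
    ys = np.round((grid[..., 1] + 1) * (H - 1) / 2).astype(int)
    valid = (xs >= 0) & (xs < W) & (ys >= 0) & (ys < H)
    out = np.zeros_like(img)
    out[valid] = img[ys[valid], xs[valid]]
    return out

# The identity transform leaves the image unchanged.
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
img = np.arange(16.0).reshape(4, 4)
warped = warp_by_grid_nearest(img, affine_grid(theta, 4, 4))
```

Because theta is produced by a small network in a full spatial transformer, the whole module learns which transformation to undo before the main network sees the image.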
- Add DLPack interface to share a tensor on host or device among frameworks and libraries without copying
- Refactor nnp to enable module-style network definition
- Add infer API and param data type option
- Add build option for multi-GPU with a specified MPI version
- Fix conflicts between nodes with same output name.
- Fix Deconvolution for ONNX Exporter.
- Change naming rule for repeat nodes
- Fix DequantizeLinear for TensorRT
- Colab Interactive Demos for Image Classification & YOLO v2
- Add PSMNet interactive demo
- Add Colab Inference Demos for ESR-GAN, Self-Attention GAN, FAN
- Implicit Geometric Regularization
- Fix data preparation for training with MPI