
Released Neural Network Libraries v1.10.0!

Thursday, August 06, 2020

Release

Posted by shin

We have released Neural Network Libraries v1.10.0!
New models for face reenactment and shape reconstruction have been added, as well as many interactive Colab demos that let you experience our models even without a background in programming or machine learning! Also, DLPack is now compatible with NNabla!

Spotlight

ReenactGAN

We have implemented ReenactGAN, a face reenactment model that lets you transfer one person's facial movements onto another! Consisting of an encoder, a transformer, and a decoder, ReenactGAN does not transfer images directly in pixel space, but uses a boundary latent space for better translation. See the page for further details!
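To make the three-stage design concrete, here is a minimal sketch of the data flow only. The function bodies below are hypothetical placeholders, not the trained networks: in the real model, the encoder maps a face to a boundary heatmap, the transformer adapts that boundary to the target person, and the decoder renders the target's face from it.

```python
import numpy as np

# Hypothetical stand-ins for ReenactGAN's three stages; shapes are
# illustrative, and each function is a placeholder for a trained network.
def encode(face):
    """Map a face image to a boundary heatmap (the latent space)."""
    return np.zeros((15, 64, 64), dtype=np.float32)  # 15 boundary channels

def transform(boundary):
    """Adapt the source boundary to the target person's boundary space."""
    return boundary  # identity placeholder

def decode(boundary):
    """Render the target person's face from the boundary heatmap."""
    return np.zeros((3, 256, 256), dtype=np.float32)

source_face = np.zeros((3, 256, 256), dtype=np.float32)
reenacted = decode(transform(encode(source_face)))
print(reenacted.shape)  # (3, 256, 256)
```

The key point is that the transformer operates on boundaries, not pixels, so identity-specific appearance never leaks through the latent space.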

Colab Demos

We have started adding interactive demos where you can play around without having to worry about the code or the internal mechanisms! Our current lineup is as follows, and you can start right away from the links below! We will continue to add new demos on a weekly basis!

Name | Notebook | Task
ESR-GAN | Open In Colab | Super-Resolution
Self-Attention GAN | Open In Colab | Image Generation
Face Alignment Network | Open In Colab | Facial Keypoint Detection
PSMNet | Open In Colab | Depth Estimation
ResNet/ResNeXt/SENet | Open In Colab | Image Classification
YOLO v2 | Open In Colab | Object Detection
StarGAN | Open In Colab | Image Translation

Implicit Geometric Regularization


We have implemented the shape reconstruction model described in the paper “Implicit Geometric Regularization for Learning Shapes”. This work casts surface reconstruction as an optimization problem: minimize f(x) on the point cloud subject to the eikonal equation constraint (|∇f| = 1) over 3D space, so that the learned f becomes a signed distance function.
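As a toy illustration of the two conditions (not the paper's training code), the signed distance function of a circle vanishes on the point cloud and satisfies the eikonal equation |∇f| = 1 away from the center, which we can verify numerically:

```python
import numpy as np

def f(p, r=1.0):
    # Signed distance to a circle of radius r centered at the origin.
    return np.linalg.norm(p, axis=-1) - r

# "Point cloud": samples on the circle, where f should vanish.
angles = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
cloud = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
print(np.abs(f(cloud)).max())  # ~0: f is zero on the data

# Eikonal constraint |grad f| = 1, checked by central differences
# at random points away from the origin (where f is not differentiable).
rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(1000, 2))
pts = pts[np.linalg.norm(pts, axis=-1) > 0.1]
eps = 1e-5
gx = (f(pts + [eps, 0.0]) - f(pts - [eps, 0.0])) / (2 * eps)
gy = (f(pts + [0.0, eps]) - f(pts - [0.0, eps])) / (2 * eps)
grad_norm = np.sqrt(gx ** 2 + gy ** 2)
print(np.abs(grad_norm - 1.0).max())  # ~0: eikonal equation holds
```

In the paper, f is a neural network and both terms become losses: a data term pulling f to zero on the points, and an eikonal term penalizing (|∇f| - 1)² on random samples.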

DLPack interface to share tensor on host or device among frameworks/libraries

DLPack is an open in-memory tensor structure that enables you to share tensors among frameworks without copying.

You can decode a DLPack into an NNabla NdArray as follows:

# Create a tensor with an external tool, and encode it as a DLPack.
import torch
from torch.utils.dlpack import to_dlpack
t = torch.ones((5, 5), dtype=torch.float32,
               device=torch.device('cuda'))
dlp = to_dlpack(t)

# Borrow the DLPack tensor as an nnabla.NdArray.
from nnabla.utils.dlpack import from_dlpack
arr = from_dlpack(dlp)

If you want to move the ownership of the DLPack to an existing NdArray:

from nnabla import NdArray
arr = NdArray()
from_dlpack(dlp, arr=arr)

To obtain a DLPack that owns an internal array object borrowed by a specified NdArray:

# Create an nnabla.NdArray in CUDA.
import numpy as np
import nnabla as nn
from nnabla.ext_utils import get_extension_context
ctx = get_extension_context('cudnn')
a = nn.NdArray.from_numpy_array(np.ones((5, 5), dtype=np.float32))
a.cast(np.float32, ctx)
# Expose as a DLPack.
from nnabla.utils.dlpack import to_dlpack
dlp = to_dlpack(a)
# Use the DLPack in PyTorch.
import torch
from torch.utils.dlpack import from_dlpack
t = from_dlpack(dlp)
# Changes made in PyTorch are reflected in nnabla
# because the two tensors share memory.
t.add_(1)
print(a.d) # All values become 2.

SpatialTransformer

We have also implemented Spatial Transformer Networks, which enable spatial manipulation of data within the network, providing invariance to translation, scale, and rotation. The spatial transformer consists of two functions: AffineGrid (2d/3d), which generates a normalized grid from an affine transformation, and WarpByGrid (2d/3d, linear/nearest, channel_first/last, zero/repeat/reflect padding), which warps the input image by the grid generated by AffineGrid.
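To make the two steps concrete, here is a minimal NumPy sketch of the idea (not the NNabla API): build a normalized sampling grid from a 2x3 affine matrix, then warp by nearest-neighbor sampling. An identity affine matrix reproduces the input exactly.

```python
import numpy as np

def affine_grid_2d(theta, H, W):
    """Generate a normalized (H, W, 2) sampling grid from a 2x3 affine matrix."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing='ij')
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return coords @ theta.T  # (H, W, 2); last axis is (x, y)

def warp_by_grid_nearest(img, grid):
    """Sample a (H, W) image at normalized grid locations, nearest neighbor."""
    H, W = img.shape
    # Map normalized coordinates [-1, 1] back to pixel indices.
    xs = np.clip(np.rint((grid[..., 0] + 1) * (W - 1) / 2), 0, W - 1).astype(int)
    ys = np.clip(np.rint((grid[..., 1] + 1) * (H - 1) / 2), 0, H - 1).astype(int)
    return img[ys, xs]

img = np.arange(16, dtype=np.float32).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
grid = affine_grid_2d(identity, 4, 4)
out = warp_by_grid_nearest(img, grid)
print(np.array_equal(out, img))  # True
```

Because the grid is produced by a differentiable affine map (and the real WarpByGrid supports linear interpolation), the transformation parameters can be learned end-to-end with the rest of the network.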

Layers

Utilities

Format Conversion

Examples

Documentation

Bugfix