
Released Neural Network Libraries v1.16.0!

Wednesday, February 03, 2021

Release

Posted by Takuya Yashima

We have released Neural Network Libraries v1.16.0!

Spotlight

Out-of-Core training (CPU / CUDA)

We have added out-of-core (OoC) training, which makes it possible to train models that exceed GPU memory by automatically assigning parameters and computations to CPU memory during training/inference on the GPU. The algorithm is described in detail in this paper, and this release is the publicly available implementation of the method.
The OoC functionality can be enabled by adding just a few lines to your nnabla code:

# Define the scheduler
scheduler = lms_scheduler(ctx,
                          use_lms=True,
                          gpu_memory_size=3.5e9,
                          window_length=5e9)
...
# Apply OoC only to the range enclosed by the with scope
with scheduler:
  loss.forward(clear_no_need_grad=True)
  solver.zero_grad()
  loss.backward(clear_buffer=True)
  solver.update()

A Colab demo for OoC is also available in the nnabla-examples repository. It walks through how to use OoC and demonstrates its efficiency, so please give it a try.
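Below is a minimal, self-contained sketch of how the scheduler above might be wired into an ordinary nnabla training loop. It assumes `lms_scheduler` is available as in the snippet above (for example, via the OoC sample code); the model, data, and memory-size parameters are placeholders for illustration only, not the actual example in nnabla-examples.

import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S
from nnabla.ext_utils import get_extension_context

# Use the CUDA/cuDNN extension; OoC swaps tensors between GPU and CPU memory.
ctx = get_extension_context("cudnn", device_id="0")
nn.set_default_context(ctx)

# Toy network; in practice this would be a model larger than GPU memory.
x = nn.Variable((32, 1024))
t = nn.Variable((32, 1))
h = F.relu(PF.affine(x, 4096, name="fc1"))
loss = F.mean(F.softmax_cross_entropy(PF.affine(h, 10, name="fc2"), t))

solver = S.Adam()
solver.set_parameters(nn.get_parameters())

# NOTE: lms_scheduler is assumed to be available as in the snippet above.
scheduler = lms_scheduler(ctx, use_lms=True,
                          gpu_memory_size=3.5e9, window_length=5e9)

for _ in range(10):
    x.d = np.random.randn(*x.shape)
    t.d = np.random.randint(0, 10, size=t.shape)
    # Only the computation inside the with block is scheduled out-of-core.
    with scheduler:
        loss.forward(clear_no_need_grad=True)
        solver.zero_grad()
        loss.backward(clear_buffer=True)
        solver.update()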

CenterNet

We have added an object detection model CenterNet to our examples!
Unlike conventional models such as YOLO, which rely on anchors for object detection, CenterNet achieves fast and accurate detection by representing each object as a single point.

The following is an example of object detection with CenterNet implemented in NNabla (the image is taken from the original CenterNet repository).

Pre-trained models are also available so that you can immediately run inference with CenterNet. Please give this one a try as well!
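To illustrate the "objects as points" idea mentioned above, here is a minimal NumPy sketch of how CenterNet-style detections can be decoded from a predicted center heatmap: local peaks are kept via a 3x3 max-filter comparison (which replaces anchor-based NMS), and the box size regressed at each peak is read out. This is a simplified, single-class illustration, not the actual code in nnabla-examples; the array names and threshold are placeholders.

import numpy as np

def decode_centers(heatmap, wh, score_thresh=0.3):
    """Decode CenterNet-style detections from a single-class heatmap.

    heatmap: (H, W) array of center-point scores in [0, 1].
    wh:      (H, W, 2) array of regressed box width/height per location.
    Returns a list of (x, y, w, h, score) tuples.
    """
    H, W = heatmap.shape
    # 3x3 max filter: a cell is a peak if it equals its neighborhood maximum.
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    neigh_max = np.max(
        [padded[dy:dy + H, dx:dx + W] for dy in range(3) for dx in range(3)],
        axis=0)
    peaks = (heatmap == neigh_max) & (heatmap >= score_thresh)

    detections = []
    for y, x in zip(*np.nonzero(peaks)):
        w, h = wh[y, x]
        detections.append((float(x), float(y), float(w), float(h),
                           float(heatmap[y, x])))
    return detections

# Toy usage with random "predictions".
hm = np.random.rand(128, 128) ** 8       # sparse-ish peaks
sizes = np.random.rand(128, 128, 2) * 32
print(decode_centers(hm, sizes)[:5])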

Bugfix/Documentation

Build

Format Conversion

Utilities

Examples