Released Neural Network Libraries v1.30.0 and nnabla-nas v0.13.0!

Monday, September 05, 2022

Release

Posted by shin

Spotlight

Support CUDA 11.4 and drop CUDA 10.2 support

From v1.30.0, we support the CUDA 11.4 extension.
Packages are available at https://pypi.org/project/nnabla-ext-cuda114.
Multi-GPU Docker images with the CUDA 11.4 wheel packages include OpenMPI 4.1.3 and can be found at https://hub.docker.com/r/nnabla/nnabla-ext-cuda-multi-gpu/tags.

Please note that the CUDA 10.2 extension is no longer supported.

Introduce CUDA/cuDNN-included wheels (CPU / GPU)

We have prepared wheel packages that bundle all necessary CUDA and cuDNN libraries, for both Linux and Windows platforms.
The packages default to CUDA 11.0.3 and cuDNN 8, and their names carry the alllib suffix.

Users can use these wheel packages directly on a GPU-enabled device without installing the CUDA toolkit and cuDNN library. Note that a GPU driver that supports the default CUDA version of the packages is still necessary.

If you would like to enable multi-GPU execution with these wheel packages on Linux, you only need to install OpenMPI on the multi-GPU-enabled device. NCCL is not required, as it is already bundled in the packages.

pip install -U pip
pip install https://nnabla.org/whl/nnabla_ext_cuda_alllib/nnabla_ext_cuda_alllib-1.30.0-cp39-cp39-manylinux_2_17_x86_64.whl

Other wheels are listed at https://nnabla.org/install/#all-in-one_list.

Optimize TransposeCuda with cuTENSOR

We introduce a cuTENSOR-based implementation to accelerate transpose in CUDA, especially inner 2D transposition (e.g. transpose(..., axes=(0, 1, 3, 2))).

As a practical example, the transpose used to convert from channel-last to channel-first layout (and vice versa) can now be processed faster than before.
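To make the affected axis permutations concrete, here is a minimal pure-Python sketch; permute_shape is an illustrative helper (not part of nnabla) that shows the resulting shape of each transpose pattern mentioned above.

```python
# Illustrative sketch of the axis permutations involved in layout conversion
# and inner-2D transposition; 'permute_shape' is a hypothetical helper,
# not an nnabla API.

def permute_shape(shape, axes):
    """Return the shape produced by transposing 'shape' with 'axes'."""
    return tuple(shape[a] for a in axes)

# Channel last (NHWC) -> channel first (NCHW)
nhwc = (8, 224, 224, 3)
print(permute_shape(nhwc, (0, 3, 1, 2)))  # (8, 3, 224, 224)

# The inner-2D case the cuTENSOR path targets: swap the last two axes.
print(permute_shape((8, 16, 32, 64), (0, 1, 3, 2)))  # (8, 16, 64, 32)
```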

Better error messages in cuDNN

Previous cuDNN error messages did not indicate what actually caused the error. The improved messages show the error code and point users to the cuDNN API documentation to identify possible causes.
Redundant error messages are also suppressed, so the output is more informative.

Before:

RuntimeError: target_specific error in cudnn_set_tensor_nd_descriptor_force_dim
/home/gitlab-runner/builds/jmdP2aBr/1/nnabla/builders/all/nnabla-ext-cuda/src/nbla/cuda/cudnn/cudnn.cpp:40
Failed `status == CUDNN_STATUS_SUCCESS`: NOT_SUPPORTED

After:

Caught exception "target_specific error in TestBody
nnabla-ext-cuda/src/nbla/cuda/test/test_nbla_cuda_utils.cpp:26
CUDNN_STATUS_ALLOC_FAILED occured in `FakeCudnnFunc()`. Please see CUDNN API documentation for the cause.
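The shape of the improved message can be sketched as follows. This is a hedged Python illustration of the idea (the actual implementation lives in nnabla's C++ CUDA extension); format_cudnn_error and the status table below are illustrative, though the numeric codes match cudnnStatus_t from the cuDNN API.

```python
# Illustrative sketch (not nnabla's actual C++ code) of turning a raw cuDNN
# status code into a message that names the code and points at the docs.

# Subset of cudnnStatus_t values from the cuDNN API.
CUDNN_STATUS_NAMES = {
    0: "CUDNN_STATUS_SUCCESS",
    1: "CUDNN_STATUS_NOT_INITIALIZED",
    2: "CUDNN_STATUS_ALLOC_FAILED",
    3: "CUDNN_STATUS_BAD_PARAM",
    9: "CUDNN_STATUS_NOT_SUPPORTED",
}

def format_cudnn_error(status, func_name):
    """Build an error message naming the failing call and the status code."""
    name = CUDNN_STATUS_NAMES.get(status, f"UNKNOWN_STATUS({status})")
    return (f"{name} occurred in `{func_name}()`. "
            "Please see CUDNN API documentation for the cause.")

print(format_cudnn_error(2, "FakeCudnnFunc"))
```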

[Fairness] Prejudice Remover Regularizer (part1 / part2)

A Prejudice Remover Regularizer has been added to mitigate data bias in machine learning.
In addition to the tabular version, an image version is also available.
We also provide Colab tutorials that enable training with bias mitigation.

Name                                     | Notebook      | Task Example
Prejudice Remover Regularizer            | Open In Colab | Mitigate the model bias with Prejudice Removal Technique
Prejudice Remover Regularizer for Images | Open In Colab | Mitigate the model bias with Prejudice Removal Technique for Images
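For intuition, the Prejudice Remover approach (Kamishima et al.) adds a regularization term to the training loss that estimates the dependence between the model's output and a sensitive attribute. The sketch below is a hedged, self-contained illustration of such a prejudice index for a binary classifier; prejudice_index is a hypothetical helper, not the released nnabla code.

```python
# Hedged sketch of a prejudice index: a batch estimate of the mutual
# information between a binary sensitive attribute s and the model's
# predicted label y. Names here are illustrative, not nnabla's API.
import math

def prejudice_index(probs, sensitive):
    """probs: per-sample P(y=1|x); sensitive: per-sample binary attribute."""
    n = len(probs)
    # Marginal P(y=1) and per-group P(y=1|s), estimated from the batch.
    p_y1 = sum(probs) / n
    groups = {s: [p for p, si in zip(probs, sensitive) if si == s]
              for s in set(sensitive)}
    p_y1_s = {s: sum(ps) / len(ps) for s, ps in groups.items()}

    pi = 0.0
    for p, s in zip(probs, sensitive):
        # Sum over y in {0, 1}: P(y|x) * ln( P(y|s) / P(y) )
        pi += (1 - p) * math.log((1 - p_y1_s[s]) / (1 - p_y1))
        pi += p * math.log(p_y1_s[s] / p_y1)
    return pi / n

# Predictions that track the sensitive attribute score higher than
# predictions that are identical across groups.
biased = prejudice_index([0.9, 0.8, 0.1, 0.2], [0, 0, 1, 1])
fair = prejudice_index([0.6, 0.4, 0.6, 0.4], [0, 0, 1, 1])
print(biased > fair)  # True
```

Minimizing the classification loss plus a weighted copy of this term pushes the model toward predictions that are less informative about the sensitive attribute.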

Available training scripts for CLIP

Training code for CLIP is now available, on top of the inference script we previously published.
Users can distribute the large mini-batches required for contrastive learning across multiple GPUs.
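The batch-sharding idea can be sketched as follows; this is a minimal illustration of splitting one global mini-batch across workers, not the released training script, and shard_batch is a hypothetical helper.

```python
# Illustrative sketch of splitting a large contrastive-learning mini-batch
# across GPUs; each worker (rank) owns one contiguous slice.

def shard_batch(batch, n_devices, rank):
    """Return the contiguous slice of 'batch' owned by worker 'rank'."""
    per_dev = len(batch) // n_devices
    return batch[rank * per_dev:(rank + 1) * per_dev]

global_batch = list(range(1024))  # e.g. 1024 image-text pairs
shards = [shard_batch(global_batch, 4, r) for r in range(4)]
print([len(s) for s in shards])  # [256, 256, 256, 256]
```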

nnabla

Bugfix

Build

Format Converter

Examples

Known Issues


NNabla-NAS

Bugfix

Build

Core

Documentation

Utility