We have released Neural Network Libraries v1.34.0!
Please see the “Spotlight” section for important changes.
Remarks
Important bugfix
Fixed a bug in previous versions where SyncBatchNormalizationCuda computed the variance incorrectly in the channel-last layout. See the pull request for details.
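For reference, here is a minimal NumPy sketch (independent of nnabla's actual implementation) of the per-channel statistics expected for a channel-last (NHWC) tensor, which is the quantity affected by this fix:

import numpy as np

# Hypothetical channel-last (NHWC) activation batch: (N, H, W, C)
x = np.random.randn(8, 4, 4, 16).astype(np.float32)

# Per-channel batch statistics reduce over the batch and spatial axes (0, 1, 2),
# leaving one mean and one variance per channel.
mean = x.mean(axis=(0, 1, 2))
var = x.var(axis=(0, 1, 2))
print(mean.shape, var.shape)  # (16,) (16,)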
Spotlight
CUDA 11.6 Support
We have started supporting the CUDA 11.6 extension, and the CUDA 11.4 extension has been removed as of this version. However, users can keep using their current CUDA environment because CUDA 11.x releases are compatible with each other (minor-version compatibility).
To use the CUDA 11.6 extension, specify nnabla-ext-cuda116 and install it with:
pip install nnabla-ext-cuda116
Alternatively, if you do not have a CUDA environment set up on your computer, you can use the all-in-one wheel. This wheel bundles the CUDA 11.6 runtime, so you only need to install the NVIDIA driver to use it.
pip install https://nnabla.org/whl/nnabla_ext_cuda_alllib/nnabla_ext_cuda_alllib-1.34.0-cp310-cp310-manylinux_2_17_x86_64.whl
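Whichever wheel you install, you can quickly check that the CUDA extension loads and runs on the GPU. The following is a minimal sketch using nnabla's get_extension_context API (device_id "0" is assumed here; adjust it to your setup):

import numpy as np
import nnabla as nn
import nnabla.functions as F
from nnabla.ext_utils import get_extension_context

# Select the CUDA/cuDNN extension context; this fails if the
# nnabla-ext-cuda116 wheel or the NVIDIA driver is missing.
ctx = get_extension_context("cudnn", device_id="0")
nn.set_default_context(ctx)

# Run a trivial forward computation on the GPU.
x = nn.Variable.from_numpy_array(np.array([[-1.0, 0.0, 2.0]], dtype=np.float32))
y = F.relu(x)
y.forward()
print(y.d)  # [[0. 0. 2.]]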
In addition, Docker images are also available for each version, so please feel free to use them as well!
docker pull nnabla/nnabla-ext-cuda-multi-gpu:py310-cuda116-mpi4.1.3-v1.34.0
Please refer to DockerHub (nnabla-ext-cuda-multi-gpu) or DockerHub (nnabla-ext-cuda) for details.
nnabla
Bugfix
- Check consistency among memory layouts in SyncBatchNormalization
- Suppress unnecessary error message for the deallocation of a child array
- Fix SyncBatchnormalizationCuda in channel last mode
- Add NonZero, NonMaxSuppression, OneHot, and Resize support to ONNX importer
Build
- CUDA 11.6 support (CPU / GPU)
- Fix alllib wheel dependency in CUDA 11.6
- Sync api_level version from NNabla.