We have released Neural Network Libraries v1.9.0!
The Japanese version of the documentation has finally arrived! EfficientNet has also been made available, and NNabla models can now be converted to TensorFlow Lite models!
Spotlight
Add Japanese Documentation
The Japanese version of the documentation for Neural Network Libraries is now available! While some parts have not been translated yet, we will continue to complete the translation in future releases.
EfficientNet B0, B1, B2, B3
EfficientNet, proposed by researchers at Google, has been added to NNabla examples! This model achieves a compact yet high-performing network with a new method that jointly scales the input resolution and the number of feature maps and layers.
Architectural variants of EfficientNet exist depending on the number of parameters and FLOPs, and we have added versions B0, B1, B2, and B3. We have also released the parameters of these models trained on ImageNet, so make sure to check them out!
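As a hedged sketch (the file name "EfficientNet-B0.nnp" and the use of the first network in the file are assumptions; see the nnabla-examples repository for the actual pretrained artifacts), the released parameters can be loaded and run with NnpLoader:

    import numpy as np
    import nnabla as nn
    from nnabla.utils.nnp_graph import NnpLoader

    # Load the pretrained network; the file name below is a placeholder
    # for the artifact distributed with nnabla-examples.
    nnp = NnpLoader("EfficientNet-B0.nnp")
    net = nnp.get_network(nnp.get_network_names()[0], batch_size=1)

    x = list(net.inputs.values())[0]   # input image variable
    y = list(net.outputs.values())[0]  # classification output

    x.d = np.random.rand(*x.shape)  # dummy input; use a preprocessed image in practice
    y.forward()
    print("Predicted ImageNet class:", y.d.argmax(axis=1))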
Add Support for Converting NNabla to TFLite
While it was already possible to convert a model written with Neural Network Libraries (.nnp file format) to a TensorFlow model (.pb file format) using the file format converter, conversion to TensorFlow Lite models is now also supported!
Please refer to the documentation for further details on how to perform the conversion; a hedged example follows.
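As a minimal sketch (the file names are placeholders, and the exact options should be checked against the file format converter documentation), the conversion is driven by nnabla_cli, with the output format inferred from the .tflite extension:

    $ nnabla_cli convert input.nnp output.tflite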
Change of Default Value
Bugfix
- Add **kwargs to ref_dequantize_linear and ref_grad_dequantize_linear.
- Add FusedConvolution interface and some enhancements & fixes
- Fix log double backward
- Allow FFT/IFFT to be called multiple times and catch errors
- Change data type of size from int to int64 to fix overflow.
Build
- Change default build environment from CentOS6 to CentOS7
- Update devtoolset package version and the URL of the yum repository (CPU / CUDA)
Documentation
- Add Japanese Documentation
- Correct the warp_by_flow documentation
- Fix Documentation
- Update nnabla convert doc
- Add MNIST Tutorials
- Add DCGAN Tutorial
- Fix brand name
Format Converter
- ONNX Exporter: Fix PReLU
- ONNX Exporter: Support deconv’s output_padding
- ONNX Exporter: Fix Broadcast
- Update functions.yaml and api_level
- Add support for int8 and int16 fixed-point convolution
- Add support for converting nnabla to tflite
- C runtime: Fix missing error when calling an unimplemented function.
Layers
- Quantize/DequantizeLinear (CPU / CUDA)
- Add deconvolution output padding option (CPU / CUDA); see the sketch after this list
- Interpolation with half_pixel and half_pixel_for_nn (CPU / CUDA); see the sketch after this list
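Below is a hedged sketch of the two new layer options (shapes and parameter values are illustrative, and it assumes output_padding is exposed through PF.deconvolution as added in this release):

    import numpy as np
    import nnabla as nn
    import nnabla.functions as F
    import nnabla.parametric_functions as PF

    x = nn.Variable((1, 3, 8, 8))
    x.d = np.random.rand(*x.shape)

    # Strided deconvolution with the new output_padding option, which
    # disambiguates the output size of the transposed convolution.
    with nn.parameter_scope("deconv"):
        y = PF.deconvolution(x, outmaps=4, kernel=(3, 3), stride=(2, 2),
                             output_padding=(1, 1))

    # Linear interpolation using the new half_pixel coordinate transform.
    z = F.interpolate(x, scale=(2, 2), mode='linear', half_pixel=True)

    y.forward()
    z.forward()
    print(y.shape, z.shape)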
Utilities
- Save optimizer checkpoint in .h5 format; see the sketch after this list
- Change cuDNN algorithm selection method to improve inference performance
- Add AdamW, Lars and SgdW to load.py
- Measure CPU/GPU load and write it to status.json when psutil and pynvml are installed
- Relocate cg_load info from log to progress.txt and status.json
- Save optimizer states every epoch
- Fix unpooling for SNPE
- Fix nnp-to-nnp conversion problem with h5
- Use sliced dataset for monitor
- Fix bug when reading h5 file in NNP
- Get device-id from local_rank
- Remove thread-based watchdog for allreduce
- Add a flag to choose convolution algorithm by a heuristic
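As a hedged sketch of checkpointing optimizer state from Python (the toy network and the file name are illustrative; the CLI writes its .h5 checkpoints internally):

    import nnabla as nn
    import nnabla.solvers as S
    import nnabla.parametric_functions as PF

    x = nn.Variable((1, 10))
    y = PF.affine(x, 1)  # toy network so the solver has parameters to track

    solver = S.Adam()
    solver.set_parameters(nn.get_parameters())

    # Persist and restore the solver's internal state (e.g. Adam moments).
    solver.save_states("solver_states.h5")
    solver.load_states("solver_states.h5")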
Examples
Important Notice Regarding multi-GPU package
Starting from version 1.9.0, the following multi-GPU packages are deprecated:
nnabla-ext-cuda90-nccl2-ubuntu16
nnabla-ext-cuda100-nccl2-ubuntu16
nnabla-ext-cuda100-nccl2-ubuntu18
nnabla-ext-cuda101-nccl2-ubuntu16
nnabla-ext-cuda101-nccl2-ubuntu18
Instead, one of the following packages needs to be installed. Please choose the one compatible with your OS and MPI version.
nnabla-ext-cuda90-nccl2-mpi1-10-2 (Ubuntu16.04 default)
nnabla-ext-cuda90-nccl2-mpi2-1-1 (Ubuntu18.04 default)
nnabla-ext-cuda90-nccl2-mpi3-1-6
nnabla-ext-cuda100-nccl2-mpi1-10-2 (Ubuntu16.04 default)
nnabla-ext-cuda100-nccl2-mpi2-1-1 (Ubuntu18.04 default)
nnabla-ext-cuda100-nccl2-mpi3-1-6
nnabla-ext-cuda102-nccl2-mpi1-10-2 (Ubuntu16.04 default)
nnabla-ext-cuda102-nccl2-mpi2-1-1 (Ubuntu18.04 default)
nnabla-ext-cuda102-nccl2-mpi3-1-6
Ubuntu package versions that are compatible with the OpenMPI installed by apt install openmpi-bin are as follows:
Ubuntu16.04: nnabla-ext-cuda???-nccl2-mpi1-10-2
Ubuntu18.04: nnabla-ext-cuda???-nccl2-mpi2-1-1
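For example, on Ubuntu 18.04 with CUDA 10.2 and the apt-installed OpenMPI, the matching package would be installed as follows (the CUDA version here is illustrative; pick the one that matches your setup):

    pip install nnabla-ext-cuda102-nccl2-mpi2-1-1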