Today (6/27) marks the first anniversary of Neural Network Libraries’ open source release! And we are announcing the release of version 1.0 on this special day. We would like to express our gratitude, from the bottom of our hearts, to everyone who has used, contributed to, and given meaningful feedback on Neural Network Libraries.
Since the release of version 0.9, the first version to be open-sourced, a great number of new and powerful features have been added and improved.
Ubiquity
- Compatibility with both Python 2 and 3 (see the sketch after this list)
- Compatibility with Windows
- Compatibility with macOS
- Inference sample in C++
- Training sample in C++
- Build with Android NDK
- C runtime
- Bidirectional converter to and from ONNX
- Python binary packages for each CUDA runtime version
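As a small illustration of this portability, here is a minimal sketch using the core Python API (nnabla, nnabla.functions, nnabla.parametric_functions); the same source runs unchanged on Python 2 and 3, on Windows, macOS, and Linux. The network shape and layer names below are just examples.

```python
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

# A small classifier; identical source runs on Python 2 and 3.
x = nn.Variable((8, 1, 28, 28))                # batch of 28x28 grayscale images
h = PF.convolution(x, 16, (3, 3), pad=(1, 1), name='conv1')
h = F.max_pooling(F.relu(h), (2, 2))
y = PF.affine(h, 10, name='fc1')               # 10-class logits

x.d = np.random.randn(*x.shape)                # feed dummy input data
y.forward()                                    # inference on the default (CPU) context
print(y.d.shape)                               # -> (8, 10)
```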
High Performance
- High-speed training and inference with NVIDIA Volta Tensor Cores (see the sketch after this list)
- High-speed multi-GPU training with NVIDIA NCCL2
- Faster distributed training by overlapping gradient exchange with backpropagation
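For instance, FP16 computation on Volta Tensor Cores can be enabled by selecting the cuDNN extension context with type_config='half'. This is a minimal sketch assuming the nnabla-ext-cuda package matching your CUDA runtime is installed; the device id is illustrative.

```python
import nnabla as nn
from nnabla.ext_utils import get_extension_context

# Request the cuDNN backend with FP16 compute; on Volta GPUs, cuDNN can
# then dispatch convolutions and matrix products to Tensor Cores.
# Requires the nnabla-ext-cuda package for your CUDA runtime version.
ctx = get_extension_context('cudnn', device_id='0', type_config='half')
nn.set_default_context(ctx)  # graphs built afterwards use this context
```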
Integration with Neural Network Console (NNC)
Usability
Training and Inference of Various Recent Models
We have created a repository with a collection of recent models, including:
- GANs
- Network compaction techniques
- Visual recognition models
- Misc.
Functions
- Quantization layers (fixed-point and power-of-2)
- Depthwise Convolution / Deconvolution
- FFT / IFFT
- …and 141 more Functions and Parametric Functions (a few of these are sketched below)
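Here is a rough sketch of a few of the new functions in the Python API; exact argument names may differ slightly from this release, so treat the signatures as illustrative.

```python
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable((1, 8, 32, 32))

# Depthwise convolution: one 3x3 filter per input channel.
h = PF.depthwise_convolution(x, (3, 3), pad=(1, 1), name='dwconv')

# Fixed-point quantization of activations (8 bits, step 2^-4);
# the backward pass uses a straight-through estimator.
q = F.fixed_point_quantize(h, sign=True, n=8, delta=2 ** -4)

# 1-D FFT: the last axis of size 2 holds the (real, imaginary) parts.
sig = nn.Variable((4, 64, 2))
spec = F.fft(sig, signal_ndim=1, normalized=True)
```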
We will be updating our blog periodically with more detailed descriptions of these features.
We would also like to let users know that an official cloud version of Neural Network Console, Sony's GUI-based deep learning development tool, is available. It offers high-speed multi-GPU training out of the box, without the hassle of setting up an environment. We will further strengthen the integration between Neural Network Libraries and Neural Network Console in the development flow, so we highly recommend that users give it a try.
With components such as the C runtime and the ONNX converter, we will continue to develop Neural Network Libraries as a framework that supports the development flow in an end-to-end manner, from design to implementation and deployment of deep learning; a rough sketch of that flow follows.
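The sketch below assumes the nnabla.utils.save API and the nnabla_cli file format converter; the network and file names are placeholders.

```python
import nnabla as nn
import nnabla.parametric_functions as PF
from nnabla.utils.save import save

# Build a toy inference graph.
x = nn.Variable((1, 1, 28, 28))
y = PF.affine(x, 10, name='fc')

# Bundle the network and its parameters into the .nnp deployment format.
contents = {
    'networks': [{'name': 'net', 'batch_size': 1,
                  'outputs': {'y': y}, 'names': {'x': x}}],
    'executors': [{'name': 'runtime', 'network': 'net',
                   'data': ['x'], 'output': ['y']}],
}
save('model.nnp', contents)

# The .nnp file can then be converted for deployment, e.g.:
#   nnabla_cli convert model.nnp model.onnx   # ONNX
#   nnabla_cli convert model.nnp model.nnb    # C runtime format
```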
Thank you again for your support!