Released Neural Network Libraries v1.35.0 and nnabla-rl v0.13.0!

Friday, March 31, 2023


Posted by Takuya Narihira

We have released Neural Network Libraries v1.35.0!
Along with this update, nnabla-rl has also been updated to v0.13.0!
Please see “Spotlight” for important changes.


Lion optimizer

We’ve added the new Lion optimizer, published in February 2023, as a Solver! Lion is memory-efficient because it keeps only the momentum as optimizer state, and it has been reported to reduce training time while achieving strong performance in several experiments. Please give it a try. For technical details, refer to the following paper.

Xiangning Chen, et al. Symbolic Discovery of Optimization Algorithms. arXiv 2023.
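The update rule from the paper is simple enough to sketch in a few lines. The following is a plain NumPy illustration of the algorithm itself (not the nnabla Solver API); note how the only per-parameter state is the momentum buffer `m`, which is where the memory savings come from:

```python
import numpy as np

def lion_update(w, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: sign of an interpolated momentum, then momentum update.

    The only per-parameter state is the momentum m, so Lion needs half the
    optimizer memory of Adam (which also stores a second moment).
    """
    update = np.sign(beta1 * m + (1.0 - beta1) * grad)  # updates are +/-1 per element
    w = w - lr * (update + wd * w)                      # decoupled weight decay
    m = beta2 * m + (1.0 - beta2) * grad                # momentum tracked with beta2
    return w, m

# Toy usage: minimize f(w) = w^2 starting from w = 1.0.
w = np.array([1.0])
m = np.zeros(1)
for _ in range(1000):
    w, m = lion_update(w, 2.0 * w, m, lr=1e-3)
print(float(w[0]))  # close to 0
```

Because the update is a sign, every element moves by exactly the learning rate per step, which is why the paper recommends a smaller learning rate than Adam's.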

Prune the constant branches

The nnabla-converter supports various model conversions, and the TFLite converter receives significant improvements in this release!

# How to install nnabla-converter
pip install nnabla nnabla-converter

# convert nnp to tflite
nnabla_cli convert -b 1 input.nnp output.tflite

# convert nnp to tensorflow, then remove redundant operators on tensorflow model
nnabla_cli convert input.nnp output.pb
nnabla_cli optimize output.pb improved.pb
  1. Precompute the constant branches
    Android GPU delegates for TFLite don’t accept operators with two constant inputs. We improved the TFLite converter to eliminate such branches by precomputing them.

  2. Remove Split and Slice operators if the input shape and output shape are the same
    In previous versions, the converted TFLite model contained redundant Split and Slice operators that simply passed their input through. Eliminating these operators makes the model more compact and improves performance.

  3. Replace operators for quantized model conversion
    Quantized TFLite supports only a subset of the full TFLite operator set, and support status varies across backends and versions. Operators that are unsupported in quantized models are now replaced with equivalent supported operators during conversion.

  4. Support TRANSPOSE_CONV operator for quantized model
    TRANSPOSE_CONV seems to be supported on the Android GPU delegate for TFLite, so the TFLite converter now optionally supports this operator for quantized models.

  5. Improve TensorFlow to TensorFlow optimization to remove redundant Transpose
    The “optimize” subcommand, which removes redundant Transpose operators, now also supports TensorFlow v2.x models.
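Conceptually, optimizations 1 and 2 above are graph-rewriting passes. The toy sketch below folds a node whose inputs are all constants and drops a Slice whose output shape equals its input shape; the `Node` class and the pass are purely illustrative, not the converter’s internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal stand-in for a graph operator (illustrative only)."""
    op: str                                  # e.g. "Const", "Add", "Slice", "Relu"
    inputs: list = field(default_factory=list)
    value: object = None                     # constant payload for "Const" nodes
    shape: tuple = ()                        # output shape of this node

def optimize(node):
    """Bottom-up rewrite: fold constant branches, drop identity Slice/Split."""
    node.inputs = [optimize(i) for i in node.inputs]
    # 1. Precompute a branch whose inputs are all constants.
    if node.op == "Add" and all(i.op == "Const" for i in node.inputs):
        a, b = node.inputs
        return Node("Const", value=a.value + b.value, shape=node.shape)
    # 2. Remove a Slice/Split whose output shape equals its input shape.
    if node.op in ("Slice", "Split") and node.shape == node.inputs[0].shape:
        return node.inputs[0]
    return node

# Relu(Slice(Add(Const 2, Const 3))) collapses to Relu(Const 5):
g = Node("Relu",
         [Node("Slice",
               [Node("Add",
                     [Node("Const", value=2, shape=(1,)),
                      Node("Const", value=3, shape=(1,))],
                     shape=(1,))],
               shape=(1,))],
         shape=(1,))
opt = optimize(g)
print(opt.op, opt.inputs[0].op, opt.inputs[0].value)  # Relu Const 5
```

Running these rewrites to a fixed point is what shrinks the exported model: once a constant subgraph is folded, delegates that reject two-constant-input operators no longer see it at all.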




Format Converter