Released Neural Network Libraries v1.21.0!

Thursday, September 09, 2021


Posted by shin

We have released Neural Network Libraries v1.21.0!

Various updates have been made with regard to XAI (eXplainable AI) and Fairness!

We have also made some important changes, such as making the inplace option obsolete, adding a quantized tflite converter, optimizing PF.sync_batch_normalization, and many more!


Various XAI/Fairness updates


[XAI] SHAP

We have added SHAP (SHapley Additive exPlanations), an approach that explains the output of any machine learning model using game theory. It links optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.
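As a sketch of the underlying idea, here is a minimal, exact Shapley-value computation for a tiny model. This is hypothetical illustration code, not the library's SHAP API; features outside a coalition are replaced by a baseline value.

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    Features not in the coalition are set to the baseline value."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(len(S))
                          * math.factorial(n - len(S) - 1)
                          / math.factorial(n))
                x_with = baseline.copy()
                x_without = baseline.copy()
                for j in S:
                    x_with[j] = x[j]
                    x_without[j] = x[j]
                x_with[i] = x[i]  # marginal contribution of feature i
                phi[i] += weight * (f(x_with) - f(x_without))
    return phi

# Toy linear model: Shapley values are simply coefficient * feature value
f = lambda v: 2.0 * v[0] + 3.0 * v[1]
x = np.array([1.0, 2.0])
baseline = np.zeros(2)
phi = shapley_values(f, x, baseline)
print(phi)  # [2. 6.]
```

By the efficiency property, the values sum to f(x) - f(baseline); exact enumeration is exponential in the number of features, which is why SHAP relies on approximations in practice.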

[XAI] Influence Functions

We have also added influence functions, which can be used for data cleansing, based on the approach of "Understanding Black-box Predictions via Influence Functions".

[Fairness] Gender Bias Mitigation

We have added a Colab interactive demo for the introduction-to-fairness-workflow tutorial. In this tutorial, we give a gentle introduction to gender bias detection and mitigation for enthusiasts of Responsible AI. There are many ways to detect and mitigate bias; this tutorial illustrates one simple method to detect bias and mitigate it with the reweighing algorithm.

  • Interactive demo
Name | Notebook | Task
Introduction of Fairness Workflow Tutorial | Open In Colab | Dataset/Model Bias Check and Mitigation by Reweighing
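To make the reweighing idea concrete, here is a minimal numpy sketch (a hypothetical helper, not the tutorial's code) of the Kamiran–Calders weights w(a, y) = P(A=a) P(Y=y) / P(A=a, Y=y), which make the sensitive attribute independent of the label under the weighted distribution.

```python
import numpy as np

def reweighing_weights(sensitive, labels):
    """Per-sample weights that decorrelate the sensitive attribute
    from the label (Kamiran & Calders reweighing)."""
    sensitive = np.asarray(sensitive)
    labels = np.asarray(labels)
    w = np.empty(len(labels), dtype=float)
    for a in np.unique(sensitive):
        for y in np.unique(labels):
            cell = (sensitive == a) & (labels == y)
            if cell.any():
                # expected joint probability if A and Y were independent
                p_expected = (sensitive == a).mean() * (labels == y).mean()
                p_observed = cell.mean()
                w[cell] = p_expected / p_observed
    return w

# Toy data: group 0 gets the positive label more often than group 1
sensitive = np.array([0, 0, 0, 1, 1, 1])
labels = np.array([1, 1, 0, 1, 0, 0])
w = reweighing_weights(sensitive, labels)
# Under weights w, the label distribution is the same in both groups
```

Over-represented (group, label) combinations receive weights below 1 and under-represented ones above 1, so the total weight still sums to the sample count.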

[Fairness] Facial evaluation for skin color

We have also added a Colab interactive demo of facial evaluation for skin color using the Individual Typology Angle (ITA), which represents skin color.

Figure: Individual Typology Angle (ITA) scores for different faces (Masked Images version)

  • Interactive demo
Name | Notebook | Task
Skin Color (Masked Images) | Open In Colab | Facial evaluation for skin color
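The ITA itself is computed from the CIELAB L* (lightness) and b* (yellow-blue) channels. A minimal sketch follows; the category thresholds are the commonly used Del Bino scale and are an assumption here, not taken from the demo.

```python
import numpy as np

def ita_degrees(L, b):
    """Individual Typology Angle: arctan((L* - 50) / b*) in degrees.
    arctan2 is used so b* = 0 does not divide by zero."""
    return np.degrees(np.arctan2(L - 50.0, b))

def ita_category(ita):
    # Assumed thresholds (Del Bino skin-color classification)
    if ita > 55:
        return "very light"
    if ita > 41:
        return "light"
    if ita > 28:
        return "intermediate"
    if ita > 10:
        return "tan"
    if ita > -30:
        return "brown"
    return "dark"

ita = ita_degrees(70.0, 15.0)
print(ita, ita_category(ita))  # ~53.13 degrees -> "light"
```

Higher ITA corresponds to lighter skin, which is what makes it usable as a simple per-face fairness attribute.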

Make the inplace option obsolete in most Function operations / GPU

In-place operations such as F.add_scalar(x, y, inplace=True) no longer perform computation in-place; with this change, the inplace=True option is simply ignored.

Add a quantized tflite converter to export nnp to int8 tflite

We have optimized our tflite converter and added a new quantized tflite converter that exports NNP models to int8 tflite. We have also fixed an autopep8 encoding error.

Optimize PF.sync_batch_normalization

We have optimized the implementation of PF.sync_batch_normalization, which synchronizes the statistics computed between the GPUs during multi-GPU distributed training! Compared to the previous cuDNN implementation, it is up to 42 times faster for forward computation and up to 110 times faster for backward computation.
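The idea behind synchronized batch normalization can be sketched in numpy: each device computes local sums and squared sums, which are all-reduced so every device normalizes with the same global mean and variance. This is a conceptual sketch, not the actual CUDA implementation.

```python
import numpy as np

def sync_batch_norm_stats(shards):
    """Global batch statistics from per-device shards.
    shards: list of arrays of shape (local_batch, features)."""
    n = sum(s.shape[0] for s in shards)
    # In real distributed training these two sums are all-reduced across GPUs
    total = sum(s.sum(axis=0) for s in shards)
    total_sq = sum((s ** 2).sum(axis=0) for s in shards)
    mean = total / n
    var = total_sq / n - mean ** 2  # E[x^2] - E[x]^2
    return mean, var

rng = np.random.default_rng(0)
shards = [rng.normal(size=(8, 4)) for _ in range(4)]  # 4 "devices"
mean, var = sync_batch_norm_stats(shards)

full = np.concatenate(shards)
# mean/var match the statistics of the full global batch
```

Reducing sums and squared sums (rather than per-device means and variances) is what lets a single all-reduce produce exact global statistics.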

Boolean Indexing Functions / GPU

We have added boolean indexing functions BoolGather, BoolScatter, and BoolFill.

The first two are forward/backward correspondences and are typically used as follows:

import numpy as np
import nnabla as nn
import nnabla.functions as F


input0 = nn.Variable.from_numpy_array(np.array([[1, 2], [3, 4], [5, 6]]))
mask = nn.Variable.from_numpy_array(np.array([1, 0, 1]))

# Gather the rows where mask is 1
output0 = F.bool_gather(input0, mask)

input1 = output0 + 10  # do whatever with the reduced array

# Scatter the rows back to their original positions; masked-out rows become 0
output1 = F.bool_scatter(input1, mask)

output1.forward()
print(output1.d)  # [[11, 12], [0, 0], [15, 16]]
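For reference, the same semantics can be reproduced in plain numpy (our reading of the behavior shown above: boolean row selection, then scattering back with zero fill):

```python
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])
mask = np.array([1, 0, 1]).astype(bool)

gathered = x[mask]            # bool_gather: keeps rows where mask is 1
reduced = gathered + 10       # work on the reduced array

scattered = np.zeros_like(x)  # bool_scatter: masked-out rows stay 0
scattered[mask] = reduced
print(scattered)  # [[11 12] [ 0  0] [15 16]]
```

The advantage of the dedicated functions is that the gather/scatter pair is differentiable, so gradients flow through the reduced array during backpropagation.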

BoolFill can be used in-place as follows:

import numpy as np
import nnabla as nn
import nnabla.functions as F


input = nn.Variable.from_numpy_array(np.array([[np.inf, 2], [3, np.nan]]))
mask = nn.Variable.from_numpy_array(np.array([[1, 0], [0, 1]]))

# Fill the entries where mask is 1 with the given value, in-place
output = input.bool_fill(mask, 0)

output.forward()
print(output.d)  # inf/nan are replaced with 0, and input.d == output.d
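Again for reference, the equivalent out-of-place operation in plain numpy (our reading of the BoolFill semantics above) is a masked select:

```python
import numpy as np

x = np.array([[np.inf, 2.0], [3.0, np.nan]])
mask = np.array([[1, 0], [0, 1]])

# Replace entries where mask is 1 with the fill value
filled = np.where(mask.astype(bool), 0.0, x)
print(filled)  # [[0. 2.] [3. 0.]]
```

A typical use is exactly the one shown: sanitizing inf/nan entries before they propagate through subsequent computations.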



Format Converter



C Runtime