
Released Neural Network Libraries v1.26.0!

Friday, March 18, 2022

Release

Posted by Takuya Yashima

We have released Neural Network Libraries v1.26.0! We have prepared tutorial sessions on fairness in AI and have implemented inference code for CLIP! Also, nnabla-rl has been updated!

Spotlight

nnabla-rl v0.11.0

nnabla-rl v0.11.0 has been released! In v0.11.0, we added the latest deep RL algorithms, such as Average TRPO and MME-SAC, as well as RNN layer support, expanded n-step Q learning support, and various convenient functions!
Several bug fixes are also included in v0.11.0.

Download and try nnabla-rl with:

$ pip install nnabla-rl


Also check the nnabla-rl release notes for details.
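As a quick taste, here is a minimal training sketch following the pattern of the nnabla-rl README. The environment name, the default DQN configuration, and the train() call are assumptions based on that pattern and may differ between versions; consult the nnabla-rl documentation for the exact API.

import nnabla_rl.algorithms as A
from nnabla_rl.utils.reproductions import build_atari_env

# Build a preprocessed Atari environment (frame stacking, reward clipping, etc.).
# The environment name is a placeholder example.
env = build_atari_env('BreakoutNoFrameskip-v4')

# Instantiate DQN with its default (Atari-oriented) configuration.
dqn = A.DQN(env)

# Train online against the environment.
dqn.train(env)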

Fairness Colab Demos

Accounting for fairness is an increasingly important topic in AI, yet few people are familiar with the concept. We have prepared a series of tutorial sessions to give a flavor of how fairness is approached in AI; a toy metric computation is also sketched after the table below.

Name | Notebook | Task
Fairness Metrics Tutorial | Open In Colab | Dataset/Model Bias Check
Fairness Pre-processing Tutorial | Open In Colab | Dataset/Model Bias Check and Mitigation by Reweighing
Fairness In-processing Tutorial | Open In Colab | Model Bias Check and Mitigation by Adversarial Debiasing
Fairness Post-processing Tutorial | Open In Colab | Prediction Bias Check and Mitigation by ROC
Skin Color (Masked Images) | Open In Colab | Facial evaluation for skin color
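To give a concrete flavor of the kind of bias check covered in the metrics tutorial, here is a toy sketch (not code from the notebooks) that computes two common group-fairness metrics, demographic parity difference and disparate impact, with plain NumPy. The predictions and protected attribute below are fabricated for illustration.

import numpy as np

# Binary predictions and a binary protected attribute (0 = unprivileged group,
# 1 = privileged group). Both arrays are made up for this example.
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Rate of favorable outcomes (prediction == 1) per group.
rate_unpriv = y_pred[group == 0].mean()  # 0.6
rate_priv = y_pred[group == 1].mean()    # 0.8

# Demographic parity difference: 0 means parity under this metric.
dp_diff = rate_unpriv - rate_priv  # -0.2

# Disparate impact: ratio of favorable rates; 1 means parity, and
# values below 0.8 are often flagged (the "80% rule").
di = rate_unpriv / rate_priv  # 0.75

print(f'demographic parity difference: {dp_diff:.2f}')
print(f'disparate impact: {di:.2f}')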

CLIP Inference Implementation

We have implemented inference code for CLIP (Contrastive Language-Image Pre-training), a model that learns visual concepts from natural-language supervision rather than conventional labels. CLIP has been shown to match the performance of GPT-3 and ResNet50 in zero-shot recognition tasks.
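Conceptually, CLIP scores an image against a set of text prompts (e.g. 'a photo of a {label}') by cosine similarity between their embeddings. The sketch below illustrates only that scoring step with NumPy; encode_image and encode_text are hypothetical stand-ins for the released encoders, not the actual nnabla API.

import numpy as np

def zero_shot_probs(image_feat, text_feats, scale=100.0):
    # L2-normalize embeddings so the dot product is cosine similarity.
    image_feat = image_feat / np.linalg.norm(image_feat)
    text_feats = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    # Scaled similarities, turned into class probabilities via softmax.
    logits = scale * (text_feats @ image_feat)
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical usage: one image embedding vs. one text embedding per prompt.
# probs = zero_shot_probs(encode_image(img), encode_text(prompts))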


Add computation graph active inputs handling (CPU/GPU)

We have added the concept of (in)active inputs to computation graph processing. In an existing computation graph, this makes it possible to conditionally exclude selected function inputs from graph processing, effectively disabling computation for the sub-graph leading to the inactive input. Functions supporting this feature can be configured with set_active_input_mask(List[bool]). Currently, the supported functions are F.add_n and F.mul_n.
Example usage is shown below.

import numpy as np
import nnabla as nn
import nnabla.functions as F

input_shape = (2, 3, 4)  # shape of each input Variable
n_inputs = 4  # total number of input Variables
n_active = 3  # number of active input Variables

rng = np.random.RandomState()
inputs = [rng.randn(*input_shape).astype('f4') for _ in range(n_inputs)]
# Generate a boolean array indicating which inputs are active,
# e.g. array([ True,  True, False,  True])
active = np.random.permutation(n_inputs) < n_active

# Build a graph using all the input Variables.
y = F.add_n(*[nn.Variable.from_numpy_array(inp).apply(need_grad=True)
              for inp in inputs])
# Access the parent function (F.add_n) via y.parent and set the mask;
# inactive inputs are excluded from the computation.
y.parent.set_active_input_mask(active)

# For reference, build a graph that explicitly excludes the inactive inputs.
y_ref = F.add_n(*[nn.Variable.from_numpy_array(inp).apply(need_grad=True)
                  for (act, inp) in zip(active, inputs) if act])

y.forward()
y_ref.forward()
np.allclose(y.d, y_ref.d)  # returns True

Drop support for Python 3.6 and CUDA-cuDNN 10.0/7 (CPU/GPU)

Please note that we have dropped support for Python 3.6, CUDA 10.0, and cuDNN 7.

Utilities

Build

Core Functionalities

Layers