This week, we released the nnabla v1.0.12 packages! The main features updated since the previous version are described below:
Recurrent networks can now be trained on sequences more easily and faster with the N-Step RNN/LSTM/GRU functions powered by cuDNN. These are expected to be several times faster than our previous implementation of recurrent networks.
```python
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

# Example dimensions (arbitrary values, for illustration only)
seq_len, batch_size, input_size = 10, 8, 16
num_layers, num_directions, hidden_size = 1, 1, 32

x = nn.Variable((seq_len, batch_size, input_size))
h = nn.Variable((num_layers, num_directions, batch_size, hidden_size))
c = nn.Variable((num_layers, num_directions, batch_size, hidden_size))
y, hn, cn = PF.lstm(x, h, c)
```
N-Step recurrent models are currently available for GPUs only; CPU implementations are in progress.
Important Bug Fix
The previous release, v1.0.11, introduced forward_all, which turned out to cause a memory leak. We noticed the problem and fixed it immediately. If you have upgraded to v1.0.11, make sure to upgrade to v1.0.12 to avoid the leak.
- Fixed the learning rate update interval when using multiple GPUs in the CLI
- Fixed the profiler for Python 2 compatibility