
Released Neural Network Libraries v1.25.0!

Thursday, January 27, 2022

Release

Posted by shin

We have released Neural Network Libraries v1.25.0! A fine-tuning script for BERT is now available!

NOTE: we have reorganized the directory structure of the nnabla-examples repository so that all top-level directories are now task names. If you have been using a particular model or are accustomed to the previous structure, please make sure to check its new path.

Spotlight

BERT Fine-Tuning

BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based language model that has been a highly influential architecture across a wide range of domains, including natural language processing, speech recognition, and multimodal understanding. Countless variations have been proposed on top of BERT for a variety of tasks, including RoBERTa, ViLBERT, and TinyBERT, to name just a few, testifying to its wide applicability and its significance as a milestone model.

Unlike the earlier language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models.
For the nnabla version of BERT fine-tuning, we have converted the weights provided by the authors to nnabla parameters, which can be downloaded directly, but users can also refer to the conversion code to better understand how the conversion was actually carried out. The script can run on all tasks of the GLUE benchmark and achieves roughly the same scores as those reported in the paper.
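To illustrate the idea of fine-tuning with just one additional output layer, here is a minimal sketch in nnabla of attaching a classification head to a pre-trained BERT encoder for a GLUE-style task. This is not the released script: the bert_encoder function below is only a self-contained stand-in for the actual model definition in nnabla-examples, and the vocabulary size, hidden size, and learning rate are placeholder values, so please refer to the repository for the real implementation.

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

batch_size, seq_len, num_classes = 32, 128, 2

def bert_encoder(input_ids):
    # Self-contained stand-in for the BERT encoder defined in nnabla-examples
    # (which also takes segment ids and an attention mask): a token embedding
    # followed by mean pooling over the sequence. The real encoder instead
    # returns the pooled [CLS] representation.
    with nn.parameter_scope("bert"):
        h = PF.embed(input_ids, n_inputs=30522, n_features=768)
        return F.mean(h, axis=1)

# Input variables for one mini-batch of a sentence(-pair) classification task.
input_ids = nn.Variable((batch_size, seq_len))
labels = nn.Variable((batch_size, 1))

# In practice, the converted pre-trained weights downloaded as described above
# would be restored here with nn.load_parameters(<path to the converted file>).

pooled = bert_encoder(input_ids)

# The single additional output layer used for fine-tuning.
with nn.parameter_scope("classifier"):
    logits = PF.affine(pooled, num_classes)

loss = F.mean(F.softmax_cross_entropy(logits, labels))

# Fine-tune every parameter: the pre-trained encoder and the new head alike.
solver = S.Adam(alpha=2e-5)
solver.set_parameters(nn.get_parameters())

# One training step (data loading and feeding omitted).
loss.forward()
solver.zero_grad()
loss.backward()
solver.update()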

We will shortly be releasing the pre-training code for BERT as well, so stay tuned!

Build

Bugfix

Layers

Documentation

Examples