Open-Unmix: music source separation with deep neural networks — a joint research project of Inria and Sony.
Open-Unmix is available for Neural Network Libraries (NNabla), as well as PyTorch and TensorFlow/Keras.
https://open.unmix.app/paper.pdf
https://github.com/sigsep/open-unmix-nnabla
Music source separation is the task of decomposing music into its constituent components, e.g., yielding separated stems for the vocals, bass, and drums. Such a separation has many applications, ranging from rearranging/repurposing the stems (remixing, repanning, upmixing) to full extraction (karaoke, sample creation, audio restoration).
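To illustrate the core idea behind this family of separation systems (this is a hedged, self-contained sketch, not Open-Unmix code): a network predicts a magnitude estimate per source, and a soft Wiener-like mask built from those estimates redistributes the mixture spectrogram among the sources. Here the network outputs are stand-ins drawn at random, and the mixture is assumed additive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitude "spectrograms" (freq_bins x frames) for two sources.
# In a real system these would come from STFTs of the audio signals.
vocals = rng.random((4, 3))
drums = rng.random((4, 3))
mixture = vocals + drums  # assume additive mixing

# Stand-ins for per-source network magnitude estimates.
estimates = {"vocals": vocals, "drums": drums}

# Soft mask: each source's share of the total estimated energy,
# applied to the mixture (small epsilon avoids division by zero).
denom = sum(estimates.values()) + 1e-9
separated = {name: (est / denom) * mixture for name, est in estimates.items()}

# The masks sum to ~1, so the separated stems reconstruct the mixture.
reconstruction = sum(separated.values())
assert np.allclose(reconstruction, mixture, atol=1e-6)
```

Because the masks are normalized to sum to one at every time-frequency bin, the separated stems always add back up to the original mixture, a property commonly exploited by mask-based separators.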
Music separation has a long history of scientific activity, as it is known to be a very challenging problem. In recent years, deep-learning-based systems – for the first time – yielded high-quality separations that also led to increased commercial interest. However, until now, no open-source implementation achieving state-of-the-art results has been available.
Open-Unmix closes this gap by providing a reference implementation based on deep neural networks. It serves two main purposes: first, to accelerate academic research, as Open-Unmix provides implementations for the most popular deep learning frameworks, giving researchers a flexible way to reproduce results; second, to offer a pre-trained model that end users and even artists can try out for source separation. Furthermore, we designed Open-Unmix to be one core component of an open ecosystem for music separation, in which we already provide open datasets, software utilities, and open evaluation tools to foster reproducible research as the basis of future development.