Fork of deepsound-project/samplernn-pytorch, "A PyTorch implementation of SampleRNN: An Unconditional End-to-End Neural Audio Generation Model".
- Uses `pytorch==1.5.1`, `torchvision==0.6.1`, `cudatoolkit=10.2`
- Docker ready
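If you are not using the Docker image, the pinned versions above can be installed with conda; this is a sketch of the standard PyTorch install command for those versions (adjust the CUDA toolkit version to match your GPU driver):

```shell
# Install the pinned PyTorch stack from the official pytorch channel.
conda install pytorch==1.5.1 torchvision==0.6.1 cudatoolkit=10.2 -c pytorch
```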
Prepare a dataset yourself: it should be a directory in `datasets/` filled with equal-length wav files. Alternatively, you can create your own dataset format by subclassing `torch.utils.data.Dataset`; it's easy — see `dataset.FolderDataset` in this repo for an example.
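A minimal sketch of such a subclass, assuming a directory of equal-length 16-bit mono wav files (the class name `CustomWavDataset` is illustrative and is not the repo's actual `dataset.FolderDataset` API):

```python
import os
import wave

import numpy as np
import torch
from torch.utils.data import Dataset


class CustomWavDataset(Dataset):
    """Illustrative dataset: loads equal-length 16-bit mono wav files."""

    def __init__(self, directory):
        # Collect wav paths in a deterministic order.
        self.paths = sorted(
            os.path.join(directory, f)
            for f in os.listdir(directory)
            if f.endswith(".wav")
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        # Read raw PCM frames with the stdlib wave module.
        with wave.open(self.paths[index], "rb") as w:
            frames = w.readframes(w.getnframes())
        # Convert 16-bit PCM to a float tensor in [-1, 1].
        samples = np.frombuffer(frames, dtype=np.int16).astype(np.float32)
        return torch.from_numpy(samples / 32768.0)
```

Any such subclass only needs `__len__` and `__getitem__` to work with PyTorch's `DataLoader`.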
The results (training log, loss plots, model checkpoints, and generated samples) will be saved in `results/`.
Continue the work of: