This is a PyTorch implementation of WaveRNN. Requirements:
- Python 3.6 or newer
- PyTorch with CUDA enabled
First, set the audio parameters in `utils/audio.py`. In particular, you should set `sample_rate`, `hop_length`, and `win_length` to match your data.
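The README names only `sample_rate`, `hop_length`, and `win_length`; a hypothetical fragment of such a config might look like the following (the concrete values are assumptions, not from the source):

```python
# utils/audio.py -- illustrative values only; the parameter names come from
# the README, the concrete numbers are assumptions.
sample_rate = 22050   # sampling rate of the training wavs (Hz)
hop_length = 275      # frame shift in samples (~12.5 ms at 22.05 kHz)
win_length = 1100     # analysis window length in samples (~50 ms at 22.05 kHz)
```

`hop_length` and `win_length` are in samples, so they should be rescaled if `sample_rate` changes.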
Then preprocess the data:

$ python process.py --wav_dir='wavs' --output='data'
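What `process.py` extracts is not specified here; WaveRNN-style pipelines commonly quantize the raw waveform (for example with mu-law companding) alongside mel features. A minimal, hypothetical mu-law quantizer sketch in pure Python, assuming 8-bit classes:

```python
import math

def mulaw_encode(x, mu=255):
    """Map a sample in [-1, 1] to an integer class in [0, mu].

    Standard mu-law companding followed by uniform quantization;
    mu=255 gives 8-bit classes (an assumption, not from the source).
    """
    fx = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return int((fx + 1) / 2 * mu + 0.5)

def mulaw_decode(y, mu=255):
    """Invert mulaw_encode back to an approximate sample in [-1, 1]."""
    fx = 2 * y / mu - 1
    return math.copysign(math.expm1(abs(fx) * math.log1p(mu)) / mu, fx)
```

Mu-law spends more quantization levels near zero, where speech samples concentrate, which is why it is preferred over uniform quantization at low bit depths.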
For training, `train.py` is the entry point:
$ python train.py
Trained models are saved under the `logdir` directory.
For generation, `generate.py` is the entry point:
$ python generate.py --resume="ema_logdir"
Generated audio files are saved under the `out` directory.
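The checkpoint name `ema_logdir` suggests that generation resumes from an exponential-moving-average copy of the weights, a common trick for stabilizing WaveRNN output (an assumption; the source does not explain the name). A minimal sketch of EMA weight tracking, shown over plain floats:

```python
class EMA:
    """Exponential moving average of a parameter dict.

    Plain floats here for clarity; in PyTorch the same update would run
    over the tensors in model.state_dict() after each optimizer step.
    """

    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = dict(params)  # copy of the initial parameters

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current
        for name, value in params.items():
            self.shadow[name] = (self.decay * self.shadow[name]
                                 + (1 - self.decay) * value)
```

At generation time the smoothed `shadow` values would be loaded into the model instead of the raw trained weights.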