
Releases: NVIDIA/NeMo

NVIDIA Neural Modules 1.5.0

20 Nov 01:55

Features

  • Megatron GPT pre-training with tensor model parallelism #2975
  • NMT encoder and decoder with different hidden size #2856
  • Logging timing of train/val/test steps #2936
  • Logging NMT encoder and decoder timing #2956
  • Logging timing per sentence length and tokenized text statistics #3004
  • Upgrade to PyTorch Lightning 1.5.0 with bfloat16 support #2975
  • French Inverse Text Normalization #2921
  • Bucketing of tarred datasets for ASR models #2999
  • ASR with diarization #3007
  • Parallel transcription for ASR models with multi-GPU/multi-node support #3017 (see the sketch below)
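
The multi-GPU/multi-node path is driven by a script in the repository; for reference only, the snippet below is a minimal single-process sketch of the standard transcribe() API, not the parallel feature itself. The pretrained model name and audio paths are illustrative placeholders, not part of this release note.

import nemo.collections.asr as nemo_asr

# Load a pretrained CTC ASR model from NGC (illustrative checkpoint name).
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")

# Transcribe a small batch of audio files (illustrative paths).
transcripts = asr_model.transcribe(paths2audio_files=["audio1.wav", "audio2.wav"], batch_size=2)
print(transcripts)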

Documentation Updates

  • RNNT

Contributors

@ericharper @michalivne @MaximumEntropy @VahidooX @titu1994 @blisc @okuchaiev @tango4j @erastorgueva-nv @fayejf @vadam5 @ekmb @yaoyu-33 @nithinraok @erhoo82 @tbartley94 @PeganovAnton @madhukarkm @yzhang123
(Please let us know if you have contributed to this release and we have missed you here.)

NVIDIA Neural Modules 1.4.0

02 Oct 00:49

Features

  • Improved speaker clustering #2729
  • Upgrade to NVIDIA PyTorch 21.08 container #2799
  • RNNT mAES beam search support #2802
  • Transfer learning for new speakers #2684
  • Simplify speaker scripts #2777
  • Perceiver-encoder architecture #2737
  • Relative paths in tarred datasets #2776
  • Torch-only TTS package #2643
  • Inverse text normalization for Spanish #2489

Tutorial Notebooks

  • Duration and pitch control for TTS #2700

Bug fixes

Contributors

@tango4j @titu1994 @paarthneekhara @nithinraok @michalivne @erastorgueva-nv @borisfom @blisc
(some contributors may not be listed explicitly)

NVIDIA Neural Modules 1.3.0

27 Aug 21:24

Added

  • RNNT Exportable to ONNX #2510
  • Multi-batch inference support for speaker diarization #2522
  • DALI Integration for char/subword ASR #2567
  • VAD Postprocessing #2636
  • Perceiver encoder for NMT #2621
  • gRPC NMT server #2656
  • German ITN #2486
  • Russian TN and ITN #2519
  • Save/restore connector #2592
  • Support for PyTorch Lightning 1.4+ #2600

Tutorial Notebooks

  • Non-English downstream NLP task #2532
  • RNNT Basics #2651

Bug Fixes

  • NMESC clustering for very small audio files #2566

Contributors

@pasandi20 @ekmb @nithinraok @titu1994 @ryanleary @yzhang123 @ericharper @michalivne @MaximumEntropy @fayejf
(some contributors may not be listed explicitly)

NVIDIA Neural Modules 1.2.0

30 Jul 20:05

Added

  • Improve performance of speaker clustering (#2445)
  • Update Conformer for ONNX conversion (#2439)
  • Mean and length normalization of embeddings for better speaker verification and diarization (#2397)
  • FastEmit support in the Numba RNNT loss for reduced latency (#2374)
  • Multiple datasets, right to left models, noisy channel re-ranking, ensembling for NMT (#2379)
  • Byte level tokenization (#2365)
  • Bottleneck with attention bridge for more efficient NMT training (#2390)
  • Tutorial notebook for NMT data cleaning and preprocessing (#2467)
  • Streaming Conformer inference script for long audio files (#2373)
  • Res2Net (ECAPA-equivalent) implementation for speaker verification and diarization (#2468)
  • Update end-to-end tutorial notebook to use CitriNet (#2457)

Contributors

@nithinraok @tango4j @jbalam-nv @titu1994 @MaximumEntropy @mchrzanowski @michalivne @fayejf @okuchaiev

(some contributors may not be listed explicitly)

Known Issues

  • import nemo.collections.nlp as nemo_nlp will result in an error. This will be patched in the upcoming version. As a workaround, import the individual modules you need directly (see the sketch below).
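
A minimal sketch of the workaround, assuming the failure is limited to part of the collection so that importing only the module you need avoids it; the specific model class and checkpoint name chosen here are illustrative, not prescribed by the release note.

# Workaround sketch: import the individual module that defines the model you need,
# instead of the whole nemo.collections.nlp collection (illustrative choice of model).
from nemo.collections.nlp.models.token_classification.punctuation_capitalization_model import (
    PunctuationCapitalizationModel,
)

model = PunctuationCapitalizationModel.from_pretrained("punctuation_en_bert")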

NVIDIA Neural Modules 1.1.0

02 Jul 21:51

NeMo 1.1.0 is the first release in our new monthly release cadence. Monthly releases will focus on adding new features that enable new NeMo models or improve existing ones.

Added

  • Pretrained Megatron-LM encoders (including model parallel) for NMT (#2238)
  • RNNT Numba loss (#1995)
  • Enable multiple models to be restored (#2245)
  • Audio based text normalization (#2285)
  • Multilingual NMT (#2160)
  • FastPitch export (#2355)
  • ASR fine-tuning tutorial for other languages (#2346)

Bugfixes

  • HiFiGan Export (#2279)
  • OmegaConf forward compatibility (#2319)

Documentation

  • ONNX export documentation (#2330)

Contributors

@borisfom @MaximumEntropy @ericharper @aklife97 @titu1994 @ekmb @yzhang123 @blisc

(some contributors may not be listed explicitly)

NVIDIA Neural Modules 1.0.2

11 Jun 01:45

Release 1.0.2

NeMo 1.0.2 is a minor change over 1.0.0 adding version checks for the Hydra dependency.

NVIDIA Neural Modules 1.0.1

09 Jun 05:40

Release 1.0.1

NeMo 1.0.1 is a minor change over 1.0.0 adding proper version bounds for some external dependencies.

NVIDIA Neural Modules 1.0.0

03 Jun 22:43

Release 1.0.0

NeMo 1.0.0 is the stable version of the 1.0.0 release candidate. It substantially improves overall quality and documentation. This update adds support for new tasks such as neural machine translation, along with many new models pretrained in different languages. As a mature tool for ASR and TTS, it also adds new features for text normalization and denormalization, dataset creation based on CTC-segmentation, and speech data exploration. These updates benefit researchers in academia and industry by making it easier to develop and train new conversational AI models.

To install this specific version via pip:

apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
pip install nemo-toolkit['all']==1.0.0
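
As a quick sanity check after installation (a minimal sketch; it simply prints the installed package version):

import nemo
print(nemo.__version__)  # expected to print 1.0.0 for this release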

NVIDIA Neural Modules 1.0.0rc1

07 Apr 05:55
Pre-release

Release 1.0.0rc1

This release contains major new models, features and docs improvements.
It is a "candidate" release for 1.0.0.

To install via pip:

apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
pip install nemo_toolkit['all']==1.0.0rc1

It adds the following model architectures:

  • CitriNet and Conformer-CTC for ASR
  • HiFiGan, MelGan, GlowTTS, UniGlow, and SqueezeWave for TTS

In the NLP collection, a neural machine translation (NMT) task has been added with Transformer-based models.
This release includes pre-trained NMT models for these language pairs (in both directions):

  • En<->Es
  • En<->Ru
  • En<->Zh
  • En<->De
  • En<->Fr

For the ASR task, we also added QuartzNet models trained on the following languages from Mozilla's Common Voice dataset: Zh, Ru, Es, Pl, Ca, It, Fr, and De.
In total, this release adds 60 new pre-trained models.
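
A minimal sketch of how the new checkpoints can be discovered and loaded through the from_pretrained API; the Spanish QuartzNet name below is illustrative, and the exact names are listed by the call itself.

import nemo.collections.asr as nemo_asr

# List the ASR checkpoints that can be downloaded from NGC.
for model_info in nemo_asr.models.EncDecCTCModel.list_available_models():
    print(model_info.pretrained_model_name)

# Load one of them by name (illustrative name).
model = nemo_asr.models.EncDecCTCModel.from_pretrained("stt_es_quartznet15x5")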

This release also adds new NeMo tools for:

  • Text normalization
  • Dataset Creation Tool Based on CTC-Segmentation
  • Speech Data Explorer

Known Issues

This version is not compatible with PyTorch 1.8.*. Please use PyTorch 1.7.* with it, or use our container.

NVIDIA Neural Modules 1.0.0b4

16 Feb 05:27
Pre-release

Release 1.0.0b4

This release is compatible with Jarvis and TLT public beta.
It also updates versions of many dependencies and contains minor bug fixes over 1.0.0b3.