Releases · hearbenchmark/hear-eval-kit
v2021.1.3
Minor update to URLs within the codebase and READMEs to reflect the updated organization name and the transition from the NeurIPS challenge to the HEAR Benchmark.
What's Changed
Full Changelog: v2021.1.2...v2021.1.3
v2021.1.2
Bugfix to support changes made in PyTorch Lightning 1.6
What's Changed
- Update task_predictions.py by @ltetrel in #363 - fixes an issue caused by the `test_dataloaders` parameter being removed from `Trainer.test` in PyTorch Lightning (see Lightning-AI/pytorch-lightning#10325); a call-site sketch is shown below.
- Fix event prediction model epochs by @jorshi in #364 - fixes an issue caused by `current_epoch` now being updated after the training step in PyTorch Lightning (see Lightning-AI/pytorch-lightning#8578).

`pytorch-lightning>=1.6` is now required. Tests were run to confirm that evaluation results are not affected by this update.
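For context, here is a minimal sketch of the `Trainer.test` call-site change that the first fix addresses; the toy model and data are illustrative stand-ins, not the actual heareval prediction code.

```python
# Minimal sketch of the Trainer.test() call-site change behind #363, using a toy
# model and dataset; illustrative only, not the actual task_predictions.py code.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def test_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("test_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


test_loader = DataLoader(TensorDataset(torch.randn(32, 8), torch.randn(32, 1)), batch_size=8)
trainer = pl.Trainer(logger=False, enable_checkpointing=False)

# Pre-1.6 keyword, removed in PyTorch Lightning 1.6 (now raises a TypeError):
#   trainer.test(ToyModel(), test_dataloaders=test_loader)

# PyTorch Lightning >= 1.6: loaders are passed via the generic `dataloaders` argument.
trainer.test(ToyModel(), dataloaders=test_loader)
```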
Full Changelog: v2021.1.1...v2021.1.2
v2021.1.1
- Many README updates
- `heareval.embedding` race condition bugfix:
If `heareval.embedding` was running twice simultaneously for the same (model, task), the second job could remove embeddings and crash, and the first job would then run prediction over truncated embeddings. Now the first job crashes before embedding completes, and if `heareval.embedding` is run again it simply starts from scratch. (A hypothetical sketch of this class of guard follows below.)
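To illustrate the kind of guard involved, here is a hypothetical completion-marker sketch; the directory layout, marker file, and helper names are invented for illustration and are not the actual `heareval.embedding` implementation.

```python
# Hypothetical completion-marker guard against two concurrent embedding jobs for the
# same (model, task); names and layout are invented, not the actual heareval code.
import sys
from pathlib import Path


def compute_embeddings(embed_dir: Path) -> None:
    """Placeholder for the real embedding computation."""
    (embed_dir / "embeddings.npy").touch()


def run_embedding_job(embed_dir: Path) -> None:
    done_marker = embed_dir / ".done"
    if done_marker.exists():
        return  # Embeddings already finished; nothing to do.
    if embed_dir.exists():
        # A directory without the marker means another job is (or was) writing here:
        # crash loudly instead of producing or consuming truncated embeddings.
        sys.exit(f"{embed_dir} exists but is incomplete; remove it and rerun.")
    embed_dir.mkdir(parents=True)
    compute_embeddings(embed_dir)
    done_marker.touch()  # Mark completion only after all embeddings are written.


run_embedding_job(Path("embeddings/example-model/example-task"))
```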
v2021.1.0
v2021.0.5
- Downstream training on open tasks, as shown in the leaderboard
- Detailed instructions on how to run downstream training (an example invocation is sketched below)
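As a hedged pointer to those instructions, the commands below show roughly what a downstream run looks like; the module paths, model name, and flags are assumptions based on the project README and should be checked against the current documentation.

```
# Assumed invocation; verify flags and paths against the hear-eval-kit README.
# 1. Compute embeddings for a model over the downloaded tasks.
python3 -m heareval.embeddings.runner hearbaseline --model ./naive_baseline.pt --tasks-dir ./tasks
# 2. Train and evaluate downstream predictors on those embeddings.
python3 -m heareval.predictions.runner embeddings/hearbaseline/*
```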