New Benchmarks
Zero Resource Speech Benchmark
As the ZeroSpeech organizers, we are pleased to announce that ZeroSpeech (https://zerospeech.com/) has undergone long-awaited updates, which are now available for general use. We have:
- completely redone the website
- made it easier to download the datasets and evaluate the results
- created a command-line utility to run the evaluation locally and upload results/embeddings so that scores appear on the dynamic leaderboard
- created an entirely new in-house system for submitting models (to open publicly in February 2023), replacing the old submission system, which often posed technical difficulties
- reorganized the evaluation into four permanent Benchmarks (Tasks 1, 2, 3, and 4), which are no longer tied to specific challenge events.
Submissions are thus now open all year round! Tasks 1 (acoustic modelling), 2 (spoken term discovery), and 4 (language modelling) are currently open for submissions. We would also like to draw your attention to a new set of conditions in the ABX-LS benchmark designed to measure phonemic context-invariance in learned representations. Details are presented on the leaderboard (zerospeech.com/tasks/task_1/results/) and in this paper (arxiv.org/abs/2210.15775).