Zerospeech 2020
Leaderboards
The leaderboards present the results obtained by the participants on each track of the 2020 challenge, which consolidates the 2019 and 2017 tasks.
2019 Task
Results marked with a * were submitted after the deadline and have not been evaluated by human judges on MOS, CER, and similarity.
The columns can be sorted by clicking on the icon in each column header. A detailed view of each result, including audio samples of the speech synthesis, is available by clicking on the icon in each row.
The score columns are interpreted as follows (see Evaluation Metrics for more details):
- MOS:
  - mean opinion score on the speech synthesis
  - scale is $[1, 5]$, higher is better
- CER:
  - character error rate after human transcription of the speech synthesis (a minimal illustrative sketch follows this list)
  - scale is $[0, 1]$, lower is better
- Similarity:
  - similarity of the speech synthesis to the target voice
  - scale is $[1, 5]$, higher is better
- ABX:
  - ABX error rate on the embeddings
  - scale is $[0, 100]$, lower is better
- Bitrate:
  - bitrate of the embeddings (a minimal illustrative sketch follows this list)
  - scale is $(0, +\infty)$, lower is better
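As a rough illustration of two of these scores, the sketch below shows how a CER and a bitrate could be computed. It is not the official evaluation code: it assumes CER is the character-level Levenshtein distance between the human transcription and the reference text divided by the reference length, and that the bitrate is estimated from the empirical entropy of the discrete units scaled by the number of units per second of audio.

```python
# Illustrative sketch only (not the official ZeroSpeech evaluation code).
import math
from collections import Counter


def cer(reference: str, transcription: str) -> float:
    """Character error rate; lower is better (0 means a perfect transcription)."""
    # Standard dynamic-programming edit distance over characters.
    prev = list(range(len(transcription) + 1))
    for i, r in enumerate(reference, start=1):
        curr = [i]
        for j, t in enumerate(transcription, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != t)))  # substitution
        prev = curr
    return prev[-1] / max(len(reference), 1)


def bitrate(units: list, duration_seconds: float) -> float:
    """Bits per second of a discrete unit sequence; lower is better."""
    counts = Counter(units)
    total = len(units)
    # Empirical entropy (bits per unit) times units per second of audio.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy * total / duration_seconds


print(cer("hello world", "helo world"))    # one deleted character -> ~0.09
print(bitrate(["a", "b", "a", "c"], 2.0))  # 1.5 bits/unit * 4 units / 2 s = 3.0
```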
In the table below, the first group of score columns refers to the surprise language and the second to the training language (English).

| # | Authors | MOS | CER | Similarity | ABX | Bitrate | MOS | CER | Similarity | ABX | Bitrate |
|---|---------|-----|-----|------------|-----|---------|-----|-----|------------|-----|---------|
2017 Track 2 Task
Results of the submissions to the 2017 track 2 task. For each submission, the table reports NED, cov, and words on English, French, Mandarin, and the two surprise languages LANG1 and LANG2 (an illustrative NED sketch follows the table).

| Authors | NED | cov | words | NED | cov | words | NED | cov | words | NED | cov | words | NED | cov | words |
|---------|-----|-----|-------|-----|-----|-------|-----|-----|-------|-----|-----|-------|-----|-----|-------|
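For context, NED typically denotes the normalized edit distance between the phoneme transcriptions of the discovered pairs, cov the proportion of the corpus covered by discovered fragments, and words the number of discovered word types. The sketch below shows how a NED score could be computed from a list of discovered pairs; the pair format (tuples of phoneme sequences) is an assumption for illustration, not the official evaluation format.

```python
# Illustrative sketch only: NED as the average normalized Levenshtein distance
# between the phoneme strings of each discovered pair. The input format is an
# assumption, not the official ZeroSpeech track 2 evaluation format.
def edit_distance(a: list, b: list) -> int:
    """Levenshtein distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]


def ned(pairs: list) -> float:
    """Average normalized edit distance over discovered pairs; lower is better."""
    scores = [edit_distance(a, b) / max(len(a), len(b)) for a, b in pairs]
    return sum(scores) / len(scores)


print(ned([(["k", "a", "t"], ["k", "a", "t"]),     # identical pair -> 0.0
           (["k", "a", "t"], ["k", "a", "p"])]))   # one substitution -> 1/3
```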