WMT2017 German-English machine translation challenge for news

Translate news articles from German into English. [ver. 1.0.0]
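Submissions below are ranked by BLEU on the dev-0, dev-1, and test-A splits. As a reference for how those scores are computed, here is a minimal corpus-level BLEU sketch (uniform 4-gram weights, single reference, whitespace tokenization); the challenge's actual evaluator may differ in tokenization and smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(references, hypotheses, max_n=4):
    """Corpus-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    clipped = [0] * max_n   # clipped n-gram matches per order
    total = [0] * max_n     # hypothesis n-gram counts per order
    hyp_len = ref_len = 0
    for ref, hyp in zip(references, hypotheses):
        ref_tok, hyp_tok = ref.split(), hyp.split()
        hyp_len += len(hyp_tok)
        ref_len += len(ref_tok)
        for n in range(1, max_n + 1):
            hyp_ngrams = ngrams(hyp_tok, n)
            ref_ngrams = ngrams(ref_tok, n)
            total[n - 1] += sum(hyp_ngrams.values())
            clipped[n - 1] += sum(min(c, ref_ngrams[g])
                                  for g, c in hyp_ngrams.items())
    if min(clipped) == 0:   # any precision of zero => BLEU of zero
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, total)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)
```

In practice the scores in this table would come from the challenge's own evaluation tooling; a standalone reimplementation like this one is only useful for sanity checks.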

# submitter when ver. description tags dev-0 BLEU dev-1 BLEU test-A BLEU
18 Yevheniia Tsapkova 2020-01-28 17:56 1.0.0 translation with ready-made fairseq transformer.wmt19.de-en v2 fairseq ready-made-model 0.23399 0.27485 0.23698
17 Yevheniia Tsapkova 2020-01-27 20:40 1.0.0 translation with ready-made fairseq transformer.wmt19.de-en fairseq ready-made-model N/A 0.27485 0.23698
126 [anonymised] 2020-01-27 12:19 1.0.0 CNN, sample_size = 5mln, epochs = 5 fairseq train 0.07706 0.09276 0.07834
1 [anonymised] 2020-01-15 09:58 1.0.0 Fixed tokenization of existing solution v3 ready-made fairseq 0.39610 0.47024 0.41504
2 [anonymised] 2020-01-15 09:29 1.0.0 Fixed tokenization of existing solution v2 ready-made fairseq 0.39264 0.46386 0.40909
159 [anonymised] 2020-01-15 09:15 1.0.0 Fixed tokenization of existing solution N/A N/A N/A
3 [anonymised] 2020-01-14 20:53 1.0.0 fix tokenization of output ready-made fairseq 0.38549 0.45189 0.39879
7 [anonymised] 2020-01-07 11:35 1.0.0 ready-made Fairseq model fairseq ready-made-model 0.24760 0.31147 0.26579
6 [anonymised] 2019-12-30 09:59 1.0.0 Ran a ready-made Fairseq model fairseq ready-made-model 0.24760 0.31147 0.26579
127 [anonymised] 2019-05-22 21:02 1.0.0 marian 100k tg freq 10000 neural-network marian 0.06805 0.07651 0.06824
117 [anonymised] 2019-05-22 18:44 1.0.0 marian 100k freq 10000 neural-network marian 0.11676 0.13285 0.11359
14 [anonymised] 2019-05-22 12:01 1.0.0 marian 1M neural-network marian 0.23935 0.27904 0.24561
57 [anonymised] 2019-05-22 11:47 1.0.0 marian 1M tg neural-network marian 0.17381 0.20079 0.18072
19 [anonymised] 2019-02-05 11:36 1.0.0 type=s2s, corpusLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network 0.23399 0.27282 0.23674
20 [anonymised] 2019-01-22 17:17 1.0.0 type=amun, corpusLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network 0.23293 0.27193 0.23254
24 [anonymised] 2019-01-12 12:57 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, workspace 3000, postproc sed deescapeSpecialChars detruecase awk sed 0.20598 0.24117 0.21002
25 [anonymised] 2019-01-12 11:20 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase awk 0.20547 0.24024 0.20857
26 [anonymised] 2019-01-11 10:18 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase 0.20547 0.24024 0.20857
32 [anonymised] 2019-01-11 10:08 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars 0.19604 0.23046 0.19946
39 [anonymised] 2019-01-11 10:06 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed 0.18594 0.22086 0.19142
101 [anonymised] 2019-01-10 22:12 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, no postproc 0.16705 0.20042 0.17246
115 [anonymised] 2019-01-03 22:12 1.0.0 with awk simple postproc on out 0.13057 0.15454 0.12454
123 [anonymised] 2018-12-09 14:47 1.0.0 Second dell commit 0.09811 0.11533 0.09146
8 [anonymised] 2018-02-15 00:38 1.0.0 Tensorflow 80k iterations ; beam 4 alpha 0.9 ready-made neural-network 0.25882 0.30561 0.26322
9 [anonymised] 2018-02-15 00:05 1.0.0 Tensorflow 80k iterations ; beam 3 alpha 0.6 ready-made neural-network 0.25720 0.30334 0.26143
10 [anonymised] 2018-02-14 23:18 1.0.0 Tensorflow 86k iterations ; beam 3 alpha 0.6 ready-made neural-network 0.25409 0.30126 0.25949
15 [anonymised] 2018-02-14 11:47 1.0.0 Tensorflow 50k iterations ; beam 20 alpha 0.6 ready-made neural-network 0.23913 0.28295 0.24414
37 [anonymised] 2018-02-07 11:10 1.0.0 Add 5G data moses 0.19762 0.22481 0.19183
36 [anonymised] 2018-02-07 11:00 1.0.0 Add 5G data N/A 0.22481 0.19183
55 [anonymised] 2018-02-07 10:51 1.0.0 improve solution -stack 155 moses 0.18141 0.20750 0.18236
54 [anonymised] 2018-02-07 10:47 1.0.0 Improve solution -stack 155 moses 0.18141 N/A 0.18236
130 [anonymised] 2018-02-04 23:59 1.0.0 'baseline' moses 0.02757 0.02569 0.02823
33 [anonymised] 2018-01-31 11:43 1.0.0 corpus=590616, NB_OF_EPOCHS=8, MAX_WORDS=46000 neural-network 0.18610 0.21637 0.19461
53 [anonymised] 2018-01-24 11:05 1.0.0 improve solution moses 0.18141 N/A 0.18236
110 p/tlen 2018-01-17 06:46 1.0.0 NMT with Marian, vocabulary=70K, epochs=7 0.14849 0.17603 0.15308
72 [anonymised] 2018-01-16 18:36 1.0.0 --search-algorithm 1 -s 2000 --cube-pruning-pop-limit 2000 --cube-pruning-diversity 100-b 0.1 --minimum-bayes-risk moses 0.17767 0.20160 0.17646
107 p/tlen 2018-01-15 09:13 1.0.0 NMT trained with Marian on 10%, 5 epochs, 40K dictionary neural-network 0.15263 0.17750 0.15966
132 [anonymised] 2018-01-14 16:45 1.0.0 IBM self-made algo N/A N/A 0.02608
142 [anonymised] 2018-01-14 16:33 1.0.0 ibm1 N/A N/A 0.00762
73 [anonymised] 2018-01-13 21:39 1.0.0 Baseline 10%, stack 200 beam 0.1 moses 0.17469 0.19716 0.17625
102 [anonymised] 2018-01-13 21:22 1.0.0 0.17468 0.19761 0.17224
105 [anonymised] 2018-01-13 19:21 1.0.0 0.17009 0.19237 0.16779
129 [anonymised] 2018-01-13 19:12 1.0.0 0.06572 0.07063 0.06317
4 p/tlen 2018-01-09 18:10 1.0.0 WMT16 neural model (decoded with Amun) + de-escape apostrophes neural-network 0.27932 0.33703 0.28988
5 p/tlen 2018-01-08 21:13 1.0.0 neural model (decoded with Amun) neural-network 0.27358 0.33058 0.28454
74 [anonymised] 2018-01-08 18:13 1.0.0 0.17546 0.19893 0.17588
89 [anonymised] 2018-01-08 18:04 1.0.0 0.17546 0.19893 0.17369
46 [anonymised] 2018-01-08 17:57 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 100 0.18151 0.20738 0.18358
42 [anonymised] 2018-01-08 16:35 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 150 moses 0.18138 0.20709 0.18379
44 [anonymised] 2018-01-08 15:45 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 200 0.18117 0.20689 0.18378
48 [anonymised] 2018-01-08 15:02 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 2000 0.18119 0.20659 0.18328
51 [anonymised] 2018-01-08 14:58 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 -cube-pruning-pop-limit 2000 -cube-pruning-diversity 500 0.18131 0.20651 0.18324
47 [anonymised] 2018-01-07 21:47 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 0.18135 0.20681 0.18347
50 [anonymised] 2018-01-07 16:29 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -beam-threshold 0.01 0.18129 0.20673 0.18326
49 [anonymised] 2018-01-07 13:30 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 1000 0.18122 0.20661 0.18327
45 [anonymised] 2018-01-07 13:25 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 0.18098 0.20696 0.18370
59 [anonymised] 2018-01-03 14:57 1.0.0 MERTed && --beam-thresholds moses 0.17955 0.20203 0.17971
158 [anonymised] 2018-01-03 14:30 1.0.0 TAU-2017-20 - check 6 values for maximum stack size; plot graphs for BLEU and decoding time moses 0.17544 0.19840 N/A
98 [anonymised] 2018-01-03 13:30 1.0.0 improve solution search-algorithm 1 -s 0 --cube-pruning-pop-limit 5000 --cube-pruning-diversity 100 0.17607 0.19839 0.17336
88 [anonymised] 2018-01-03 09:39 1.0.0 improve solution search-algorithm 1 -s 0 0.17598 0.19873 0.17377
28 [anonymised] 2018-01-03 00:06 1.0.0 dev-0 dev-1 test-A -stack 1500 moses 0.21264 0.23117 0.20230
27 [anonymised] 2018-01-02 22:05 1.0.0 test-A/out.tsv -stack 1500 0.21264 0.23071 0.20230
31 [anonymised] 2018-01-02 20:30 1.0.0 dev-0/out.tsv 0.2126 -stack 1500 0.21264 0.23071 0.20213
91 [anonymised] 2018-01-02 18:18 1.0.0 improve solution search-algorithm 1 -cube-pruning-pop-limit 2000 -s 2000 0.17599 0.19856 0.17358
111 [anonymised] 2018-01-02 17:36 1.0.0 improve solution search-algorithm 1 and beam-threshold 10 0.13985 0.15876 0.13409
113 [anonymised] 2018-01-02 17:24 1.0.0 improve solution search-algorithm 1 and beam-threshold 100 0.13829 0.15653 0.13114
112 [anonymised] 2018-01-02 17:05 1.0.0 improve solution search-algorithm 1 and beam-threshold 100 0.13839 0.15655 0.13129
84 [anonymised] 2018-01-02 16:25 1.0.0 --search-algorithm 1=cube pruning --stack 100 moses 0.17572 0.19862 0.17403
95 [anonymised] 2018-01-02 16:15 1.0.0 --search-algorithm 0=normal stack -stack 100 moses 0.17569 0.19868 0.17357
77 [anonymised] 2018-01-02 16:00 1.0.0 --search-algorithm 1=cube pruning 0.17596 0.19845 0.17423
92 [anonymised] 2018-01-02 15:28 1.0.0 default 0.17561 0.19857 0.17358
30 [anonymised] 2018-01-02 11:29 1.0.0 dev-0/out.tsv 0.1982 -stack 1500 0.19816 0.23071 0.20213
58 [anonymised] 2018-01-01 16:01 1.0.0 slightly improved -beam-threshold 0.25 -stack 152 moses 0.17955 0.20203 0.17971
61 [anonymised] 2018-01-01 14:12 1.0.0 slightly improved -stack 154 0.17951 0.20215 0.17957
62 [anonymised] 2018-01-01 12:22 1.0.0 slightly improved -stack 152 0.17951 0.20216 0.17957
67 [anonymised] 2017-12-31 10:12 1.0.0 MERTed && --beam-threshold 0.0625 0.17950 0.20199 0.17956
66 [anonymised] 2017-12-31 09:11 1.0.0 MERTed && --beam-threshold 0.125 0.17950 0.20189 0.17956
60 [anonymised] 2017-12-31 00:49 1.0.0 MERTed && --beam-threshold 0.25 0.17948 0.20195 0.17965
68 [anonymised] 2017-12-31 00:05 1.0.0 MERTed && --beam-threshold 0.5 0.17871 0.20147 0.17916
108 [anonymised] 2017-12-30 20:25 1.0.0 MERTed && --beam-threshold 1 0.15859 0.18110 0.15936
114 [anonymised] 2017-12-30 20:00 1.0.0 MERTed && --beam-threshold 2 0.12788 0.15173 0.12880
118 [anonymised] 2017-12-30 19:29 1.0.0 MERTed && --beam-threshold 4 0.11115 0.12879 0.11071
119 [anonymised] 2017-12-30 18:59 1.0.0 MERTed && --beam-threshold 8 0.10379 0.11938 0.10410
120 [anonymised] 2017-12-30 18:33 1.0.0 MERTed && --beam-threshold 16 0.10225 0.11686 0.10241
121 [anonymised] 2017-12-30 18:08 1.0.0 MERTed && --beam-threshold 32 0.10135 0.11540 0.10147
122 [anonymised] 2017-12-30 17:17 1.0.0 MERTed && --beam-threshold 64 moses 0.10064 0.11469 0.10065
81 [anonymised] 2017-12-20 15:48 1.0.0 0.17542 0.19777 0.17413
82 [anonymised] 2017-12-19 21:42 1.0.0 N/A 0.19777 0.17413
96 kaczla 2017-12-19 19:23 1.0.0 Moses baseline on 10% utterances moses 0.17528 0.19827 0.17356
87 kaczla 2017-12-19 19:07 1.0.0 Moses baseline on 10% utterances (stack 400, search-algorithm 1 = cube pruning) moses 0.17560 0.19849 0.17378
83 kaczla 2017-12-19 18:57 1.0.0 Moses baseline on 10% utterances (search-algorithm 1 = cube pruning) moses 0.17550 0.19842 0.17413
100 kaczla 2017-12-19 18:11 1.0.0 Moses baseline on 10% utterances (stack 1000) moses 0.17540 0.19821 0.17328
99 kaczla 2017-12-19 16:17 1.0.0 Moses baseline on 10% utterances (stack 400) moses 0.17563 0.19837 0.17333
90 [anonymised] 2017-12-06 12:05 1.0.0 TAU-2017-16 - baseline with the probing multiplier parameter p in build_binary program changed moses 0.17543 0.19844 0.17366
29 [anonymised] 2017-12-05 20:10 1.0.0 Add 5G monolingual data + MERT v2 mert moses 0.21253 0.23071 0.20213
38 [anonymised] 2017-12-04 13:57 1.0.0 Add 5G monolingual data + MERT v1 mert moses 0.19765 0.22497 0.19174
43 [anonymised] 2017-11-30 18:42 1.0.0 TAU-2017-06 - add dictionary extracted from dict.cc to corpus only; clean corpus moses 0.18117 0.20689 0.18378
56 [anonymised] 2017-11-29 14:02 1.0.0 Mert - part of training set mert moses 0.18692 0.21585 0.18183
52 [anonymised] 2017-11-29 12:57 1.0.0 TAU-2017-06 - add dictionary extracted from dict.cc to corpus and language model moses 0.18173 0.20540 0.18324
85 kaczla 2017-11-29 11:57 1.0.0 Moses baseline on 10% utterances (6-gram model) 0.17542 0.19881 0.17380
86 kaczla 2017-11-29 11:54 1.0.0 Moses baseline on 10% utterances (6-gram model) moses 0.17542 0.19881 0.17380
93 kaczla 2017-11-29 11:50 1.0.0 Moses baseline on 10% utterances (5-gram model - trie data structure) moses 0.17536 0.19835 0.17358
94 kaczla 2017-11-29 11:48 1.0.0 Moses baseline on 10% utterances moses 0.17536 0.19835 0.17358
16 kaczla 2017-11-29 11:40 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning) - without mert 0.24350 0.28709 0.24384
13 kaczla 2017-11-29 11:37 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 2 iteration MERT) 0.23935 0.28978 0.25245
11 kaczla 2017-11-29 11:33 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 9 iteration MERT - weights no change) 0.25819 0.29095 0.25513
41 [anonymised] 2017-11-26 09:13 1.0.0 5G monolingual data moses 0.19149 0.21697 0.18459
64 [anonymised] 2017-11-24 15:45 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
63 [anonymised] 2017-11-24 15:39 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
65 [anonymised] 2017-11-24 15:30 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
157 [anonymised] 2017-11-22 15:41 1.0.0 Portuguese-english translation (+ dictionary improvement) moses N/A N/A N/A
22 [anonymised] 2017-11-22 15:05 1.0.0 40GB language model ready-made moses 0.23216 0.26505 0.23152
12 kaczla 2017-11-22 12:19 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 2 iteration MERT) - without dev-1 0.23935 N/A 0.25245
21 [anonymised] 2017-11-22 09:06 1.0.0 Added 40GB corpora ready-made moses 0.23216 0.26505 0.23152
106 [anonymised] 2017-11-21 16:36 1.0.0 5G LM moses 0.17201 0.19406 0.16658
35 [anonymised] 2017-11-21 13:39 1.0.0 Add 5G monolingual data moses 0.19762 0.22481 0.19183
109 [anonymised] 2017-11-20 19:31 1.0.0 5G LM data 0.16343 0.18296 0.15704
103 [anonymised] 2017-11-19 19:27 1.0.0 test moses 0.17422 0.19771 0.17118
136 [anonymised] 2017-11-16 14:35 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.10792 N/A 0.00768
40 kaczla 2017-11-15 11:27 1.0.0 baseline Moses on 10% utterances + 40GB english monolingual data moses 0.18708 0.22068 0.18618
104 [anonymised] 2017-11-14 23:50 1.0.0 Use 10%, split compounds moses 0.17557 0.19846 0.17017
97 [anonymised] 2017-11-14 20:57 1.0.0 Moses baseline on 10% utterances v2 moses 0.17557 0.19846 0.17351
80 [anonymised] 2017-11-14 20:02 1.0.0 Moses baseline on 10% utterances 0.17568 0.19777 0.17413
124 [anonymised] 2017-11-13 19:22 1.0.0 MERT tuned on a part of the training set, attempt 2 mert moses 0.09175 0.10364 0.09110
116 [anonymised] 2017-11-08 12:52 1.0.0 Split compound nouns moses 0.08586 0.09299 0.12383
125 [anonymised] 2017-11-08 01:24 1.0.0 Moses 0.08586 0.09299 0.08659
23 [anonymised] 2017-11-07 22:37 1.0.0 Moses 100% utterances compact phrase and lexical-tables ready-made moses 0.21408 0.24051 0.21523
156 [anonymised] 2017-11-07 21:26 1.0.0 Merge branch 'master' of ssh://gonito.net/siulkilulki/wmt-2017 ready-made moses 0.12443 N/A N/A
128 [anonymised] 2017-11-07 18:36 1.0.0 MERT tuned on a part of the training set mert moses N/A N/A 0.06720
69 kaczla 2017-11-07 17:40 1.0.0 baseline Moses on 10% utterances + Wikipedia title (with identical titles) and Wiktionary (all translation) moses 0.17806 0.19990 0.17846
70 kaczla 2017-11-06 21:05 1.0.0 baseline Moses on 10% utterances + Wiktionary (all translation) moses 0.17677 0.19965 0.17712
71 kaczla 2017-11-06 18:17 1.0.0 baseline Moses on 10% utterances + Wiktionary (only first translation) moses 0.17552 0.19828 0.17673
75 kaczla 2017-11-02 12:49 1.0.0 baseline Moses on 10% utterances + Wikipedia title (with identical titles) moses 0.17726 0.20027 0.17562
76 kaczla 2017-11-02 07:28 1.0.0 baseline Moses on 10% utterances + Wikipedia title (ignore identical titles) moses 0.17669 0.19682 0.17528
34 kaczla 2017-10-22 20:13 1.0.0 Moses baseline on 50% utterances ready-made moses 0.19606 0.22087 0.19433
134 kaczla 2017-10-21 07:58 1.0.0 Add script for counting words 0.01390 0.01587 0.01474
135 [anonymised] 2017-10-11 17:56 1.0.0 Hope better solution stupid N/A N/A 0.00768
140 [anonymised] 2017-10-11 16:19 1.0.0 my copy-solution stupid N/A N/A 0.00762
155 [anonymised] 2017-10-11 13:15 1.0.0 my brilliant solution2 stupid N/A N/A N/A
154 [anonymised] 2017-10-11 13:05 1.0.0 my brilliant solution2 N/A N/A N/A
143 [anonymised] 2017-10-11 12:49 1.0.0 TAU-2017-01 solution 01 stupid N/A N/A 0.00049
133 kaczla 2017-10-11 11:12 1.0.0 Popular german words stupid 0.01390 0.01587 0.01474
153 [anonymised] 2017-10-11 07:48 1.0.0 my brilliant solution N/A N/A N/A
150 kaczla 2017-10-11 05:40 1.0.0 Popular english words stupid 0.00000 0.00218 0.00000
149 kaczla 2017-10-11 05:36 1.0.0 Popular english words stupid 0.00000 0.00001 0.00000
79 p/tlen 2017-10-11 05:23 1.0.0 Moses baseline on 10% utterances ready-made 0.17568 0.19777 0.17413
78 p/tlen 2017-10-11 05:13 1.0.0 baseline Moses on 10% utterances 0.17568 N/A 0.17413
148 kaczla 2017-10-11 05:11 1.0.0 Popular english words stupid 0.00000 0.00000 0.00000
147 kaczla 2017-10-11 05:05 1.0.0 Popular german words stupid 0.00000 0.00000 0.00000
137 [anonymised] 2017-10-10 19:39 1.0.0 translated days stupid 0.00722 N/A 0.00766
139 [anonymised] 2017-10-09 20:42 1.0.0 test stupid N/A N/A 0.00762
152 [anonymised] 2017-10-09 20:40 1.0.0 empty output stupid N/A N/A N/A
131 [anonymised] 2017-10-09 09:43 1.0.0 my stupid solution stupid 0.03144 N/A 0.02706
151 [anonymised] 2017-10-08 17:10 1.0.0 [ ] 0.00000 N/A N/A
144 p/tlen 2017-10-05 06:04 1.0.0 empty output 0.00000 0.00000 0.00000
146 [anonymised] 2017-10-04 15:04 1.0.0 stupid stupid N/A 0.00000 0.00000
141 [anonymised] 2017-10-04 14:46 1.0.0 stupid solution 2 stupid N/A N/A 0.00762
145 [anonymised] 2017-10-04 14:42 1.0.0 stupid solution N/A N/A 0.00000
138 [anonymised] 2017-10-03 13:23 1.0.0 just checkin' stupid N/A N/A 0.00762
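Several Moses-based entries above list a `deescapeSpecialChars` post-processing step before scoring. A minimal sketch of what that step does (the mapping follows the standard Moses tokenizer escape set; treat the exact character list as an assumption):

```python
# Reversal of the Moses tokenizer's special-character escaping, applied
# to translated output lines before BLEU scoring.
MOSES_ESCAPES = {
    "&#124;": "|",
    "&lt;": "<",
    "&gt;": ">",
    "&#91;": "[",
    "&#93;": "]",
    "&quot;": '"',
    "&apos;": "'",
    "&amp;": "&",  # replaced last so escaped entities are not double-expanded
}

def deescape(text: str) -> str:
    """Undo Moses tokenizer escaping on one output line."""
    for escape, char in MOSES_ESCAPES.items():  # dicts keep insertion order
        text = text.replace(escape, char)
    return text
```

The score jumps between "no postproc" and "postproc ... deescapeSpecialChars detruecase" rows above suggest how much of the gain in those submissions came from cleanup like this rather than from the model itself.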