WMT2017 German-English machine translation challenge for news

Translate news articles from German into English. [ver. 1.0.0]
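The strongest entries on the leaderboard below simply ran a ready-made Fairseq translation model over the German input. As a minimal sketch of that approach (the exact checkpoint, tokenizer and BPE settings are assumptions, since the submissions do not record them), one might do:

```python
import torch

# Load a pretrained German-English transformer from the fairseq model zoo.
# The checkpoint name and preprocessing below are assumptions; the leaderboard
# entries only say "ready-made fairseq" without naming the exact model.
de2en = torch.hub.load(
    'pytorch/fairseq', 'transformer.wmt19.de-en',
    checkpoint_file='model1.pt', tokenizer='moses', bpe='fastbpe')
de2en.eval()

# Translate each German source line and write one English hypothesis per line,
# matching the out.tsv layout referenced by the submissions below.
with open('test-A/in.tsv', encoding='utf-8') as src, \
     open('test-A/out.tsv', 'w', encoding='utf-8') as out:
    for line in src:
        out.write(de2en.translate(line.strip()) + '\n')
```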

# submitter when ver. description tags dev-0 BLEU dev-1 BLEU test-A BLEU
1 PioBec 2020-01-15 09:58 1.0.0 Fixed tokenization of the existing solution v3 ready-made fairseq 0.39610 0.47024 0.41504
2 PioBec 2020-01-15 09:29 1.0.0 Fixed tokenization of the existing solution v2 ready-made fairseq 0.39264 0.46386 0.40909
156 PioBec 2020-01-15 09:15 1.0.0 Fixed tokenization of the existing solution N/A N/A N/A
3 Olga Kwaśniewska 2020-01-14 20:53 1.0.0 fix tokenization of output ready-made fairseq 0.38549 0.45189 0.39879
7 Gabi 2020-01-07 11:35 1.0.0 ready-made Fairseq model fairseq ready-made-model 0.24760 0.31147 0.26579
6 [anonymised] 2019-12-30 09:59 1.0.0 Ran a ready-made Fairseq model fairseq ready-made-model 0.24760 0.31147 0.26579
124 MSz 2019-05-22 21:02 1.0.0 marian 100k tg freq 10000 neural-network marian 0.06805 0.07651 0.06824
115 MSz 2019-05-22 18:44 1.0.0 marian 100k freq 10000 neural-network marian 0.11676 0.13285 0.11359
14 MSz 2019-05-22 12:01 1.0.0 marian 1M neural-network marian 0.23935 0.27904 0.24561
55 MSz 2019-05-22 11:47 1.0.0 marian 1M tg neural-network marian 0.17381 0.20079 0.18072
17 MSz 2019-02-05 11:36 1.0.0 type=s2s, corpusLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network 0.23399 0.27282 0.23674
18 MSz 2019-01-22 17:17 1.0.0 type=amun, corpusLen=1M, valid-freq 10000, early-stopping 5, workspace 2500, postproc sed deescapeSpecialChars detruecase awk sed neural-network 0.23293 0.27193 0.23254
22 MSz 2019-01-12 12:57 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, workspace 3000, postproc sed deescapeSpecialChars detruecase awk sed 0.20598 0.24117 0.21002
23 MSz 2019-01-12 11:20 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase awk 0.20547 0.24024 0.20857
24 MSz 2019-01-11 10:18 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars detruecase 0.20547 0.24024 0.20857
30 MSz 2019-01-11 10:08 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed deescapeSpecialChars 0.19604 0.23046 0.19946
37 MSz 2019-01-11 10:06 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, postproc sed 0.18594 0.22086 0.19142
99 MSz 2019-01-10 22:12 1.0.0 corpusLen=590k, valid-freq 10000, early-stopping 5, no postproc 0.16705 0.20042 0.17246
113 MSz 2019-01-03 22:12 1.0.0 with awk simple postproc on out 0.13057 0.15454 0.12454
121 MSz 2018-12-09 14:47 1.0.0 Second dell commit 0.09811 0.11533 0.09146
8 Durson 2018-02-15 00:38 1.0.0 Tensorflow 80k iterations ; beam 4 alpha 0.9 ready-made neural-network 0.25882 0.30561 0.26322
9 Durson 2018-02-15 00:05 1.0.0 Tensorflow 80k iterations ; beam 3 alpha 0.6 ready-made neural-network 0.25720 0.30334 0.26143
10 Durson 2018-02-14 23:18 1.0.0 Tensorflow 86k iterations ; beam 3 alpha 0.6 ready-made neural-network 0.25409 0.30126 0.25949
15 Durson 2018-02-14 11:47 1.0.0 Tensorflow 50k iterations ; beam 20 alpha 0.6 ready-made neural-network 0.23913 0.28295 0.24414
35 dk 2018-02-07 11:10 1.0.0 Add 5G data moses 0.19762 0.22481 0.19183
34 dk 2018-02-07 11:00 1.0.0 Add 5G data N/A 0.22481 0.19183
53 dk 2018-02-07 10:51 1.0.0 improve solution -stack 155 moses 0.18141 0.20750 0.18236
52 dk 2018-02-07 10:47 1.0.0 Improve solution -stack 155 moses 0.18141 N/A 0.18236
127 EmEm 2018-02-04 23:59 1.0.0 'baseline' moses 0.02757 0.02569 0.02823
31 deinonzch 2018-01-31 11:43 1.0.0 corpus=590616, NB_OF_EPOCHS=8, MAX_WORDS=46000 neural-network 0.18610 0.21637 0.19461
51 dk 2018-01-24 11:05 1.0.0 improve solution moses 0.18141 N/A 0.18236
108 p/tlen 2018-01-17 06:46 1.0.0 NMT with Marian, vocabulary=70K, epochs=7 0.14849 0.17603 0.15308
70 deinonzch 2018-01-16 18:36 1.0.0 --search-algorithm 1 -s 2000 --cube-pruning-pop-limit 2000 --cube-pruning-diversity 100 -b 0.1 --minimum-bayes-risk moses 0.17767 0.20160 0.17646
105 p/tlen 2018-01-15 09:13 1.0.0 NMT trained with Marian on 10%, 5 epochs, 40K dictionary neural-network 0.15263 0.17750 0.15966
129 EmEm 2018-01-14 16:45 1.0.0 self-made IBM algorithm N/A N/A 0.02608
139 EmEm 2018-01-14 16:33 1.0.0 ibm1 N/A N/A 0.00762
71 MF 2018-01-13 21:39 1.0.0 Baseline 10%, stack 200 beam 0.1 moses 0.17469 0.19716 0.17625
100 MF 2018-01-13 21:22 1.0.0 0.17468 0.19761 0.17224
103 MF 2018-01-13 19:21 1.0.0 0.17009 0.19237 0.16779
126 MF 2018-01-13 19:12 1.0.0 0.06572 0.07063 0.06317
4 p/tlen 2018-01-09 18:10 1.0.0 WMT16 neural model (decoded with Amun) + de-escape apostrophes neural-network 0.27932 0.33703 0.28988
5 p/tlen 2018-01-08 21:13 1.0.0 neural model (decoded with Amun) neural-network 0.27358 0.33058 0.28454
72 MF 2018-01-08 18:13 1.0.0 0.17546 0.19893 0.17588
87 MF 2018-01-08 18:04 1.0.0 0.17546 0.19893 0.17369
44 [anonymised] 2018-01-08 17:57 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 100 0.18151 0.20738 0.18358
40 [anonymised] 2018-01-08 16:35 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 150 moses 0.18138 0.20709 0.18379
42 [anonymised] 2018-01-08 15:45 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 200 0.18117 0.20689 0.18378
46 [anonymised] 2018-01-08 15:02 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 2000 0.18119 0.20659 0.18328
49 [anonymised] 2018-01-08 14:58 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 -cube-pruning-pop-limit 2000 -cube-pruning-diversity 500 0.18131 0.20651 0.18324
45 [anonymised] 2018-01-07 21:47 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 -stack 2000 0.18135 0.20681 0.18347
48 [anonymised] 2018-01-07 16:29 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -beam-threshold 0.01 0.18129 0.20673 0.18326
47 [anonymised] 2018-01-07 13:30 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -stack 1000 0.18122 0.20661 0.18327
43 [anonymised] 2018-01-07 13:25 1.0.0 TAU-2017-21 - improve solution by changing some decoding options: -search-algorithm 1 0.18098 0.20696 0.18370
57 MSz 2018-01-03 14:57 1.0.0 MERTed && --beam-thresholds moses 0.17955 0.20203 0.17971
155 [anonymised] 2018-01-03 14:30 1.0.0 TAU-2017-20 - check 6 values for maximum stack size; plot graphs for BLEU and decoding time moses 0.17544 0.19840 N/A
96 deinonzch 2018-01-03 13:30 1.0.0 improve solution search-algorithm 1 -s 0 --cube-pruning-pop-limit 5000 --cube-pruning-diversity 100 0.17607 0.19839 0.17336
86 deinonzch 2018-01-03 09:39 1.0.0 improve solution search-algorithm 1 -s 0 0.17598 0.19873 0.17377
26 Weronika 2018-01-03 00:06 1.0.0 dev-0 dev-1 test-A -stack 1500 moses 0.21264 0.23117 0.20230
25 Weronika 2018-01-02 22:05 1.0.0 test-A/out.tsv -stack 1500 0.21264 0.23071 0.20230
29 Weronika 2018-01-02 20:30 1.0.0 dev-0/out.tsv 0.2126 -stack 1500 0.21264 0.23071 0.20213
89 deinonzch 2018-01-02 18:18 1.0.0 improve solution search-algorithm 1 -cube-pruning-pop-limit 2000 -s 2000 0.17599 0.19856 0.17358
109 deinonzch 2018-01-02 17:36 1.0.0 improve solution search-algorithm 1 and beam-threshold 10 0.13985 0.15876 0.13409
111 deinonzch 2018-01-02 17:24 1.0.0 improve solution search-algorithm 1 and beam-threshold 100 0.13829 0.15653 0.13114
110 deinonzch 2018-01-02 17:05 1.0.0 improve solution search-algorithm 1 and beam-threshold 100 0.13839 0.15655 0.13129
82 deinonzch 2018-01-02 16:25 1.0.0 --search-algorithm 1=cube pruning --stack 100 moses 0.17572 0.19862 0.17403
93 deinonzch 2018-01-02 16:15 1.0.0 --search-algorithm 0=normal stack -stack 100 moses 0.17569 0.19868 0.17357
75 deinonzch 2018-01-02 16:00 1.0.0 --search-algorithm 1=cube pruning 0.17596 0.19845 0.17423
90 deinonzch 2018-01-02 15:28 1.0.0 default 0.17561 0.19857 0.17358
28 Weronika 2018-01-02 11:29 1.0.0 dev-0/out.tsv 0.1982 -stack 1500 0.19816 0.23071 0.20213
56 MSz 2018-01-01 16:01 1.0.0 slightly improved -beam-threshold 0.25 -stack 152 moses 0.17955 0.20203 0.17971
59 MSz 2018-01-01 14:12 1.0.0 slightly improved -stack 154 0.17951 0.20215 0.17957
60 MSz 2018-01-01 12:22 1.0.0 slightly improved -stack 152 0.17951 0.20216 0.17957
65 MSz 2017-12-31 10:12 1.0.0 MERTed && --beam-threshold 0.0625 0.17950 0.20199 0.17956
64 MSz 2017-12-31 09:11 1.0.0 MERTed && --beam-threshold 0.125 0.17950 0.20189 0.17956
58 MSz 2017-12-31 00:49 1.0.0 MERTed && --beam-threshold 0.25 0.17948 0.20195 0.17965
66 MSz 2017-12-31 00:05 1.0.0 MERTed && --beam-threshold 0.5 0.17871 0.20147 0.17916
106 MSz 2017-12-30 20:25 1.0.0 MERTed && --beam-threshold 1 0.15859 0.18110 0.15936
112 MSz 2017-12-30 20:00 1.0.0 MERTed && --beam-threshold 2 0.12788 0.15173 0.12880
116 MSz 2017-12-30 19:29 1.0.0 MERTed && --beam-threshold 4 0.11115 0.12879 0.11071
117 MSz 2017-12-30 18:59 1.0.0 MERTed && --beam-threshold 8 0.10379 0.11938 0.10410
118 MSz 2017-12-30 18:33 1.0.0 MERTed && --beam-threshold 16 0.10225 0.11686 0.10241
119 MSz 2017-12-30 18:08 1.0.0 MERTed && --beam-threshold 32 0.10135 0.11540 0.10147
120 MSz 2017-12-30 17:17 1.0.0 MERTed && --beam-threshold 64 moses 0.10064 0.11469 0.10065
79 MF 2017-12-20 15:48 1.0.0 0.17542 0.19777 0.17413
80 MF 2017-12-19 21:42 1.0.0 N/A 0.19777 0.17413
94 kaczla 2017-12-19 19:23 1.0.0 Moses baseline on 10% utterances moses 0.17528 0.19827 0.17356
85 kaczla 2017-12-19 19:07 1.0.0 Moses baseline on 10% utterances (stack 400, search-algorithm 1 = cube pruning) moses 0.17560 0.19849 0.17378
81 kaczla 2017-12-19 18:57 1.0.0 Moses baseline on 10% utterances (search-algorithm 1 = cube pruning) moses 0.17550 0.19842 0.17413
98 kaczla 2017-12-19 18:11 1.0.0 Moses baseline on 10% utterances (stack 1000) moses 0.17540 0.19821 0.17328
97 kaczla 2017-12-19 16:17 1.0.0 Moses baseline on 10% utterances (stack 400) moses 0.17563 0.19837 0.17333
88 [anonymised] 2017-12-06 12:05 1.0.0 TAU-2017-16 - baseline with the probing multiplier parameter p in build_binary program changed moses 0.17543 0.19844 0.17366
27 Weronika 2017-12-05 20:10 1.0.0 Add 5G monolingual data + MERT v2 mert moses 0.21253 0.23071 0.20213
36 Weronika 2017-12-04 13:57 1.0.0 Add 5G monolingual data + MERT v1 mert moses 0.19765 0.22497 0.19174
41 [anonymised] 2017-11-30 18:42 1.0.0 TAU-2017-06 - add dictionary extracted from dict.cc to corpus only; clean corpus moses 0.18117 0.20689 0.18378
54 MF 2017-11-29 14:02 1.0.0 Mert - part of training set mert moses 0.18692 0.21585 0.18183
50 [anonymised] 2017-11-29 12:57 1.0.0 TAU-2017-06 - add dictionary extracted from dict.cc to corpus and model language moses 0.18173 0.20540 0.18324
83 kaczla 2017-11-29 11:57 1.0.0 Moses baseline on 10% utterances (6-gram model) 0.17542 0.19881 0.17380
84 kaczla 2017-11-29 11:54 1.0.0 Moses baseline on 10% utterances (6-gram model) moses 0.17542 0.19881 0.17380
91 kaczla 2017-11-29 11:50 1.0.0 Moses baseline on 10% utterances (5-gram model - trie data structure) moses 0.17536 0.19835 0.17358
92 kaczla 2017-11-29 11:48 1.0.0 Moses baseline on 10% utterances moses 0.17536 0.19835 0.17358
16 kaczla 2017-11-29 11:40 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning) - without mert 0.24350 0.28709 0.24384
13 kaczla 2017-11-29 11:37 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 2 iteration MERT) 0.23935 0.28978 0.25245
11 kaczla 2017-11-29 11:33 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 9 iteration MERT - weights no change) 0.25819 0.29095 0.25513
39 MF 2017-11-26 09:13 1.0.0 5G monolingual data moses 0.19149 0.21697 0.18459
62 MSz 2017-11-24 15:45 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
61 MSz 2017-11-24 15:39 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
63 MSz 2017-11-24 15:30 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.17950 0.20199 0.17956
154 patrycja 2017-11-22 15:41 1.0.0 Portuguese-english translation (+ dictionary improvement) moses N/A N/A N/A
20 siulkilulki 2017-11-22 15:05 1.0.0 40GB language model ready-made moses 0.23216 0.26505 0.23152
12 kaczla 2017-11-22 12:19 1.0.0 baseline Moses on 100% utterances + 40GB english monolingual data (4-gram + pruning + 2 iteration MERT) - without dev-1 0.23935 N/A 0.25245
19 siulkilulki 2017-11-22 09:06 1.0.0 Added 40GB corpora ready-made moses 0.23216 0.26505 0.23152
104 MF 2017-11-21 16:36 1.0.0 5G LM moses 0.17201 0.19406 0.16658
33 Weronika 2017-11-21 13:39 1.0.0 Add 5G monolingual data moses 0.19762 0.22481 0.19183
107 MF 2017-11-20 19:31 1.0.0 5G LM data 0.16343 0.18296 0.15704
101 MF 2017-11-19 19:27 1.0.0 test moses 0.17422 0.19771 0.17118
133 MSz 2017-11-16 14:35 1.0.0 used MERT (tuned and tested on dev-0) mert moses 0.10792 N/A 0.00768
38 kaczla 2017-11-15 11:27 1.0.0 baseline Moses on 10% utterances + 40GB english monolingual data moses 0.18708 0.22068 0.18618
102 Weronika 2017-11-14 23:50 1.0.0 Use 10%, split compounds moses 0.17557 0.19846 0.17017
95 Weronika 2017-11-14 20:57 1.0.0 Moses baseline on 10% utterances v2 moses 0.17557 0.19846 0.17351
78 Weronika 2017-11-14 20:02 1.0.0 Moses baseline on 10% utterances 0.17568 0.19777 0.17413
122 MF 2017-11-13 19:22 1.0.0 MERT tuned on a part of the training set, attempt 2 mert moses 0.09175 0.10364 0.09110
114 Weronika 2017-11-08 12:52 1.0.0 Split compound nouns moses 0.08586 0.09299 0.12383
123 Weronika 2017-11-08 01:24 1.0.0 Moses 0.08586 0.09299 0.08659
21 siulkilulki 2017-11-07 22:37 1.0.0 Moses 100% utterances compact phrase and lexical-tables ready-made moses 0.21408 0.24051 0.21523
153 siulkilulki 2017-11-07 21:26 1.0.0 Merge branch 'master' of ssh://gonito.net/siulkilulki/wmt-2017 ready-made moses 0.12443 N/A N/A
125 MF 2017-11-07 18:36 1.0.0 MERT tuned on a part of the training set mert moses N/A N/A 0.06720
67 kaczla 2017-11-07 17:40 1.0.0 baseline Moses on 10% utterances + Wikipedia title (with identical titles) and Wiktionary (all translation) moses 0.17806 0.19990 0.17846
68 kaczla 2017-11-06 21:05 1.0.0 baseline Moses on 10% utterances + Wiktionary (all translation) moses 0.17677 0.19965 0.17712
69 kaczla 2017-11-06 18:17 1.0.0 baseline Moses on 10% utterances + Wiktionary (only first translation) moses 0.17552 0.19828 0.17673
73 kaczla 2017-11-02 12:49 1.0.0 baseline Moses on 10% utterances + Wikipedia title (with identical titles) moses 0.17726 0.20027 0.17562
74 kaczla 2017-11-02 07:28 1.0.0 baseline Moses on 10% utterances + Wikipedia title (ignore identical titles) moses 0.17669 0.19682 0.17528
32 kaczla 2017-10-22 20:13 1.0.0 Moses baseline on 50% utterances ready-made moses 0.19606 0.22087 0.19433
131 kaczla 2017-10-21 07:58 1.0.0 Add script for counting words 0.01390 0.01587 0.01474
132 MSz 2017-10-11 17:56 1.0.0 Hope better solution stupid N/A N/A 0.00768
137 MSz 2017-10-11 16:19 1.0.0 my copy-solution stupid N/A N/A 0.00762
152 MSz 2017-10-11 13:15 1.0.0 my brilliant solution2 stupid N/A N/A N/A
151 MSz 2017-10-11 13:05 1.0.0 my brilliant solution2 N/A N/A N/A
140 [anonymised] 2017-10-11 12:49 1.0.0 TAU-2017-01 solution 01 stupid N/A N/A 0.00049
130 kaczla 2017-10-11 11:12 1.0.0 Popular german words stupid 0.01390 0.01587 0.01474
150 MSz 2017-10-11 07:48 1.0.0 my brilliant solution N/A N/A N/A
147 kaczla 2017-10-11 05:40 1.0.0 Popular english words stupid 0.00000 0.00218 0.00000
146 kaczla 2017-10-11 05:36 1.0.0 Popular english words stupid 0.00000 0.00001 0.00000
77 p/tlen 2017-10-11 05:23 1.0.0 Moses baseline on 10% utterances ready-made 0.17568 0.19777 0.17413
76 p/tlen 2017-10-11 05:13 1.0.0 baseline Moses on 10% utterances 0.17568 N/A 0.17413
145 kaczla 2017-10-11 05:11 1.0.0 Popular english words stupid 0.00000 0.00000 0.00000
144 kaczla 2017-10-11 05:05 1.0.0 Popular german words stupid 0.00000 0.00000 0.00000
134 MF 2017-10-10 19:39 1.0.0 translated days stupid 0.00722 N/A 0.00766
136 Weronika 2017-10-09 20:42 1.0.0 test stupid N/A N/A 0.00762
149 Weronika 2017-10-09 20:40 1.0.0 empty output stupid N/A N/A N/A
128 deinonzch 2017-10-09 09:43 1.0.0 my stupid solution stupid 0.03144 N/A 0.02706
148 deinonzch 2017-10-08 17:10 1.0.0 [ ] 0.00000 N/A N/A
141 p/tlen 2017-10-05 06:04 1.0.0 empty output 0.00000 0.00000 0.00000
143 siulkilulki 2017-10-04 15:04 1.0.0 stupid stupid N/A 0.00000 0.00000
138 EmEm 2017-10-04 14:46 1.0.0 stupid solution 2 stupid N/A N/A 0.00762
142 EmEm 2017-10-04 14:42 1.0.0 stupid solution N/A N/A 0.00000
135 [anonymised] 2017-10-03 13:23 1.0.0 just checkin' stupid N/A N/A 0.00762
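All scores above are BLEU reported on a 0-1 scale for the dev-0, dev-1 and test-A splits. Before submitting, a rough local check can be done with sacrebleu; this is a sketch only, assuming the references live in dev-0/expected.tsv, and the official leaderboard evaluator may tokenize and normalize differently, so numbers will not match exactly:

```python
import sacrebleu

# Read one hypothesis and one reference per line (file layout assumed here).
with open('dev-0/out.tsv', encoding='utf-8') as f:
    hypotheses = [line.rstrip('\n') for line in f]
with open('dev-0/expected.tsv', encoding='utf-8') as f:
    references = [line.rstrip('\n') for line in f]

# sacrebleu reports BLEU on a 0-100 scale; divide by 100 to compare with the
# leaderboard, which uses a 0-1 scale.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f'dev-0 BLEU: {bleu.score / 100:.5f}')
```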